Activities

2024/7/11, Guest talks on cognitive developmental robotics

Dear colleagues,

I am delighted to invite you to our guest researchers’ talks on Thursday, July 11th. We will host three excellent researchers working in developmental robotics, cognitive robotics, and human-robot/human-computer interaction.

Please join us for these intriguing presentations and a chance to engage with our guest researchers. We look forward to your participation.

Date and time:

13:00-18:00, Thursday, July 11, 2024

Place:

Hybrid (Rooms N304-305, Faculty of Medicine Building 1, The University of Tokyo, and Zoom)
https://ircn.jp/en/access

https://u-tokyo-ac-jp.zoom.us/j/83890409691?pwd=2UM6wy6cHSyKHL1E8cNhCCRobLHGBb.1
Meeting ID: 838 9040 9691
Passcode: 353910

Program:

13:00-14:00 Nguyen Sao Mai (Associate Professor at ENSTA Paris)
14:00-15:00 Mustafa Doga Dogan (Postdoctoral Researcher at Boğaziçi University)
15:15-16:15 Sara Incao (Postdoctoral Fellow at the Italian Institute of Technology)
16:30-18:00 Visit to Cognitive Developmental Robotics Lab (Nagai Lab), IRCN, The University of Tokyo (in-person only)

=== Talk 1 ===

Nguyen Sao Mai, Associate Professor at ENSTA Paris
http://nguyensmai.free.fr

Title:
Emerging Symbolic Representation for Hierarchical Learning by Intrinsic Motivation

Abstract:
In the same way that human action analysis shows that actions can be modelled hierarchically, robots need to learn to control their environment using a hierarchical model. This learning process needs to be framed within a lifelong-learning setting, allowing agents to learn multiple tasks continually by devising their own curriculum in an open-ended manner, driven by intrinsic motivation and by interaction with their environment and with tutors. How can a symbolic representation of tasks emerge, as neuroscience research seems to indicate, while the sensorimotor space is continuous?
We present how robots learn compositional relationships between control tasks to allow automatic curriculum learning and the transfer of knowledge from simple tasks to more complex tasks. Our three models (a sequential model [2], a nested model [3], and a goal-conditioned hierarchical reinforcement learning model [1]) enable the emergence of discrete structures to represent a task, the relationships between tasks, and the state space. The learning in all three models relies on the theory of intrinsic motivation, which allows learning without a specified goal, in an open-ended process. Intrinsic motivation also allows the robot to automatically devise its learning curriculum from simple to complex tasks, and to decide what, when, and whom to imitate during its interactions with tutors.

[1] Zadem, M., Mover, S., and Nguyen, S. M. (2024). Reconciling the Spatial and Temporal Abstractions for Goal Representation. International Conference on Learning Representations.
[2] Duminy, N., Nguyen, S. M., and Duhaut, D. (2018). Learning a set of interrelated tasks by using sequences of motor policies for a strategic intrinsically motivated learner. Proceedings of IEEE International Conference on Robotic Computing.
[3] Manoury, A., Nguyen, S. M., and Buche, C. (2019). Hierarchical Affordance Discovery Using Intrinsic Motivation. Proceedings of the 7th International Conference on Human-Agent Interaction (pp. 186–193). Association for Computing Machinery.
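
For readers less familiar with the intrinsic-motivation framing used in this abstract, the short sketch below is our own illustration, not the speaker's implementation. It shows the common learning-progress heuristic: the agent practises whichever task its competence is currently improving on fastest, which tends to order the curriculum from simple to complex tasks. The Task object and its practise() method are hypothetical placeholders.

Illustrative sketch (Python):

# Toy sketch of intrinsic motivation as learning progress (hypothetical task API).
import random
from collections import defaultdict, deque

class ProgressBasedLearner:
    """Selects the task whose recent competence is changing fastest."""

    def __init__(self, tasks, window=10, epsilon=0.2):
        self.tasks = list(tasks)                          # task objects exposing practise()
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.epsilon = epsilon                            # exploration rate

    def learning_progress(self, task):
        h = list(self.history[task])
        if len(h) < 2:
            return float("inf")                           # try unexplored tasks first
        half = len(h) // 2
        older = sum(h[:half]) / half
        recent = sum(h[half:]) / (len(h) - half)
        return abs(recent - older)                        # change in competence = progress

    def step(self):
        if random.random() < self.epsilon:
            task = random.choice(self.tasks)              # occasional random exploration
        else:
            task = max(self.tasks, key=self.learning_progress)
        competence = task.practise()                      # hypothetical: returns a score in [0, 1]
        self.history[task].append(competence)
        return task, competence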

=== Talk 2 ===

Mustafa Doga Dogan, Postdoctoral Researcher at Boğaziçi University
https://www.dogadogan.com

Title:
Ubiquitous Metadata: Embedded Object Markers for HCI and HRI

Abstract:
In this talk, I will present novel embedded markers that integrate into objects for identification and interaction in extended reality (AR/VR/XR) environments. These markers enhance functionality, security, and aesthetics using digital fabrication and computational imaging, enabling interactive experiences in product design, manufacturing, and more by linking the physical and digital worlds. In HCI and HRI, these markers improve machines’ and robots’ ability to understand and interact with their environments by providing precise, real-time data. This leads to advances in automation, smart manufacturing, and personalized user experiences, bridging the gap between the physical and digital realms and opening new possibilities for intuitive and intelligent interaction systems.

Bio:
Doğa Doğan, a recent PhD graduate from MIT CSAIL, focuses on embedding machine-readable tags into everyday items. His research, recognized at CHI, UIST, and ICRA, combines HCI, digital fabrication, and augmented reality for novel object interactions. He has held positions at Google, Adobe, UCLA, TU Delft, and the Max Planck Institute for Intelligent Systems. He currently works on the EU Horizon project INVERSE with Prof. Emre Uğur and is a part-time lecturer at Boğaziçi University, Istanbul. Website: https://www.dogadogan.com

=== Talk 3 ===

Sara Incao, Postdoctoral Fellow at the Italian Institute of Technology
https://www.iit.it/it/people-details/-/people/sara-incao

Title:
Insights on the Self from a Cognitive Robotics Perspective

Abstract:
Despite the high level of performance robots are able to achieve in task-specific contexts, they are still far from achieving intuitiveness in social interactions and versatility in applying their skills across different contexts. Cognitive robotics tries to address these issues by providing robots with cognitive architectures inspired by the functioning of human cognition, with the aim of improving the autonomy of robotic platforms. Furthermore, the use of humanoid robots extends this objective toward the perspective of embodied cognition. The notion of self, conceived as an operational definition to describe the subjective character of human experience, is a suitable concept for addressing the problem of autonomy in robots. Indeed, it is precisely the unity and consistency of the self that constitute a reference point for generalizing previous experience and for engaging in meaningful and effective contact with the world.