In the Autonomous Systems Lab we pursue research on cognitive robot motor skill learning and control grounded in human motion understanding. The research has two main focuses: autonomous learning from observations in daily life and cognitive robot control.

Autonomous Learning from Observation

While humans possess the remarkable ability to perform complex tasks and to learn them simply by observing others, robot agents still face limitations in this area. Our lab's goal is to enable robots not only to learn motion skills but also to comprehend the structure of tasks. We believe that helping robots understand the overarching goal or intention of a task can greatly enhance their performance, a significant step towards more intuitive human-robot interaction and everyday home robots. Our research focuses on learning task structures, the pre-conditions and effects of skills, motion skills, and motion retargeting, with the aim of building more capable and autonomous robotic systems.
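To give a concrete sense of what "pre-conditions and effects of skills" can mean, the minimal Python sketch below models a skill symbolically, in the spirit of classical STRIPS-style planning. The predicates and the Skill class are illustrative assumptions for this page, not the lab's actual representation or learning method.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A symbolic motor skill described by what must hold before
    execution (preconditions) and what changes afterwards (effects)."""
    name: str
    preconditions: frozenset
    add_effects: frozenset = field(default_factory=frozenset)
    del_effects: frozenset = field(default_factory=frozenset)

    def applicable(self, state: set) -> bool:
        # The skill can run only if all preconditions hold in the state.
        return self.preconditions <= state

    def apply(self, state: set) -> set:
        # Effects update the symbolic state: remove deletions, add additions.
        return (state - self.del_effects) | self.add_effects

# Hypothetical skill, as it might be extracted from observing a pick-up task.
pick_cup = Skill(
    name="pick_cup",
    preconditions=frozenset({"hand_empty", "cup_on_table"}),
    add_effects=frozenset({"holding_cup"}),
    del_effects=frozenset({"hand_empty", "cup_on_table"}),
)

state = {"hand_empty", "cup_on_table"}
if pick_cup.applicable(state):
    state = pick_cup.apply(state)   # -> {"holding_cup"}
```

Chaining such skills, each with learned pre-conditions and effects, is one common way to recover the structure of a demonstrated task from individual observed motions.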

Cognitive Robot Control

One of the most remarkable abilities humans possess is the capacity to predict the thoughts of others. As we develop, we acquire this skill unconsciously and use it nearly effortlessly in our day-to-day lives. With just a brief glance, we can discern an individual's gaze direction, actions, intentions, goals, and emotions. Failing to predict the objectives of others can lead to breakdowns in collaboration and feelings of unease, especially in human-human interactions. Our lab's objective is to translate these sophisticated cognitive processes into robotic agents. We believe that such agents would be more comfortable and natural to work with, as well as better suited for everyday situations. Our research focuses on human-robot interaction, Theory of Mind implementations in robotics, human motion prediction, and human action prediction.

Human-Robot Interaction

Our lab envisions robots that interact with humans in everyday tasks. However, to facilitate effective Human-Robot Interaction, robots need to understand both the surrounding environment and the humans within it. Our lab focuses on understanding the humans around the robot from multiple perspectives and granularity levels: from human motions to actions and intentions. One of our main goals is to estimate human intention, which is crucial for robots to proactively provide the necessary support in a timely manner. We believe that a robot requires contextual awareness, i.e. the ability to perceive and interpret the environment, for better intention estimation, as we show in works such as "Human-Object Interaction Anticipation for Assistive roBOTs" (CoRL 2023).
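As a toy illustration of intention estimation (not the model from the CoRL 2023 paper), the sketch below maintains a belief over a small set of candidate human goals and updates it with each observed action. The goals, observations, and likelihood table are invented for illustration only.

```python
# Toy Bayesian goal inference: maintain a belief over candidate human
# intentions and update it as actions are observed. The goals and the
# likelihood table below are illustrative assumptions, not real data.
goals = ["drink_water", "clean_table"]
belief = {g: 1.0 / len(goals) for g in goals}          # uniform prior

# P(observed action | goal), assumed values for the example.
likelihood = {
    ("reach_cup", "drink_water"): 0.8,
    ("reach_cup", "clean_table"): 0.2,
    ("grab_sponge", "drink_water"): 0.1,
    ("grab_sponge", "clean_table"): 0.9,
}

def update(belief, action):
    # Bayes rule: posterior proportional to prior times likelihood.
    posterior = {g: belief[g] * likelihood[(action, g)] for g in belief}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

belief = update(belief, "reach_cup")
print(belief)   # the belief shifts towards "drink_water"
```

Context (the objects present, the scene, prior interactions) can be folded into the prior or the likelihoods, which is one simple way to see why contextual awareness helps intention estimation.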