Research

We work on the theoretical challenges and practical applications of socially-aware systems, i.e., machines that not only perceive human behavior but also reason with social intelligence, in the context of transportation problems and smart spaces.

We envision a future where intelligent machines are ubiquitous, where self-driving cars, delivery robots, and self-moving Segways are part of everyday life. Beyond embodied agents, we will also see our living spaces – our homes, buildings, and cities – become equipped with ambient intelligence that can sense and respond to human behavior. However, to realize this future, intelligent machines need to develop social intelligence and the ability to make safe and consistent decisions in unconstrained, crowded social scenes. Self-driving vehicles must learn social etiquette in order to navigate cities like Paris or Naples. Social robots need to comply with social conventions and obey (unwritten) common-sense rules to operate effectively in crowded terminals. For instance, they need to respect personal space, yield the right of way, and “read” the behavior of others to predict their future actions.

Our research is centered on understanding and predicting human social behavior from multi-modal visual data. Our work spans four aspects of socially-aware systems:


1- Sensing:
Collecting multi-modal data at scale

RGB, Depth, Thermal, and Wireless signals

2- Perceiving:
Extracting coarse-to-fine-grained behaviors in real time

Motion trajectories, 3D poses, Collective activities, and Social interactions

3- Forecasting:
Predicting future behaviors (a minimal trajectory-forecasting sketch follows this list)

Actions, Intentions, and Critical scenarios

4- Acting:
Making decisions in real-world settings

Self-driving cars and socially-aware robots that navigate crowded social scenes
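
To make the perceive-then-forecast loop above concrete, here is a minimal sketch of the forecasting step using a constant-velocity baseline, a standard sanity check in trajectory prediction rather than one of our models; the observed track and its sampling rate are hypothetical example inputs.

```python
import numpy as np

def constant_velocity_forecast(observed: np.ndarray, horizon: int) -> np.ndarray:
    """Extrapolate a pedestrian's 2D track with its most recent velocity.

    observed: array of shape (T_obs, 2), past positions in metres.
    horizon:  number of future steps to predict.
    Returns an array of shape (horizon, 2) with the predicted positions.
    """
    velocity = observed[-1] - observed[-2]            # displacement over the last step
    steps = np.arange(1, horizon + 1).reshape(-1, 1)  # 1, 2, ..., horizon
    return observed[-1] + steps * velocity            # extrapolate linearly

# Hypothetical track from the perception stage (one position every 0.4 s).
observed = np.array([[0.0, 0.0], [0.4, 0.1], [0.8, 0.2], [1.2, 0.3]])
print(constant_velocity_forecast(observed, horizon=3))
# -> [[1.6 0.4], [2.0 0.5], [2.4 0.6]]
```

Learned forecasting models go beyond this linear extrapolation by also conditioning on the scene and on the behavior of nearby people.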

 

Ongoing projects

Crowd-Robot Interaction

Moving in an effective and socially compliant manner is an essential yet challenging task for robots operating in crowded spaces. In this project, we go beyond first-order Human-Robot Interaction and explicitly model Crowd-Robot Interaction (CRI). More details here.
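
The project page linked above describes the actual model; purely as an illustration of what pooling a whole crowd into the robot's decision can look like, the sketch below scores pairwise robot-human features with a softmax attention weight and sums them into one crowd-level feature. All names and dimensions (robot_state, human_states, w_pair, w_score) are hypothetical and not taken from the project.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def crowd_feature(robot_state, human_states, w_pair, w_score):
    """Pool pairwise robot-human features into one crowd-level feature.

    robot_state:  (d_r,) robot features (e.g., position, velocity, goal).
    human_states: (n, d_h) one row per observed human.
    w_pair:       (d_r + d_h, d_e) projection of each robot-human pair.
    w_score:      (d_e,) scoring vector used to weight each pair.
    """
    # Pair the robot with every human: (n, d_r + d_h).
    pairs = np.concatenate(
        [np.repeat(robot_state[None, :], len(human_states), axis=0), human_states],
        axis=1,
    )
    embedded = np.tanh(pairs @ w_pair)        # (n, d_e) pairwise interaction features
    attention = softmax(embedded @ w_score)   # (n,) relative importance of each human
    return attention @ embedded               # (d_e,) crowd-level summary

# Toy example: one robot, three humans, random "learned" weights.
rng = np.random.default_rng(0)
robot_state = rng.normal(size=4)
human_states = rng.normal(size=(3, 5))
w_pair = rng.normal(size=(9, 8))
w_score = rng.normal(size=8)
print(crowd_feature(robot_state, human_states, w_pair, w_score).shape)  # (8,)
```

A summary of this kind lets a downstream navigation policy condition on the whole crowd at once rather than only on the nearest person, which is the sense in which CRI goes beyond first-order Human-Robot Interaction.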