We work on the theoretical challenges and practical applications of socially-aware systems, i.e., machines that can not only perceive human behavior but also reason with social intelligence, in the context of transportation problems and smart spaces.
We envision a future where intelligent machines are ubiquitous, where self-driving cars, delivery robots, and self-moving Segways are part of everyday life. Beyond embodied agents, we will also see our living spaces – our homes, buildings, and cities – become equipped with ambient intelligence that can sense and respond to human behavior. However, to realize this future, intelligent machines need to develop social intelligence and the ability to make safe and consistent decisions in unconstrained, crowded social scenes. Self-driving vehicles must learn social etiquette in order to navigate cities like Paris or Naples. Social robots need to comply with social conventions and obey (unwritten) common-sense rules to operate effectively in crowded terminals. For instance, they need to respect personal space, yield right-of-way, and “read” the behavior of others to predict future actions.
Our research centers on understanding and predicting human social behavior from multi-modal visual data. Our work spans multiple aspects of socially-aware systems:
Collecting multi-modal data at scale
RGB, depth, thermal, and wireless signals
Extracting coarse-to-fine-grained behaviors in real time
Motion trajectories, 3D poses, collective activities, and social interactions
Predicting future behaviors
Actions, intentions, and critical scenarios
Making decisions in real-world settings
Self-driving cars and socially-aware robots that navigate crowded social scenes
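To make the trajectory-prediction step above concrete, here is a minimal sketch of a constant-velocity baseline, the standard reference point that learned predictors are compared against. This is a hypothetical illustration, not one of our models: it simply extrapolates the last observed displacement of a tracked agent over a fixed horizon.

```python
# Illustrative constant-velocity baseline for pedestrian trajectory
# prediction (a hypothetical sketch, not our learned models): given an
# observed 2D track with one (x, y) point per frame, repeat the most
# recent per-frame displacement for `horizon` future steps.

def predict_constant_velocity(track, horizon):
    """track: list of (x, y) observations; horizon: number of future steps."""
    if len(track) < 2:
        raise ValueError("need at least two observations to estimate velocity")
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0  # per-frame displacement (velocity estimate)
    return [(x1 + vx * (k + 1), y1 + vy * (k + 1)) for k in range(horizon)]

# A pedestrian walking diagonally at one unit per frame:
observed = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
future = predict_constant_velocity(observed, horizon=3)
# future == [(3.0, 3.0), (4.0, 4.0), (5.0, 5.0)]
```

Learned models improve on this baseline precisely where social intelligence matters: when agents turn, slow down, or yield to respect others' personal space, behaviors that pure extrapolation cannot capture.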