Research activities
The ISIA department comprises four main research groups, all focused on the AI-driven analysis, synthesis, recognition and mapping of signals such as speech, music, audio, images, video, gaze, face, body and language. The aim is to conduct internationally competitive research and to integrate these advances into solutions that are accessible to all.
Smart Agents
- Domain-specific language and speech models
- Information retrieval through conversations
- Affective computing
- Agent embodiment
- Multimodal intent recognition
- Grounding language
- Adaptation of collaborative behavior based on context
- Multi-agent collaboration
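Several of these topics, such as multimodal intent recognition, combine evidence from more than one signal. As a purely illustrative sketch (not the group's actual method), late fusion of per-modality intent scores can be as simple as a weighted sum; the modalities, intents, scores and weights below are all made-up assumptions:

```python
# Hypothetical late-fusion sketch for multimodal intent recognition:
# each modality (speech, gesture, gaze) scores each candidate intent,
# and a weighted sum picks the winner. Illustrative only.

def fuse_intents(modality_scores, weights):
    """Combine per-modality intent scores with a weighted sum.

    modality_scores: {modality: {intent: score}}
    weights: {modality: weight}
    Returns the intent with the highest fused score.
    """
    fused = {}
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 1.0)
        for intent, score in scores.items():
            fused[intent] = fused.get(intent, 0.0) + w * score
    return max(fused, key=fused.get)

scores = {
    "speech":  {"open_door": 0.7, "greet": 0.3},
    "gesture": {"open_door": 0.2, "greet": 0.8},
    "gaze":    {"open_door": 0.9, "greet": 0.1},
}
weights = {"speech": 0.5, "gesture": 0.2, "gaze": 0.3}
print(fuse_intents(scores, weights))  # → open_door
```

In practice such fusion is typically learned rather than hand-weighted, but the late-fusion structure (per-modality scores combined at the decision level) is the same.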
Immersive Worlds
- Instrumentation for XR
- Markerless motion capture
- Gesture recognition
- Space calibration
- 3D scanning and reconstruction
- Augmented controllers (game, music, interaction, ...)
- Eye tracking
- Computational attention for XR
- Cognitive mechanisms in XR
- Haptic feedback
- Sound spatialization
- 3D asset generation
- Character animation synthesis
- Video mapping
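To give a flavour of the signal processing behind topics like eye tracking and computational attention, here is a hypothetical, simplified dispersion-based fixation detector (in the spirit of I-DT); the thresholds and gaze samples are illustrative assumptions, not the group's actual pipeline:

```python
# Simplified dispersion-threshold fixation detection on gaze samples.
# A window of consecutive (x, y) samples counts as a fixation when the
# sum of its x-range and y-range stays below `max_dispersion`.
# Thresholds and data are illustrative assumptions.

def detect_fixations(samples, max_dispersion=0.05, min_length=3):
    """Return fixations as (start_index, end_index) pairs, end exclusive."""
    def dispersion(window):
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i = 0
    while i + min_length <= len(samples):
        j = i + min_length
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

gaze = [(0.50, 0.50), (0.51, 0.50), (0.50, 0.51), (0.51, 0.51),  # fixation
        (0.20, 0.80), (0.80, 0.20),                              # saccade
        (0.30, 0.30), (0.31, 0.30), (0.30, 0.31)]                # fixation
print(detect_fixations(gaze))  # → [(0, 4), (6, 9)]
```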
Computational Health
- Continuous patient monitoring
- Comfortable outpatient solutions
- Personalized medicine
- Accessible assistive technologies
- Data usability for clinicians
- Patient screening
- Technological integration in the clinical environment
- Reproducible biomedical research
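As a hypothetical illustration of continuous patient monitoring, a minimal rolling-window anomaly flag on a vital sign might look as follows; the window size, tolerance and heart-rate values are made-up assumptions, not clinical parameters:

```python
# Hypothetical sketch for continuous patient monitoring: flag heart-rate
# samples that deviate from a rolling-window mean by more than a
# tolerance. Window, tolerance and data are illustrative, not clinical.
from collections import deque

def monitor(samples, window=5, tolerance=15):
    """Return indices of samples deviating from the rolling mean of the
    previous `window` samples by more than `tolerance` (beats/minute)."""
    recent = deque(maxlen=window)
    alerts = []
    for i, hr in enumerate(samples):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            if abs(hr - mean) > tolerance:
                alerts.append(i)
        recent.append(hr)
    return alerts

heart_rate = [72, 74, 71, 73, 75, 74, 120, 73, 72, 74]
print(monitor(heart_rate))  # → [6]
```

Real monitoring systems of course involve validated models, artifact rejection and clinical review; the point here is only the shape of a streaming, always-on analysis.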
Wild World
- Localization: people/object detection, segmentation and tracking
- Activity tracking: environment and action recognition and detection
- Data scarcity (generative models, ...)
- Model adaptation (transfer learning, fine-tuning, liquid neural networks, ...)
- Data relevance (attention, XAI and bias)
- Data heterogeneity (multimodal data fusion)
- Lightweight AI architectures (embedded, edge)
- Interoperability in model deployment and updates
- AI resource consumption during training and inference
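As a hypothetical illustration of the localization topic (detection, segmentation and tracking), a minimal greedy IoU-based tracker matches detected boxes across frames; the box format, threshold and detections below are assumptions for the sketch, not the group's actual tracker:

```python
# Minimal greedy IoU tracker: match each detection in a new frame to the
# existing track with the highest box overlap, and start new tracks for
# unmatched detections. Boxes are (x1, y1, x2, y2); illustrative only.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, threshold=0.3):
    """Greedily assign detections to tracks by IoU.

    tracks: {track_id: last_box}; returns the updated mapping."""
    next_id = max(tracks, default=-1) + 1
    updated = {}
    unmatched = list(detections)
    for tid, box in tracks.items():
        if not unmatched:
            break
        best = max(unmatched, key=lambda d: iou(box, d))
        if iou(box, best) >= threshold:
            updated[tid] = best
            unmatched.remove(best)
    for det in unmatched:  # unmatched detections start new tracks
        updated[next_id] = det
        next_id += 1
    return updated

frame1 = {0: (10, 10, 50, 50)}
frame2_dets = [(12, 11, 52, 51), (100, 100, 140, 140)]
print(update_tracks(frame1, frame2_dets))
# → {0: (12, 11, 52, 51), 1: (100, 100, 140, 140)}
```

Production trackers use optimal assignment (e.g. Hungarian matching) and motion models rather than this greedy pass, but the detect-then-associate structure is the same.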