Thesis topic

Expressive avatars

  • Type
    Doctorate

Description

The work will focus on the precise retargeting of a user’s audiovisual expressions onto an avatar. The current state of the art suffers from limited detection accuracy and from error accumulation along the pipeline, and, to our knowledge, retargeting of the audio modality has been neglected: the few works on the subject focus solely on the visual. The thesis is divided into two objectives. On the one hand, it will focus on improving audiovisual expression detection in XR applications, with and without occlusions (i.e. with or without an HMD, for scenarios such as virtual meetings). On the other hand, it will explore flexible retargeting techniques for matching different users to different avatars as seamlessly as possible.

The aim here is to represent the user controlling the avatar as faithfully as possible and to convey their communicative intention as precisely as possible. The most intuitive way of achieving this is to automatically detect the user’s expressions and retarget them onto the avatar. This raises different problems depending on the medium: occlusions caused by head-mounted displays, limitations of human-to-avatar retargeting technologies, etc. Some work in the literature tackles these problems [1,2], but there is still plenty of room for improvement, given the limitations of expression detection systems, the accumulation of errors in the retargeting process, the lack of flexibility in retargeting methods, and the need to take multimodality into account (current systems focus mainly on facial expressions).
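To make the detect-then-retarget idea concrete, the sketch below shows one common formulation of the visual side of the pipeline: a face tracker produces blendshape activation weights, which a per-user/per-avatar linear mapping converts into the target rig’s weights. All names, dimensions, and the linear-map formulation are illustrative assumptions, not a description of a specific system from the literature.

```python
import numpy as np

# Hypothetical dimensions: a tracker emitting 52 blendshape coefficients,
# an avatar rig exposing a different, smaller set of 30.
N_SOURCE = 52
N_TARGET = 30

def retarget_expression(source_weights: np.ndarray,
                        mapping: np.ndarray) -> np.ndarray:
    """Map detected source blendshape weights onto the avatar's rig.

    source_weights: shape (N_SOURCE,), activations in [0, 1] from a face tracker.
    mapping: shape (N_TARGET, N_SOURCE) linear map, hand-authored or learned
             per user/avatar pair (one place where retargeting error creeps in).
    """
    target = mapping @ source_weights
    # Clamp to the valid activation range so the avatar rig is never over-driven.
    return np.clip(target, 0.0, 1.0)

# Toy usage: an identity-like mapping that keeps the first N_TARGET coefficients.
rng = np.random.default_rng(0)
source = rng.uniform(0.0, 1.0, size=N_SOURCE)
mapping = np.zeros((N_TARGET, N_SOURCE))
mapping[np.arange(N_TARGET), np.arange(N_TARGET)] = 1.0
avatar_weights = retarget_expression(source, mapping)
```

In practice the mapping is rarely this simple: mismatched rigs, user-specific expression styles, and tracker noise are exactly the sources of inflexibility and error accumulation the thesis aims to address, and the audio modality has no equivalent of this standard pipeline at all.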

[1] C. F. Purps, S. Janzer, and M. Wölfel, “Reconstructing Facial Expressions of HMD Users for Avatars in VR,” in ArtsIT, Interactivity and Game Creation (ArtsIT 2021), M. Wölfel, J. Bernhardt, and S. Thiel, Eds., Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 422. Springer, Cham, 2022. doi: 10.1007/978-3-030-95531-1_5
[2] J. Zhang, K. Chen, and J. Zheng, “Facial Expression Retargeting From Human to Avatar Made Easy,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 2, pp. 1274–1287, Feb. 2022. doi: 10.1109/TVCG.2020.3013876

About this topic

Related to
Service
ISIA
Promoters
Thierry Dutoit
Kevin El Haddad

Contact us for more info