The goal of this project is to study the timing and neural responses of emotion recognition in a standardized way, using an avatar. Emotion recognition has been widely studied in neuroscience; the novelty of this protocol lies in its use of a 3D avatar.
In the context of emotion recognition, using an avatar offers two main advantages:
- Compared to a human: the avatar can reproduce emotions in a standardized way (a real person cannot repeat exactly the same facial expression twice).
- Compared to a photo or a video: the avatar is not static like a photo, and it adds a third dimension to the dynamics of a video.
Analysis of the data from this experiment will allow us to answer questions such as:
- Does using a 3D avatar add value compared to 2D avatars?
- Is there an advantage to presenting emotions dynamically, compared to conventional protocols in which emotions are shown as static photographs?
- Can a 3D avatar be used for emotion-recognition rehabilitation (e.g., for autistic children)?