Author: Jennifer Zheng

EEG signal analysis for emotion recognition.

The goal of this project is to study the timing and the brain response involved in emotion recognition in a standardized way, using an avatar. Emotion recognition has been widely studied in neuroscience; the novelty of this protocol lies in the use of a 3D avatar.

In the context of emotion recognition, using an avatar offers two main advantages:

  • Compared to a human: the avatar can reproduce emotions in a standardized way (a real person cannot repeat exactly the same facial expression twice).
  • Compared to a photo or a video: the avatar is not frozen like a photo and adds a third dimension to the dynamics of a video.

The analysis of the data resulting from this experiment will allow us to answer questions such as:

  • Does using a 3D avatar add value compared to 2D avatars?
  • Is there an advantage in presenting emotions dynamically compared to conventional protocols in which emotions are shown as static photos?
  • Can we use a 3D avatar for rehabilitation of emotion recognition (e.g., for autistic children)?
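
To give an idea of how the timing of the brain response could be examined, the sketch below epochs the EEG around the moment the avatar starts expressing an emotion and extracts the peak latency of the evoked response. It is a minimal sketch using MNE-Python; the file name, trigger codes and emotion labels are hypothetical and stand in for whatever the actual acquisition setup produces.

```python
# Sketch: event-related analysis of the EEG around the onset of the avatar's
# expression, using MNE-Python. The file name, trigger codes and emotion labels
# are hypothetical placeholders for the actual acquisition setup.
import mne

# Load and band-pass filter a raw recording (assumed to be saved in FIF format)
raw = mne.io.read_raw_fif("subject01_avatar_task.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)

# Stimulus triggers marking when the avatar starts expressing each emotion
events = mne.find_events(raw)
event_id = {"joy": 1, "anger": 2, "sadness": 3}  # assumed trigger coding

# Epoch from 200 ms before to 800 ms after stimulus onset, baseline-corrected
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Average trials per emotion and report the peak latency of the evoked response,
# which is the timing information the protocol is interested in
for emotion in event_id:
    evoked = epochs[emotion].average()
    channel, latency = evoked.get_peak(ch_type="eeg")
    print(f"{emotion}: peak at {channel}, {latency * 1e3:.0f} ms after onset")
```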

Early prediction and detection of potential pharmaceutical scandals based on the analysis of social network data: application to the 2017 Levothyrox scandal on the Doctissimo forum.

The aim of this project is to analyze users' comments in order to detect possible pharmaceutical scandals. A well-known example is the Levothyrox scandal. The controversy over Levothyrox, a drug administered to patients with hypothyroidism, dates back to spring 2017, when a new formula of the drug, produced by Merck, was introduced in France. Very quickly, thousands of patients began to report side effects, and many even said that their original symptoms, and in some cases thyroid cancers, had returned.

In this project, users' comments are scraped from the Doctissimo forum and analyzed to extract features and build machine learning models that can automatically detect similar scandals in the future.
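
As an illustration of the kind of pipeline this involves, the sketch below classifies forum comments as adverse-event reports or not, using TF-IDF features and logistic regression with scikit-learn. The example comments (translated for illustration), labels and model choice are assumptions made for demonstration, not the project's actual features or models.

```python
# Illustrative sketch: flag forum comments that read like adverse-event reports.
# The tiny example corpus, the TF-IDF features and the logistic-regression model
# are assumptions for demonstration, not the project's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Since the new formula I have constant fatigue and hair loss.",   # adverse-event report
    "Where can I find the dosage instructions for this medication?",  # neutral question
]
labels = [1, 0]  # 1 = reports side effects, 0 = does not

# Word and bigram TF-IDF features feeding a linear classifier
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(comments, labels)

new_comment = "My old symptoms came back after switching to the new formula."
probability = model.predict_proba([new_comment])[0, 1]
print(f"Probability that this comment reports side effects: {probability:.2f}")
```

In an actual early-warning setting, such per-comment predictions would be aggregated over time, for example to flag an unusual rise in the weekly share of adverse-event comments about a given drug.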

Invited Talks

Upcoming Talks:

Date | Where | Speaker(s) | Title
10/11/2021 | The AI Journey | Hanan Salam | Bias in Artificial Intelligence
04/11/2021 | Huawei Seeds for the Future Program | Hanan Salam | Artificial Social Intelligence and Applications
16/10/2021 | ICCV 2021, DYAD Challenge | Hanan Salam | Personalised Models for Automatic Self-Reported Personality Recognition
06/05/2021 | ENSEA | Hanan Salam | Building socially intelligent machines: Automatic human behaviour understanding in HCI

Past Talks:

Learning Personalised Models for Automatic Self-Reported Personality Recognition

Smartphones, voice assistants, and home robots are becoming more intelligent every day to support humans in their daily routines and tasks. Achieving the acceptance of such technologies by their users and ensuring their success requires them to be socially informed, responsive, and responsible. They need to understand human behaviour and socio-emotional states and adapt themselves to their users' profiles (e.g., personality) and preferences. Motivated by this, there has been a significant effort in recognising personality from multimodal data in the last decade \cite{survey1,survey2}. However, to the best of our knowledge, the methods so far have focused on one-fits-all approaches only and performed personality recognition without taking into consideration the user's profile (e.g., gender and age). In this paper, we took a different approach and argued that the one-fits-all approach does not work sufficiently well for personality recognition, as previous research has shown that there are significant gender differences in personality traits. For example, women tend to report higher scores for extraversion, agreeableness and neuroticism compared to men. Building upon these findings, we first clustered the participants into two profiles based on their gender, namely, female and male, and then used Neural Architecture Search (NAS) to automatically design a model for each profile to recognise personality. Each network was trained with visual, textual and time-based features separately. The final prediction was obtained by aggregating the results of both video and text modalities.
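
A much-simplified sketch of the profile-specific, late-fusion idea is given below (in PyTorch): fixed multilayer perceptrons stand in for the NAS-designed architectures, and the feature dimensions, profile labels and averaging rule are illustrative assumptions rather than the paper's exact configuration.

```python
# Simplified sketch of profile-specific models with late fusion of modalities.
# Fixed MLPs stand in for the NAS-designed architectures; feature dimensions,
# profile labels and the averaging rule are illustrative assumptions.
import torch
import torch.nn as nn

class TraitRegressor(nn.Module):
    """Small MLP mapping one modality's features to the five personality traits."""
    def __init__(self, in_dim: int, hidden: int = 64, n_traits: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_traits),
        )

    def forward(self, x):
        return self.net(x)

VISUAL_DIM, TEXT_DIM = 512, 300  # assumed feature sizes
# One model per (profile, modality); in the paper the architecture of each
# profile's model is found automatically by NAS.
models = {
    profile: {"visual": TraitRegressor(VISUAL_DIM), "text": TraitRegressor(TEXT_DIM)}
    for profile in ("female", "male")
}

def predict_traits(profile: str, visual_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
    """Route a sample to its profile's models and average the modality predictions."""
    m = models[profile]
    with torch.no_grad():
        visual_pred = m["visual"](visual_feat)
        text_pred = m["text"](text_feat)
    return (visual_pred + text_pred) / 2  # simple late fusion of video and text

# Example call with random features standing in for real descriptors
scores = predict_traits("female", torch.randn(1, VISUAL_DIM), torch.randn(1, TEXT_DIM))
print(scores)
```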