Damien Dupre, Dublin City University

I am an Assistant Professor at Dublin City University's Business School; my expertise lies in multivariate time series analysis and machine learning classification of emotions 'in the wild'. While my thesis aimed to evaluate the Emotional User eXperience of innovative technologies and designs, I have since collaborated with Sensum Ltd. to analyse emotions using physiological sensors and automatic facial expression recognition. I also contributed to the development of the DynEmo database by recording and assessing dynamic, spontaneous facial expressions of emotion. More recently, I worked at the Insight Centre for Data Analytics on predicting marathon runners' pace from physiological measurements.

Disenchantment with Emotion Recognition Technologies: Implications and Future Directions

Emotions are a main driver of decision-making processes, and understanding them is essential for producing consumer insights. With the development of the IoT and machine learning, it is now possible to evaluate emotions automatically from various data streams, among which facial expressions are one of the most prominent. A rapidly growing number of tech companies provide commercial systems to infer emotions from facial expressions (software, APIs or SDKs; see Dupré, Andelic, Morrison, & McKeown, 2018). As shown in the largest benchmark to date (Dupré, Krumhuber, Küster, & McKeown, 2019), these technologies are significantly less accurate than human observers. Criticising not only the algorithms' performance but also the theory underlying these systems, well-known scientists in psychology and computer science have called for a halt to the use of these technologies in consequential decisions (e.g., in human resources management). However, some encouraging developments suggest solutions to the frailties of emotion recognition technologies. Instead of recognising a specific set of emotion categories, some systems now predict valence and arousal dimensions, which are more generalisable features. Moreover, recent research has pinpointed where the lack of accuracy arises: the prediction of emotion categories does not cope well with blended emotions, so systems need to learn from more diverse and more complex expressions. Finally, with further advances in machine learning, recognition technologies will use not only the face but also the entire body and the context in which the expression is produced to infer emotions accurately.
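
To illustrate the distinction drawn above between categorical and dimensional emotion recognition, here is a minimal sketch using synthetic data and scikit-learn. The feature vectors (standing in for facial descriptors such as action unit intensities), labels, and models are illustrative assumptions only, not the commercial systems or datasets discussed in the cited work.

```python
# Minimal sketch (synthetic data): categorical emotion classification
# versus dimensional valence/arousal regression.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.random((200, 17))  # 200 faces x 17 hypothetical facial features

# Categorical approach: predict one label from a fixed set of emotions.
categories = np.array(["anger", "happiness", "sadness", "surprise"])
y_cat = rng.choice(categories, size=200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_cat)

# Dimensional approach: predict continuous valence and arousal scores,
# which generalise beyond a predefined emotion vocabulary and can
# represent blended or ambiguous expressions.
y_dim = rng.uniform(-1, 1, size=(200, 2))  # columns: valence, arousal
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_dim)

new_face = rng.random((1, 17))
print("category:", clf.predict(new_face)[0])
print("valence, arousal:", reg.predict(new_face)[0])
```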
