Workshop program

Keynote speaker: Itır Önal Ertuğrul

Your face says it all: Automated analysis and synthesis of facial actions

The face is a powerful channel of non-verbal communication. Anatomically based facial action units, alone and in combination, can convey nearly all possible facial expressions, regulate social behavior, and communicate emotion and intention. Human-observer-based approaches to measuring facial actions are labor-intensive and qualitative, and thus not feasible for real-time applications or large datasets. For these reasons, objective, reliable, valid, and efficient automated approaches to facial action measurement are needed. Moreover, synthesizing realistic expressions can be useful for generating large and balanced facial expression databases and for training personalized networks. Recent advances in machine learning and computer vision offer a powerful way to automatically detect and synthesize facial actions. In this talk, I will present our work on novel deep-learning-based computational approaches for automated facial action detection and synthesis, and their use in two applications: realizing adaptive deep brain stimulation systems for the treatment of obsessive-compulsive disorder, and investigating infant responses to parent unresponsiveness in mother-infant interaction.

Workshop schedule

09:15 - 10:10 Itır Önal Ertuğrul Keynote: Your face says it all: Automated analysis and synthesis of facial actions
10:10 - 10:30 Fangjun Li, David Hogg, Anthony Cohn Exploring the GLIDE model for Human Action Effect Prediction
10:30 - 11:00 Coffee Break
11:00 - 11:20 Hélène Tran, Issam Falih, Xavier Goblet, Engelbert Mephu Nguifo Do Multimodal Emotion Recognition Models Tackle Ambiguity?
11:20 - 11:40 Erika Loc, Keith Curtis, George Awad, Shahzad Rajput, Ian Soboroff Development of a MultiModal Annotation Framework and Dataset for Deep Video Understanding
11:40 - 12:00 Taiga Mori, Kristiina Jokinen, Yasuharu Den Cognitive States and Types of Nods
12:00 - 12:20 Nikolai Ilinykh, Rafal Černiavski, Eva Elžbieta Sventickaitė, Viktorija Buzaitė, Simon Dobnik Examining the Effects of Language-and-Vision Data Augmentation for Generation of Descriptions of Human Faces
12:20 - 12:40 Marc Tanti, Shaun Abdilla, Adrian Muscat, Claudia Borg, Reuben Farrugia, Albert Gatt Face2Text revisited: Improved data set and baseline results