Abstract
Human emotion interpretation contributes greatly to Human-Machine Interfaces (HMI), spanning applications in health care, education, and entertainment. Affective interactions have the most influence when emotion recognition is available to both humans and computers. However, developing robust emotion recognizers is a challenging task in terms of modality, feature selection, classifier design, and database design. Most leading research uses facial features, yet verbal communication is also fundamental for sensing affective state, especially when visual information is occluded or unavailable. Recent work deploys audiovisual data in bi-modal emotion recognizers. Adding further information, e.g. gesture analysis, event/scene understanding, and speaker identification, helps increase recognition accuracy. As classification of human emotions can be considered a multi-modal pattern recognition problem, in this paper we propose the schematics of a multi-dimensional system for automatic human emotion recognition.
Original language | English |
---|---|
Title of host publication | Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016 |
Editors | Yaxin Bi, Supriya Kapoor, Rahul Bhatia |
Place of Publication | Cham |
Publisher | Springer |
Pages | 922-931 |
Number of pages | 10 |
Volume | 1 |
ISBN (Electronic) | 9783319569949 |
ISBN (Print) | 9783319569932 |
DOIs | |
Publication status | Published - 2018 |
Externally published | Yes |
Event | SAI Intelligent Systems Conference 2016 - CentrEd at ExCeL, London, United Kingdom Duration: 21 Sep 2016 → 22 Sep 2016 http://saiconference.com/Conferences/IntelliSys2016 |
Publication series
Name | Lecture Notes in Networks and Systems |
---|---|
Publisher | Springer |
Volume | 15 |
ISSN (Print) | 2367-3370 |
ISSN (Electronic) | 2367-3389 |
Conference
Conference | SAI Intelligent Systems Conference 2016 |
---|---|
Abbreviated title | IntelliSys 2016 |
Country | United Kingdom |
City | London |
Period | 21/09/16 → 22/09/16 |
Internet address |