Human emotion interpretation contributes greatly to human-machine interfaces (HMI), spanning applications in health care, education, and entertainment. Affective interactions have the most influence when emotion recognition is available to both humans and computers. However, developing robust emotion recognizers is challenging in terms of modality, feature selection, classifier design, and database design. Most leading research uses facial features, yet verbal communication is also fundamental for sensing affective state, especially when visual information is occluded or unavailable. Recent work deploys audiovisual data in bi-modal emotion recognizers, and adding further information, e.g. gesture analysis, event/scene understanding, and speaker identification, helps increase recognition accuracy. As classification of human emotions can be considered a multi-modal pattern recognition problem, in this paper we propose the schematics of a multi-dimensional system for automatic human emotion recognition.
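The abstract does not specify how the modalities are combined, but a common strategy in multi-modal recognizers is decision-level (late) fusion, where each modality's classifier produces class posteriors that are merged by a weighted sum. The sketch below illustrates that idea only; the emotion labels, the three modalities, the probability values, and the reliability weights are all illustrative assumptions, not the system described in the paper.

```python
import numpy as np

# Hypothetical emotion classes and per-modality posteriors; all values
# below are illustrative assumptions, not results from the paper.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

face_probs    = np.array([0.60, 0.10, 0.20, 0.10])  # facial-feature classifier
speech_probs  = np.array([0.30, 0.15, 0.45, 0.10])  # speech/prosody classifier
gesture_probs = np.array([0.50, 0.05, 0.30, 0.15])  # gesture-analysis classifier

def late_fusion(prob_list, weights):
    """Weighted-sum (late) fusion of per-modality class posteriors."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                     # normalise modality weights
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused / fused.sum()                   # renormalise to a distribution

# Assumed modality reliabilities: vision weighted highest, then speech, gesture.
fused = late_fusion([face_probs, speech_probs, gesture_probs],
                    weights=[0.5, 0.3, 0.2])
print(EMOTIONS[int(np.argmax(fused))], fused)
```

One appeal of late fusion in this setting is robustness to missing modalities: if visual information is occluded, the face term can simply be dropped and the remaining weights renormalised.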
Publication status: Published - 22 Sep 2016
Event: SAI Intelligent Systems Conference 2016 - CentrEd at ExCel, London, United Kingdom
Duration: 21 Sep 2016 → 22 Sep 2016
Conference: SAI Intelligent Systems Conference 2016
Abbreviated title: IntelliSys 2016
Period: 21/09/16 → 22/09/16