Abstract
This article presents the idea of empathic stimulation, which exploits the power and potential of unconsciously conveyed attentional and emotional information to facilitate human-machine interaction. Starting from a historical review of related work presented at past ACM Multimedia conferences, we discuss the challenges that arise when unconscious human signals are exploited for empathic stimulation, such as the real-time analysis of psychological user states and the smooth adaptation of the human-machine interface based on this analysis. A classical application field that might benefit from the idea of unconscious human-computer interaction is the exploration of massive datasets.
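The sense-analyze-adapt loop outlined in the abstract can be sketched as follows. This is a minimal illustration, not the authors' system: the sensor stream is simulated, and the smoothing factor, thresholds, and presentation modes are all hypothetical choices made for the example. It shows the two challenges named above: real-time analysis of a user-state estimate, and smooth (rather than abrupt) interface adaptation driven by it.

```python
# Hypothetical sketch of empathic stimulation as a feedback loop:
# an unconsciously conveyed signal (here, a simulated arousal stream
# in 0..1) is smoothed in real time and mapped to an interface mode.
# All names, thresholds, and values are illustrative assumptions.

def smooth(readings, alpha=0.5):
    """Exponentially smooth raw readings so adaptation is gradual."""
    state = readings[0]
    out = []
    for r in readings:
        state = alpha * r + (1 - alpha) * state
        out.append(state)
    return out

def adapt_interface(arousal):
    """Map a smoothed arousal estimate (0..1) to a presentation mode."""
    if arousal > 0.7:
        return "simplify"   # user overloaded: reduce on-screen detail
    if arousal < 0.3:
        return "enrich"     # user under-stimulated: surface more data
    return "steady"         # no change needed

raw = [0.2, 0.9, 0.95, 0.9, 0.3, 0.2]        # simulated sensor stream
modes = [adapt_interface(a) for a in smooth(raw)]
print(modes)
# → ['enrich', 'steady', 'simplify', 'simplify', 'steady', 'steady']
```

Note how the spike in the raw stream changes the interface mode only after the smoothed estimate crosses the threshold, which is one simple way to realize the "smooth adaptation" requirement.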
References
- André, E. 2011. Experimental methodology in emotion-oriented computing. IEEE Perv. Comput. 10, 3, 54--57.
- Buscher, G., Dengel, A., Biedert, R., and Elst, L. V. 2012. Attentive documents: Eye tracking as implicit feedback for information retrieval and beyond. ACM Trans. Interact. Intell. Syst. 1, 2, Article 9, 30 pages.
- Cohn, J. F. and Katz, G. S. 1998. Bimodal expression of emotion by face and voice. In Proceedings of the 6th ACM International Conference on Multimedia: Face/Gesture Recognition and their Applications (MULTIMEDIA'98). ACM, New York, NY, 41--44.
- Ekman, P. 1999. Basic emotions. In Handbook of Cognition and Emotion, Wiley, 45--60.
- Gilroy, S. W., Cavazza, M., Chaignon, R., Mäkelä, S.-M., Niranen, M., André, E., Vogt, T., Urbain, J., Billinghurst, M., Seichter, H., and Benayoun, M. 2008. E-tree: Emotionally driven augmented reality art. In Proceedings of the 16th ACM International Conference on Multimedia (MM'08). ACM, New York, NY, 945--948.
- Jaimes, A., Sebe, N., and Gatica-Perez, D. 2006. Human-centered computing: A multimedia perspective. In Proceedings of the 14th Annual ACM International Conference on Multimedia (MULTIMEDIA'06). ACM, New York, NY, 855--864.
- Kapoor, A. and Picard, R. W. 2005. Multimodal affect recognition in learning environments. In Proceedings of the 13th Annual ACM International Conference on Multimedia (MULTIMEDIA'05). ACM, New York, NY, 677--682.
- Lisetti, C. L. and Nasoz, F. 2002. MAUI: A multimodal affective user interface. In Proceedings of the 10th ACM International Conference on Multimedia (MULTIMEDIA'02). ACM, New York, NY, 161--170.
- Mower, E., Mataric, M. J., and Narayanan, S. S. 2011. A framework for automatic human emotion classification using emotion profiles. IEEE Trans. Audio Speech Lang. Process. 19, 5, 1057--1070.
- Nakatsu, R. 1998. Nonverbal information recognition and its application to communications. In Proceedings of the 6th ACM International Conference on Multimedia: Face/Gesture Recognition and their Applications (MULTIMEDIA'98). ACM, New York, NY, 2--9.
- Pantic, M., Sebe, N., Cohn, J. F., and Huang, T. 2005. Affective multimodal human-computer interaction. In Proceedings of the 13th Annual ACM International Conference on Multimedia (MULTIMEDIA'05). ACM, New York, NY, 669--676.
- Pentland, A. 2005. Socially aware media. In Proceedings of the 13th Annual ACM International Conference on Multimedia (MULTIMEDIA'05). ACM, New York, NY, 690--695.
- Stiefelhagen, R., Yang, J., and Waibel, A. 1999. Modeling focus of attention for meeting indexing. In Proceedings of the 7th ACM International Conference on Multimedia (Part 1) (MULTIMEDIA'99). ACM, New York, NY, 3--10.
- Vinciarelli, A., Pantic, M., Bourlard, H., and Pentland, A. 2008. Social signal processing: State-of-the-art and future perspectives of an emerging domain. In Proceedings of the 16th ACM International Conference on Multimedia (MM'08). ACM, New York, NY, 1061--1070.
- Vogt, T. and André, E. 2005. Comparing feature sets for acted and spontaneous speech in view of automatic emotion recognition. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME'05). IEEE, Los Alamitos, CA, 474--477.
- Wagner, J., Lingenfelser, F., André, E., Mazzei, D., Tognetti, A., Lanatà, A., De Rossi, D., Betella, A., Zucca, R., Omedas, P., and Verschure, P. F. M. J. 2013. A sensing architecture for empathetic data systems. In Proceedings of the 4th Augmented Human International Conference. ACM, New York, NY, 96--99.
Exploiting unconscious user signals in multimodal human-computer interaction