The main objective of the Action is to develop an advanced acoustical, perceptual, and psychological analysis of verbal and non-verbal communication signals originating in spontaneous face-to-face interaction, in order to identify algorithms and automatic procedures capable of recognizing human emotional states. Several key aspects will be considered, such as integrating the developed algorithms and procedures into telecommunication applications and into the recognition of emotional states, gestures, speech, and facial expressions, in anticipation of intelligent avatars and interactive dialogue systems that could be exploited to improve user access to future telecommunication services.
The expected results of the Action will be threefold:
- It will contribute to the establishment of quantitative and qualitative features describing both verbal and non-verbal modalities.
- It will advance technological support for the development of improved multimodal systems, i.e. systems exploiting signals from several modalities.
- It will contribute to new theories to clarify the role of verbal and non-verbal modalities in communication, and their exploitation in telecommunication services.