Speech Emotion Recognition: Two Decades in a Nutshell, Benchmarks, and Ongoing Trends

Communications of the ACM, May 2018, Vol. 61 No. 5, Pages 90-99
By Björn W. Schuller

Communication with computing machinery has become increasingly ‘chatty’ these days: Alexa, Cortana, Siri, and many more dialogue systems have hit the consumer market on a broader basis than ever, but do any of them truly notice our emotions and react to them as a human conversational partner would? In fact, the discipline of automatically recognizing human emotion and affective states from speech, usually referred to as Speech Emotion Recognition or SER for short, has by now surpassed the “age of majority,” celebrating the 22nd anniversary of the seminal work of Dellaert et al. in 1996, arguably the first research paper on the topic. However, the idea has existed even longer, as the first patent dates back to the late 1970s.

Previously, a series of studies rooted in psychology rather than in computer science investigated the role of acoustics in human emotion. Blanton, for example, wrote that “the effect of emotions upon the voice is recognized by all people. Even the most primitive can recognize the tones of love and fear and anger; and this knowledge is shared by the animals. The dog, the horse, and many other animals can understand the meaning of the human voice. The language of the tones is the oldest and most universal of all our means of communication.” It appears the time has come for computing machinery to understand it as well. This holds true for the entire field of affective computing: Picard’s field-coining book of the same name appeared around the same time as the first SER work, describing the broader idea of lending machines emotional intelligence, including the ability to recognize human emotion and to synthesize emotion and emotional behavior.
