Human-robot interactions take step forward with 'emotional' chatbot

An “emotional chatting machine” has been developed by scientists, signalling the approach of an era in which human-robot interactions are seamless and go beyond the purely functional.

The chatbot, developed by a Chinese team, is seen as a significant step towards the goal of developing emotionally sophisticated robots.

The ECM, as it is known for short, was able to produce factually coherent answers whilst also imbuing its conversation with emotions such as happiness, sadness or disgust.

Prof Björn Schuller, a computer scientist at Imperial College London who was not involved in the latest advance, described the work as “an important step” towards personal assistants that could read the emotional undercurrent of a conversation and respond with something akin to empathy.

“This will be the next generation of intelligence to be met in daily experience, sooner rather than later,” he said.

The paper found that 61% of humans who tested the machine favoured the emotional versions over the neutral chatbot. Similar results have been found in so-called “Wizard of Oz” studies, in which a human typing responses masquerades as advanced AI.

“It is not a question whether they are desirable – they clearly are – but in which applications they make sense and where they don’t,” said Schuller.

Minlie Huang, a computer scientist at Tsinghua University in Beijing and a co-author of the paper, said: “We’re still far away from a machine that can fully understand the user’s emotion. This is just the first attempt at this problem.”

Huang and colleagues started by creating an “emotion classifying” algorithm that learned to detect emotion from 23,000 posts taken from the Chinese social media site Weibo. The posts had been manually classified by humans as sad, happy and so on.

The emotion classifier was then used to tag millions of social media interactions according to emotional content. This huge dataset served as a training ground for the chatbot to learn both how to answer questions and how to express emotion.
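In outline, that two-stage pipeline might look something like the sketch below. This is purely illustrative, not the ECM’s actual code: a simple bag-of-words classifier stands in for the neural emotion model the researchers describe, and the toy examples are invented.

```python
# Illustrative sketch of the two-stage pipeline described above; NOT the ECM's
# actual code. A simple bag-of-words classifier stands in for the neural
# emotion model, and the toy labelled examples below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: train an emotion classifier on manually labelled posts
# (the real system used roughly 23,000 hand-labelled Weibo posts).
labelled_posts = [
    ("Best news I have had all year", "happy"),
    ("I miss my family so much today", "sad"),
    ("The traffic this morning was unbearable", "angry"),
    ("That restaurant was filthy, never again", "disgusted"),
]
texts, labels = zip(*labelled_posts)
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

# Stage 2: use the trained classifier to tag a much larger unlabelled corpus of
# post/reply pairs, producing weakly labelled training data for the chatbot.
def tag_corpus(pairs):
    """Attach a predicted emotion label to each (post, reply) pair."""
    return [(post, reply, classifier.predict([reply])[0]) for post, reply in pairs]

print(tag_corpus([("Worst day ever.", "Sometimes life just sucks!")]))
```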

The resulting program could be switched into five possible modes – happy, sad, angry, disgusted, liking – depending on the user’s preference. In one example conversation a user typed in: “Worst day ever. I arrived late because of the traffic.”

In neutral mode, the chatbot droned: “You were late”. Alternative responses were: “Sometimes life just sucks!” (disgust mode), “I am always here to support you” (liking) or “Keep smiling! Things will get better” (happy – or, some might say, annoyingly chipper).
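In spirit, the mode switch is an extra input supplied alongside the user’s message. The toy sketch below only mimics that interface, using the canned replies from the example above; the real ECM generates its responses with a trained model conditioned on the chosen emotion category rather than a lookup table.

```python
# Toy illustration of the mode switch as an extra input alongside the user's
# message. The canned replies come from the article's example; a real system
# would condition a learned response generator on `mode` instead.
CANNED_REPLIES = {
    "neutral":   "You were late",
    "disgusted": "Sometimes life just sucks!",
    "liking":    "I am always here to support you",
    "happy":     "Keep smiling! Things will get better",
}

def respond(message: str, mode: str = "neutral") -> str:
    """Return a reply whose emotional colouring depends on the chosen mode."""
    return CANNED_REPLIES.get(mode, CANNED_REPLIES["neutral"])

print(respond("Worst day ever. I arrived late because of the traffic.", mode="happy"))
```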

In the future, the team predict the software could also learn the appropriate emotion to express at a given time. “It could be mostly empathic,” said Huang, adding that a challenge would be to avoid the chatbot reinforcing negative feelings such as rage.

Until recently chatbots were widely regarded as a sideshow to more serious attempts at tackling machine intelligence. A chatbot known as Eugene Goostman managed to convince some judges they were talking to a human – but only by posing as a 13-year-old Ukrainian boy with a limited grasp of English. Microsoft’s disastrous chatbot Tay was supposed to learn to chat from Twitter interactions, but was terminated after becoming a genocide-supporting Nazi less than 24 hours after being let loose on the internet.

The latest study shows that chatbots, driven by a machine learning approach, are starting to make significant headway. Sandra Wachter, a computer scientist at the Oxford Internet Institute, said that in future such algorithms are likely to be personalised. “Some of us prefer a tough-love pep talk, others prefer someone to rant with,” she said. “Humans often struggle with appropriate responses because of the complexity of emotions, so building technologies that could decipher accurately our ‘emotional code’ would be very impressive.”

As the stilted computer interactions of today are replaced by something approaching friendly chit-chat, new risks could be encountered.

One concern is the potential for technology designed to seduce the user into sharing sensitive personal data. “It could be that children share insights with their ‘artificial friends’ and this data might be stored,” said Wachter. “What if we were to find out that people are more likely to buy more products when they are angry, sad, or bored? The ability to detect these emotions and successfully manipulate them could be a very interesting tool for companies.”

There is also the potential for users to become emotionally dependent on, or even romantically involved with, their computers.

“However, there is also a huge potential for good, such as existing software to teach children on the autism spectrum [about] emotional and social interaction,” said Schuller. “One has to carefully balance benefits and risks and ensure the best exploitation.”