Computing scientists at the University of Alberta are taking the first steps toward chatbots that can express and respond to emotion during a conversation, including artificially intelligent companions that could help relieve loneliness for seniors.
"Chatbots like Siri or Alexa are primarily used to look up information or do a task for you, answering questions in the shortest time possible," said computing scientist Osmar Zaïane, co-author of the study and scientific director of the .
"We envision a device that's emotionally intelligent, where an elderly person can say 'I'm tired' or 'It's beautiful outside,' or tell a story about their day, and receive a response that carries on the conversation and keeps them engaged."
The team's model was able to express responses that matched requested emotions in most cases, though Zaïane noted some emotions, like surprise and love, were easier to express than others, and this is just one of several steps toward turning the vision of a digital companion into a reality.
"In this study, we coached the program by telling it which emotion to express in its response. Our next study will focus on having the program independently decide which emotion to express, depending on the person it's talking to," said Zaïane.
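The paper's architecture is not detailed here, but "telling the program which emotion to express" is commonly done by conditioning the generator on an emotion label, for example by prepending a control token to its input. The sketch below illustrates that idea; the token format and emotion list are illustrative assumptions, not the study's actual interface.

```python
# Hypothetical sketch: emotion-conditioned generation via a control token.
# The requested emotion is prepended to the user's utterance so a
# sequence-to-sequence model can learn to produce a reply in that emotion.

EMOTIONS = {"happiness", "sadness", "surprise", "love", "anger", "disgust"}

def build_model_input(utterance: str, target_emotion: str) -> str:
    """Prepend an emotion control token telling the generator which
    emotion its reply should carry."""
    if target_emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion: {target_emotion}")
    return f"<{target_emotion}> {utterance}"

print(build_model_input("I'm tired", "happiness"))
# -> <happiness> I'm tired
```

At inference time the caller (here, the researchers "coaching" the system) chooses the token; the follow-up work described above would instead have the system pick the token itself.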
An emotional response
Health experts agree loneliness poses a concern for health and quality of life among seniors, whose numbers are growing in Canada.
"Loneliness leads to boredom and depression, which causes an overall deterioration in health," explained Zaïane. "Studies show that companionship, whether from a cat, a dog or other people, helps tremendously. The advantage for caregivers of a digital companion like this is it can also collect information on the emotional state of the person, noting if they are frequently feeling sad, for example."
But developing AI capable of understanding when humans are expressing emotion and responding in an appropriate way is no small challenge, Zaïane noted.
"When an elderly person tells you something that's sad, it's important to respond with empathy," said Zaïane. "That requires that the device first understand the emotion being expressed. We can do that by converting the speech to text and looking at the words that are used.
"In this study, we looked at the next step: having the program express emotions, like surprise, sadness or happiness, in its response."
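The first step Zaïane describes, recognizing emotion by "looking at the words that are used," can be sketched as a simple lexicon lookup over the transcribed text. Real systems use trained classifiers; the keyword lists and labels below are illustrative assumptions, not the study's lexicon.

```python
# Hypothetical sketch: detecting the speaker's emotion from transcribed
# speech with a small keyword lexicon. Each emotion maps to words that
# tend to signal it; the utterance is assigned the best-matching emotion.

EMOTION_LEXICON = {
    "sadness": {"sad", "lonely", "tired", "miss"},
    "happiness": {"beautiful", "great", "wonderful", "happy"},
    "surprise": {"wow", "unexpected", "suddenly"},
}

def detect_emotion(transcript: str) -> str:
    """Return the emotion whose keywords overlap the utterance the most,
    or 'neutral' when no keyword matches."""
    words = set(transcript.lower().split())
    best = max(EMOTION_LEXICON, key=lambda e: len(words & EMOTION_LEXICON[e]))
    return best if words & EMOTION_LEXICON[best] else "neutral"

print(detect_emotion("It's beautiful outside"))  # -> happiness
print(detect_emotion("I'm tired"))               # -> sadness
```

A companion device would then pass the detected emotion to the response generator so the reply can be empathetic rather than purely informational.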
The study, "", was published in the proceedings of the .