Earlier this year, Microsoft dipped a toe into the artificial intelligence space with an AI-powered chatbot that it set loose on Twitter. Designed to pass for a conversational teenager, responding to queries and mimicking the casual, playful speech patterns familiar to Millennial and Gen Z users, it was supposed to be cool. Unfortunately, it wasn’t long before Tay, the “AI fam from the internet that’s got zero chill,” as Microsoft billed her, devolved into a racial-slur-spewing monster.
Of course, the curious case of Tay was somewhat of a fluke, a science experiment gone rogue, hijacked by internet trolls bent on exploiting the software that ran her. Nevertheless, Microsoft’s negative run with Tay highlights an interesting problem facing chatbot developers as well as those who will adopt artificial intelligence technologies for customer service and marketing purposes: How do you make sure your AI chatbot not only stays on the rails, but also operates in a manner that’s sensitive to your customers’ needs?
That’s the problem Fraser Kelton hopes to solve with Koko, the MIT-born machine-learning startup he co-founded. “We’re working toward providing empathy as a service to any voice or messaging platform,” says Kelton in an article with FastCode. “We think that’s a critical user experience for a world in which you’re conversing with computers.”
Kelton’s not wrong. In fact, it wouldn’t be too far off to say that an empathy injection from Koko is something even the most recognizable AI assistants sorely need. A study published in JAMA found that smartphone AIs like Siri, Cortana, and Google Now severely underperform in responding to queries involving physical ailments, depression, and even sexual assault. Writer Sara Wachter-Boettcher relates her own experience on her Medium blog, reporting that when she asked for help with rape, sexual assault, and sexual abuse, all she received from Siri was one of its canned, snarky remarks, telling her that sexual abuse “is not a problem.” Apple responded almost immediately by reprogramming Siri to direct users who mention sexual assault or rape to RAINN’s National Sexual Assault Hotline.
Koko’s three-person team just received its first major round of funding and hopes to head off these situations before they ever happen. The concept is that Koko’s empathy API would connect to any third-party chatbot and imbue it with the ability to recognize speech patterns or vocal cues indicative of mood, and to adjust its responses accordingly. Sharp language from a frustrated or angry user could trigger calm, patient replies from the chatbot, so as not to incite further emotion, while languid or playful language might be met with more creative or humorous responses.
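To make the idea concrete, here is a minimal sketch of what such an “empathy layer” might look like sitting between a user’s message and a chatbot’s canned reply. This is purely illustrative: the function names, cue lists, and mood labels are my own assumptions, not Koko’s actual API, and a real system would use trained sentiment models rather than keyword matching.

```python
# Hypothetical empathy layer: classify the user's mood from simple textual
# cues, then adjust the chatbot's base reply to match. Illustrative only --
# these names and word lists are assumptions, not Koko's real interface.

FRUSTRATION_CUES = {"useless", "terrible", "angry", "ridiculous", "worst"}
PLAYFUL_CUES = {"lol", "haha", "hey", "fun", ":)"}

def detect_mood(message: str) -> str:
    """Return a crude mood label based on keyword overlap."""
    words = set(message.lower().split())
    if words & FRUSTRATION_CUES:
        return "frustrated"
    if words & PLAYFUL_CUES:
        return "playful"
    return "neutral"

def adjust_response(base_reply: str, mood: str) -> str:
    """Wrap the bot's base reply in a tone appropriate to the mood."""
    if mood == "frustrated":
        return "I'm sorry for the trouble. " + base_reply
    if mood == "playful":
        return base_reply + " Happy to help!"
    return base_reply

# Example: a frustrated message softens the same factual reply.
mood = detect_mood("This is the worst service ever")
print(adjust_response("Your order ships Monday.", mood))
# -> I'm sorry for the trouble. Your order ships Monday.
```

The point of the sketch is the separation of concerns: the chatbot’s core logic produces the same factual answer either way, and the empathy layer decides only how that answer is delivered.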
Many see this as a critical endeavor, a stepping stone toward customer-service AI that may someday be indistinguishable from a living, breathing human being. I’m definitely in this camp. AI and machine learning are getting more sophisticated at a rapid clip, and the implications for business, customer service, process simplification, and our personal lives are exciting as hell.
Not everyone is crazy about chatbots, though. Teckst’s Matt Tumbleson expressed his skepticism in an article on VentureBeat, saying that though the buzz around chatbots is growing, his team speaks “on a daily basis with customer service leaders from Fortune 500s who believe chatbots add more problems than they solve.”
Tumbleson argues that conversation trees and the nuances of vocal communication are simply too complex for bots to replace customer service agents completely, though he does believe they can act as great supplements. Automated, rote tasks, such as sending a blanket update to users or answering the same generic question, are a perfect fit for bots, in his opinion, while the truly human work of connecting with another person on an emotional level is best left to… well, humans.
While Tumbleson is right about the current state of chatbot technology—it’s not yet where it needs to be—it’s hard to say whether he’s right about bots being able to replace humans in conversation. Koko aims to remedy what Tumbleson claims is holding back customer service bots from full serviceability, and, if successful, will bring us one step closer to a world where software is indistinguishable from personality.
One thing is for sure: as AI software becomes more affordable to produce, we’ll see more and more Siris, Cortanas, and Alexas. Whether they replace customer service agents or supplement them, we certainly won’t be interacting with chatbots any less in the future. The least we can do is teach them to respond with empathy and manners.
A version of this was first posted on Futurum.xyz