Strategic Insights and Clickworthy Content Development

Month: July 2018

AI Challenge: Achieving Artificial Empathy

Businesses of all kinds are investing in AI to improve operations and customer experience. However, as the average person experiences on a daily basis, interacting with machines can be frustrating when they’re incapable of understanding emotions.

For one thing, humans don’t always communicate clearly with machines, and vice versa. The inefficiencies caused by such miscommunications tend to frustrate end users. Even more unsettling in such scenarios is the failure of the system to recognize emotion and adapt.

To facilitate more effective human-to-machine interactions, artificial intelligence systems need to become more human-like, and to do that, they need the ability to understand emotional states and act accordingly.

A Spectrum of Artificial Emotional Intelligence

Merriam-Webster’s primary definition of empathy is:

“The action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts and experience of another of either the past or present without having the feelings, thoughts and experience fully communicated in an objectively explicit manner; also: the capacity for this.”

To achieve artificial empathy, according to this definition, a machine would have to be capable of experiencing emotion. Before machines can do that, they must first be able to recognize emotion and comprehend it.

Non-profit research institute SRI International and others have succeeded with the recognition aspect, but understanding emotion is more difficult. For one thing, individual humans tend to interpret and experience emotions differently.

“We don’t understand all that much about emotions to begin with, and we’re very far from having computers that really understand that. I think we’re even farther away from achieving artificial empathy,” said Bill Mark, president of Information and Computing Sciences at SRI International, whose AI team invented Siri. “Some people cry when they’re happy, a lot of people smile when they’re frustrated. So, very simplistic approaches, like thinking that if somebody is smiling they’re happy, are not going to work.”

Emotional recognition is an easier problem to solve than emotional empathy because, given a huge volume of labeled data, machine learning systems can learn to recognize patterns that are associated with a particular emotion. The patterns of various emotions can be gleaned from speech (specifically, word usage in context, voice inflection, etc.), as well as body language, expressions and gestures, again with an emphasis on context. Like humans, the more sensory input a machine has, the more accurately it can interpret emotion.
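
To make that concrete, here is a minimal sketch of emotion recognition as supervised pattern recognition, limited to a single modality (word usage) and written in Python. The tiny dataset, the labels and the model choice are purely illustrative, not a depiction of SRI’s or any vendor’s actual system.

    # A toy emotion recognizer: learn word-usage patterns associated
    # with labeled emotions. All examples and labels are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    utterances = [
        "This is wonderful, thank you so much!",
        "I've asked three times and nothing works.",
        "I guess that's fine, whatever.",
        "Why does this keep happening to me?",
    ]
    labels = ["happy", "frustrated", "indifferent", "frustrated"]

    # Word usage in context becomes the feature space; the classifier
    # learns which patterns co-occur with each labeled emotion.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(utterances, labels)

    print(model.predict(["Nothing works and I am out of patience."]))

A production system would fuse several such signals (voice inflection, expression, gesture), which is exactly the “more sensory input” point above.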

Recognition is not the same as understanding, however. For example, computer vision systems can recognize cats or dogs based on labeled data, but they don’t understand the behavioral characteristics of cats or dogs, that the animals can be pets, or that people tend to love or hate them.

Similarly, understanding is not empathy. For example, among three people, one may be angry, and the other two may understand that anger without being empathetic: the second person is dispassionate about it, while the third finds it humorous.

Recently, Amazon Alexa startled some users by bursting into laughter for no apparent reason. It turned out that the system had heard “Alexa, laugh” when the user had said no such thing. Now imagine a system laughing at a chronically ill, depressed, anxious, or suicidal person who is using it as a therapeutic aid.

“Siri and systems like Siri are very good at single-shot interactions. You ask for something and it responds,” said Mark. “For banking, shopping or healthcare, you’re going to need an extended conversation, but you won’t be able to state everything you want in one utterance so you’re really in a joint problem-solving situation with the system. Some level of emotional recognition and the ability to act on that recognition is required for that kind of dialogue.”

Personalization versus Generalization

Understanding the emotions of a single individual is difficult enough, because not everyone expresses or interprets emotions in the same way. However, like humans, machines will best understand a person with whom they have extensive experience.

“If there has been continuous contact between a person and a virtual assistant, the virtual assistant can build a much better model,” said Mark. “Is it possible to generalize at all? I think the answer to that is, ‘yes,’ but it’s limited.”
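
As a rough illustration of why continuous contact helps, the sketch below scores a user’s emotional signal relative to that user’s own running baseline. The class, the raw “arousal” score, and the numbers are hypothetical, not drawn from any system Mark describes.

    # Per-user calibration: the same raw score means different things
    # for an expressive user and a reserved one. Purely illustrative.
    from collections import defaultdict

    class PerUserEmotionModel:
        """Interprets raw emotion scores against each user's own baseline."""

        def __init__(self):
            self.history = defaultdict(list)  # user_id -> raw scores seen so far

        def observe(self, user_id: str, raw_score: float) -> float:
            """Record a raw score; return its deviation from the user's baseline."""
            scores = self.history[user_id]
            baseline = sum(scores) / len(scores) if scores else raw_score
            scores.append(raw_score)
            return raw_score - baseline

    model = PerUserEmotionModel()
    for score in (0.8, 0.7, 0.9):        # an expressive user's ordinary chatter
        model.observe("user_a", score)
    print(model.observe("user_a", 0.9))  # small deviation: probably nothing unusual
    print(model.observe("user_b", 0.9))  # first contact: no baseline, no signal

For a brand-new user there is no baseline to judge against, which is the limit on generalization that Mark notes.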

Generalizing across people is more difficult, given the range of individual differences and everything that drives them, from nature and nurture to culture.

Recognizing and understanding emotion are both matters of pattern recognition, for humans and machines alike. According to Keith Strier, Advisory Global and Americas AI leader at professional services firm EY, proofs of concept are now underway in the retail industry to personalize in-store shopping experiences.

“We’re going to see this new layer of machine learning, computer vision and other tools applied to reading humans and their emotions,” said Strier. “[That information] will be used to calibrate interactions with them.”

In the entertainment industry, Strier foresees companies monitoring the emotional reactions of theater audiences so that directing and acting methods, as well as special effects and music, can deliver more impactful experiences that are scarier, funnier or more dramatic.

“To me it’s all the same thing: math,” said Strier. “It’s really about the specificity of math and what you do with it. You’re going to see a lot of research papers and POCs coming out in the next year.”

Personalization Will Get More Personal

Marketers have been trying to personalize experiences using demographics, personas and other means to improve customer loyalty as well as increase engagement and share of wallet. However, as more digital tools have become available, such as GPS and software usage analytics, marketers have been attempting to understand context so they can improve the economics and impact of campaigns.

“When you add [emotional intelligence], essentially you can personalize not just based on who I am and what my profile says, but my emotional state,” said Strier. “That’s really powerful because you might change the nature of the interaction, by changing what you say or by changing your offer completely based on how I feel right now.”
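
A hypothetical sketch of what that could look like: the familiar profile-based offer logic, with detected emotional state added as one more input. The offer names and emotional states are invented for illustration.

    # Emotion-aware personalization: same profile, different interaction
    # depending on how the customer appears to feel. Purely illustrative.
    def choose_offer(profile: dict, emotional_state: str) -> str:
        base_offer = "standard_upgrade" if profile.get("loyal") else "intro_discount"
        if emotional_state == "frustrated":
            # Don't upsell an unhappy customer; change the interaction instead.
            return "priority_support"
        if emotional_state == "enthusiastic":
            return "premium_" + base_offer
        return base_offer

    print(choose_offer({"loyal": True}, "frustrated"))     # -> priority_support
    print(choose_offer({"loyal": False}, "enthusiastic"))  # -> premium_intro_discount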

Artificial Emotional Intelligence Will Vary by Use Case

Neither AI nor emotions are one thing. Similarly, there is not just one use case for artificial emotional intelligence, be it emotional recognition, emotional understanding or artificial empathy.

“The actual use case matters,” said Strier. “Depending on the context, it’s going to be super powerful or maybe not good enough.”

A national bank is currently piloting a smart ATM that uses a digital avatar to read customers’ expressions and adapt its responses as it interacts with them.

“We can now read emotions in many contexts. We can interpret tone, we can triangulate body language and words and eye movements and all sorts of proxies for emotional state. And we can learn over time whether someone is feeling this or feeling that. So now the real question is what do we do with that?” said Strier. “Artificial empathy changes the art of the possible, but I don’t think the world quite knows what to do with it yet. I think the purpose question is probably going to be a big part of what’s going to occupy our time.”

Bottom Line

Artificial emotional intelligence can improve the quality and outcomes of human-to-machine interactions, but it will take different forms over time, some of which will be more sophisticated and accurate than others.

Artificial empathy raises the question of whether machines are capable of experiencing emotions in the first place, which is itself a matter of debate. For now, it’s fair to say that artificial emotional intelligence and its continued development are both important and necessary to the advancement of AI.

Lisa Morgan Will Address A/IS Ethics at the University of Arizona Eller College of Management’s Annual Executive Ethics Symposium

Lisa Morgan will be speaking at this year’s University of Arizona Eller College of Management’s Annual Executive Ethics Symposium, an invitation-only event in September 2018. Her presentation will address the need for AI ethics given the current state of AI and innovation, as well as the opportunities and challenges shaping the advancement of AI ethics.

Ethical Tech: Myth or Reality?

New technologies continue to shape society, now at an accelerating rate. Decades ago, societal change lagged behind tech innovation by a decade or more. Now, change is occurring much faster, as evidenced by the impact of disrupters including Uber and Airbnb.

Central to much of the change is the data being collected, stored and analyzed for various reasons, not all of which are transparent. As the pace of technology innovation and tech-driven societal change accelerate, businesses are wise to think harder about the longer-term impacts of what they’re doing, both good and bad.

Why contemplate ethics?

Technology in all its forms is just a tool that can be used for good or evil. While businesses do not tend to think in those terms, there is some acknowledgement of what is “right” and “wrong.” Doing the right thing tends to be reflected in corporate responsibility programs designed to benefit people, animals, and the environment. Doing the wrong thing often involves irresponsible or inadvertent actions that are harmful to people, whether it’s invading their privacy or exposing their personal data.

While corporate responsibility programs in their current form are “good” on some level, ethics on a societal scale tends to be missing.

In the tech industry, for example, innovators are constantly doing things because they’re possible, without considering whether they’re ethical. A blatant recent example is the human-sheep hybrid. Closer to home in high tech are fears about AI gone awry.

Why ethics is a difficult concept

The definition of ethics is simple. According to Merriam-Webster, it is “the discipline dealing with what is good and bad and with moral duty and obligation.”

In practical application, particularly in relation to technology, “good” and “bad” coexist. Airbnb is just one example. On one hand, homeowners are able to take advantage of another income stream. However, hotels and motels now face new competition and the residents living next to or near Airbnb properties often face negative quality-of-life impacts.

According to Gartner research, organizations at the beginning stages of a digital strategy rank ethics as priority No. 7. Organizations establishing a digital strategy rank it No. 5, and organizations executing a digital strategy rank it No. 3.

“The [CIOs] who tend to be more enlightened are the ones in regulated environments, such as financial services and public sector, where trust is important,” said Frank Buytendijk, a Gartner research vice president and Gartner fellow.

Today’s organizations tend to approach ethics from a risk avoidance perspective; specifically, for regulatory compliance purposes and to avoid the consequences of operating an unethical business. On the positive side, some view ethics as a competitive differentiator or better yet, the right thing to do.

“Unfortunately, it’s regulatory compliance pressure and risk because of all the scandals you see with AI, big data [and] social media, but hey, I’ll take it,” said Buytendijk. “With big data there was a discussion about privacy, but too little, too late. We’re hopeful with robotics and the emergence of AI, as there is active discussion about the ethical use of those technologies, not only by academics, but by the engineers themselves.”

IEEE ethics group emerges

In 2016, the IEEE launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Its goal is to ensure that those involved in the design and development of autonomous and intelligent systems are educated, trained, and empowered to prioritize ethical considerations so that technologies are advanced for the benefit of humanity.

From a business perspective, the idea is to align corporate values with the values of customers.

“Ethics is the new green,” said Raja Chatila, Executive Committee Member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “People value their health so they value products that do not endanger their health. People want to buy technology that respects the values they cherish.”

However, the overarching goal is to serve society in a positive way, not just individuals. Examples of that tend to include education, health, employment and safety.

“As an industry, we could do a better job of being responsible for the technology we’re developing,” said Chatila.

At the present time, 13 committees involved in the initiative are contemplating ethics from different technological perspectives, including personal data and individual access control, ethical research and design, autonomous weapons, classical ethics in AI, and mixed reality. In December 2017, the group released “Ethically Aligned Design, Version 2,” a 266-page document available for public comment that reflects input from all 13 committees.

In addition, the initiative has proposed 11 IEEE standards, all of which have been accepted. The standards address transparency, data privacy, algorithmic bias, and more. Approximately 250 individuals are now participating in the initiative.

Society must demand ethics for its own good

Groups within society tend to react to technology innovation differently due to generational differences, cultural differences, and other factors. Generally speaking, early adopters tend to be more interested in a new technology’s capabilities than its potential negative effects. Conversely, laggards are more risk averse. Nevertheless, people in general tend to use services, apps, and websites without bothering to read the associated privacy policies. Society is not protecting itself, in other words. Instead, individuals are acquiescing, one at a time, to the collection, storage and use of data about them without understanding what they are agreeing to.

“I think the practical aspect comes down to transparency and honesty,” said Bill Franks, chief analytics officer at the International Institute for Analytics (IIA). “However, individuals should be aware of what companies are doing with their data when they sign up, because a lot of the analytics, both the data and the analysis, could be harmful to you if they got into the wrong hands and were misused.”

Right now, the societal impacts of technology tend to be recognized after the fact, rather than contemplated from the beginning. Arguably, not all impacts are necessarily foreseeable, but with the pace of technology innovation constantly accelerating, the innovators themselves need to put more thought into the positive and negative consequences of bringing their technology to market.

Meanwhile, individuals have a responsibility to themselves to become more informed than they are today.

“Until the public actually sees the need for ethics, and demands it, I just don’t know that it would ever necessarily go mainstream,” said Franks. “Why would you put a lot of time and money into following policies that add overhead to manage and maintain when your customers don’t seem to care? That’s the dilemma.”

Businesses, individuals, and groups need to put more thought into the ethics of technology for their own good and for the good of all. More disruptions are coming in the form of machine intelligence, automation, and digital transformation, all of which will impact society somehow. “How” is the question.