Strategic Insights and Clickworthy Content Development

Category: AI

Why AI is So Brilliant and So Stupid


For all the promise that artificial intelligence represents, a successful AI initiative still requires the right pieces to come together.

AI capabilities are advancing rapidly, but the results are mixed. While chatbots and digital assistants are generally improving, their output can still be laughable, perplexing and perhaps even unsettling.

Google’s recent demonstration of Duplex, its natural language technology that completes tasks over the phone, is noteworthy. Whether you love it or hate it, two things are true: it doesn’t sound like your grandfather’s AI, and the use case matters.

One of the striking characteristics of the demo, assuming it actually was a demo and not a fake, as some publications have suggested, is the use of filler language in the digital assistant’s speech, such as “um” and “uh,” that makes it sound human. Even more impressive (again, assuming the demo is real) is the fact that Duplex reasons adeptly on the fly despite the ambiguous, if not confusing, responses provided by a restaurant hostess on the other end of the line.

Of course, the use case is narrow. In the demo, Duplex is simply making a hair appointment and attempting to make a restaurant reservation. In the May 8 Google Duplex blog post introducing the technology, Yaniv Leviathan, principal engineer, and Yossi Matias, VP of engineering, explain: “One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.”

A common misconception is that there’s a general AI that works for everything. Just point it at raw data and magic happens.

“You can’t plug in an AI tool and it works [because it requires] so much manual tweaking and training. It’s very far away from being plug-and-play in terms of the human side of things,” said Jeremy Warren, CTO of Vivint Smart Home and former CTO of the U.S. Department of Justice. “The success of these systems is driven by dark arts, expertise and fundamentally on data, and these things do not travel well.”

Data availability and quality matter

AI needs training data to learn and improve. Warren said that if someone has mediocre models, processing performance, and machine learning experts, but the data is amazing, the end solution will be very good. Conversely, if they have the world’s best models, processing performance, and machine learning experts but poor data, the result will not be good.

“It’s all in the data, that’s the number one thing to understand, and the feedback loops on truth,” said Warren. “You need to know in a real way what’s working and not working to do this well.”

Daniel Morris, director of product management at real estate company Keller Williams, agrees. He and his team have created Kelle, a virtual assistant designed for Keller Williams’ real estate agents that’s available as an iPhone and Android app. Like Alexa, Kelle has been built as a platform so that skills can be added to it. For example, Kelle can check calendars and help facilitate referrals between agents.

“We’re using technology embedded in the devices, but we have to do modifications and manipulations to get things right,” said Morris. “Context and meaning are super important.”

One challenge Morris and his team run into as they add new skills and capabilities is handling long-tail queries, such as those for lead management, lead nurturing, real estate listings, and Keller Williams’ training events. Agents can also ask Kelle for the definitions of terms that are used in the real estate industry or terms that have specific meaning at Keller Williams.

Expectations aren’t always managed well

Part of the problem with technology commercialization, including the commercialization of AI, is the age-old problem of over-promising and under-delivering. Vendors solving different types of problems claim that AI is radically improving everything from drug discovery to fraud prevention, which it can, but the implementations and their results can vary considerably, even among vendors focused on the same problem.

“A lot of the people who are really doing this well have access and control over a lot of first-party data,” said Skipper Seabold, director of decision sciences at decision science advisory firm Civis Analytics. “The second thing to note is it’s a really hard problem. What you need to do to deliver a successful AI product is to deliver a system, because you’re delivering software at the end. You need a cross-functional team that’s a mix of researchers and product people.”

Data scientists are often criticized for doing work that’s too academic in nature. Researchers are paid to test the validity of ideas. However, commercial forms of AI ultimately need to deliver value that feeds the bottom line either directly, in terms of revenue, cost savings and ultimately profitability, or indirectly, such as through data collection, usage and, potentially, the sale of that information to third parties. Either way, it’s important to set end-user expectations appropriately.

“You can’t just train AI on raw data and it just works, that’s where things go wrong,” said Seabold. “In a lot of these projects you see the ability for human interaction. They give you an example of how it can work and say there’s more work to be done, including more field testing.”

Decision-making capabilities vary

Data quality affects AI decision-making. If the data is dirty, the results may be spurious. If it’s biased, that bias will likely be emphasized.

“Sometimes you get bad decisions because there are no ethics,” said Seabold. “Also, the decisions a machine makes may not be the same as a human would make. You may get biased outcomes or outcomes you can’t explain.”

Clearly, it’s important to understand the cause of the bias and correct for it.
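
To make that concrete, here is a minimal sketch, in Python with entirely synthetic data, of how an imbalanced training set skews a classifier toward the majority group, and one common correction: reweighting the underrepresented class via scikit-learn’s class_weight option. Note that reweighting addresses sampling imbalance only; bias baked into the labels themselves needs other remedies.

```python
# Minimal sketch: imbalanced training data skews a classifier,
# and class reweighting is one common correction. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 95% of training examples come from group A, only 5% from group B.
X = np.vstack([rng.normal(0, 1, (950, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 950 + [1] * 50)

# Trained naively, the model optimizes overall accuracy and under-serves B.
naive = LogisticRegression().fit(X, y)

# 'balanced' weights each class inversely to its frequency, correcting for
# sampling imbalance (it cannot fix bias baked into the labels themselves).
weighted = LogisticRegression(class_weight="balanced").fit(X, y)

probe = rng.normal(1, 1, (1000, 2))  # fresh points drawn like group B
print("naive model, B-like points recognized:   ", naive.predict(probe).mean())
print("weighted model, B-like points recognized:", weighted.predict(probe).mean())
```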

Understanding machine-rendered decisions can be difficult, if not impossible, when a black box is involved. Also, human brains and mechanical brains operate differently. An example of that was the Facebook AI Research Lab chatbots that created their own language, which the human researchers were not able to understand. Not surprisingly, the experiment was shut down.

“This idea of general AI is what captures people’s imaginations, but it’s not what’s going on,” said Seabold. “What’s going on in the industry is solving an engineering problem using calculus and algebra.”

Humans are also necessary. For example, when Vivint Smart Home wants to train a doorbell camera to recognize humans or a person wearing a delivery uniform, it hires people to review video footage and assign labels to what they see. “Data labeling is sometimes an intensely manual effort, but if you don’t do it right, then whatever problems you have in your training data will show up in your algorithms,” said Vivint’s Warren.
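
As a sketch of what that labeling pipeline looks like in code (all file names, labels and features here are hypothetical, and random vectors stand in for real video-frame embeddings), the human-assigned labels are the ground truth the model inherits, mistakes and all:

```python
# Minimal sketch of a human-labeling pipeline (all names and data are
# hypothetical). Random vectors stand in for real video-frame embeddings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS = {"person": 0, "delivery_uniform": 1, "no_person": 2}

# Human reviewers produce rows like ("frame_00412.jpg", "person").
# Any labeling mistakes here propagate directly into the trained model.
rng = np.random.default_rng(1)
human_labels = [(f"frame_{i:05d}.jpg", rng.choice(list(LABELS)))
                for i in range(500)]

X = rng.normal(size=(len(human_labels), 128))   # stand-in frame embeddings
y = np.array([LABELS[label] for _, label in human_labels])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```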

Bottom line

AI outcomes vary greatly based on a number of factors, including their scope, the data upon which they’re built, the techniques used, the expertise of the practitioners and whether expectations of the AI implementation are set appropriately. While progress is coming fast and furious, it does not always transfer well from one use case to another or from one company to another because all things, including the availability and cleanliness of data, are not equal.

AI Challenge: Achieving Artificial Empathy

Businesses of all kinds are investing in AI to improve operations and customer experience. However, as the average person experiences on a daily basis, interacting with machines can be frustrating when they’re incapable of understanding emotions.

For one thing, humans don’t always communicate clearly with machines, and vice versa. The inefficiencies caused by such miscommunications tend to frustrate end users. Even more unsettling in such scenarios is the failure of the system to recognize emotion and adapt.

To facilitate more effective human-to-machine interactions, artificial intelligence systems need to become more human-like, and to do that, they need the ability to understand emotional states and act accordingly.

A Spectrum of Artificial Emotional Intelligence

Merriam-Webster’s primary definition of empathy is:

“The action of understanding, being aware of, being sensitive to and vicariously experiencing the feelings, thoughts and experience of another of either the past or present without having the feelings, thoughts and experience fully communicated in an objectively explicit manner; also: the capacity for this.”

To achieve artificial empathy, according to this definition, a machine would have to be capable of experiencing emotion. Before machines can do that, they must first be able to recognize emotion and comprehend it.

Non-profit research institute SRI International and others have succeeded with the recognition aspect, but understanding emotion is more difficult. For one thing, individual humans tend to interpret and experience emotions differently.

“We don’t understand all that much about emotions to begin with, and we’re very far from having computers that really understand that. I think we’re even farther away from achieving artificial empathy,” said Bill Mark, president of Information and Computing Services at SRI International, whose AI team invented Siri. “Some people cry when they’re happy, a lot of people smile when they’re frustrated. So, very simplistic approaches, like thinking that if somebody is smiling they’re happy, are not going to work.”

Emotional recognition is an easier problem to solve than emotional empathy because, given a huge volume of labeled data, machine learning systems can learn to recognize patterns that are associated with a particular emotion. The patterns of various emotions can be gleaned from speech (specifically, word usage in context, voice inflection, etc.), as well as body language, expressions and gestures, again with an emphasis on context. Like humans, the more sensory input a machine has, the more accurately it can interpret emotion.
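
As a toy illustration of that point, the sketch below (Python with scikit-learn, using hypothetical hand-labeled utterances) treats word usage alone as the pattern to learn. A production system would fuse voice inflection, facial expression and gesture features as well:

```python
# Minimal sketch: emotion recognition as supervised pattern matching on
# word usage. The labeled utterances below are toy, hypothetical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    ("I can't believe you did that again", "anger"),
    ("leave me alone", "anger"),
    ("this is the best news I've had all year", "joy"),
    ("I'm so happy for you", "joy"),
    ("I don't know what I'm going to do now", "sadness"),
    ("nothing has gone right since she left", "sadness"),
]
texts, labels = zip(*utterances)

# TF-IDF turns each utterance into a weighted word-frequency vector;
# the classifier then learns which patterns co-occur with which label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Shares vocabulary with the "joy" examples, so "joy" is the likely output.
print(model.predict(["I'm so happy about the news"]))
```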

Recognition is not the same as understanding, however. For example, computer vision systems can recognize cats or dogs based on labeled data, but they don’t understand the behavioral characteristics of cats or dogs, that the animals can be pets or that people tend to love them or hate them.

Similarly, understanding is not empathy. For example, among three people, one person may be angry, which the other two understand. However, the latter two are not empathetic: The second person is dispassionate about the first person’s anger and the third person finds the first person’s anger humorous.

In recent history, Amazon Alexa startled some users by bursting into laughter for no apparent reason. It turns out the system heard “Alexa, laugh” when the user said no such thing. Now imagine a system laughing at a chronically ill, depressed, anxious, or suicidal person who is using the system as a therapeutic aid.

“Siri and systems like Siri are very good at single-shot interactions. You ask for something and it responds,” said Mark. “For banking, shopping or healthcare, you’re going to need an extended conversation, but you won’t be able to state everything you want in one utterance so you’re really in a joint problem-solving situation with the system. Some level of emotional recognition and the ability to act on that recognition is required for that kind of dialogue.”

Personalization versus Generalization

Understanding the emotions of a single individual is difficult enough, because not everyone expresses or interprets emotions in the same way. However, like humans, machines will best understand a person with whom they have extensive experience.

“If there has been continuous contact between a person and a virtual assistant, the virtual assistant can build a much better model,” said Mark. “Is it possible to generalize at all? I think the answer to that is, ‘yes,’ but it’s limited.”

Generalizing is more difficult, given the range of individual differences and all the factors that cause individuals to differ, including nature and nurture, as well as culture and other factors.

Recognizing emotion and understanding emotion are both matters of pattern recognition, for humans and machines alike. According to Keith Strier, EY Advisory Global and Americas AI leader at professional services firm EY, proofs of concept are now underway in the retail industry to personalize in-store shopping experiences.

“We’re going to see this new layer of machine learning, computer vision and other tools applied to reading humans and their emotions,” said Strier. “[That information] will be used to calibrate interactions with them.”

In the entertainment industry, Strier foresees companies monitoring the emotional reactions of theater audiences so that directing and acting methods, as well as special effects and music, can deliver more impactful entertainment experiences that are scarier, funnier or more dramatic.

“To me it’s all the same thing: math,” said Strier. “It’s really about the specificity of math and what you do with it. You’re going to see a lot of research papers and POCs coming out in the next year.”

Personalization Will Get More Personal

Marketers have been trying to personalize experiences using demographics, personas and other means to improve customer loyalty as well as increase engagement and share of wallet. However, as more digital tools have become available, such as GPS and software usage analytics, marketers have been attempting to understand context so they can improve the economics and impact of campaigns.

“When you add [emotional intelligence], essentially you can personalize not just based on who I am and what my profile says, but my emotional state,” said Strier. “That’s really powerful because you might change the nature of the interaction, by changing what you say or by changing your offer completely based on how I feel right now.”

Artificial Emotional Intelligence Will Vary by Use Case

Neither AI nor emotions are one thing. Similarly, there is not just one use case for artificial emotional intelligence, be it emotional recognition, emotional understanding or artificial empathy.

“The actual use case matters,” said Strier. “Depending on the context, it’s going to be super powerful or maybe not good enough.”

At the present time, a national bank is piloting a smart ATM that uses a digital avatar which reads customers’ expressions. As the avatar interacts with customers, it adapts its responses.

“We can now read emotions in many contexts. We can interpret tone, we can triangulate body language and words and eye movements and all sorts of proxies for emotional state. And we can learn over time whether someone is feeling this or feeling that. So now the real question is what do we do with that?” said Strier. “Artificial empathy changes the art of the possible, but I don’t think the world quite knows what to do with it yet. I think the purpose question is probably going to be a big part of what’s going to occupy our time.”

Bottom Line

Artificial emotional intelligence can improve the quality and outcomes of human-to-machine interactions, but it will take different forms over time, some of which will be more sophisticated and accurate than others.

Artificial empathy raises the question of whether machines are capable of experiencing emotions in the first place, which, itself, is a matter of debate. For now, it’s fair to say that artificial emotional intelligence, and the advancement of it, are both important and necessary to the advancement of AI.

Machine Learning’s Greatest Weakness Is Humans

Machine learning (deep learning and cognitive computing in particular) attempts to model the human brain. That seems logical, because the most effective way to establish bilateral understanding with humans is to mimic them. As we observe in everyday experience, machine intelligence isn’t perfect, and neither is human intelligence.

Still, understanding human behavior and emotion is critical if machines are going to mimic humans well. Technologists know this, so they’re working hard to improve natural language processing, computer vision, speech recognition, and other capabilities that will enable machines to better understand humans and behave more like them.

I imagine that machines will never emulate humans perfectly because they will be able to rapidly identify the flaws in our thinking and behavior and improve upon them. To behave exactly like us would mean preserving those flaws.

From an analytical perspective, I find all of this fascinating because human behavior is linear and non-linear, rational and irrational, logical and illogical. If you study us at various levels of aggregation, it’s possible to see patterns in the way humans behave as a species, why we fall into certain groups and why we behave the way we do as individuals. I think it would be very interesting to compare what machines have to say about all of that with what psychologists, sociologists and anthropologists have to say.

Right now we’re at the point where we believe that machines need to understand human intelligence. Conversely, humans need to understand machine intelligence.

Why AI is Flawed

Human brain function is not infallible. Our flaws present challenges for machine learning; namely, machines have the capacity to make the same mistakes we do and exhibit the same biases we do, only faster. Microsoft’s infamous Twitter bot, Tay, is a good example of that.

And when you model artificial emotional intelligence on human emotion, the results can be entertaining, provocative or even dangerous.

Training machines, whether for supervised or unsupervised learning, begins with human input, at least for now. In the future, the necessity for that will diminish because a lot of people will be teaching machines the same things. The redundancy will reveal patterns that are easily recognizable, repeatable and reusable. Open source machine learning libraries are already available, but there will be many more that approximate some aspect of human brain function: cognition, decision-making, reasoning, sensing and much more.

Slowly but surely, we’re creating machines in our own image.

Why Automation and AI are Cool, Until They’re Not

Every day, there’s more news about automation, machine learning and AI. Already, some vendors are touting their ability to replace salespeople and even data scientists. Interestingly, the very people promoting these technologies aren’t necessarily considering the impact on their own jobs.

In the past, knowledge jobs were exempt from automation, but with the rise of machine learning and AI, that’s no longer true. In the near future, machines will be able to do even more tasks that have historically been done by humans.

Somewhere between doomsday predictions and automated utopia is a very real world of people, businesses and entire industries that need to adapt or risk obsolescence.

History isn’t simply repeating itself

One difference between yesterday’s automation and today’s automation (besides the level of machine intelligence) is the pace of change. Automating manufacturing was a very slow process because it required major capital investments and significant amounts of time to implement. In today’s software-driven world, change occurs very quickly and the pace of change is accelerating.

The burning existential question is whether organizations and their employees can adapt to change fast enough this time. Will autonomous “things” and bots cause the staggering unemployment levels some foresee a decade from now, or will the number of new jobs compensate for the decline of traditional jobs?

“I think there will be stages where we have the 10 percent digital workforce in the next two years and 20 percent in three to four years,” said Martin Fiore, Americas tax talent leader at professional services firm EY. “Some will say, ‘Wow, that’s scary.’ Others will say, ‘I see the light; I’m going to upscale my capabilities.’”

Businesses and individuals each need to change the way they think.

Angela Zutavern, VP at management consulting firm Booz Allen Hamilton and co-author of the forthcoming book The Mathematical Corporation, views intelligent automation as a new form of leadership and strategy rather than just a technology.

“Companies who understand this and get on board with it will be way ahead and I believe that companies who either ignore it or don’t believe it’s real may go out of business,” she said. “I think it’s better to know about it, understand it, and be a part of making the change happen rather than getting caught off-guard and have it happen to you.”

An old company pioneers new tricks

Despite its 100-year history, EY is actively facilitating the adoption of Robotic Process Automation (RPA) and AI within its own four walls and among its customers.

Its RPA group employs a global team of 1,000 robotic engineers and analysts who are creating new applications. In the past 18 months, more than 200 bots have been rolled out in tax operations, which includes work for clients. EY is also using RPA internally in core business functions to improve quality and performance while enabling a new sense of purpose among its employees.

“RPA helps us increase our ability to handle high levels of transaction volume (e.g., tax returns), accelerate on-time delivery and improve accuracy,” said Fiore. “Over time, there will be a positive impact on our workforce model, and we’re planning for that now.”

Similarly, an EY innovation lab recently experimented to see if AI could help to analyze contracts faster and better than people.

“We thought we’d make headway and great progress in a year or two, but in the first 90 days [the machines were] three times more effective in the process,” said Jeff Wong, global chief innovation officer at EY. “You’ll see us increasing our efforts there radically in the next 12 to 18 months.”

Last year, EY “hired” 350 bots, though the company spends about half a billion dollars annually on employee training. Job rotation is also common at EY because the company wants to “teach people to learn how to learn.”

Education will change

Young people entering the workforce already need different skills than their predecessors, and the trend will continue. Param Singh, associate professor of business technologies at Carnegie Mellon University, expects grade schools to teach fundamental programming skills and high schools to teach machine learning.

“Typically, managers [had] person management jobs. Increasingly, those jobs will have to be good on the technology side,” said Singh. “Few people are good at deep learning, probably less than 5,000. We’ll need hundreds of thousands when we see major adoption happening.”

Meanwhile, working professionals and their employers should not be complacent. As the levels of intelligent automation increase, individuals and companies will need to understand which jobs will be displaced and which jobs will be created, none of which is static.

“Cloud computers, data lakes and in the future, quantum computing are things that every leader should be conversant about or anyone who aspires to a leadership role in this machine learning age,” said Booz Allen Hamilton’s Zutavern. “People should understand what the possibilities are and know when to pull in the deep experts.”

3 Cool AI Projects

AI is all around us, quietly working in the background or interacting with us via a number of different devices. Various industries are using AI for specific reasons such as ensuring that flights arrive on time or irrigating fields better and more economically.

Over time, our interactions with AI are becoming more sophisticated. In fact, in the not-too-distant future we’ll have personal digital assistants that know more about us than we know about ourselves.

For now, there are countless AI projects popping up in commercial, industrial and academic settings. Following are a few examples of projects with an extra cool factor.

Get Credit. Now.

Who among us hasn’t sat in a car dealership, waiting for the finance person to run a credit check and provide us with financing options? We’ve also stood in lines at stores, filling out credit applications, much to the dismay of those standing behind us in line. Experian DataLabs is working to change all that.

Experian created Experian DataLabs to experiment with the help of clients and partners. Located in San Diego, London and São Paulo, Experian DataLabs employs scientists and software engineers, 70% of whom are Ph.D.s. Most of these professionals have backgrounds in machine learning.

“We’re going into the mobile market where we’re pulling together data, mobile, and some analytics work,” said Eric Haller, EVP of Experian’s Global DataLabs. “It’s cutting-edge machine learning which will allow for instant credit on your phone instead of applying for credit at the cash register.”

That goes for getting credit at car dealerships, too. Simply text a code to the car manufacturer and get the credit you need using your smartphone. Experian DataLabs is also combining the idea with Google Home, so you can shop for a car, and when you find one you like, you can ask Google Home for instant credit.

There’s no commercial product available yet, but a pilot will begin this summer.

AI About AI

Vicarious is attempting to achieve human-level intelligence in vision, language, and motor control. It is taking advantage of neuroscience to reduce the amount of input machine learning requires to achieve a desired result. At the moment, Vicarious is focusing on mainstream deep learning and computer vision.

Its concept is compelling to many investors. So far, the company has received $70 million from corporations, venture capitalists and affluent private investors, including Ashton Kutcher, Jeff Bezos and Elon Musk.

On its website, Vicarious wisely points out the downsides of optimizing models ad infinitum for merely incremental improvements. So, instead of trying to beat a state-of-the-art algorithm, Vicarious is trying to identify and characterize the source of errors.

Draft Better Basketball Players

The Toronto Raptors are working with IBM Watson to identify what skills the team needs and which prospective players can best fill the gap. The team is also pre-screening each potential recruit’s personality traits and character.

During the recruiting process, Watson helps select the best players and it also suggests ideal trade scenarios. While prospecting, scouts enter data into a platform to record their observations. The information is later used by Watson to evaluate players.

And a Lesson in All of This

Vicarious is using unsupervised machine learning. The Toronto Raptors are using supervised learning, but perhaps not exclusively. If you don’t know the difference between the two yet, it’s important to learn. Unsupervised learning looks for patterns on its own. Supervised learning is presented with classifications, such as “these are the characteristics of ‘good’ traits and these are the characteristics of ‘bad’ traits.”

Supervised and unsupervised learning are not mutually exclusive, since unsupervised learning needs to start somewhere. However, supervised learning is more comfortable for humans with egos and biases because we are used to giving machines a set of rules (programming). It takes a strong ego, curiosity or both to accept that some of the most intriguing findings can come from unsupervised learning, because it is not constrained by human biases. For example, we may define the world in terms of red, yellow and blue. Unsupervised learning could point out crimson, vermillion, banana, canary, cobalt, lapis and more.
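
To make the color example concrete, here is a minimal sketch (Python with scikit-learn, entirely synthetic RGB data) in which a human only ever supplies three coarse labels, while unsupervised clustering recovers the finer shades nobody asked about:

```python
# Minimal sketch: unsupervised clustering finds finer structure than the
# three labels a human supplied. All pixel data below is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Pixels sampled around six "true" shades, though a human labeler would
# only ever tag them red, yellow, or blue.
shades = np.array([
    [220, 20, 60],   # crimson
    [227, 66, 52],   # vermillion
    [255, 225, 53],  # banana
    [255, 239, 0],   # canary
    [0, 71, 171],    # cobalt
    [38, 97, 156],   # lapis
])
pixels = np.vstack([s + rng.normal(0, 4, (200, 3)) for s in shades])

# k=3 recovers only the coarse human categories; k=6 surfaces the finer
# shades no one asked the algorithm to look for.
for k in (3, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    print(f"{k} clusters:", np.round(km.cluster_centers_).astype(int).tolist())
```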