Strategic Insights and Clickworthy Content Development

Author: misslisa

I'm a writer, editor, analyst, and writing coach.

Why Enterprises Struggle with Hybrid Cloud and DevOps


More enterprises are moving to the cloud and implementing DevOps, containers and microservices, but their efforts are falling short of expectations. A recent study from the Ponemon Institute identifies some of the core challenges they face.

Organizations implementing cloud, DevOps, containers, and microservices are often surprised when the outcomes don’t match expectations. A recent survey by the Ponemon Institute, sponsored by hybrid cloud management platform provider Embotics, revealed that 74% of the 600 survey respondents who are responsible for cloud management believe DevOps enablement capabilities are essential, very important, or important for their organization. But only one-third believe their organization has the ability to deliver those capabilities.

Eighty percent believe that microservices and container enablement are essential, very important, or important, but only a quarter believe their organization can quickly deliver those capabilities. The lagging DevOps and microservices enablement costs the average enterprise $34 million per year, which is 23% of their average annual cloud management budget of $147 million.

“There are so many things that result in the loss of information assets and infrastructure issues that can be very costly for organizations to deal with. This rush to the cloud has created all sorts of issues for organizations,” said Larry Ponemon, chairman and founder of the Ponemon Institute. “The way organizations have implemented DevOps facilitates a rush-to-release mentality that doesn’t necessarily improve the state of security.”

Organizations assume that implementing a DevOps framework will necessarily result in software quality improvement and risk reduction because they can build in security early in the software development lifecycle (SDLC).

“That’s all theoretically true, but organizations are pretty sloppy about the way they’ve implemented software,” said Ponemon. “In other Ponemon studies, we’ve seen that there’s a difference between the DevOps that you want versus the DevOps that you get.”

Why hybrid cloud implementations are so tough

Shadow IT is one reason why cloud implementations are more costly and less effective than they could be.

“Companies develop silos where different groups of people have different tools that aren’t necessarily compatible,” said Ponemon. “A lot of organizations think a cloud platform tool is fungible whether you buy it from Vendor A, B, or C and you’re going to get the same outcome, but that’s not true at all.”

Forty-six percent of survey respondents said their organizations’ consumption model is “cloud direct,” meaning that end users are bypassing IT and cloud management technologies. Instead, they’re communicating directly with clouds such as AWS or Microsoft Azure via native APIs or their own public cloud accounts.

The patchwork model of cloud adoption results in complex environments that provide little visibility and are difficult to manage. In fact, 70% of survey participants said they have no visibility into the purpose or ownership of the virtual machines in their cloud environment.

Interestingly, one cloud benefit often touted is that one does not have to understand where resources reside.

“You would expect if you’re operating in a cloud environment [that] it’s going to help you gain visibility because you don’t have to look very far to find your data, but it basically doesn’t work that way,” said Ponemon. “A lot of organizations don’t have the tools to understand where their machines are and where the data is located. They don’t have much control over the network layer and sometimes no control over the application layer so the end result becomes kind of messy.”

The lack of visibility and control exposes an enterprise to several risks, including negligence and cyberattacks, creating “a perfect storm” for bad actors. Fifty-seven percent of survey participants believe users are increasing business risk by violating policies about where digital assets can reside.

“A lot of CSOs are angry at the lines of business because they can’t necessarily implement the security protocols they need,” said Ponemon. “The whole idea of getting things out quickly can be more important when a little bit slower delivery can result in a higher quality level.”

Ideally, organizations should have a single user interface that enables them to view their entire environment, although 68% of the survey respondents lack that capability. 

“Having a single user interface is really smart because you have one place where you can do a lot of cool things and have more control and visibility, but not all platforms work very well,” said Ponemon. “Platforms have to be implemented like software. It’s not like an appliance that you plug in. It requires a lot of stuff to make it work the right way.”

When a hybrid cloud management platform is implemented correctly, it can enable a successful DevOps environment because there’s greater control over the hybrid cloud.

Hybrid cloud management is evolving

Hybrid cloud management maturity has three phases, according to Ponemon. In the first phase, hybrid IT develops organically because lines of business are procuring their own cloud solutions, typically without the involvement of IT or security.

“I think this whole cloud shadow issue is a real issue because it creates turf wars and silos,” said Ponemon. “Dealing with that can be very tough and that’s why we need to have management with an iron fist. There’s too much at risk so some people need to be more influential than others in the decision-making processes rather than running it organically.”

The companies that have matured to the second phase use a cloud management platform (CMP 1.0), although they tend to experience disparities between the platform’s capabilities and their actual DevOps requirements. The Ponemon Institute defines CMP 1.0 as “solutions providing provisioning, automation, workflow orchestration, self-service, cloud governance, single-pane-of-glass visibility, capacity rightsizing and cost management across public, private and hybrid clouds.”

By comparison, the newer CMP 2.0 platforms provide those benefits and they reduce the friction and complexity associated with microservices, containers, cloud-native applications, and DevOps. CMP 2.0 also enables corporate governance and compliance without impeding development speed and agility. Sixty-three percent of survey respondents believe CMP 2.0 capabilities would reduce hybrid cloud management costs by an average of $34 million per year. 

Will Containers Replace VMs?

While an across-the-board migration from virtual machines to containers isn’t likely, there are issues developers and operations personnel should consider to ensure the best solution for the enterprise.

Chances are, your company uses virtual machines on premises and in the cloud. Meanwhile, the developers are likely considering containers, if they haven’t adopted them already, to simplify development and deployment, improve application scalability and increase application delivery speed.

VMs and containers have enough architectural similarities that some question the long-term survival of VMs, especially since VMs are already a couple of decades old. There are also serverless cloud options available now that dynamically manage the allocation of machine resources so humans don’t have to do it.

“If you embrace [a serverless] service, then you truly don’t have to worry about the virtual machines or the IaaS instances under the hood. You just turn your containers over to the service and they take care of all the provisioning underneath to run those containers,” said Tony Iams, research director at Gartner. “We’re far away from being able to do that on-premises, but if you’re doing this in the cloud, you no longer worry about [virtual machine] instances in the cloud.”

Most enterprises have been using VMs for quite some time and that will continue to be true for the foreseeable future.

“Most of the container infrastructure deployments are going to be on virtual machines,” said Iams. “If you look at where container runtimes are deployed as well as container orchestration systems such as Kubernetes, more often than not, it’s on virtual infrastructure and there’s a very important reason for that. In most cases, especially in today’s enterprise environments, the basic provisioning processes for infrastructure are going to be based on virtual machines, so even if you wanted to switch to provisioning on bare metal, it’s quite possible that you wouldn’t have any processes in place to do that.”

As organizations graduate to more sophisticated container deployments using orchestration platforms like Kubernetes, they can face significant provisioning challenges.

“Kubernetes has to be upgraded pretty frequently given the rapid pace of development, so you need to have solid provisioning processes in place. Most likely that’s going to be based on virtual machines,” said Iams. “Our guidance is not to make any changes there, to continue using the tools and processes you have in place, which is likely based on virtual machines, and focus on achieving the benefits of Kubernetes that are achieved higher up in the stack.”

Mitch Pirtle, principal at Space Monkey Labs and the creator of Joomla, an open source content management system, agrees that VMs will continue to provide value, although what’s considered a VM will likely change over time.

“The main benefit of the VM is that you can deploy that entire stack fully configured and ready-to-roll. My biggest issue with VMs is that if you want to deploy a stack that has unique scaling needs, you’re going to be in a world of hurt,” said Pirtle. “Containers, on the other hand, have a huge upside, especially in the enterprise space. You can scale different parts of your platform as needed, independent of each other.”

Container interest is fueled by developers

Developers are under constant pressure to deliver higher quality software at lower costs in ever-faster time frames. Containers enable applications to run in different environments, and even across environments, without a lot of extra work. That way, developers can write an application once and make minor modifications to it rather than writing the same application time and again. In addition, containers help facilitate DevOps efforts.

“One of the most important benefits containers provide is that once you have a containerized application, it runs in exactly the same environment at every stage of the lifecycle, from initial development through testing and deployment, so you get mobility of a workload at every stage of its lifecycle,” said Iams. “In the past, you would develop an application and turn it over to production. Any environment they would be running it in would run into problems, so they’d kick it back to developers and you’d have to try to recreate the environment that it was running in. A lot of those issues go away once you containerize a workload.”
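To make that portability concrete, here is a minimal sketch using the Docker SDK for Python; it assumes Docker and the docker package are installed, and the image tag and command are purely illustrative. Because the image bundles the runtime and its dependencies, the same container behaves the same on a developer’s laptop, a test server, or a production host.

```python
import docker

# Connect to whichever Docker engine is running locally
client = docker.from_env()

# Run a throwaway container from a pinned image; the environment inside the
# container is identical wherever the image runs.
output = client.containers.run(
    "python:3.11-slim",                                      # illustrative image tag
    ["python", "-c", "print('same environment everywhere')"],
    remove=True,                                             # clean up the container afterwards
)
print(output.decode())
```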

Mike Nims, a director of advisory services for Workday at KPMG, said containers have greatly simplified his job because they abstract so much complexity.

“When I was a DBA, everybody knew I was on the Homer Simpson server, my instance lived on Homer Simpson, and my instance was Bart. In a container situation, I don’t even know where I sit,” said Nims. “Taking that technical view away is extremely beneficial [because] it also allows the developer to focus more on the code, integration or UI they’re working on as opposed to the hardware or the server.”

Containers aren’t a panacea

The use case tends to define whether or not VMs need to be used in conjunction with containers. Businesses operating in highly regulated environments and others that place a high premium on security want multitenancy capabilities so virtualized workloads can run in isolation.

“With containers, you don’t quite get the same level of isolation,” said Iams. “However, you do get some isolation that may be sufficient for internal use cases.”

One concern is whether workloads running across containers are operating in the same trust domain. If not, extra steps must be taken to ensure isolation.

“That’s what’s underlying some of these new [sandboxed container runtime] initiatives you hear about, such as gVisor. [D]evelopers are trying to come up with new virtualization mechanisms that give you the same kind of isolation you would have between virtual machines and the efficiency and low resource consumption of containers,” said Iams. “That’s early-stage, for the time being. If you want to have that kind of isolation, you need virtual machines.”

However, containers are helping to eliminate application-level friction for end users and IT.

“My developers and I don’t ever think about this. A cloud container is really extending that user experience into an area of IT that for the longest time has been controlled by technical gearheads,” said KPMG’s Nims. “My wife, who doesn’t work in IT, could go to Amazon Cloud and set up a MySQL environment to do database work if she wanted to. That’s a container.”

Meanwhile, he thinks enterprise architects and DBAs should consider how cloud containers can be used to manage applications at scale.

“As a DBA, one of [my] biggest pain points was the ancillary applications I was responsible for managing that weren’t necessarily tied to my database. When I did it, it was five to one or 10 to one databases per DBA. Then, each of these customers would have another 20 to 30 applications. I couldn’t manage that so we would hire SAs,” said Nims. “If I have a MySQL environment in a cloud container, I can patch it across all of my instances. Back in the day, I would have to individually patch each instance. That’s huge in terms of scalability and management because now you can have a control room managing hundreds of thousands of applications, potentially, with a couple of guys.”

Developers and operations have to work together

Operations teams have spent the last two decades configuring and provisioning VMs. Now, they need to get familiar with containers and container orchestration platforms, if they’re not already, given the popularity of containers among developers.

“It will take some time before both sides of this initiative come to terms and arrive at consistent operational processes,” said Iams. “If you’re doing this on-premises, it doesn’t get set up in a vacuum. Operations still has to configure storage and networking, and there may be some authentication and security-type mechanisms that have to be configured to make sure that this great new containerized infrastructure works well. Ops can’t do that in isolation.”

Of course, getting container infrastructure to work well isn’t an event, it’s a process, especially given the furious pace of container-related innovation.

“Getting the pieces to work well together is a challenge,” said Iams. “There are upgrades that are going to happen at multiple levels of the stack that are going to require processes to be in place that reflect the priorities of both groups.”

Bottom line

VMs and containers will continue to coexist for some time, mainly because businesses require the benefits of both. Even if containers could replace VMs for every conceivable use case, a mainstream shift wouldn’t happen overnight because today’s businesses are heavily dependent on and extremely familiar with VMs.

Why AI is So Brilliant and So Stupid


For all of the promise that artificial intelligence represents, a successful AI initiative still requires all of the right pieces to come together.

AI capabilities are advancing rapidly, but the results are mixed. While chatbots and digital assistants are improving generally, the results can be laughable, perplexing and perhaps even unsettling.

Google’s recent demonstration of Duplex, its natural language technology that completes tasks over the phone, is noteworthy. Whether you love it or hate it, two things are true: It doesn’t sound like your grandfather’s AI; the use case matters.

One of the striking characteristics of the demo, assuming it actually was a demo and not a fake, as some publications have suggested, is the use of filler language in the digital assistant’s speech, such as “um” and “uh,” that makes it sound human. Even more impressive (again, assuming the demo is real) is the fact that Duplex reasons adeptly on the fly despite the ambiguous, if not confusing, responses provided by a restaurant hostess on the other end of the line.

Of course, the use case is narrow. In the demo, Duplex is simply making a hair appointment and attempting to make a restaurant reservation. In the May 8 Google Duplex blog introducing the technology, Yaniv Leviathan, principal engineer, and Yossi Matias, VP of engineering, explain: “One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.”

A common misconception is that there’s a general AI that works for everything. Just point it at raw data and magic happens.

“You can’t plug in an AI tool and it works [because it requires] so much manual tweaking and training. It’s very far away from being plug-and-play in terms of the human side of things,” said Jeremy Warren, CTO of Vivint Smart Home and former CTO of the U.S. Department of Justice. “The success of these systems is driven by dark arts, expertise and fundamentally on data, and these things do not travel well.”

Data availability and quality matter

AI needs training data to learn and improve. Warren said that if someone has mediocre models, processing performance, and machine learning experts, but the data is amazing, the end solution will be very good. Conversely, if they have the world’s best models, processing performance, and machine learning experts but poor data, the result will not be good.

“It’s all in the data, that’s the number one thing to understand, and the feedback loops on truth,” said Warren. “You need to know in a real way what’s working and not working to do this well.”

Daniel Morris, director of product management at real estate company Keller Williams, agrees. He and his team have created Kelle, a virtual assistant designed for Keller Williams’ real estate agents that’s available as iPhone and Android apps. Like Alexa, Kelle has been built as a platform so skills can be added to it. For example, Kelle can check calendars and help facilitate referrals between agents.

“We’re using technology embedded in the devices, but we have to do modifications and manipulations to get things right,” said Morris. “Context and meaning are super important.”

One challenge Morris and his team run into as they add new skills and capabilities is handling longtail queries, such as for lead management, lead nurturing, real estate listings, and Keller Williams’ training events. Agents can also ask Kelle for the definitions of terms that are used in the real estate industry or terms that have specific meaning at Keller Williams.

Expectations aren’t always managed well

Part of the problem with technology commercialization, including the commercialization of AI, is the age-old problem of over-promising and under-delivering. Vendors solving different types of problems claim that AI is radically improving everything from drug discovery to fraud prevention, which it can, but the implementations and their results can vary considerably, even among vendors focused on the same problem.

“A lot of the people who are really doing this well have access and control over a lot of first-party data,” said Skipper Seabold, director of decision sciences at decision science advisory firm Civis Analytics. “The second thing to note is it’s a really hard problem. What you need to do to deliver a successful AI product is to deliver a system, because you’re delivering software at the end. You need a cross-functional team that’s a mix of researchers and product people.”

Data scientists are often criticized for doing work that’s too academic in nature. Researchers are paid to test the validity of ideas. However, commercial forms of AI ultimately need to deliver value that either feeds the bottom line directly, in terms of revenue, cost savings, and ultimately profitability, or indirectly, such as through data collection, usage and, potentially, the sale of that information to third parties. Either way, it’s important to set end user expectations appropriately.

“You can’t just train AI on raw data and it just works, that’s where things go wrong,” said Seabold. “In a lot of these projects you see the ability for human interaction. They give you an example of how it can work and say there’s more work to be done, including more field testing.”

Decision-making capabilities vary

Data quality affects AI decision-making. If the data is dirty, the results may be spurious. If it’s biased, that bias will likely be emphasized.

“Sometimes you get bad decisions because there are no ethics,” said Seabold. “Also, the decisions a machine makes may not be the same as a human would make. You may get biased outcomes or outcomes you can’t explain.”

Clearly, it’s important to understand what the cause of the bias is and correct for it.

Understanding machine rendered decisions can be difficult, if not impossible, when a black box is involved. Also, human brains and mechanical brains operate differently. An example of that was the Facebook AI Research Lab chatbots that created their own language, which the human researchers were not able to understand. Not surprisingly, the experiment was shut down.

“This idea of general AI is what captures people’s imaginations, but it’s not what’s going on,” said Seabold. “What’s going on in the industry is solving an engineering problem using calculus and algebra.”

Humans are also necessary. For example, when Vivint Smart Home wants to train a doorbell camera to recognize humans or a person wearing a delivery uniform, it hires people to review video footage and assign labels to what they see. “Data labeling is sometimes an intensely manual effort, but if you don’t do it right, then whatever problems you have in your training data will show up in your algorithms,” said Vivint’s Warren.

Bottom line

AI outcomes vary greatly based on a number of factors which include their scope, the data upon which they’re built, the techniques used, the expertise of the practitioners and whether expectations of the AI implementation are set appropriately. While progress is coming fast and furiously, the progress does not always transfer well from one use case to another or from one company to another because all things, including the availability and cleanliness of data, are not equal.

Why Quantum Computing Should Be on Your Radar Now


Boston Consulting Group and Forrester are advising clients to get smart about quantum computing and start experimenting now so they can separate hype from reality.

There’s a lot of chatter about quantum computing, some of which is false and some of which is true. For example, there’s a misconception that quantum computers are going to replace classical computers for every possible use case, which is false. “Quantum computing” is not synonymous with “quantum leap,” necessarily. Instead, quantum computing relies on quantum physics, which makes it fundamentally different from classical, binary computers. Binary computers can only process 1s and 0s. Quantum computers can process many more possibilities, simultaneously.

If math and physics scare you, a simple analogy (albeit not an entirely correct one) involves a light switch and a dimmer switch, representing a classical computer and a quantum computer, respectively. The standard light switch has two states: on and off. The dimmer switch provides many more options, including on, off, and a range of states between on and off that are experienced as degrees of brightness and darkness. With a dimmer switch, a light bulb can be on, off, or a combination of both.

If math and physics do not scare you, quantum computing involves quantum superposition, which explains the nuances more eloquently.
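For readers who would rather see the dimmer switch in code than in physics notation, here is a minimal sketch, using plain NumPy and no quantum hardware, of a single qubit placed into an equal superposition of 0 and 1:

```python
import numpy as np

# Basis state |0> as a state vector
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate turns |0> into an equal superposition of |0> and |1>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0                     # (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes
print(np.abs(psi) ** 2)            # [0.5 0.5] -- both "on" and "off" until measured
```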

One reason quantum computers are not an absolute replacement for classical computers has to do with their physical requirements. Quantum computers require extremely cold conditions in order for quantum bits or qubits to remain “coherent.” For example, much of D-Wave’s Input/Output (I/O) system must function at 15 millikelvin (mK), which is near absolute zero. 15 mK is equivalent to minus 273.135 degrees Celsius or minus 459.643 degrees Fahrenheit. By comparison, the classical computers most individuals own have built-in fans, and they may include heat sinks to dissipate heat. Supercomputers tend to be cooled with circulated water. In other words, the ambient operating environments required by quantum computers and classical computers vary greatly. Naturally, there are efforts that are aimed at achieving quantum coherence in room temperature conditions, one of which is described here.

Quantum computers and classical computers are fundamentally different tools. In a recent report, Brian Hopkins, vice president and principal analyst at Forrester explained, “Quantum computing is a class of emerging hardware and software that exploits subatomic phenomenon to solve computationally hard problems.”

What to expect, when

There’s a lot of confusion about the current state of quantum computing which industry research firms Boston Consulting Group (BCG) and Forrester are attempting to clarify.

In the Forrester report, Hopkins estimates that quantum computing is in the early stages of commercialization, a stage that will persist through 2025 to 2030. The growth stage will begin at the end of that period and continue through the end of the forecast period which is 2050.

A recent BCG report estimates that quantum computing will become a $263 billion to $295 billion market given two different forecasting scenarios, both of which span 2025 to 2050. BCG also reasons that the quantum computing market will advance in three distinct phases:

  1. The first generation will be specific to applications that are quantum in nature, similar to what D-Wave is doing.
  2. The second generation will unlock what report co-author and BCG senior partner Massimo Russo calls “more interesting use cases.”
  3. In the third generation, quantum computers will have achieved the number of logical qubits required to achieve Quantum Supremacy. (Note: Quantum Supremacy and logical qubits versus physical qubits are important concepts addressed below.)

“If you consider the number of logical qubits [required for problem-solving], it’s going to take a while to figure out what use cases we haven’t identified yet,” said BCG’s Russo. “Molecular simulation is closer. Pharma company interest is higher than in other industries.”

Life sciences, developing new materials, manufacturing, and some logistics problems are ideal for quantum computers for a couple of possible reasons:

  • A quantum machine is more adept at solving quantum mechanics problems than classical computers, even when classical computers are able to simulate quantum computers.
  • The nature of the problem is so difficult that it can’t be solved using classical computers at all, or it can’t be solved using classical computers within a reasonable amount of time, at a reasonable cost.

There are also hybrid use cases in which parts of a problem are best solved by classical computers and other parts of the problem are best solved by quantum computers. In this scenario, the classical computer breaks the problem apart, communicates with the quantum computer via an API, receives the result(s) from the quantum computer and then assembles a final answer to the problem, according to BCG’s Russo.

“Think of it as a coprocessor that will address problems in a quantum way,” he said.
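A rough sketch of that coprocessor pattern might look like the following. Every function name here is a placeholder invented for illustration, not a real vendor API; the point is the division of labor Russo describes:

```python
# Hypothetical hybrid workflow: classical code decomposes the problem, hands the
# quantum-friendly pieces to a cloud quantum service over an API, and assembles
# the final answer. All helper functions below are placeholders.

def solve_hybrid(problem):
    classical_parts, quantum_parts = decompose(problem)   # classical pre-processing
    results = [solve_classically(part) for part in classical_parts]
    for part in quantum_parts:
        job = submit_to_quantum_api(part)                  # e.g., a REST call to a hosted QPU
        results.append(job.result())                       # wait on the quantum coprocessor
    return assemble(results)                               # classical post-processing
```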

While there is a flurry of quantum computing announcements at present, practically speaking, it may take a decade to see the commercial fruits of some efforts and multiple decades to realize the value of others.

Logical versus physical qubits

Not all qubits are equal, which is true in two regards. First, there’s an important difference between logical qubits and physical qubits. Second, the large vendors are approaching quantum computing differently, so their “qubits” may differ.

When people talk about quantum computers or semiconductors that have X number of qubits, they’re referring to physical qubits. The reason the number of qubits matters is that the computational power grows exponentially with the addition of each individual qubit. According to Microsoft, a calculator is more powerful than a single qubit, and “simulating a 50-qubit quantum computation would arguably push the limits of existing supercomputers.”
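The exponential growth is easy to verify with back-of-the-envelope arithmetic: simulating n qubits classically means storing 2^n complex amplitudes. A short sketch, assuming 16 bytes per amplitude (two 64-bit floats):

```python
# Memory needed to hold the full state vector of an n-qubit system,
# assuming 16 bytes per complex amplitude.
for n in (20, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n} qubits -> {amplitudes:,} amplitudes ≈ {gib:,.3f} GiB")

# 20 qubits need about 0.016 GiB, 30 qubits need 16 GiB, and 50 qubits need
# roughly 16.8 million GiB (about 16 pebibytes), which is why a 50-qubit
# simulation strains even the largest classical supercomputers.
```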

BCG’s Russo said for semiconductors, the number of physical qubits required to create a logical qubit can be as high as 3,000:1. Forrester’s Hopkins stated he’s heard numbers ranging from 10,000 to 1 million or more, generally.

“No one’s really sure,” said Hopkins. “Microsoft thinks [it’s] going to be able to achieve a 5X reduction in the number of physical qubits it takes to produce a logical qubit.”  

The difference between physical qubits and logical qubits is extremely important because physical qubits are so unstable they need the additional qubits to ensure error correction and fault tolerance.

Get a grip on Quantum Supremacy

Quantum Supremacy does not signal the death of classical computers for the reasons stated above. Google cites this definition: “A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy.”

“We’re not going to achieve Quantum Supremacy overnight, and we’re not going to achieve it across the board,” said Forrester’s Hopkins. “Supremacy is a stepping stone to delivering a solution. Quantum Supremacy is going to be achieved domain by domain, so we’re going to achieve Quantum Supremacy, which Google is advancing, and then Quantum Value, which IBM is advancing, in quantum chemistry or molecular simulation or portfolio risk management or financial arbitrage.”

The fallacy is believing that Quantum Supremacy means that quantum computers will be better at solving all problems, ergo classical computers are doomed.

Given the proper definition of the term, Google is attempting to achieve Quantum Supremacy with its 72-qubit quantum processor, Bristlecone.

How to get started now

First, understand the fundamental differences between quantum computers and classical computers. This article is merely introductory, given its length.

Next, and in parallel with the advice that follows, find out what others are attempting to do with quantum computers and quantum simulations, and consider what use cases might apply to your organization. Do not limit your thinking to what others are doing. Based on a fundamental understanding of quantum computing and your company’s business domain, imagine what might be possible, whether the end result might be a minor percentage optimization that would give your company a competitive advantage or a disruptive innovation such as a new material.

Experimentation is also critical, not only to test hypotheses, but also to better understand how quantum computing actually works. The experimentation may inspire new ideas, and it will help refine existing ideas. From a business standpoint, don’t forget to consider the potential value that might result from your work.

Meanwhile, if you want to get hands-on experience with a real quantum computer, try IBM Q. The “IBM Q Experience” includes user guides, interactive demos, the Quantum Composer, which enables the creation of algorithms that run on real quantum computing hardware, and the QISKit software developer kit.
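As a taste of what that looks like in practice, here is a minimal sketch using Qiskit, the Python SDK behind the IBM Q Experience; it assumes the qiskit package is installed, and running it against real IBM hardware additionally requires an IBM Q account. It builds a two-qubit Bell-state circuit, the “hello world” of quantum programming:

```python
from qiskit import QuantumCircuit

# Two qubits and two classical bits to hold the measurement results
qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits out

print(qc.draw())             # text diagram of the circuit
```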

Also check out Quantum Computing Playground which is a browser-based WebGL Chrome experiment that features a GPU-accelerated quantum computer with a simple IDE interface, its own scripting language with debugging and 3D quantum state visualization features.

In addition, the Microsoft Quantum Development Kit Preview is available now. It includes the Q# language and compiler, the Q# standard library, a local quantum machine simulator, a trace quantum simulator that estimates the resources required to run a quantum program, and a Visual Studio extension.


9 Traits of Emerging Disruptors

Business leaders are understandably concerned about disruption. Business as usual is a dangerous proposition in an age when entire industries can be upended by a disruptor armed with cloud-based computing power, lots of data, and effective ways of leveraging that data.

The typical response to the threat of disruption is digital transformation. However, digital transformation tends to be approached as an if/then statement. Specifically, if we embark on a digital transformation journey, then we’ll be able to compete effectively in the future.

“What they’re not recognizing is you have failed in your business,” said Jay Goldman, co-author of the New York Times bestseller THE DECODED COMPANY: Know Your Talent Better Than You Know Your Customers and co-founder and managing director of digital workplace solution provider Sensei Labs. “You’re not being rewarded for doing something right.”

The quantum shifts that disruptions represent don’t happen overnight. A disruptor, like most startups, has an idea it hopes will change the world. It intends to challenge the status quo that has been created by an established order of market leaders with formidable market shares and deep pockets. However, the market leaders don’t serve everyone by design because not all business relationships are equally attractive or profitable, so they tend to focus on the most profitable segments and de-emphasize or ignore the less-profitable segments. Disruptors tend to take advantage of those opportunities, such as by serving niche markets or less-affluent customers.

The incumbents tend to ignore such startups because the new contender is relatively small, lacks resources and tends to have far less brand recognition. Moreover, the new contender has decided to address a market segment the market leaders have consciously decided not to serve. Then, when the new contender succeeds in those markets, it has to expand into other segments to continue growing and improving profitability. Ultimately, when the new contender starts to gain market share in the coveted market segments, the incumbents react, albeit later than they should have. As more market share is lost, the incumbents try to copy what the emerging leader is doing, which may not work well, if at all.


“If you say in the next six months we’re going to execute this transformation project and at the end of that we’ll emerge from this cocoon a new butterfly with everything we need to remain competitive from that point forward, you missed the point,” said Goldman. “There isn’t a set transformation that will keep you forever in a competitive state, ready to respond to the business environment. The only way to do that is to transform the fundamental parts of the organization so you are in a constant state of evolution and disruption.”

Achieving that state requires changing the company’s culture, leadership structure and tools.

By comparison, disruptors don’t have to transform because they’re new and have the luxury of creating a culture, leadership structure and tool set. Following are a few other things that separate the disruptors from the disrupted.

#1:  They’re unified

Disruptors are on a mission to affect major economic, business, industry, or societal changes. They have a vision and purpose that are woven into everything they do and the mindsets of their employees.

Incumbents often form a separate innovation group or hire a mover-and-shaker with a C-title, such as a Chief Data Officer (CDO) to lead a separate group. This powerful and brilliant executive, who typically comes from a high-profile tech company or a company in another industry, is given a massive budget, an enviable working environment and the freedom to hire the people necessary for success. However, there is a fundamental flaw in the approach.

“They set the group up [as a separate entity] for a whole bunch of reasons: 1) We know our culture will kill it if we put it inside the business, and 2) The kind of people we need working in that division are never going to work for our company if we try to hire them outright,” said Goldman.

#2:  They have an authentic culture

Every company has a culture by design or by default. Since disruptors lack a decades-plus legacy, they don’t have to transform from something traditional to something modern.

They recognize the importance of culture and the need for everyone in the company to not only buy into the culture but to advocate, promote, and advance it. Having a unified culture enables the realization of a unified vision and the execution of a unified mission.

In contrast, incumbents try to counter the effects of disruptors by attempting to mimic them. In doing so, they miss a very important point, which is what works at Google works because Google is Google. Every company is unique in terms of its people, processes, tools and value proposition.

“The one value that you see coming out of [the tours given by innovative companies] is to come back terrified and convinced of the need to make change,” said Goldman. “[Usually, the CDO and C-suite executives are] going to come back with good notes of how they might do that, but they won’t recognize the depth of the threats they’re facing.”

#3:  They reflect modern values

Startups have the benefit of being born into whatever “modern” era exists at their founding date. Today’s startups reflect the values of the younger generations including Millennials and Generation Z (Gen Z), both of which are highly tech-savvy.

“It’s not just you have a different set of values and priorities,” said Goldman. “You have an intimate level of familiarity with technology that the leaders don’t have because they weren’t born into that age.”

Goldman once met with a group of C-suite executives who couldn’t understand why their successful life sciences company had trouble attracting and retaining younger employees. To better understand the issue, they asked employees for suggestions, many of which they considered ridiculous. For example, they didn’t understand why younger employees would want to wear jeans instead of suits. How would that improve work effectiveness?

“The reality is that the executives who made a company successful are disconnected from what people want in the workforce today,” said Goldman. “People will take a pay cut to work at a business where they’re deeply aligned with the values of the company and they believe they’re doing good for the world. In my generation and the generation before me, you looked for a well-paying job, and company values were on a poster with a soaring eagle on it in the break room.”

#4:  They have the latest tools

Cloud-based technologies enable startups to do what was cost-prohibitive in the past. Now, businesses of all sizes have affordable access to massive computer power, storage, data analytics, and AI. More importantly, they can experiment and iterate in low-risk, low-cost environments and scale as necessary to meet the growing requirements of their expanding customer bases.

In contrast, the life sciences company C-suite executives didn’t understand why employees didn’t want to use Lotus Notes!

#5:  Their leaders are enablers

Disruptors attract, hire, and cultivate highly-effective people. Changing the status quo of an industry or society at large not only requires bright, driven people, it requires leaders who are not threatened by other bright, driven people.

In a command-and-control hierarchical structure, power and great ideas may be reserved for the chosen few.

“Traditional roles are managers who are there to make sure things happen on time and on budget, and that you hire the right people to do the job,” said Goldman. “When it comes to topics like transformation, innovation, and disruption, you should be a gardener. Your job as a gardener is to make sure your plants get enough sunlight, water, and nutrients. You can weed out the weeds that would have prevented them from growing and you can protect the garden from being raided by animals.”

Leaders should be enablers instead of managers. Enablers want great people to do great work, so they create an environment that includes the freedom to do that. The traditional management mentality can be stifling by comparison when people can only rise to whatever level of competence or incompetence the manager himself or herself possesses.

#6:  Change drives them

Change is what drives innovation and disruption. It’s about affecting change and also having the agility to change when an experiment or even the entire business model fails.

Goldman said even though incumbents may be out interviewing customers and iterating products rapidly in response, they’re not doing the same internally. Heads of innovation tend to be brilliant at product innovation, but they’re not necessarily change agents.

“The actual MVP customers you should talk to are the P&L holders that will have to sponsor [the innovation lab],” said Goldman. “Don’t present something that’s so radical and transformative [the P&L holders] look at their products and realize they’ll probably lose their job.”

#7:  Their value proposition trumps products

Disruptive companies tend to view the world differently than their incumbent counterparts. The disruptive companies think in terms of value; incumbents tend to have product and solution portfolios that are presented and regarded as such. They articulate use cases, but they’re missing their company’s fundamental value proposition.

For example, when a fertilizer company was going through a transformation, it “did all the right things,” according to Goldman. It changed the business, empowered the leaders and trained all employees to think creatively using tools and modern problem-solving approaches. During the process, the company’s identity shifted from being a fertilizer company to one that improves crop yields. While the distinction may seem slight, the new definition enables the company to imagine and provide other products and services that improve crop yields. It’s now using satellite data to tell farmers about crop issues they’re not aware of so they can remediate the issues with unprecedented precision (arguably using the company’s fertilizer products). The satellite data is also the basis for a new subscription-based service that guarantees a certain level of crop yield improvement.

#8:  They create best practices

#9:  They have the right talent

Disruptors couldn’t accomplish what they do if they didn’t have “the right” teams in place. Like any other organization, not everyone makes the cut as the company evolves or chooses to stay as circumstances change. However, they’re keenly aware of their goals and what must be done to achieve them, part of which is ensuring the right people are in the right jobs.

“Employee engagement is gaining momentum. How you keep people engaged has got to be front and center to your strategy,” said Randy Mysliviec, Managing Director of the Resource Management Institute. “Not only does talent management need to be more fluid, you can’t expect people to stay at your company for 20 years regardless of how you treat them.”

In the last two years, enterprise IT resource management has shifted from a simple supply and demand model to a more forward-looking strategic model that considers where the company wants to be in six months. So, when it comes to recruitment, hiring managers are now con

How to Prepare for the Machine-Aided Future

Intelligent automation is going to impact companies and individuals in profound ways, some of which are not yet foreseeable. Unlike traditional automation, which lacks an AI element, intelligent automation will automate more kinds of tasks in an organization, at all levels within an organization.

As history has shown, rote, repetitive tasks are ripe for automation. Machines can do them faster and more accurately than humans 24/7/365 without getting bored, distracted or fatigued.

When AI and automation are combined for intelligent automation, the picture changes dramatically. With AI, automated systems are not just capable of doing things; they’re also capable of making decisions. Unlike manufacturing automation which replaced factory-floor workers with robots, intelligent automation can impact highly-skilled, highly-educated specialists as well as their less-skilled, less-educated counterparts.

Intelligent automation will affect everyone

The non-linear impact of intelligent automation should serve as a wakeup call to everyone in an organization from the C-suite down. Here’s why: If the impact of intelligent automation were linear, then the tasks requiring the least skill and education would be automated first and tasks requiring the most skill and education would be automated last. Business leaders could easily understand the trajectory and plan for it accordingly.

However, intelligent automation is impacting industries in a non-linear fashion. For example, legal AI platform provider LawGeex conducted an experiment that was vetted by professors from Duke University School of Law, Stanford University and an independent attorney to determine which could review contracts more accurately: AI or lawyers. In the experiment, 20 lawyers took an average of 92 minutes to review five non-disclosure agreements (NDAs) in which there were 30 legal issues to spot. The average accuracy rating was 85%. The AI completed the same task in 26 seconds with a 94% accuracy level. Similar results were achieved in a study conducted by researchers at the University of California, San Francisco (UCSF). That experiment involved board-certified echocardiographers. In both cases, AI was better than trained experts at pattern recognition.

Interestingly, most jobs involve some rote, repetitive tasks and pattern recognition. CEOs may consider themselves exempt from intelligent automation, but Jack Ma, billionaire founder and CEO of ecommerce platform Alibaba, disagrees: “AI remembers better than you, it counts faster than you, and it won’t be angry with competitors.”

What the C-Suite Should Consider

Intelligent automation isn’t something that will only affect other people. It will affect you directly and indirectly. How you handle the intelligently automated future will matter to your career and the health of your organization.

You can approach the matter tactically if you choose. If you take this path, you’ll probably set a goal of using automation to reduce the workforce by XX%.

A strategic approach considers the bigger picture, including the potential competitive effects, the economic impact of a divided labor workforce, what “optimized” business processes might look like, and the ramifications for human capital (e.g., job reassignment, new roles, reimagined roles, upskilling).

The latter approach is more constructive because work automation is not an end in itself. The reason business leaders need to think about intelligent automation now is underscored by a recent McKinsey study, which suggested that 30% of the tasks performed in 6 out of 10 jobs could be automated today.

Tomorrow, there will be even more opportunities for intelligent automation as the technology advances, so business leaders should consider its potential impacts on their organizations.

For argument’s sake, if 30% of every job in your organization could be automated today, what tasks do you consider ripe for automation? If those tasks were automated, how would it affect the organization’s structure, operations and value proposition? How would intelligent automation impact specific roles and departments? How might you lead the workforce differently and how might your expectations of the workforce change? What ongoing training are you prepared to provide so your workforce can adapt as more types of tasks are automated?

Granted, business leaders have little spare time to ponder what-if questions, but these aren’t what-if questions; they’re what-when questions. You can either anticipate the impact, observe and adjust, or ignore the trend and react after the fact.

The latter strategy didn’t work so well for brick-and-mortar retailers when the ecommerce tidal wave hit…

What Managers Should Consider

The C-suite should set the tone for what the intelligently automated future looks like for the company and its people. Your job will be to manage the day-to-day aspects of the transition.

As a manager, you’re constantly dealing with people issues. In this case, some people will regard automation as a threat even if the C-suite is approaching it responsibly and with compassion. Others will naturally evolve as the people-machine partnership evolves.

The question for managers is: How might automation impact their teams? How might the division of labor shift? What parts of which jobs do you think are ripe for automation? If those tasks were automated, how would people’s roles change? How would your group change? Likely, new roles would be created, but what would they be? What sort of training would your people need to succeed in their new positions?

You likely haven’t taken the time to ponder these and related questions, perhaps because they haven’t occurred to you yet. As a team leader, you owe it to yourself and your team to think about how the various scenarios might play out, as well as the recommendations you’d have for your people and the C-suite.

What Employees Should Consider

Everyone should consider how automation might affect their jobs, including managers and members of the C-suite, because everyone will be impacted by it somehow.

In this case, think about your current position and allow yourself to imagine what part of your job could be automated. Start with the boring routine stuff you do over and over, the kinds of things you wish you didn’t have to do. Likely, those things could be automated.

Next, consider the parts of your job that require pattern recognition. If your job entails contract review and contract review is automated, what would you do in addition to overseeing the automated system’s work? As the LawGeex experiment showed, AI is highly accurate, but it isn’t perfect.

Your choice is fight or flight. You can give in to the fear that you may be automated out of existence and act accordingly, which will likely result in a self-fulfilling prophecy. Alternatively, consider what parts of your job could be automated and reimagine your future. If you no longer had to do X, what would Y be? What might your job title be, and what would your scope of responsibilities be?

If you consider how intelligent automation may impact your career, you’ll be in a better position to evolve as things change and you’ll be better prepared to discuss the matter with your superiors.

The Bottom Line

The intelligently automated future is already taking shape. While the future impacts aren’t entirely clear yet, business leaders, managers and professionals can help shape their own future and the future of their companies by understanding what’s possible and how that might affect the business, departments and individual careers. Everyone will have to work together to make intelligent automation work well for the company and its people.

The worst course of action is to ignore it, because it isn’t going away.

Workforce Analytics Move Beyond HR

Workforce analytics have traditionally been framed around HR’s use of them, yet their value can extend to significant overall business impacts. Realizing this, more business leaders are demanding views into workforce dynamics that unearth insights that weren’t apparent before.

Businesses often claim that talent is their greatest asset, but they’re not always able to track what’s working, what isn’t and why. For example, in Deloitte Consulting’s 2018 Global Human Capital Trends report, 71% of survey participants said their companies consider people analytics a high priority, but only 10% are “very ready” to deal with it. According to David Fineman, specialist leader at Deloitte Consulting, who co-authored the report, business leaders want insights into six focus areas: workforce planning and shaping, recruiting and staffing, talent optimization, culture and engagement, performance and rewards, and HR service delivery.

“The important distinction between focus areas that are addressed today compared with the focus areas from prior years is the emphasis on issues that are important to business leaders, not limiting analytics recipients to an HR audience,” said Fineman.

In fact, the Deloitte report explicitly states that board members and CEOs want access to people analytics because they’re “impatient with HR teams that can’t deliver actionable information and insights…”

As businesses continue to digitize more tasks and functions, it’s essential for them to understand the current makeup of their workforces, what talent will be needed in the future, and what’s necessary to align the two.

Shebani Patel, People Analytics leader at professional services firm PricewaterhouseCoopers (PwC) said that companies now want to understand employee journeys from onboarding to daily work experiences to exit surveys.

“They’re trying to get more strategic about how all of that comes together to build and deliver an exceptional [employee] experience that ultimately has ROI attached to it,” she said.

What companies are getting right

The availability of more people analytics tools enables businesses to understand their workforces in greater detail than ever before. However, the insights sought are not just insights about people, but rather how those insights directly relate to business value such as achieving higher levels of customer satisfaction or improving product quality. Businesses are also placing more emphasis on organizational network analysis (ONA) which provides insight into the interactions and relationships among people.

While it’s technologically possible to track what individuals do, there are also privacy concerns that are best addressed using clustering techniques. For example, KPMG’s clients are looking at email patterns, chat correspondence and calendared meetings to understand how team behavior correlates with performance, productivity or sales.

“Organizations today are using [the data] to derive various hypotheses and then use analytics to prove them out,” said Paul Krasilnick, director, Advisory Services at KPMG. “They recognize that it needs to be done cautiously in terms of data privacy and access to information, but they also recognize the value of advancing their internal capabilities and maturity from descriptive reporting to more prescriptive [analytics].”
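KPMG’s actual tooling isn’t described here, but the team-level approach can be illustrated with a small, hypothetical sketch using scikit-learn: interaction counts are aggregated per team rather than per person, and teams with similar collaboration patterns are clustered so the clusters can later be correlated with performance or sales figures. The feature values below are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

# One row per team (not per individual), keeping the analysis aggregate and
# privacy-preserving: [emails per week, chat messages per week, meetings per week]
team_features = np.array([
    [120, 340, 12],
    [ 95, 410, 15],
    [300,  80, 30],
    [280, 100, 28],
    [ 60, 500,  8],
])

# Group teams with similar collaboration patterns; the cluster labels can then be
# compared against performance, productivity, or sales data to test a hypothesis.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(team_features)
print(kmeans.labels_)
```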

According to Deloitte’s Fineman, high-performing people analytics teams are characterized by increasing analytics acumen within the HR function and among stakeholders.

What needs to improve

As with any analytics journey, what needs to improve depends on an organization’s level of mastery. All organizations have people data, but they don’t necessarily have clean data, and the mere existence of data does not mean it’s readily usable or valuable.

MIT Chaplain: Emerging Tech Leaders Care About Ethics

The tech industry’s approach to innovation will likely undergo a major shift as new generations of tech leaders come to power. Historically, innovation has been economically motivated for the benefit of individuals and shareholders. That will continue to be true, but innovation will likely evolve to consider its impact on the world in greater depth than it has in the past.

“As an innovator, you may be able to make some [short-]term gain without having to worry or be concerned about ethics whatsoever,” said Greg Epstein, the newly appointed first humanist chaplain at the Massachusetts Institute of Technology (MIT) and executive director of The Humanist Hub. “You may be able to achieve some things that we define as success in this world without caring about or even paying any attention to ethics, but in the long term, [that approach] will likely have some directly damaging consequences in your life. What I’m seeing students prepare for today is not just conventional success but to have an inner life that is meaningful.”

Values change from generation to generation, so it should be no surprise that what fueled the tech industry’s direction to date may change in the future. While it’s true that some of today’s tech leaders demonstrate a capacity for doing something good for society, the general trajectory is to innovate, grow, exit, maybe repeat those steps a few times, and only then turn to causes such as helping underprivileged individuals.

Doing something “good” later in life is consistent with mid-life realizations of mortality, when one questions the legacy one is leaving behind. According to Epstein, Millennials and Generation Z are more likely to ponder the societal value of their contributions earlier in their careers than Baby Boomers or Generation X were.

Tech Innovation for Good Versus Tech Innovation Is Good

Arguably, technology innovation has always focused on the positive, if “the positive” is defined as achieving the art of the possible. For example, cars are safer and more reliable than they once were as a result of technology innovation.

However, the more technologically dependent people and things become, the more vulnerable they are to attacks. In other words, the negative potential consequences of new technology tend to be an afterthought, with the exception of products and services that are designed to protect people from negative consequences, such as cybersecurity products.

In previous generations, technology impacted society more slowly than it does today, so the mainstream positive and negative effects tended to take longer to realize. For example, adoption of the first mobile phones was relatively slow because they were large and heavy, and cellular service was spotty at best. Now, entire industries are being disrupted seemingly overnight by companies such as Uber and Airbnb.

Generational Differences Matter

Each generation is shaped, in part, by the world in which it matures. Over the past several decades, each subsequent generation has been exposed to not only more technology, but more sophisticated technology. The “new normal” is a connected world of devices, many of which are recording everything, and social media networks through which anything and everything can be shared.

“Increasingly, young people on campus want to create collaborative technology [so] that people can have a fair opportunity in life and human beings can help one another to achieve a better [quality of] life than we’ve ever had before,” said Epstein. “I think that people are hungry for conversations about what that could look like [and] what that could mean because human beings have never had this responsibility before to transform our collective lives for the better.”

Innovation for a Higher Purpose

Thus far, technology innovators have followed a pattern, which is to innovate, to capitalize, and to then deal with negative consequences later if and when they arise. In other words, the tech industry has been focused on the art of the possible, generally without regard for the entire spectrum of outcomes that results.

AI Challenge: Achieving Artificial Empathy

Businesses of all kinds are investing in AI to improve operations and customer experience. However, as the average person experiences on a daily basis, interacting with machines can be frustrating when they’re incapable of understanding emotions.

For one thing, humans don’t always communicate clearly with machines, and vice versa. The inefficiencies caused by such miscommunications tend to frustrate end users. Even more unsettling in such scenarios is the failure of the system to recognize emotion and adapt.

To facilitate more effective human-to-machine interactions, artificial intelligence systems need to become more human-like, and to do that, they need the ability to understand emotional states and act accordingly.

A Spectrum of Artificial Emotional Intelligence

Merriam-Webster’s primary definition of empathy is:

“The action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts and experience of another of either the past or present without having the feelings, thoughts and experience fully communicated in an objectively explicit manner; also: the capacity for this.”

To achieve artificial empathy, according to this definition, a machine would have to be capable of experiencing emotion. Before machines can do that, they must first be able to recognize emotion and comprehend it.

Non-profit research institute SRI International and others have succeeded with the recognition aspect, but understanding emotion is more difficult. For one thing, individual humans tend to interpret and experience emotions differently.

“We don’t understand all that much about emotions to begin with, and we’re very far from having computers that really understand that. I think we’re even farther away from achieving artificial empathy,” said Bill Mark, president of Information and Computing Services at SRI International, whose AI team invented Siri. “Some people cry when they’re happy, a lot of people smile when they’re frustrated. So, very simplistic approaches, like thinking that if somebody is smiling they’re happy, are not going to work.”

Emotional recognition is an easier problem to solve than emotional empathy because, given a huge volume of labeled data, machine learning systems can learn to recognize patterns that are associated with a particular emotion. The patterns of various emotions can be gleaned from speech (specifically, word usage in context, voice inflection, etc.), as well as body language, expressions and gestures, again with an emphasis on context. Like humans, the more sensory input a machine has, the more accurately it can interpret emotion.
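
That labeled-data framing maps directly onto ordinary supervised learning: given utterances tagged with emotions, a model learns which patterns co-occur with each label. Below is a toy sketch using scikit-learn and text features only; the training examples are invented and far too small for real use, and a production system would also incorporate voice, facial and gesture signals.

```python
# Toy emotion-recognition sketch: classify short utterances into emotion
# labels using word features. The examples are invented and serve only to
# illustrate the labeled-data approach described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "this is wonderful, thank you so much",
    "I love how easy that was",
    "this is the third time it failed",
    "why does nothing ever work",
    "I'm not sure what to do next",
    "can you explain that again",
]
labels = ["happy", "happy", "frustrated", "frustrated", "confused", "confused"]

# Turn words into features, then fit a simple classifier on the labeled data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, labels)

print(model.predict(["it crashed again and I lost my work"]))  # likely 'frustrated'
```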

Recognition is not the same as understanding, however. For example, computer vision systems can recognize cats or dogs based on labeled data, but they don’t understand the behavioral characteristics of cats or dogs, that the animals can be pets or that people tend to love them or hate them.

Similarly, understanding is not empathy. For example, among three people, one person may be angry, which the other two understand. However, the latter two are not empathetic: The second person is dispassionate about the first person’s anger and the third person finds the first person’s anger humorous.

Not long ago, Amazon Alexa startled some users by bursting into laughter for no apparent reason. It turns out the system heard “Alexa, laugh” when the user said no such thing. Now imagine a system laughing at a chronically ill, depressed, anxious, or suicidal person who is using the system as a therapeutic aid.

“Siri and systems like Siri are very good at single-shot interactions. You ask for something and it responds,” said Mark. “For banking, shopping or healthcare, you’re going to need an extended conversation, but you won’t be able to state everything you want in one utterance so you’re really in a joint problem-solving situation with the system. Some level of emotional recognition and the ability to act on that recognition is required for that kind of dialogue.”
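
A crude way to picture “acting on that recognition” in an extended dialogue is to let the detected emotional state steer the system’s next move. In this purely illustrative sketch, detect_emotion stands in for a real classifier (such as the one sketched earlier), and the escalation rules are invented, not taken from Siri or any SRI system.

```python
# Illustrative only: route a dialogue turn based on a detected emotional state.
# detect_emotion() is a placeholder for a trained emotion classifier.

def detect_emotion(utterance: str) -> str:
    # Placeholder heuristic; a real system would use a trained model.
    if any(w in utterance.lower() for w in ("again", "failed", "ridiculous")):
        return "frustrated"
    return "neutral"

def next_response(utterance: str, failed_attempts: int) -> str:
    emotion = detect_emotion(utterance)
    if emotion == "frustrated" and failed_attempts >= 2:
        return "I'm sorry this has been difficult. Let me connect you with a person."
    if emotion == "frustrated":
        return "Sorry about that. Let's try a different way: what are you trying to do?"
    return "Sure - can you give me a bit more detail?"

print(next_response("the transfer failed again", failed_attempts=2))
```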

Personalization versus Generalization

Understanding the emotions of a single individual is difficult enough, because not everyone expresses or interprets emotions in the same way. However, like humans, machines will best understand the people with whom they have extensive experience.

“If there has been continuous contact between a person and a virtual assistant, the virtual assistant can build a much better model,” said Mark. “Is it possible to generalize at all? I think the answer to that is, ‘yes,’ but it’s limited.”

Generalizing is more difficult, given the range of individual differences and everything that causes individuals to differ, including nature, nurture and culture.

Recognizing emotion and understanding emotion are matters of pattern recognition for both humans and machines. According to Keith Strier, EY Advisory Global and Americas AI leader at professional services firm EY, proofs of concept are now underway in the retail industry to personalize in-store shopping experiences.

“We’re going to see this new layer of machine learning, computer vision and other tools applied to reading humans and their emotions,” said Strier. “[That information] will be used to calibrate interactions with them.”

In the entertainment industry, Strier foresees companies monitoring the emotional reactions of theater audiences so that directing and acting methods, as well as special effects and music, can deliver more impactful experiences that are scarier, funnier or more dramatic.

“To me it’s all the same thing: math,” said Strier. “It’s really about the specificity of math and what you do with it. You’re going to see a lot of research papers and POCs coming out in the next year.”

Personalization Will Get More Personal

Marketers have been trying to personalize experiences using demographics, personas and other means to improve customer loyalty as well as increase engagement and share of wallet. However, as more digital tools have become available, such as GPS and software usage analytics, marketers have been attempting to understand context so they can improve the economics and impact of campaigns.

“When you add [emotional intelligence], essentially you can personalize not just based on who I am and what my profile says, but my emotional state,” said Strier. “That’s really powerful because you might change the nature of the interaction, by changing what you say or by changing your offer completely based on how I feel right now.”

Artificial Emotional Intelligence Will Vary by Use Case

Neither AI nor emotions are one thing. Similarly, there is not just one use case for artificial emotional intelligence, be it emotional recognition, emotional understanding or artificial empathy.

“The actual use case matters,” said Strier. “Depending on the context, it’s going to be super powerful or maybe not good enough.”

A national bank is currently piloting a smart ATM that uses a digital avatar to read customers’ expressions. As the avatar interacts with customers, it adapts its responses.

“We can now read emotions in many contexts. We can interpret tone, we can triangulate body language and words and eye movements and all sorts of proxies for emotional state. And we can learn over time whether someone is feeling this or feeling that. So now the real question is what do we do with that?” said Strier. “Artificial empathy changes the art of the possible, but I don’t think the world quite knows what to do with it yet. I think the purpose question is probably going to be a big part of what [is] going to occupy our time.”

Bottom Line

Artificial emotional intelligence can improve the quality and outcomes of human-to-machine interactions, but it will take different forms over time, some of which will be more sophisticated and accurate than others.

Artificial empathy raises the question of whether machines are capable of experiencing emotions in the first place, which, itself, is a matter of debate. For now, it’s fair to say that artificial emotional intelligence, and its continued refinement, are both important and necessary to the advancement of AI.

Lisa Morgan Will Address A/IS Ethics at the University of Arizona Eller College of Management’s Annual Executive Ethics Symposium

Lisa Morgan will be speaking at this year’s University of Arizona Eller College of Management’s Annual Executive Ethics Symposium, an invitation-only event in September 2018. Her presentation will address the need for AI ethics given the current state of AI and innovation, as well as the opportunities and challenges shaping the advancement of AI ethics.
