Lisa Morgan's Official Site

Strategic Insights and Clickworthy Content Development


9 Traits of Emerging Disruptors

Business leaders are understandably concerned about disruption. Business as usual is a dangerous proposition in an age when entire industries can be upended by a disruptor armed with cloud-based computing power, lots of data, and effective ways of leveraging that data.

The typical response to the threat of disruption is digital transformation. However, digital transformation tends to be approached as an if/then statement. Specifically, if we embark on a digital transformation journey, then we’ll be able to compete effectively in the future.

“What they’re not recognizing is you have failed in your business,” said Jay Goldman, co-author of the New York Times bestseller THE DECODED COMPANY: Know Your Talent Better Than You Know Your Customers, and co-founder and managing director of digital workplace solution provider Sensei Labs. “You’re not being rewarded for doing something right.”

The quantum shifts that disruptions represent don’t happen overnight. A disruptor, like most startups, has an idea it hopes will change the world. It intends to challenge the status quo that has been created by an established order of market leaders with formidable market shares and deep pockets. However, the market leaders don’t serve everyone by design because not all business relationships are equally attractive or profitable, so they tend to focus on the most profitable segments and de-emphasize or ignore the less-profitable segments. Disruptors tend to take advantage of those opportunities, such as by serving niche markets or less-affluent customers.

The incumbents tend to ignore such startups because the new contender is relatively small, lacks resources and tends to have far less brand recognition. Moreover, the new contender has decided to address a market segment the market leaders have consciously decided not to serve. Then, when the new contender succeeds in those markets, it has to expand into other segments to continue growing and improving profitability. Ultimately, when the new contender starts to gain market share in the coveted market segments, the incumbents react, albeit later than they should have. As more market share is lost, the incumbents try to copy what the emerging leader is doing, which may not work well, if at all.


“If you say in the next six months we’re going to execute this transformation project and at the end of that we’ll emerge from this cocoon a new butterfly with everything we need to remain competitive from that point forward, you missed the point,” said Goldman. “There isn’t a set transformation that will keep you forever in a competitive state, ready to respond to the business environment. The only way to do that is to transform the fundamental parts of the organization so you are in a constant state of evolution and disruption.”

Achieving that state requires changing the company’s culture, leadership structure and tools.

By comparison, disruptors don’t have to transform because they’re new and have the luxury of creating a culture, leadership structure and tool set. Following are a few other things that separate the disruptors from the disrupted.

#1:  They’re unified

Disruptors are on a mission to effect major economic, business, industry, or societal changes. They have a vision and purpose that are woven into everything they do and into the mindsets of their employees.

Incumbents often form a separate innovation group or hire a mover-and-shaker with a C-title, such as a Chief Data Officer (CDO), to lead it. This powerful and brilliant executive, who typically comes from a high-profile tech company or a company in another industry, is given a massive budget, an enviable working environment and the freedom to hire the people necessary for success. However, there is a fundamental flaw in the approach.

“They set the group up [as a separate entity] for a whole bunch of reasons: 1) We know our culture will kill it if we put it inside the business and 2) The kind of people we need working in that division are never going to work for our company if we try to hire them outright,” said Goldman.

#2:  They have an authentic culture

Every company has a culture by design or by default. Since disruptors lack a decades-plus legacy, they don’t have to transform from something traditional to something modern.

They recognize the importance of culture and the need for everyone in the company to not only buy into the culture but to advocate, promote, and advance it. Having a unified culture enables the realization of a unified vision and the execution of a unified mission.

In contrast, incumbents try to counter the effects of disruptors by attempting to mimic them. In doing so, they miss a very important point: what works at Google works because Google is Google. Every company is unique in terms of its people, processes, tools and value proposition.

“The one value that you see coming out of [the tours given by innovative companies] is to come back terrified and convinced of the need to make change,” said Goldman. “[Usually, the CDO and C-suite executives are] going to come back with good notes of how they might do that, but they won’t recognize the depth of the threats they’re facing.”

#3:  They reflect modern values

Startups have the benefit of being born into whatever “modern” era exists at their founding date. Today’s startups reflect the values of the younger generations including Millennials and Generation Z (Gen Z), both of which are highly tech-savvy.

“It’s not just you have a different set of values and priorities,” said Goldman. “You have an intimate level of familiarity with technology that the leaders don’t have because they weren’t born into that age.”

Goldman once met with a group of C-suite executives who couldn’t understand why their successful life sciences company had trouble attracting and retaining younger employees. To better understand the issue, they asked employees for suggestions, many of which they considered ridiculous. For example, they didn’t understand why younger employees would want to wear jeans instead of suits. How would that improve work effectiveness?

“The reality is that the executives who made a company successful are disconnected from what people want in the workforce today,” said Goldman. “People will take a pay cut to work at a business where they’re deeply aligned with the values of the company and they believe they’re doing good for the world. In my generation and the generation before me, you looked for a well-paying job, and company values were on a poster with a soaring eagle on it in the break room.”

#4:  They have the latest tools

Cloud-based technologies enable startups to do what was cost-prohibitive in the past. Now, businesses of all sizes have affordable access to massive computing power, storage, data analytics, and AI. More importantly, they can experiment and iterate in low-risk, low-cost environments and scale as necessary to meet the growing requirements of their expanding customer bases.

In contrast, the life sciences company C-suite executives didn’t understand why employees didn’t want to use Lotus Notes!

#5:  Their leaders are enablers

Disruptors attract, hire, and cultivate highly-effective people. Changing the status quo of an industry or society at large not only requires bright, driven people, it requires leaders who are not threatened by other bright, driven people.

In a command-and-control hierarchical structure, power and great ideas may be reserved for the chosen few.

“Traditional roles are managers who are there to make sure things happen on time and on budget, and that you hire the right people to do the job,” said Goldman. “When it comes to topics like transformation, innovation, and disruption, you should be a gardener. Your job as a gardener is to make sure your plants get enough sunlight, water, and nutrients. You can weed out the weeds that would have prevented them from growing and you can protect the garden from being raided by animals.”

Leaders should be enablers instead of managers. Enablers want great people to do great work, so they create an environment that includes the freedom to do that. The traditional management mentality can be stifling by comparison when people can only rise to whatever level of competence or incompetence the manager himself or herself possesses.

#6:  Change drives them

Change is what drives innovation and disruption. It’s about effecting change and also having the agility to change when an experiment or even the entire business model fails.

Goldman said that even though incumbents may be out interviewing customers and iterating products rapidly in response, they’re not doing the same internally. Heads of innovation tend to be brilliant at product innovation, but they’re not necessarily change agents.

“The actual MVP customers you should talk to are the P&L holders that will have to sponsor [the innovation lab],” said Goldman. “Don’t present something that’s so radical and transformative [the P&L holders] look at their products and realize they’ll probably lose their job.”

#7:  Their value proposition trumps products

Disruptive companies tend to view the world differently than their incumbent counterparts. Disruptors think in terms of value; incumbents tend to have product and solution portfolios that are presented and regarded as such. Incumbents articulate use cases, but they miss their company’s fundamental value proposition.

For example, when a fertilizer company was going through a transformation, it “did all the right things,” according to Goldman. It changed the business, empowered the leaders and trained all employees to think creatively using tools and modern problem-solving approaches. During the process, the company’s identity shifted from being a fertilizer company to one that improves crop yields. While the distinction may seem slight, the new definition enables the company to imagine and provide other products and services that improve crop yields. It’s now using satellite data to tell farmers about crop issues they’re not aware of so they can remediate the issues with unprecedented precision (arguably using the company’s fertilizer products). The satellite data is also the basis for a new subscription-based service that guarantees a certain level of crop yield improvement.

#8:  They create best practices

#9:  They have the right talent

Disruptors couldn’t accomplish what they do if they didn’t have “the right” teams in place. Like any other organization, not everyone makes the cut as the company evolves, and not everyone chooses to stay as circumstances change. However, disruptors are keenly aware of their goals and what must be done to achieve them, part of which is ensuring the right people are in the right jobs.

“Employee engagement is gaining momentum. How you keep people engaged has got to be front and center to your strategy,” said Randy Mysliviec, Managing Director of the Resource Management Institute. “Not only does talent management need to be more fluid, you can’t expect people to stay at your company for 20 years regardless of how you treat them.”

In the last two years, enterprise IT resource management has shifted from a simple supply and demand model to a more forward-looking strategic model that considers where the company wants to be in six months. So, when it comes to recruitment, hiring managers are now con

How to Prepare for the Machine-Aided Future

Intelligent automation is going to impact companies and individuals in profound ways, some of which are not yet foreseeable. Unlike traditional automation, which lacks an AI element, intelligent automation will automate more kinds of tasks in an organization, at all levels within an organization.

As history has shown, rote, repetitive tasks are ripe for automation. Machines can do them faster and more accurately than humans 24/7/365 without getting bored, distracted or fatigued.

When AI and automation are combined for intelligent automation, the picture changes dramatically. With AI, automated systems are not just capable of doing things; they’re also capable of making decisions. Unlike manufacturing automation which replaced factory-floor workers with robots, intelligent automation can impact highly-skilled, highly-educated specialists as well as their less-skilled, less-educated counterparts.

Intelligent automation will affect everyone

The non-linear impact of intelligent automation should serve as a wakeup call to everyone in an organization from the C-suite down. Here’s why: If the impact of intelligent automation were linear, then the tasks requiring the least skill and education would be automated first and tasks requiring the most skill and education would be automated last. Business leaders could easily understand the trajectory and plan for it accordingly.

However, intelligent automation is impacting industries in a non-linear fashion. For example, legal AI platform provider LawGeex conducted an experiment that was vetted by professors from Duke University School of Law, Stanford University and an independent attorney to determine which could review contracts more accurately: AI or lawyers. In the experiment, 20 lawyers took an average of 92 minutes to review five non-disclosure agreements (NDAs) in which there were 30 legal issues to spot. The average accuracy rating was 85%. The AI completed the same task in 26 seconds with a 94% accuracy level. Similar results were achieved in a study conducted by researchers at the University of California, San Francisco (UCSF). That experiment involved board-certified echocardiographers. In both cases, AI was better than trained experts at pattern recognition.

Interestingly, most jobs involve some rote, repetitive tasks and pattern recognition. CEOs may consider themselves exempt from intelligent automation, but Jack Ma, billionaire founder of ecommerce platform Alibaba, disagrees: “AI remembers better than you, it counts faster than you, and it won’t be angry with competitors.”

What the C-Suite Should Consider

Intelligent automation isn’t something that will only affect other people. It will affect you directly and indirectly. How you handle the intelligently automated future will matter to your career and the health of your organization.

You can approach the matter tactically if you choose. If you take this path, you’ll probably set a goal of using automation to reduce the workforce by XX%.

A strategic approach considers the bigger picture, including the potential competitive effects, the economic impact of a divided labor workforce, what “optimized” business processes might look like, and the ramifications for human capital (e.g., job reassignment, new roles, reimagined roles, upskilling).

The latter approach is more constructive because work automation is not an end in itself. The reason business leaders need to think about intelligent automation now is underscored by a recent McKinsey study, which suggested that 30% of the tasks performed in 6 out of 10 jobs could be automated today.

Tomorrow, there will be even more opportunities for intelligent automation as the technology advances, so business leaders should consider its potential impacts on their organizations.

For argument’s sake, if 30% of every job in your organization could be automated today, what tasks do you consider ripe for automation? If those tasks were automated, how would it affect the organization’s structure, operations and value proposition? How would intelligent automation impact specific roles and departments? How might you lead the workforce differently and how might your expectations of the workforce change? What ongoing training are you prepared to provide so your workforce can adapt as more types of tasks are automated?
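To get a feel for the scale behind those questions, a back-of-envelope calculation helps. The sketch below is a hypothetical illustration: the head count is an assumption, and the percentages are the McKinsey figures cited earlier.

```python
# Hypothetical back-of-envelope estimate of automatable capacity.
headcount = 1_000                  # assumed organization size (illustrative)
share_of_jobs_affected = 0.6       # "6 out of 10 jobs" (McKinsey figure above)
share_of_tasks_automatable = 0.3   # "30% of the tasks" in those jobs

automatable_fte = headcount * share_of_jobs_affected * share_of_tasks_automatable
print(f"Roughly {automatable_fte:.0f} full-time-equivalents of work could be "
      f"candidates for automation in a {headcount:,}-person organization.")
```

The point is not the precise number; it is that the freed-up capacity has to go somewhere, which is exactly what the questions above are probing.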

Granted, business leaders have little spare time to ponder what-if questions, but these aren’t what-if questions; they’re what-when questions. You can either anticipate the impact, observe, and adjust, or ignore the trend and react after the fact.

The latter strategy didn’t work so well for brick-and-mortar retailers when the ecommerce tidal wave hit…

What Managers Should Consider

The C-suite should set the tone for what the intelligently automated future looks like for the company and its people. Your job will be to manage the day-to-day aspects of the transition.

As a manager, you’re constantly dealing with people issues. In this case, some people will regard automation as a threat even if the C-suite is approaching it responsibly and with compassion. Others will naturally evolve as the people-machine partnership evolves.

The question for managers is how might automation impact their teams? How might the division of labor shift? What parts of which jobs do you think are ripe for automation? If those tasks were automated, how would peoples’ roles change? How would your group change? Likely, new roles would be created, but what would they be? What sort of training would your people need to succeed in their new positions?

You likely haven’t taken the time to ponder these and related questions, perhaps because they haven’t occurred to you yet. As a team leader, you owe it to yourself and your team to think about how the various scenarios might play out, as well as the recommendations you’d have for your people and the C-suite.

What Employees Should Consider

Everyone should consider how automation might affect their jobs, including managers and members of the C-suite, because everyone will be impacted by it somehow.

In this case, think about your current position and allow yourself to imagine what part of your job could be automated. Start with the boring routine stuff you do over and over, the kinds of things you wish you didn’t have to do. Likely, those things could be automated.

Next, consider the parts of your job that require pattern recognition. If your job entails contract review and contract review is automated, what would you do in addition to overseeing the automated system’s work? As the LawGeex experiment showed, AI is highly accurate, but it isn’t perfect.

Your choice is fight or flight. You can give in to the fear that you may be automated out of existence and act accordingly, which will likely result in a self-fulfilling prophecy. Alternatively, consider what parts of your job could be automated and reimagine your future. If you no longer had to do X, what would Y be? What might your job title be, and what would your scope of responsibilities be?

If you consider how intelligent automation may impact your career, you’ll be in a better position to evolve as things change and you’ll be better prepared to discuss the matter with your superiors.

The Bottom Line

The intelligently automated future is already taking shape. While the future impacts aren’t entirely clear yet, business leaders, managers and professionals can help shape their own future and the future of their companies by understanding what’s possible and how that might affect the business, departments and individual careers. Everyone will have to work together to make intelligent automation work well for the company and its people.

The worst course of action is to ignore it, because it isn’t going away.

Workforce Analytics Move Beyond HR

Workforce analytics have traditionally been framed as an HR tool, but their value can have significant impacts across the business. Realizing this, more business leaders are demanding insights into workforce dynamics that weren’t apparent before.

Businesses often claim that talent is their greatest asset, but they’re not always able to track what’s working, what isn’t and why. For example, in Deloitte Consulting’s 2018 Global Human Capital Trends report, 71% of survey participants said their companies consider people analytics a high priority, but only 10% are “very ready” to deal with it. According to David Fineman, specialist leader at Deloitte Consulting, who co-authored the report, business leaders want insights into six focus areas: workforce planning and shaping, recruiting and staffing, talent optimization, culture and engagement, performance and rewards, and HR service delivery.

“The important distinction between focus areas that are addressed today compared with the focus areas from prior years is the emphasis on issues that are important to business leaders, not limiting analytics recipients to an HR audience,” said Fineman.

In fact, the Deloitte report explicitly states that board members and CEOs want access to people analytics because they’re “impatient with HR teams that can’t deliver actionable information and insights…”

As businesses continue to digitize more tasks and functions, it’s essential for them to understand the current makeup of their workforces, what talent will be needed in the future, and what’s necessary to align the two.

Shebani Patel, People Analytics leader at professional services firm PricewaterhouseCoopers (PwC) said that companies now want to understand employee journeys from onboarding to daily work experiences to exit surveys.

“They’re trying to get more strategic about how all of that comes together to build and deliver an exceptional [employee] experience that ultimately has ROI attached to it,” she said.

What companies are getting right

The availability of more people analytics tools enables businesses to understand their workforces in greater detail than ever before. However, the insights sought are not just insights about people, but rather how those insights directly relate to business value, such as achieving higher levels of customer satisfaction or improving product quality. Businesses are also placing more emphasis on organizational network analysis (ONA), which provides insight into the interactions and relationships among people.

While it’s technologically possible to track what individuals do, there are also privacy concerns that are best addressed using clustering techniques. For example, KPMG’s clients are looking at email patterns, chat correspondence and calendared meetings to understand how team behavior correlates with performance, productivity or sales.

“Organizations today are using [the data] to derive various hypotheses and then use analytics to prove them out,” said Paul Krasilnick, director, Advisory Services at KPMG. “They recognize that it needs to be done cautiously in terms of data privacy and access to information, but they also recognize the value of advancing their internal capabilities and maturity from descriptive reporting to more prescriptive [analytics].”
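As a hypothetical sketch of the team-level approach described above, the snippet below aggregates individual email, meeting, and chat counts to teams before any analysis, so the correlation is computed against groups rather than named individuals. All column names and figures are invented for illustration; a production analysis would also apply formal clustering and much stricter privacy controls.

```python
# Hypothetical sketch: analyze teams, not individuals, to reduce privacy exposure.
import pandas as pd

activity = pd.DataFrame({
    "employee_id": [101, 102, 201, 202, 301, 302],
    "team":        ["alpha", "alpha", "beta", "beta", "gamma", "gamma"],
    "emails_sent": [120, 95, 60, 75, 140, 130],
    "meetings":    [14, 10, 22, 25, 9, 11],
    "chat_msgs":   [300, 280, 150, 170, 320, 310],
})
performance = pd.DataFrame({
    "team": ["alpha", "beta", "gamma"],
    "quarterly_sales": [1.8, 1.2, 2.0],   # hypothetical, in $M
})

# Drop individual identifiers early and keep only team-level averages.
team_behavior = (activity.drop(columns="employee_id")
                 .groupby("team", as_index=False)
                 .mean())
merged = team_behavior.merge(performance, on="team")

# Test a hypothesis such as "teams that meet more sell less."
print(merged.corr(numeric_only=True)["quarterly_sales"])
```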

According to Deloitte’s Fineman, high performing people analytics teams are characterized by increasing the analytics acumen within the HR function and among stakeholders.

What needs to improve

Like any other analytics journey, what needs to be improved depends on an organization’s level of mastery.  While all organizations have people data, they don’t necessarily have clean data.  Further, the mere existence of the data does not mean it’s readily usable or necessarily valuable.

MIT Chaplain: Emerging Tech Leaders Care About Ethics

The tech industry’s approach to innovation will likely undergo a major shift as new generations of tech leaders come to power. Historically, innovation has been economically motivated for the benefit of individuals and shareholders, which will continue to be true, although the nature of innovation will likely evolve to consider its impacts on the world in greater depth than has been true historically.

“As an innovator, you may be able to make some short-term gain without having to worry or be concerned about ethics whatsoever,” said Greg Epstein, the newly appointed first humanist chaplain at the Massachusetts Institute of Technology (MIT) and executive director of The Humanist Hub. “You may be able to achieve some things that we define as success in this world without caring about or even paying any attention to ethics, but in the long term, [that approach] will likely have some directly damaging consequences in your life. What I’m seeing students prepare for today is not just conventional success but to have an inner life that is meaningful.”

Values change from generation to generation, so it should be no surprise that what fueled the tech industry’s direction to date may change in the future. While it’s true that some of today’s tech leaders demonstrate a capacity for doing something good for society, the general trajectory is to innovate, grow, exit, maybe repeat the last three steps a few times, and then turn to a cause such as helping underprivileged individuals.

Doing something “good” later in life is consistent with mid-life realizations of mortality when one questions the legacy one is leaving behind. According to Epstein, the Millennials and Generation Z are more likely to ponder the societal value of their contributions earlier in their career than Baby Boomers or Generation X.

Tech Innovation for Good Versus Tech Innovation Is Good

Arguably, technology innovation has always focused on the positive, if “the positive” is defined as achieving the art of the possible. For example, cars are safer and more reliable than they once were, as the result of technology innovation.

However, the more technologically dependent people and things become, the more vulnerable they are to attacks. In other words, the negative potential consequences of new technology tend to be an afterthought, with the exception of products and services that are designed to protect people from negative consequences, such as cybersecurity products.

In previous generations, technology impacted society more slowly than it does today, so the mainstream positive and negative effects tended to take longer to realize. For example, adoption of the first mobile phones was relatively slow because they were large and heavy, and cellular service was spotty at best. Now, entire industries are being disrupted seemingly overnight by companies such as Uber and Airbnb.

Generational Differences Matter

Each generation is shaped, in part, by the world in which they mature. Over the past several decades, each subsequent generation has been exposed to not only more technology, but more sophisticated technology. The “new normal” is a connected world of devices, many of which are recording everything, and social media networks through which anything and everything can be shared.

“Increasingly, young people on campus want to create collaborative technology [so] that people can have a fair opportunity in life and human beings can help one another to achieve a better quality of life than we’ve ever had before,” said Epstein. “I think that people are hungry for conversations about what that could look like [and] what that could mean because human beings have never had this responsibility before to transform our collective lives for the better.”

Innovation for a Higher Purpose

Thus far, technology innovators have followed a pattern, which is to innovate, to capitalize, and to then deal with negative consequences later if and when they arise. In other words, the tech industry has been focused on the art of the possible, generally without regard for the entire spectrum of outcomes that results.

AI Challenge: Achieving Artificial Empathy

Businesses of all kinds are investing in AI to improve operations and customer experience. However, as the average person experiences on a daily basis, interacting with machines can be frustrating when they’re incapable of understanding emotions.

For one thing, humans don’t always communicate clearly with machines, and vice versa. The inefficiencies caused by such miscommunications tend to frustrate end users. Even more unsettling in such scenarios is the failure of the system to recognize emotion and adapt.

To facilitate more effective human-to-machine interactions, artificial intelligence systems need to become more human-like, and to do that, they need the ability to understand emotional states and act accordingly.

A Spectrum of Artificial Emotional Intelligence

Merriam Webster’s primary definition of empathy is:

“The action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another of either the past or present without having the feelings, thoughts, and experience fully communicated in an objectively explicit manner; also: the capacity for this.”

To achieve artificial empathy, according to this definition, a machine would have to be capable of experiencing emotion. Before machines can do that, they must first be able to recognize emotion and comprehend it.

Non-profit research institute SRI International and others have succeeded with the recognition aspect, but understanding emotion is more difficult. For one thing, individual humans tend to interpret and experience emotions differently.

“We don’t understand all that much about emotions to begin with, and we’re very far from having computers that really understand that. I think we’re even farther away from achieving artificial empathy,” said Bill Mark, president of Information and Computing Services at SRI International, whose AI team invented Siri. “Some people cry when they’re happy, a lot of people smile when they’re frustrated. So, very simplistic approaches, like thinking that if somebody is smiling they’re happy, are not going to work.”

Emotional recognition is an easier problem to solve than emotional empathy because, given a huge volume of labeled data, machine learning systems can learn to recognize patterns that are associated with a particular emotion. The patterns of various emotions can be gleaned from speech (specifically, word usage in context, voice inflection, etc.), as well as body language, expressions and gestures, again with an emphasis on context. Like humans, the more sensory input a machine has, the more accurately it can interpret emotion.
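As a toy illustration of that pattern-recognition framing, the sketch below trains a supervised classifier on a handful of hypothetical labeled utterances. The labels, example sentences, and scikit-learn pipeline are illustrative assumptions only; real systems need vastly more data plus voice, facial, and gesture signals, and, as noted below, recognition still falls short of understanding.

```python
# Toy sketch: emotion recognition as supervised pattern recognition on labeled text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled utterances; a real system would need thousands per emotion.
utterances = [
    "This is wonderful, thank you so much!",
    "I can't believe how well this turned out.",
    "This is the third time my order has been wrong.",
    "I've been on hold for an hour and nobody can help me.",
    "I'm not sure what I'm supposed to do next.",
    "Can you explain that step again?",
]
labels = ["happy", "happy", "frustrated", "frustrated", "confused", "confused"]

# Words-in-context features feed a simple classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(utterances, labels)

print(model.predict(["Why is this still not working?"]))  # toy model; output may vary
```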

Recognition is not the same as understanding, however. For example, computer vision systems can recognize cats or dogs based on labeled data, but they don’t understand the behavioral characteristics of cats or dogs, that the animals can be pets or that people tend to love them or hate them.

Similarly, understanding is not empathy. For example, among three people, one person may be angry, which the other two understand. However, the latter two are not empathetic: The second person is dispassionate about the first person’s anger and the third person finds the first person’s anger humorous.

Recently, Amazon’s Alexa startled some users by bursting into laughter for no apparent reason. It turns out the system heard “Alexa, laugh” when the user said no such thing. Now imagine a system laughing at a chronically ill, depressed, anxious, or suicidal person who is using the system as a therapeutic aid.

“Siri and systems like Siri are very good at single-shot interactions. You ask for something and it responds,” said Mark. “For banking, shopping or healthcare, you’re going to need an extended conversation, but you won’t be able to state everything you want in one utterance so you’re really in a joint problem-solving situation with the system. Some level of emotional recognition and the ability to act on that recognition is required for that kind of dialogue.”

Personalization versus Generalization

Understanding the emotions of a single individual is difficult enough, because not everyone expresses or interprets emotions in the same way. However, like humans, machines will best understand a person with whom they have extensive experience.

“If there has been continuous contact between a person and a virtual assistant, the virtual assistant can build a much better model,” said Mark. “Is it possible to generalize at all? I think the answer to that is, ‘yes,’ but it’s limited.”

Generalizing is more difficult, given the range of individual differences and all the factors that cause individuals to differ, including nature and nurture, as well as culture and other factors.

Recognizing emotion and understanding emotion are a matter of pattern recognition for both humans and machines. According to Keith Strier, EY Advisory Global and Americas AI leader at professional services firm EY, proofs of concept are now underway in the retail industry to personalize in-store shopping experiences.

“We’re going to see this new layer of machine learning, computer vision and other tools applied to reading humans and their emotions,” said Strier. “[That information] will be used to calibrate interactions with them.”

In the entertainment industry, Strier foresees entertainment companies monitoring the emotional reactions of theater audiences so that directing and acting methods, as well as special effects and music can deliver more impactful entertainment experiences that are scarier, funnier or more dramatic.

“To me it’s all the same thing: math,” said Strier. “It’s really about the specificity of math and what you do with it. You’re going to see a lot of research papers and POCs coming out in the next year.”

Personalization Will Get More Personal

Marketers have been trying to personalize experiences using demographics, personas and other means to improve customer loyalty as well as increase engagement and share of wallet. However, as more digital tools have become available, such as GPS and software usage analytics, marketers have been attempting to understand context so they can improve the economics and impact of campaigns.

“When you add [emotional intelligence], essentially you can personalize not just based on who I am and what my profile says, but my emotional state,” said Strier. “That’s really powerful because you might change the nature of the interaction, by changing what you say or by changing your offer completely based on how I feel right now.”

Artificial Emotional Intelligence Will Vary by Use Case

Neither AI nor emotions are one thing. Similarly, there is not just one use case for artificial emotional intelligence, be it emotional recognition, emotional understanding or artificial empathy.

“The actual use case matters,” said Strier. “Depending on the context, it’s going to be super powerful or maybe not good enough.”

At the present time, a national bank is piloting a smart ATM that uses a digital avatar which reads customers’ expressions. As the avatar interacts with customers, it adapts its responses.

“We can now read emotions in many contexts. We can interpret tone, we can triangulate body language and words and eye movements and all sorts of proxies for emotional state. And we can learn over time whether someone is feeling this or feeling that. So now the real question is what do we do with that?” said Strier. “Artificial empathy changes the art of the possible, but I don’t think the world quite knows what to do with it yet. I think the purpose question is probably going to be a big part of what’s going to occupy our time.”

Bottom Line

Artificial emotional intelligence can improve the quality and outcomes of human-to-machine interactions, but it will take different forms over time, some of which will be more sophisticated and accurate than others.

Artificial empathy raises the question of whether machines are capable of experiencing emotions in the first place, which, itself, is a matter of debate. For now, it’s fair to say that artificial emotional intelligence, and the advancement of it, are both important and necessary to the advancement of AI.

Lisa Morgan Will Address A/IS Ethics at the University of Arizona Eller College of Management’s Annual Executive Ethics Symposium

Lisa Morgan will be speaking at this year’s University of Arizona Eller College of Management’s Annual Executive Ethics Symposium, an invitation-only event in September 2018. Her presentation will address the need for AI ethics given the current state of AI and innovation, as well as the opportunities and challenges shaping the advancement of AI ethics.

Ethical Tech: Myth or Reality?

New technologies continue to shape society, and at an accelerating rate. Decades ago, societal change lagged tech innovation by a decade or more. Now, change is occurring much faster, as evidenced by the impact of disruptors including Uber and Airbnb.

Central to much of the change is the data being collected, stored and analyzed for various reasons, not all of which are transparent. As the pace of technology innovation and tech-driven societal change accelerate, businesses are wise to think harder about the longer-term impacts of what they’re doing, both good and bad.

Why contemplate ethics?

Technology in all its forms is just a tool that can be used for good or evil. While businesses do not tend to think in those terms, there is some acknowledgement of what is “right” and “wrong.” Doing the right thing tends to be reflected in corporate responsibility programs designed to benefit people, animals, and the environment. Doing the wrong thing often involves irresponsible or inadvertent actions that are harmful to people, whether it’s invading their privacy or exposing their personal data.

While corporate responsibility programs in their current form are “good” on some level, ethics on a societal scale tends to be missing.

In the tech industry, for example, innovators are constantly doing things because they’re possible without considering whether they’re ethical. A blatant recent example is the human-sheep hybrid. Closer to home in high tech are fears about AI gone awry.

Why ethics is a difficult concept

The definition of ethics is simple. According to Merriam Webster, it is “the discipline dealing with what is good and bad and with moral duty and obligation.”

In practical application, particularly in relation to technology, “good” and “bad” coexist. Airbnb is just one example. On one hand, homeowners are able to take advantage of another income stream. However, hotels and motels now face new competition and the residents living next to or near Airbnb properties often face negative quality-of-life impacts.

According to Gartner research, organizations at the beginning stages of a digital strategy rank ethics a Number 7 priority. Organizations establishing a digital strategy rank it Number 5 and organizations that are executing a digital strategy rank it Number 3.

“The [CIOs] who tend to be more enlightened are the ones in regulated environments, such as financial services and public sector, where trust is important,” said Frank Buytendijk, a Gartner research vice president and Gartner fellow.

Today’s organizations tend to approach ethics from a risk avoidance perspective; specifically, for regulatory compliance purposes and to avoid the consequences of operating an unethical business. On the positive side, some view ethics as a competitive differentiator or better yet, the right thing to do.

“Unfortunately, it’s regulatory compliance pressure and risk because of all the scandals you see with AI, big data [and] social media, but hey, I’ll take it,” said Buytendijk. “With big data there was a discussion about privacy, but too little, too late. We’re hopeful with robotics and the emergence of AI, as there is active discussion about the ethical use of those technologies, not only by academics, but by the engineers themselves.”

IEEE ethics group emerges

In 2016, the IEEE launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Its goal is to ensure that those involved in the design and development of autonomous and intelligent systems are educated, trained, and empowered to prioritize ethical considerations so that technologies are advanced for the benefit of humanity.

From a business perspective, the idea is to align corporate values with the values of customers.

“Ethics is the new green,” said Raja Chatila, Executive Committee Member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “People value their health so they value products that do not endanger their health. People want to buy technology that respects the values they cherish.”

However, the overarching goal is to serve society in a positive way, not just individuals. Examples of that tend to include education, health, employment and safety.

“As an industry, we could do a better job of being responsible for the technology we’re developing,” said Chatila.

At the present time, 13 different committees involved in the initiative are contemplating ethics from different technological perspectives, including personal data and individual access control, ethical research and design, autonomous weapons, classical ethics in AI, and mixed reality. In December 2017, the group released “Ethically Aligned Design volume 2,” a 266-page document available for public comment. It includes the participation of all 13 committees.

In addition, the initiative has proposed 11 IEEE standards, all of which have been accepted. The standards address transparency, data privacy, algorithmic bias, and more. Approximately 250 individuals are now participating in the initiative.

Society must demand ethics for its own good

Groups within society tend to react to technology innovation differently due to generational differences, cultural differences, and other factors. Generally speaking, early adopters tend to be more interested in a new technology’s capabilities than its potential negative effects. Conversely, laggards are more risk averse. Nevertheless, people in general tend to use services, apps, and websites without bothering to read the associated privacy policies. Society is not protecting itself, in other words. Instead, one individual at a time is acquiescing to the collection, storage and use of data about them without understanding to what they are acquiescing.

“I think the practical aspect comes down to transparency and honesty,” said Bill Franks, chief analytics officer at the International Institute for Analytics (IIA). “However, individuals should be aware of what companies are doing with their data when they sign up, because a lot of the analytics, both the data and the analysis, could be harmful to you if they got into the wrong hands and were misused.”

Right now, the societal impacts of technology tend to be recognized after the fact, rather than contemplated from the beginning. Arguably, not all impacts are necessarily foreseeable, but with the pace of technology innovation constantly accelerating, the innovators themselves need to put more thought into the positive and negative consequences of bringing their technology to market.

Meanwhile, individuals have a responsibility to themselves to become more informed than they are today.

“Until the public actually sees the need for ethics, and demands it, I just don’t know that it would ever necessarily go mainstream,” said Franks. “Why would you put a lot of time and money into following policies that add overhead to manage and maintain when your customers don’t seem to care? That’s the dilemma.”

Businesses, individuals, and groups need to put more thought into the ethics of technology for their own good and for the good of all. More disruptions are coming in the form of machine intelligence, automation, and digital transformation which will impact society somehow. “How” is the question.

Machine Learning’s Greatest Weakness Is Humans

Machine learning, and deep learning and cognitive computing in particular, attempts to model the human brain. That seems logical because the most effective way to establish bilateral understanding with humans is to mimic them. As we have observed from everyday experience, machine intelligence isn’t perfect, and neither is human intelligence.

Still, understanding human behavior and emotion is critical if machines are going to mimic humans well. Technologists know this, so they’re working hard to improve natural language processing, computer vision, speech recognition, and other things that will enable machines to better understand humans and behave more like them.

I imagine that machines will never emulate humans perfectly because they will be able to rapidly identify the flaws in our thinking and behavior and improve upon them. To behave exactly like us would mean reproducing those flaws.

From an analytical perspective, I find all of this fascinating because human behavior is linear and non-linear, rational and irrational, logical and illogical. If you study us at various levels of aggregation, it’s possible to see patterns in the way humans behave as a species, why we fall into certain groups, and why we behave the way we do as individuals. I think it would be very interesting to compare what machines have to say about all of that with what psychologists, sociologists, and anthropologists have to say.

Right now we’re at the point where we believe that machines need to understand human intelligence. Conversely, humans need to understand machine intelligence.

Why AI is Flawed

Human brain function is not infallible. Our flaws present challenges for machine learning; namely, machines have the capacity to make the same mistakes we do and exhibit the same biases we do, only faster. Microsoft’s infamous Twitter bot is a good example of that.

Then, when you model artificial emotional intelligence based on human emotion, the results can be entertaining, inciting or even dangerous.

Training machines, whether for supervised or unsupervised learning, begins with human input at least for now. In the future, the necessity for that will diminish because a lot of people will be teaching machines the same things. The redundancy will indicate patterns that are easily recognizable, repeatable and reusable. Open source machine learning libraries are already available, but there will be many more that approximate some aspect of human brain function, cognition, decision-making, reasoning, sensing and much more.

Slowly but surely, we’re creating machines in our own image.

The Trouble with Data About Data

Two people looking at the same analytical result can come to different conclusions. The same goes for the collection of data and its presentation. A couple of experiences underscore how the data about data — even from authoritative sources — may not be as accurate as the people working on the project or the audience believe. You guessed it: Bias can turn a well-meaning, “objective” exercise into a subjective one. In my experience, the most nefarious thing about bias is the lack of awareness or acknowledgement of it.

The Trouble with Research

I can’t speak for all types of research, but I’m very familiar with what happens in the high-tech industry. Some of it involves considerable primary and secondary research, and some of it involves one or the other.

Let’s say we’re doing research about analytics. The scope of our research will include a massive survey of a target audience (because higher numbers seem to indicate statistical significance). The target respondents will be a subset of subscribers to a mailing list or individuals chosen from multiple databases based on pre-defined criteria. Our errors here most likely will include sampling bias (a non-random sample) and selection bias (aka cherry-picking).

The survey respondents will receive a set of questions that someone has to define and structure. That someone may have a personal agenda (confirmation bias), may be privy to an employer’s agenda (funding bias), and/or may choose a subset of the original questions (potentially selection bias).

The survey will be supplemented with interviews of analytics professionals who represent the audience we survey, demographically speaking. However, they will have certain unique attributes — a high profile or they work for a high-profile company (selection bias). We likely won’t be able to use all of what a person says so we’ll omit some stuff — selection bias and confirmation bias combined.

We’ll also do some secondary research that bolsters our position — selection bias and confirmation bias, again.

Then, we’ll combine the results of the survey, the interviews, and the secondary research. Not all of it will be usable because it’s too voluminous, irrelevant, or contradicts our position. Rather than stating any of that as part of the research, we’ll just omit those pieces — selection bias and confirmation bias again. We can also structure the data visualizations in the report so they underscore our points (and misrepresent the data).
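A small, hypothetical simulation shows how the sampling-bias step alone can distort a finding. The population size, the true rate, and the response probabilities below are all invented for illustration.

```python
# Hypothetical simulation: a biased sample frame inflates the survey result.
import random

random.seed(42)

# True population: 10,000 people, 30% of whom actually prioritize analytics.
population = [random.random() < 0.30 for _ in range(10_000)]
true_rate = sum(population) / len(population)

# Biased frame: enthusiasts are far more likely to be on the mailing list
# and to respond (assumed 60% vs. 10% response probability).
respondents = [prioritizes for prioritizes in population
               if random.random() < (0.60 if prioritizes else 0.10)]
survey_rate = sum(respondents) / len(respondents)

print(f"True rate in population: {true_rate:.1%}")    # ~30%
print(f"Rate among respondents:  {survey_rate:.1%}")  # ~70%, purely from who answered
```

The survey reports its respondents faithfully and still misrepresents the population, which is why acknowledging the sample frame matters as much as the headline number.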

Bias is not something that happens to other people. It happens to everyone because it is natural, whether conscious or unconscious. Rather than dismiss it, it’s prudent to acknowledge the tendency, attempt to identify what types of bias may be involved and why, and rectify them if possible.

I recently worked on a project for which I did some interviews. Before I began, someone in power said, “This point is [this] and I doubt anyone will say different.” Really? I couldn’t believe my ears. Personally, I find assumptions to be a bad thing because unlike hypotheses, there’s no room for disproof or differing opinions.

Meanwhile, I received a research report. One takeaway was that vendors are failing to deliver “what end customers want most.” The accompanying infographic shows, on average, that 15.5% of end customers want what 59% of vendors don’t provide. The information raised more questions than it answered on several levels, at least for me, and I know I won’t get access to the raw data.

My overarching point is that bias is rampant and burying our heads in the sand only makes matters worse. Ethically speaking, I think as an industry, we need to do more.

 

Analytics Leaders and Laggards: Which Fits Your Company?

Different companies and industries are at different levels of analytical maturity. There are still businesses that don’t use analytics at all and businesses that are masters by today’s standards. Most organizations are somewhere in between.

So, who are the leaders and laggards anyway? The International Institute for Analytics (IIA) asked that question in 2016 and found that digital natives are the most mature and the insurance industry is the least mature.

How Industries and Sectors Stack Up

IIA’s research included 11 different industries and sectors, in addition to digital natives. The poster children included Google, Facebook, Amazon, and Netflix. From Day 1, data has been their business and analytics has been critical to their success.

The report shows the descending order of industries in terms of analytical maturity, with insurance falling behind because its IT and finance analytics are the weakest of all.

Another report, from business and technology consultants West Monroe Partners found that only 11% of the 122 insurance executives they surveyed think their companies are realizing the full benefits of advanced analytics. “Advanced analytics” in this report is defined as identifying new revenue opportunities, improving customer and agent experience, performing operational diagnostics, and improving control mechanisms.

Two of the reasons West Monroe cited for the immaturity of the insurance industry are the inability to quantify the ROI and poor data quality.

Maturity is a Journey

Different organizations and individuals have different opinions about what an analytics maturity model looks like. IIA defines five stages ranging from “analytically impaired” (organizations that make decisions by gut feel) to “analytical nirvana” (using enterprise analytics).

“Data-first companies haven’t had to invest in becoming data-driven since they are, but for the companies that aren’t data-first, understanding the multi-faceted nature of the journey is a good thing,” said Daniel Magestro, research director at IIA. “There’s no free lunch, no way to circumvent this. The C-suite can’t just say that we’re going to be data-driven in 2017.”

Others look at the types of analytics companies are doing: descriptive, predictive, and prescriptive. However, looking at the type of analytics doesn’t tell the entire story.

What’s interesting is that different companies at different stages of maturity are stumped by different questions: Do you think you need analytics? If the answer is no, then it’s going to be a long and winding road.

Why do you think you need analytics? What would you use analytics to improve? Those two related questions require serious thought. Scope and priorities are challenges here.

How would you define success? That can be a tough question because the answers have to be quantified and realistic to be effective. “Increase sales” doesn’t cut it. How much and when are missing.

One indicator of maturity is what companies are doing with their analytics. The first thing everyone says is, “make better business decisions,” which is always important. However, progressive companies are also using analytics to identify risks and opportunities that weren’t apparent before.

The degree to which analytics are siloed in an organization also impacts maturity, as can the user experience. Dashboards can be so complicated that they’re ineffective, or simple enough to prioritize and expedite decision-making.

Time is another element. IT-created reports have fallen out of favor. Self-service is where it’s at. At the same time, it makes no sense to pull the same information in the same format again and again, such as weekly sales reports. That should simply be automated and pushed to the user.
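As a minimal sketch of “automate it and push it,” the snippet below builds a weekly sales summary that a scheduler such as cron could run and deliver every Monday. The input file name and columns are hypothetical assumptions.

```python
# Hypothetical sketch: generate the recurring weekly sales report automatically.
import pandas as pd

# Assumed input: a transactions file with 'date', 'region', and 'amount' columns.
sales = pd.read_csv("transactions.csv", parse_dates=["date"])

# Keep the last seven days and summarize by region.
last_week = sales[sales["date"] >= sales["date"].max() - pd.Timedelta(days=7)]
summary = (last_week.groupby("region", as_index=False)["amount"]
           .sum()
           .sort_values("amount", ascending=False))

summary.to_csv("weekly_sales_summary.csv", index=False)
# Delivery (email, chat post, dashboard refresh) would be triggered here,
# for example by running this script from cron each Monday morning.
```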

The other time element — timeliness whether real-time, near real-time, or batch — is not an indication of maturity in my mind because what’s timely depends on what’s actually necessary.
