Lisa Morgan's Official Site

Strategic Insights and Clickworthy Content Development

Month: January 2018

Quantum Computing Brings Promise and Threats

Digital computing has some serious limitations. While the technology advances of the past few decades are impressive, such as smaller footprints, faster processors, better UIs, and more memory and storage, some problems could be solved better by quantum computers.

For one thing, quantum computers are faster than classical (traditional) computers. They can also solve problems that classical computers can’t solve well, or can’t solve within a reasonable amount of time.

“Quantum computing exploits fundamental laws of physics to solve complex computing problems in new ways, problems like discovering how diseases develop and creating more effective drugs to battle them,” said Jim Clarke, director of quantum hardware at Intel Labs. “Once quantum systems are available commercially, they can be used to simulate nature to advance research in chemistry, materials science and molecular modeling. For instance, they can be used to help create a new catalyst to sequester carbon dioxide or a room-temperature superconductor.”

Quantum computing will also drive new levels of business optimization, benefit machine learning and artificial intelligence, and change the cryptography landscape.

David Schatsky, managing director at Deloitte, said the common thread is optimization problems where there are multiple probable answers and the task is to find the right one. Examples include investment management, portfolio management, risk mitigation and the design of communication systems and transportation systems. Logistics companies are already exploring route optimization while the defense industry is considering communications applications.

“A year ago [quantum computing] was thought of more as a physics experiment [but] the perception has changed quickly,” said Schatsky. “In the last three months there has been a flurry of breakthroughs, including fundamental engineering breakthroughs and commercial product announcements.”

Test drive a quantum computer today

It’s probably safe to say that none of us will have a quantum computer sitting on our desks anytime soon, but just about anyone with a browser can get access to IBM’s 5- and 16-qubit (quantum bit) computers via the cloud. Earlier this year, the company announced IBM Q, an initiative intended to result in commercially available quantum computing systems. IBM also announced that it had built and tested two quantum computing processors: a 16-qubit open processor for use by the public and a 17-qubit commercial processor for customers.

According to an IBM paper in Nature, scientists successfully used a seven-qubit quantum processor to address a molecular structure problem for beryllium hydride (BeH2), the largest molecule simulated on a quantum computer to date.

“It is early days, but it’s going to scale rapidly,” said Scott Crowder, vice president and CTO, Quantum Computing, Technical Strategy & Transformation at IBM Systems. “When you start talking about hundreds or low thousands of qubits, you can start exploring business value problems that [can’t be addressed well using] classical computers such as quantum chemistry [and] certain types of optimization problems that are also exponential problems.”

An exponential problem is one that scales exponentially with the number of elements in it. For example, planning a route involving 50 locations could be optimized in a number of ways depending on the objective, such as identifying the fastest route. That seemingly simple problem actually involves one quadrillion different possibilities, which is too many possibilities for a classical computer to handle, Crowder said.
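
The factorial blow-up behind such route problems can be seen in a few lines of Python. This is just an illustration of how route orderings grow; the location counts below are chosen for readability, not taken from Crowder’s example:

```python
import math

# The number of distinct orderings for visiting n locations grows
# factorially, which is why brute-force route optimization quickly
# becomes intractable on classical machines.
for n in (5, 10, 15, 20):
    routes = math.factorial(n)
    print(f"{n} locations -> {routes:,} possible orderings")
```

Even at 20 locations the count exceeds two quintillion orderings, far beyond what exhaustive search can handle.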

Intel is making progress too

In 2015, Intel teamed up with QuTech, an academic research partner in the Netherlands. Since then, Intel has achieved milestones such as demonstrating key circuit blocks for an integrated cryogenic-CMOS control system, developing a spin qubit fabrication flow on Intel’s 300mm process technology and developing a unique packaging solution for superconducting qubits, which it demonstrated in the 17-qubit superconducting test chip introduced on October 10, 2017. A week later, at the Wall Street Journal D.Live conference in Laguna Beach, Calif., Intel CEO Brian Krzanich said he expects Intel to deliver a 49-qubit quantum chip by the end of 2017.

“Ultimately the goal is to develop a commercially relevant quantum computer, one that is relevant for many applications and one that impacts Intel’s bottom line,” said Intel’s Clarke.

Toward that end, Intel’s work with QuTech spans the entire quantum stack from the qubit devices to the overall hardware architecture, software architecture, applications and complementary electronics that workable quantum systems will require.

“Quantum computing, in essence, is the ultimate in parallel computing, with the potential to tackle problems conventional computers can’t handle,” said Clarke. “But, realizing the promise of quantum computing will require a combination of excellent science, advanced engineering and the continued development of classical computing technologies, which Intel is working towards through our various partnerships and R&D programs.”

Decryption and other threats

There is debate about whether quantum computers will render current encryption methods obsolete. Take a brute-force attack, for example: hackers repeatedly guess passwords, using computers to accelerate that work. Quantum computing would accelerate such an attack even further.
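
To put rough numbers on that acceleration, here is a hedged sketch: classical brute force examines on the order of N candidates, while Grover’s quantum search algorithm needs only about √N queries (a quadratic, not exponential, speedup). The arithmetic below is illustrative only, not a simulation of either attack:

```python
import math

def search_costs(key_bits: int):
    """Rough query counts for finding one secret among 2**key_bits."""
    n = 2 ** key_bits
    classical = n // 2        # expected guesses: half the space on average
    quantum = math.isqrt(n)   # ~sqrt(N) iterations for Grover's algorithm
    return classical, quantum

for bits in (32, 64, 128):
    c, q = search_costs(bits)
    print(f"{bits}-bit key: ~{c:.3g} classical guesses vs ~{q:.3g} Grover iterations")
```

This quadratic speedup is why symmetric key sizes can simply be doubled to resist Grover-style attacks, whereas Shor’s algorithm threatens RSA-style public-key schemes much more directly.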

“Virtually all security protocols that are used and deployed today are vulnerable to an attack by a quantum computer,” said William “Whurley” Hurley, chair of the Quantum Standards Working Group at the IEEE. “Quantum information allows us to secure information in ways that are completely unbreakable, even against a quantum attack.”

Along those lines, there are efforts to develop a new type of security protocol that doesn’t necessarily leverage quantum mechanics. Hurley said these rely on extremely difficult mathematical problems that even quantum computers won’t be able to solve, an approach referred to as “Quantum-Safe Cryptography” or “Post-Quantum Cryptography.”

The IEEE Quantum Standards Working Group is working on other quantum technologies as well, including quantum sensors and quantum materials. The institute has brought together physicists, chemists, engineers, mathematicians and computer scientists to ensure that it can adapt rapidly to change.

Deloitte’s Schatsky said synthetic biology and gene editing are also potentially dangerous, mainly because capabilities can be developed faster than one’s ability to understand how to apply such technologies wisely. The same could be said for many emerging technologies.

Quantum computing should be on your radar

Quantum computing is advancing rapidly now so it’s wise to ponder how the capabilities might benefit your business.  The reality is that no one knows all the ways quantum computing can be used, but it will eventually impact businesses in many different industries.

Will quantum computers overtake classical computers, following the same evolutionary path we’ve seen over the past several decades or will the two co-exist? For the foreseeable future, co-existence is the answer because binary and quantum computers each solve different kinds of problems better than the other.


5 Cross-functional Analytics Challenges

More businesses are attempting to optimize their operations with the help of analytics, although most of the activity still takes place at the departmental level. Additional value can be gained from cross-functional analytics, but it represents a different kind of challenge because the functional units tend to use different systems and data owners often want to maintain control of their data.

According to recent research by EY and Forbes Insights, 60% to 70% of companies now use analytics at a departmental level, up from 30% to 40% in 2015.

“Companies have had success in one part of the business, and they then try to replicate that in other departments,” said Chris Mazzei, global chief analytics officer and emerging technology leader at EY. “The companies that are more mature across a number of different dimensions, those we would put into the ‘leading’ category, are outperforming the others. They’re reporting higher revenue growth, better operating margins and more effective risk management, so there’s at least a correlation between analytics adoption and driving better business outcomes.”

Here are a few things that can hold cross-functional analytics back.

Analytics Isn’t Part of the Business Strategy

Cross-functional analytics is more likely to yield competitive advantages and drive more business value when the analytics are an integral part of the business model and strategy.

“The vast majority of organizations still are not able to say that their business strategy really reflects the role analytics plays in how they’re trying to compete,” said Mazzei. “There’s a fundamental misalignment that can occur when the leadership team is not able to have a consistent view of where and how analytics is making the biggest impact on the business strategy.”

Operating Models Don’t Facilitate Cross-Functional Analytics

Executing an analytics strategy at a departmental level such as finance or marketing is relatively easy because it’s clear that resources need to be dedicated to the effort. When it’s a cross-functional endeavor, who’s responsible for providing, funding and managing those resources? What should the data flow look like and how can that be facilitated?

“If you’re trying to deploy analytics across the organization, the operating model becomes much more important,” said Mazzei. “Do we have a centralized team? Do we distribute analytics resources in the individual business units or functions? What’s the relationship between those teams?”

Like bimodal IT, bimodal analytics services benefit the enterprise and the departments simultaneously. The centralized group helps facilitate best practices and ensures appropriate governance while dedicated resources tend to have specialized knowledge of that particular function and its analytics requirements.

The Initiatives Aren’t Designed Well

Analytics efforts should drive business value. There’s a lot to do, but not everything will have the same level of impact or necessarily achieve the desired results, so the desired business outcomes should drive the prioritization of analytics efforts.

“Initiative design is really important, and so are the frameworks and processes you use for it,” said Mazzei.

Not surprisingly, companies are still at very different stages of maturity in terms of having any kind of consistent process for designing an analytics initiative. The more analytically mature a company is, the greater the likelihood that it has common frameworks, a common understanding of what the term “analytics initiative” means, and common tools for executing it, Mazzei said.

Analytics Isn’t Part of Business Operations

As companies embrace analytics and mature in their use of analytics, business processes tend to change. It’s wise to think about that and other impacts early on.

“The more mature companies are thinking about that earlier in the process and using an initial point of view about what that intervention needs to be to inform how you design the analytics themselves,” said Mazzei. “A lot of companies don’t think about that early enough.”

According to the report, design intervention is “translating all the upfront goal-setting, modeling, and methodology into action, making analytics insights an integral part of business operations.”

The True Value of Analytics Isn’t Understood

Interestingly, analytics enables organizations to measure all kinds of things and yet success metrics may not have been defined for the analytics initiatives themselves.

“That really matters because [if] you can learn what’s working and what’s not early on, you can change the nature of the intervention or the analytic you’re building,” said Mazzei. “It’s that feedback loop you have in place.”

Your Data Is Biased. Here’s Why.

Bias is everywhere, including in your data. A little skew here and there may be fine if the ramifications are minimal, but bias can negatively affect your company and its customers if left unchecked, so you should make an effort to understand how, where and why it happens.

“Many [business leaders] trust the technical experts but I would argue that they’re ultimately responsible if one of these models has unexpected results or causes harm to people’s lives in some way,” said Steve Mills, a principal and director of machine intelligence at technology and management consulting firm Booz Allen Hamilton.

In the financial industry, for example, biased data may cause results that offend the Equal Credit Opportunity Act (fair lending). That law, enacted in 1974, prohibits credit discrimination based on race, color, religion, national origin, sex, marital status, age or source of income. While lenders will take steps not to include such data in a loan decision, it may be possible to infer race in some cases using a zip code, for example.

“The best example of [bias in data] is the 2008 crash, in which the models were trained on a dataset,” said Shervin Khodabandeh, a partner and managing director at Boston Consulting Group (BCG) Los Angeles, a management consulting firm. “Everything looked good, but the datasets changed and the models were not able to pick that up, [so] the model collapsed and the financial system collapsed.”

What Causes Bias in Data

A considerable amount of data has been generated by humans, whether it’s the diagnosis of a patient’s condition or the facts associated with an automobile accident. Quite often, individual biases are evident in the data, so when such data is used for machine learning training purposes, the machine intelligence reflects that bias. A prime example was Microsoft’s infamous AI bot, Tay, which in less than 24 hours adopted the biases of certain Twitter members. The result was a string of shocking, offensive and racist posts.

“There’s a famous case in Broward County, Florida, that showed racial bias,” said Mills. “What appears to have happened is there was historically racial bias in sentencing so when you base a model on that data, bias flows into the model. At times, bias can be extremely hard to detect and it may take as much work as building the original model to tease out whether that bias exists or not.”

What Needs to Happen

Business leaders need to be aware of bias and the unintended consequences biased data may cause. In the longer term, data-related bias is a governance issue that needs to be addressed with appropriate checks and balances, including awareness, mitigation and a game plan should matters go awry.

“You need a formal process in place, especially when you’re impacting people’s lives,” said Booz Allen Hamilton’s Mills. “If there’s no formal process in place, it’s a really bad situation. Too many times we’ve seen these cases where issues are pointed out, and rather than the original people who did the work stepping up and saying, ‘I see what you’re seeing, let’s talk about this,’ they get very defensive and defend their approach so I think we need to have a much more open dialog on this.”

As a matter of policy, business leaders need to consider which decisions they’re comfortable allowing algorithms to make, the safeguards which ensure the algorithms remain accurate over time, and model transparency, meaning that the reasoning behind an automated decision or recommendation can be explained.  That’s not always possible, but still, business leaders should endeavor to understand the reasoning behind decisions and recommendations.

“The tough part is not knowing where the biases are there and not taking the initiative to do adequate testing to find out if something is wrong,” said Kevin Petrasic, a partner at law firm White & Case.  “If you have a situation where certain results are being kicked out by a program, it’s incumbent on the folks monitoring the programs to do periodic testing to make sure there’s appropriate alignment so there’s not fair lending issues or other issues that could be problematic because of key datasets or the training or the structure of the program.”

Data scientists know how to compensate for bias, but they often have trouble explaining what they did and why they did it, or the output of a model in simple terms. To bridge that gap, BCG’s Khodabandeh uses two models: one that’s used to make decisions and a simpler model that explains the basics in a way that clients can understand.

BCG also uses two models to identify and mitigate bias: the original model, and a second model used to test extreme scenarios.

“We build models with an opposite hypothesis in mind, which forces the model to go to extremes,” said Khodabandeh. “That didn’t happen in the 2008 collapse. They did not test extreme scenarios. If they had, there would have been indicators coming in in 2007 and 2008 that would have allowed the model to realize it needed to adjust itself.”
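
A minimal sketch of that kind of stress test, using a hypothetical toy model and made-up plausibility thresholds: feed the model inputs far outside its training range and flag any implausible outputs.

```python
def price_model(x: float) -> float:
    # Toy linear model, imagined as fit on "normal" inputs in [0, 10].
    return 3.0 * x + 1.0

def stress_test(model, extreme_inputs, sane_range):
    """Return the inputs whose model output falls outside sane_range."""
    lo, hi = sane_range
    return [x for x in extreme_inputs if not (lo <= model(x) <= hi)]

# Inputs well outside the training distribution expose where the
# model's outputs stop being plausible.
bad = stress_test(price_model, [-100, 0, 5, 1000], sane_range=(0, 50))
print("inputs producing implausible outputs:", bad)
```

Running such checks routinely, rather than only at training time, is what gives a model the chance to signal that conditions have drifted beyond what it was built for.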

A smart assumption is that bias is present in your data regardless. What the bias is, where it stems from, what can be done about it, and what its potential outcomes may be are all things to ponder.

Conclusion

All organizations have biased data.  The questions are whether the bias can be identified, what effect that bias may have, and what the organization is going to do about it.

To minimize the negative effects of bias, business leaders should make a point of understanding the various types and how they can impact data, analysis and decisions. They should also ensure there’s a formal process in place for identifying and dealing with bias, which is likely best executed as a formal part of data governance.

Finally, the risks associated with data bias vary greatly, depending on the circumstances. While it’s prudent to ponder all the positive things machine learning and AI can do for an organization, business leaders are wise to understand the weaknesses also, one of which is data bias.

Computer History May Not Be What You Think

When many of us ponder computer history, we think of Steve Jobs, Bill Gates, and other high-profile white men who changed the course of history through innovation and shrewd business tactics. Of course, there have been other significant contributors along the way who are not white, including Guy Kawasaki and Jerry Yang, but how much have we really heard about the computer industry contributions made by African-Americans?

For the most part, precious little, if anything. However, that may change with the help of Arvid Nelsen, an IEEE Computer Society member, rare books and manuscripts librarian at Southern Methodist University, and contributor to the IEEE Annals of the History of Computing.

“I look at historical understanding as something that evolves over time as new information comes to light or as we examine the past through the lens of different priorities and values,” said Nelsen, in an email interview. “Scholars are just beginning to scratch the surface in respect to persons of color. I think these efforts add to history by examining previously ignored, overlooked, invisible, and perhaps devalued evidence. I hope that means the development of a more complete, complex, and nuanced understanding of history.”

Is Computer History Revisionist History?

What if everything we know about the computer industry isn’t entirely correct? In today’s global business environment, innovation, disruption and contributions can come from anywhere. However, it may be that African-Americans still remain in the shadows rather than the limelight, at least in the US.

But what, exactly, have we in the computer industry missed? More work needs to be done to answer those and other questions.

Unearthing African-Americans’ computer industry contributions won’t be an easy task because there’s a lack of archival source material. In Nelsen’s recent IEEE Annals of the History of Computing article, he writes, “Archives and libraries should undertake to identify and collect materials from persons of color. Meanwhile, scholars may find material in nontraditional sources, and prosopography may prove useful for examining computer professionals of color.”

One non-traditional source is Ebony magazine, which lists at least 57 African-Americans working in various computing fields between 1959 and 1996.

“I hope that the article encourages historians who are interested in critically examining race in computing simply to start looking for stories. They are out there,” said Nelsen. “I provide a number of examples and specifically encourage the examination of publications by and for particular communities, publications which may have been previously considered out-of-scope in contrast to scientific and professional publications.”

Why Computer Industry History Lacks Color

Racism was rampant in the computer industry’s early days. Perhaps it’s less obvious to some of us now, given the diversity of today’s high-tech workforce. However, racism is still alive and well, despite greater workforce diversity.

To align contributions with contributors, Nelsen thinks historians need to understand the development of the computer industry, as well as the specific technologies that comprise the computer industry.

One of Nelsen’s articles inspired a letter from a retired professor who had worked for Burroughs Corp. While at Burroughs, some of his African-American colleagues developed new hardware and software, including the operating system for the Burroughs B5000 and B5500 mainframe computers.

“I hope my article will inspire readers to reach out with their own stories to scholars and to archives like the Charles Babbage Institute with papers and other source materials,” said Nelsen.

The Time is Ripe for Change

The movie Hidden Figures, based on the book of the same name by Margot Lee Shetterly, helped raise at least partial awareness that the accomplishments of African-Americans in the computer industry have indeed been ignored, forgotten or overlooked. The book and the movie focus on mathematicians Mary Jackson, Katherine Johnson and Dorothy Vaughan, all of whom worked for the National Aeronautics and Space Administration (NASA).

“The contributions of these three women were essential to both the Space Race and the development of the computing disciplines, and have been shamefully neglected,” wrote Nathan Ensmenger, editor in chief of the IEEE Annals of the History of Computing, in his own commentary. “[A]s we begin to incorporate race and ethnicity into our scholarship, we will discover new insights, methods, and perspectives that will radically reshape the focus of our discipline.”

How the focus of our discipline may change as the result of such research remains to be seen. As both Nelsen and Ensmenger note, the task won’t be easy, but it’s a necessary endeavor.

How to Teach Executives About Analytics

If your data is failing to persuade executives, maybe it’s not the data that is the problem. Here’s how to change your approach to fit the audience.

One of the biggest challenges data analysts and data scientists face is educating executives about analytics. The general tendency is to nerd out on data and fail to tell a story in a meaningful way to the target audience.

Sometimes data analytics professionals get so wrapped up in the details of what they do that they forget not everyone has the same background or understanding. As a result, they may use technical terms, acronyms, or jargon and then wonder why no one “got” their presentations or what they were saying.

They didn’t do anything wrong, per se; the problem is how they said it and to whom.

If you find yourself in such a situation, here are a few simple things you can do to facilitate better understanding.

Discover What Matters

What matters most to your audience? Is it a competitive issue? ROI? Building your presence in a target market? Pay attention to the clues they give you and don’t be afraid to ask about their priorities. Those will clue you in to how you should teach them about analytics within the context of what they do and what they want to achieve.

Understand Your Audience

Some executives are extremely data-savvy, but the majority aren’t just yet. Dialogs between executives and data analysts or data scientists can be uncomfortable and even frustrating when the parties speak different languages. Consider asking what your target audience would like to learn about and why. That will help you choose the content you need to cover and the best format for presenting that content.

For example, if the C-suite wants to know how the company can use analytics for competitive advantage, then consider a presentation. If one of them wants to understand how to use a certain dashboard, that’s a completely different conversation and one that’s probably best tackled with some 1:1 hands-on training.

Set Realistic Expectations

Each individual has a unique view of the world. Someone who isn’t a data analyst or a data scientist probably doesn’t understand what that role actually does, so they make up their own story which becomes their reality. Their reality probably involves some unrealistic expectations about what data-oriented roles can do or accomplish or what analytics can accomplish generally.

One of the best ways to deal with unrealistic expectations is to acknowledge them and then explain what is realistic and why. For example, a charming and accomplished data scientist I know would be inclined to say, “You’d think we could accomplish that in a week, right? Here’s why it actually takes three weeks.”

Tell a Story

Stories can differ greatly, but the one thing good presentations have in common is a beginning, a middle, and an end. One of the mistakes I see brilliant people make is focusing solely on the body of a presentation, immediately going down some technical rabbit hole that’s fascinating for people who understand it and confusing for everyone else.

A good beginning gets everyone on the same page about what the presentation covers, why the topic is important, and what you’re going to discuss. The middle should present the meat of the story in a logical way that flows from beginning to end. The end should briefly recap the highlights and bring your audience to the same conclusion you’re stating in your presentation.

Consider Using Options

If the executive(s) you’re presenting to hold the keys to an outcome you desire, consider giving them options from which to choose. Doing that empowers them as the decision-makers they are. Usually, that approach also helps facilitate a discussion about tradeoffs. The more dialog you have, the better you’ll understand each other.

Another related tip: make sure your options are within the realm of the reasonable. In a recent scenario, a data analyst wanted to add two people to her team. Her options were: A) if we do nothing, you can expect the same results; B) if we hire these two roles, we’ll be able to do X and Y, which we couldn’t do before; and C) if we hire five people, we’ll be able to do even more, but it will cost this much. She came prepared to discuss the roles, the interplay with the existing team, and where she got her salary figures. If asked what adding one, three, or four people would look like, she was prepared to answer that too.

Speak Plainly

Plain English is always a wise guide. Choose simple words and concepts, keeping in mind how the meaning of a single word can differ. For example, if you say, “These two variables have higher affinity,” someone may not understand what you mean by variables or affinity.

Also endeavor to simplify what you say, using concise language. For example, “The analytics of the marketing department has at one time or another tended overlook the metrics of the customer service department” can be consolidated into, “Our marketing analytics sometimes overlooks customer service metrics.”

Why Your Business May Not Be Ready for Analytics

Artificial intelligence is on the minds of business leaders everywhere because they’ve either heard or believe that AI will change the way companies do business.

What we’re seeing now is just the beginning. For everyone’s sake, more thought needs to be given to the workforce impact and how humans and machines will complement each other.

Recently, professional services company Genpact and the FORTUNE Knowledge Group surveyed 300 senior executives from companies in North America, Europe and Asia-Pacific with annual revenues of $1 billion or more. According to the report, “AI leaders expect that the modern workforce will be comfortable working alongside robots by 2020.”

However, getting there will require a different approach to organizational change.

“A bunch of people are thinking about AI as a technology. What they’re not thinking about is AI as the enabler of new enterprise processes, AI as an augmenter of humans in enterprise processes,” said Genpact Senior Vice President Gianni Giacomelli. “Right now, 70% of the effort is spent on technology, 20% on processes and 10% on humans as a process piece. I think that’s the wrong way to look at it.”

What is the right way to think about AI? At one end of the spectrum, people are touting all the positive things AI will enable, such as tackling some of our world’s biggest social problems. On the other end of the spectrum are Elon Musk, Stephen Hawking and others who foresee a dark future that involves unprecedented job losses if not human extermination.

Regardless of one’s personal view of the matter, business leaders need to be thinking harder and differently about the impact AI may have on their businesses and their workforces. Now.

How to think about the problem

The future’s trajectory is not set. It changes and evolves with technology and culture. Since AI’s end game is not completely foreseeable, one way to approach the problem, according to the survey, is to begin with the desired outcome, think about the processes required to achieve that outcome and then ponder how machines and humans can complement each other.

“Generally, the biggest impediment we see out there is the inability to create a portfolio of initiatives, so having a team or a number of teams coming back and saying, ‘These are the 50 things I could do with AI based on what AI is able to do today and in the next 12 months,’ and then [it’s up to senior management to] prioritize them,” said Giacomelli. “You need to have people going through the organization, unearthing places where value can be impacted.”

Over the last three decades or so, business leaders have set strategy and then implemented it, an approach that isn’t going to work moving forward. The AI/human equation requires a hypothesis-driven approach in which experiments can fail fast or succeed.

“It’s a lot more about collective intelligence than let’s get a couple of experts and let them tell us where to do this. There are no experts here,” Giacomelli said.

Focus on the workforce

AI will impact every type of business in some way. The question is, what are business leaders doing to prepare their workforce for a future in which part or all of their jobs will be done by AI? According to the survey, 82% of the business leaders plan to implement AI-related technologies in the next three years but only 38% are providing employees with reskilling options.

“I think HR functions are completely backwards on this one,” said Giacomelli. “They haven’t started connecting the dots with what needs to be done with the employees.”

Some companies are already working on workforce planning, but they view AI as a means of materially reducing the workforce, such as by 20% or 30%, which Giacomelli considers “a primitive approach.”

“There are jobs that will go away completely. For example, people who do reconciliation of basic accounts, invoices, that kind of stuff,” he said. “Most of the jobs that will be impacted will be impacted fractionally, so part of the job is eliminated and then you figure out how to skill the person who does that job so she can use the machine better.”

What would people do, though? It’s clear that most working professionals have various types of experience. The challenge for HR is to stop looking at a snapshot of what a candidate or employee is today and what prior experience has qualified them to do what they do today. Instead, they should consider an individual’s future trajectory. For example, some accountants have become sales analysts or supply chain analysts.

Looking for clues about what particular roles could evolve into is wise, but that does not provide the entire picture, since all types of jobs will either evolve or become obsolete in their current forms.

“I don’t feel that many people are looking at the human element of digital transformation and AI except fearful people,” said Giacomelli. “Every year, we will see people somewhere making sense of this riddle and starting to work in a different way. I think we need to change the way we look at career paths. We’ll have to look at them in a hypothesis testing way as opposed to have a super guru in HR who knows how AI will impact our career paths, because they don’t [know].”

The bottom line is that individuals need to learn how to learn because what AI can do today differs from what it will be able to do tomorrow, so the human-and-machine relationship will evolve over time.

Even if AI were still just a science-fiction concept, the accelerating pace of technology and business underscores the fact that change is inevitable, so organizations and individuals need to learn how to cope with it.

Don’t dismiss the other guy

AI proponents and opponents both have valid arguments because any tool, including AI, can be used for good or evil. While it’s true AI will enable positive industrial, commercial and societal outcomes, the transition could be extremely painful for the organizations and individuals who find themselves relics of a bygone era, faster than they imagined.

AI-related privacy and security also need more attention than they’re getting today because the threats are evolving rapidly and the pace will accelerate over time.

A fundamental open question is whether humans can ultimately control AI. Microsoft’s Tay Twitter bot demonstrated that AI can quickly adopt the most deplorable forms of human expression; in less than 24 hours, that experiment was shut down. Similarly, a Facebook chatbot experiment demonstrated that AI is capable of developing its own language, which may be nonsensical or even undecipherable to humans. Both the risks and the rewards need to be considered.