Strategic Insights and Clickworthy Content Development

Month: September 2018

Why Enterprises Struggle with Hybrid Cloud and DevOps

Cloud

More enterprises are moving to the cloud and implementing DevOps, containers and microservices, but their efforts are falling short of expectations. A recent study from the Ponemon Institute identifies some of the core challenges they face.

Organizations implementing cloud, DevOps, containers, and microservices are often surprised when the outcomes don’t match expectations. A recent survey by the Ponemon Institute, sponsored by hybrid cloud management platform provider Embotics, revealed that 74% of the 600 survey respondents who are responsible for cloud management believe DevOps enablement capabilities are essential, very important, or important for their organization. But only one-third believe their organization has the ability to deliver those capabilities.

Eighty percent believe that microservices and container enablement are essential, very important, or important, but only a quarter believe their organization can quickly deliver those capabilities. Lagging DevOps and microservices enablement costs the average enterprise $34 million per year, or 23% of its average annual cloud management budget of $147 million.

“There are so many things that result in the loss of information assets and infrastructure issues that can be very costly for organizations to deal with. This rush to the cloud has created all sorts of issues for organizations,” said Larry Ponemon, chairman and founder of the Ponemon Institute. “The way organizations have implemented DevOps facilitates a rush-to-release mentality that doesn’t necessarily improve the state of security.”

Organizations assume that implementing a DevOps framework will necessarily result in software quality improvement and risk reduction because they can build in security early in the software development lifecycle (SDLC).

“That’s all theoretically true, but organizations are pretty sloppy about the way they’ve implemented software,” said Ponemon. “In other Ponemon studies, we’ve seen that there’s a difference between the DevOps that you want versus the DevOps that you get.”

Why hybrid cloud implementations are so tough

Shadow IT is one reason why cloud implementations are more costly and less effective than they could be.

“Companies develop silos where different groups of people have different tools that aren’t necessarily compatible,” said Ponemon. “A lot of organizations think a cloud platform tool is fungible whether you buy it from Vendor A, B, or C and you’re going to get the same outcome, but that’s not true at all.”

Forty-six percent of survey respondents said their organizations’ consumption model is “cloud direct,” meaning that end users are bypassing IT and cloud management technologies. Instead, they’re communicating directly with clouds such as AWS or Microsoft Azure via native APIs or their own public cloud accounts.
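
To make “cloud direct” concrete, here is a minimal sketch of what bypassing IT looks like in practice: a developer with personal AWS credentials provisions a server through the provider’s native API, using boto3, AWS’s Python SDK, with no cloud management platform in the loop. The AMI ID and instance type are placeholders.

```python
# A minimal sketch of "cloud direct" consumption: an end user provisions
# infrastructure straight through a cloud provider's native API (AWS boto3),
# bypassing IT's cloud management tooling entirely. The AMI ID and instance
# type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id} -- IT has no record of this VM")
```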

The patchwork model of cloud adoption results in complex environments that provide little visibility and are difficult to manage. In fact, 70% of survey participants said they have no visibility into the purpose or ownership of the virtual machines in their cloud environment.
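
To claw back that visibility, some teams audit for exactly the gap the survey describes: virtual machines with no recorded owner or purpose. A hedged sketch, assuming an AWS environment and a hypothetical Owner/Purpose tagging convention (neither is from the study):

```python
# Sketch: flag EC2 instances missing "Owner" or "Purpose" tags -- the
# visibility gap described above. Assumes AWS and a hypothetical
# Owner/Purpose tagging convention.
import boto3

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            missing = [k for k in ("Owner", "Purpose") if k not in tags]
            if missing:
                print(instance["InstanceId"], "is missing tags:", missing)
```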

Interestingly, one often-touted cloud benefit is that users don’t have to understand where resources reside.

“You would expect if you’re operating in a cloud environment [that] it’s going to help you gain visibility because you don’t have to look very far to find your data, but it basically doesn’t work that way,” said Ponemon. “A lot of organizations don’t have the tools to understand where their machines are and where the data is located. They don’t have much control over the network layer and sometimes no control over the application layer so the end result becomes kind of messy.”

The lack of visibility and control exposes an enterprise to several risks, including negligence and cyberattacks, creating “a perfect storm” for bad actors. Fifty-seven percent of survey participants believe users are increasing business risk by violating policies about where digital assets can reside.

“A lot of CSOs are angry at the lines of business because they can’t necessarily implement the security protocols they need,” said Ponemon. “The whole idea of getting things out quickly can be more important when a little bit slower delivery can result in a higher quality level.”

Ideally, organizations should have a single user interface that enables them to view their entire environment, although 68% of the survey respondents lack that capability. 

“Having a single user interface is really smart because you have one place where you can do a lot of cool things and have more control and visibility, but not all platforms work very well,” said Ponemon. “Platforms have to be implemented like software. It’s not like an appliance that you plug in. It requires a lot of stuff to make it work the right way.”

When a hybrid cloud management platform is implemented correctly, it can enable a successful DevOps environment because it provides greater control over the hybrid cloud.

Hybrid cloud management is evolving

Hybrid cloud management maturity has three phases, according to Ponemon. In the first phase, hybrid IT develops organically because lines of business are procuring their own cloud solutions, typically without the involvement of IT or security.

“I think this whole cloud shadow issue is a real issue because it creates turf wars and silos,” said Ponemon. “Dealing with that can be very tough and that’s why we need to have management with an iron fist. There’s too much at risk so some people need to be more influential than others in the decision-making processes rather than running it organically.”

The companies that have matured to the second phase use a cloud management platform (CMP 1.0), although they tend to experience disparities between the platform’s capabilities and their actual DevOps requirements. The Ponemon Institute defines CMP 1.0 as “solutions providing provisioning, automation, workflow orchestration, self-service, cloud governance, single-pane-of-glass visibility, capacity rightsizing and cost management across public, private and hybrid clouds.”

By comparison, the newer CMP 2.0 platforms provide those benefits and they reduce the friction and complexity associated with microservices, containers, cloud-native applications, and DevOps. CMP 2.0 also enables corporate governance and compliance without impeding development speed and agility. Sixty-three percent of survey respondents believe CMP 2.0 capabilities would reduce hybrid cloud management costs by an average of $34 million per year. 

Will Containers Replace VMs?

While an across-the-board migration from virtual machines to containers isn’t likely, there are issues developers and operations personnel should consider to ensure the best solution for the enterprise.

Chances are, your company uses virtual machines on premises and in the cloud. Meanwhile, the developers are likely considering containers, if they haven’t adopted them already, to simplify development and deployment, improve application scalability and increase application delivery speed.

Architecturally speaking, VMs and containers have enough similarities that some question the long-term survival of VMs, especially since VMs are already a couple of decades old. There are also serverless cloud options available now that dynamically manage the allocation of machine resources so humans don’t have to do it.

“If you embrace [a serverless] service, then you truly don’t have to worry about the virtual machines or the IaaS instances under the hood. You just turn your containers over to the service and they take care of all the provisioning underneath to run those containers,” said Tony Iams, research director at Gartner. “We’re far away from being able to do that on-premises, but if you’re doing this in the cloud, you no longer worry about [virtual machine] instances in the cloud.”
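
As one illustration of the model Iams describes, AWS Fargate accepts a container task and provisions the capacity underneath it. A sketch, assuming a cluster, task definition, and subnet that already exist; the names below are hypothetical.

```python
# Sketch of the serverless container model: hand a container task to the
# service (AWS Fargate via boto3) and let it provision capacity underneath.
# The cluster, task definition, and subnet names are hypothetical.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="demo-cluster",        # hypothetical cluster
    launchType="FARGATE",          # no instances to manage under the hood
    taskDefinition="my-app:1",     # hypothetical task definition
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc123"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```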

Most enterprises have been using VMs for quite some time and that will continue to be true for the foreseeable future.

“Most of the container infrastructure deployments are going to be on virtual machines,” said Iams. “If you look at where container runtimes are deployed as well as container orchestration systems such as Kubernetes, more often than not, it’s on virtual infrastructure and there’s a very important reason for that. In most cases, especially in today’s enterprise environments, the basic provisioning processes for infrastructure are going to be based on virtual machines, so even if you wanted to switch to provisioning on bare metal, it’s quite possible that you wouldn’t have any processes in place to do that.”

As organizations graduate to more sophisticated container deployments using orchestration platforms like Kubernetes, they can face significant provisioning challenges.

“Kubernetes has to be upgraded pretty frequently given the rapid pace of development, so you need to have solid provisioning processes in place. Most likely that’s going to be based on virtual machines,” said Iams. “Our guidance is not to make any changes there, to continue using the tools and processes you have in place, which is likely based on virtual machines, and focus on achieving the benefits of Kubernetes that are achieved higher up in the stack.”
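
A practical first step before any Kubernetes upgrade is surfacing version skew across the cluster’s nodes. A small sketch using the official Kubernetes Python client, assuming a local kubeconfig:

```python
# Sketch: list each node's kubelet version to spot skew before an upgrade.
# Uses the official Kubernetes Python client; assumes a local kubeconfig.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```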

Mitch Pirtle, principal at Space Monkey Labs and the creator of Joomla, an open source content management system, agrees that VMs will continue to provide value, although what’s considered a VM will likely change over time.

“The main benefit of the VM is that you can deploy that entire stack fully configured and ready-to-roll. My biggest issue with VMs is that if you want to deploy a stack that has unique scaling needs, you’re going to be in a world of hurt,” said Pirtle. “Containers, on the other hand, have a huge upside, especially in the enterprise space. You can scale different parts of your platform as needed, independent of each other.”
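
The independent scaling Pirtle describes is routine in an orchestrated environment. A hedged sketch, again using the Kubernetes Python client, that scales a single hypothetical “checkout” service while leaving the rest of the stack untouched:

```python
# Sketch: scale one part of the platform independently of the others.
# The deployment name and namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="checkout",
    namespace="default",
    body={"spec": {"replicas": 10}},  # scale only this service
)
```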

Container interest is fueled by developers

Developers are under constant pressure to deliver higher quality software at lower costs in ever-faster time frames. Containers enable applications to run in different environments, and even across environments, without a lot of extra work. That way, developers can write an application once and make minor modifications to it rather than writing the same application time and again. In addition, containers help facilitate DevOps efforts.

“One of the most important benefits containers provide is that once you have a containerized application, it runs in exactly the same environment at every stage of the lifecycle, from initial development through testing and deployment, so you get mobility of a workload at every stage of its lifecycle,” said Iams. “In the past, you would develop an application and turn it over to production. Any environment they would be running it in would run into problems, so they’d kick it back to developers and you’d have to try to recreate the environment that it was running in. A lot of those issues go away once you containerize a workload.”
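
The property Iams describes comes from shipping the image itself through the lifecycle. A minimal sketch using the Docker SDK for Python: build an image once, then run the identical artifact at any stage. The build path and tag are placeholders.

```python
# Sketch: build a container image once, then run the identical artifact at
# every lifecycle stage. The build path and tag are placeholders.
import docker

client = docker.from_env()

# Build once, from a Dockerfile in the current directory.
image, _ = client.images.build(path=".", tag="myapp:1.0")

# The same image runs unchanged in dev, test, and production.
output = client.containers.run("myapp:1.0", remove=True)
print(output)
```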

Mike Nims, Director Advisory, Workday at KPMG, said containers have greatly simplified his job because they abstract so much complexity.

“When I was a DBA, everybody knew I was on the Homer Simpson server, my instance lived on Homer Simpson, and my instance was Bart. In a container situation, I don’t even know where I sit,” said Nims. “Taking that technical view away is extremely beneficial [because] it also allows the developer to focus more on the code, integration or UI they’re working on as opposed to the hardware or the server.”

Containers aren’t a panacea

The use case tends to define whether VMs need to be used in conjunction with containers or not. Businesses operating in highly regulated environments, and others that place a high premium on security, want multitenancy capabilities so virtualized workloads can run in isolation.

“With containers, you don’t quite get the same level of isolation,” said Iams. “However, you do get some isolation that may be sufficient for internal use cases.”

One concern is whether workloads running across containers are operating in the same trust domain. If not, extra steps must be taken to ensure isolation.

“That’s what’s underlying some of these new [sandboxed container runtime] initiatives you hear about, such as gVisor. [D]evelopers are trying to come up with new virtualization mechanisms that give you the same kind of isolation you would have between virtual machines and the efficiency and low resource consumption of containers,” said Iams. “That’s early-stage, for the time being. If you want to have that kind of isolation, you need virtual machines.”

However, containers are helping to eliminate application-level friction for end users and IT.

“My developers and I don’t ever think about this. A cloud container is really extending that user experience into an area of IT that for the longest time has been controlled by technical gearheads,” said KPMG’s Nims. “My wife, who doesn’t work in IT, could go to Amazon Cloud and set up a MySQL environment to do database work if she wanted to. That’s a container.”

Meanwhile, he thinks enterprise architects and DBAs should consider how cloud containers can be used to manage applications at scale.

“As a DBA, one of [my] biggest pain points was the ancillary applications I was responsible for managing that weren’t necessarily tied to my database. When I did it, it was five to one or 10 to one databases per DBA. Then, each of these customers would have another 20 to 30 applications. I couldn’t manage that so we would hire SAs,” said Nims. “If I have a MySQL environment in a cloud container, I can patch it across all of my instances. Back in the day, I would have to individually patch each instance. That’s huge in terms of scalability and management because now you can have a control room managing hundreds of thousands of applications, potentially, with a couple of guys.”
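
In an orchestrated environment, the patch-once-everywhere workflow Nims describes looks roughly like the sketch below, which rolls a patched image out to every matching deployment in one pass. The label selector and image tag are hypothetical, and Kubernetes here stands in for whatever tooling a given shop actually runs.

```python
# Sketch of "patch it across all of my instances": update every deployment
# labeled app=mysql to a patched image in one pass. The label selector and
# image tag are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployments = apps.list_deployment_for_all_namespaces(label_selector="app=mysql")
for dep in deployments.items:
    apps.patch_namespaced_deployment(
        name=dep.metadata.name,
        namespace=dep.metadata.namespace,
        body={"spec": {"template": {"spec": {"containers": [
            {"name": "mysql", "image": "mysql:5.7.23"}  # patched tag
        ]}}}},
    )
```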

Developers and operations have to work together

Operations teams have spent the last two decades configuring and provisioning VMs. Now, they need to get familiar with containers and container orchestration platforms, if they’re not already, given the popularity of containers among developers.

“It will take some time before both sides of this initiative come to terms and arrive at consistent operational processes,” said Iams. “If you’re doing this on-premises, it doesn’t get set up in a vacuum. Operations still has to configure storage and networking, and there may be some authentication and security-type mechanisms that have to be configured to make sure that this great new containerized infrastructure works well. Ops can’t do that in isolation.”

Of course, getting container infrastructure to work well isn’t an event, it’s a process, especially given the furious pace of container-related innovation.

“Getting the pieces to work well together is a challenge,” said Iams. “There are upgrades that are going to happen at multiple levels of the stack that are going to require processes to be in place that reflect the priorities of both groups.”

Bottom line

VMs and containers will continue to coexist for some time, mainly because businesses require the benefits of both. Even if containers could replace VMs for every conceivable use case, a mainstream shift wouldn’t happen overnight because today’s businesses are heavily dependent on and extremely familiar with VMs.

Why AI is So Brilliant and So Stupid

AI

For all of the promise that artificial intelligence represents, a successful AI initiative still requires all of the right pieces to come together.

AI capabilities are advancing rapidly, but the results are mixed. While chatbots and digital assistants are improving generally, the results can be laughable, perplexing and perhaps even unsettling.

Google’s recent demonstration of Duplex, its natural language technology that completes tasks over the phone, is noteworthy. Whether you love it or hate it, two things are true: It doesn’t sound like your grandfather’s AI; the use case matters.

One of the striking characteristics of the demo, assuming it actually was a demo and not a fake, as some publications have suggested, is the use of filler language in the digital assistant’s speech, such as “um” and “uh,” that makes it sound human. Even more impressive (again, assuming the demo is real) is the fact that Duplex reasons adeptly on the fly despite the ambiguous, if not confusing, responses provided by a restaurant hostess on the other end of the line.

Of course, the use case is narrow. In the demo, Duplex is simply making a hair appointment and attempting to make a restaurant reservation. In the May 8 Google Duplex blog post introducing the technology, Yaniv Leviathan, principal engineer, and Yossi Matias, VP of engineering, explain: “One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.”

A common misconception is that there’s a general AI that works for everything. Just point it at raw data and magic happens.

“You can’t plug in an AI tool and it works [because it requires] so much manual tweaking and training. It’s very far away from being plug-and-play in terms of the human side of things,” said Jeremy Warren, CTO of Vivint Smart Home and former CTO of the U.S. Department of Justice. “The success of these systems is driven by dark arts, expertise and fundamentally on data, and these things do not travel well.”

Data availability and quality matter

AI needs training data to learn and improve. Warren said that if someone has mediocre models, processing performance, and machine learning experts, but the data is amazing, the end solution will be very good. Conversely, if they have the world’s best models, processing performance, and machine learning experts but poor data, the result will not be good.

“It’s all in the data, that’s the number one thing to understand, and the feedback loops on truth,” said Warren. “You need to know in a real way what’s working and not working to do this well.”
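
Warren’s point is easy to demonstrate: train the same model twice, once on clean labels and once on corrupted ones, and compare. A small scikit-learn sketch on synthetic data:

```python
# Sketch: same model, same features -- only label quality differs.
# Flipping 30% of the training labels simulates "poor data."
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)

rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.30           # corrupt 30% of labels
noisy = LogisticRegression().fit(X_tr, np.where(flip, 1 - y_tr, y_tr))

print("accuracy with clean labels:", clean.score(X_te, y_te))
print("accuracy with noisy labels:", noisy.score(X_te, y_te))
```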

Daniel Morris, director of product management at real estate company Keller Williams, agrees. He and his team have created Kelle, a virtual assistant designed for Keller Williams’ real estate agents that’s available as iPhone and Android apps. Like Alexa, Kelle has been built as a platform so skills can be added to it. For example, Kelle can check calendars and help facilitate referrals between agents.

“We’re using technology embedded in the devices, but we have to do modifications and manipulations to get things right,” said Morris. “Context and meaning are super important.”

One challenge Morris and his team run into as they add new skills and capabilities is handling longtail queries, such as for lead management, lead nurturing, real estate listings, and Keller Williams’ training events. Agents can also ask Kelle for the definitions of terms that are used in the real estate industry or terms that have specific meaning at Keller Williams.
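
For the glossary-style queries, one lightweight approach is fuzzy-matching the agent’s phrasing against a dictionary of known terms. This is purely an illustration, not Kelle’s actual implementation, and the glossary entries are hypothetical.

```python
# Illustrative sketch (not Kelle's actual implementation): fuzzy-match a
# long-tail query against a small glossary. Entries are hypothetical.
import difflib

GLOSSARY = {
    "escrow": "Funds held by a third party until contract conditions are met.",
    "contingency": "A condition that must be satisfied before a sale closes.",
    "cap": "A hypothetical entry for a company-specific commission term.",
}

def define(query: str) -> str:
    match = difflib.get_close_matches(query.lower(), GLOSSARY, n=1, cutoff=0.6)
    return GLOSSARY[match[0]] if match else "No definition found."

print(define("escro"))  # tolerates the typo and returns the escrow entry
```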

Expectations aren’t always managed well

Part of the problem with technology commercialization, including the commercialization of AI, is the age-old problem of over-promising and under-delivering. Vendors solving different types of problems claim that AI is radically improving everything from drug discovery to fraud prevention, which it can, but the implementations and their results can vary considerably, even among vendors focused on the same problem.

“A lot of the people who are really doing this well have access and control over a lot of first-party data,” said Skipper Seabold, director of decision sciences at decision science advisory firm Civis Analytics. “The second thing to note is it’s a really hard problem. What you need to do to deliver a successful AI product is to deliver a system, because you’re delivering software at the end. You need a cross-functional team that’s a mix of researchers and product people.”

Data scientists are often criticized for doing work that’s too academic in nature. Researchers are paid to test the validity of ideas. However, commercial forms of AI ultimately need to deliver value that feeds the bottom line, either directly, in terms of revenue, cost savings and ultimately profitability, or indirectly, such as through data collection, usage and, potentially, the sale of that information to third parties. Either way, it’s important to set end user expectations appropriately.

“You can’t just train AI on raw data and it just works, that’s where things go wrong,” said Seabold. “In a lot of these projects you see the ability for human interaction. They give you an example of how it can work and say there’s more work to be done, including more field testing.”

Decision-making capabilities vary

Data quality affects AI decision-making. If the data is dirty, the results may be spurious. If it’s biased, that bias will likely be emphasized.

“Sometimes you get bad decisions because there are no ethics,” said Seabold. “Also, the decisions a machine makes may not be the same as a human would make. You may get biased outcomes or outcomes you can’t explain.”

Clearly, it’s important to understand what the cause of the bias is and correct for it.
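
A first step toward understanding bias is measuring it. A hedged sketch of a basic demographic-parity check, comparing positive outcome rates across a sensitive group; the data and column names are hypothetical.

```python
# Sketch: a basic demographic-parity check -- compare positive prediction
# rates across a sensitive group. Data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = df.groupby("group")["predicted"].mean()
print(rates)
print("parity gap:", abs(rates["A"] - rates["B"]))  # 0 means equal rates
```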

Understanding machine-rendered decisions can be difficult, if not impossible, when a black box is involved. Also, human brains and mechanical brains operate differently. An example of that was the Facebook AI Research lab chatbots that created their own language, which the human researchers were unable to understand. Not surprisingly, the experiment was shut down.

“This idea of general AI is what captures people’s imaginations, but it’s not what’s going on,” said Seabold. “What’s going on in the industry is solving an engineering problem using calculus and algebra.”

Humans are also necessary. For example, when Vivint Smart Homes wants to train a doorbell camera to recognize humans or a person wearing a delivery uniform, it hires people to review video footage and assign labels to what they see. “Data labelling is sometimes an intensely manual effort, but if you don’t do it right, then whatever problems you have in your training data will show up in your algorithms,” said Vivint’s Warren.
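
When several reviewers label the same footage, a common way to absorb individual error is majority voting across their annotations. A minimal sketch, with hypothetical labels:

```python
# Sketch: majority-vote aggregation of multiple human annotators' labels
# for the same video frame. The labels are hypothetical.
from collections import Counter

annotations = ["person", "person", "delivery_uniform", "person"]
label, votes = Counter(annotations).most_common(1)[0]
print(f"consensus label: {label} ({votes}/{len(annotations)} votes)")
```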

Bottom line

AI outcomes vary greatly based on a number of factors, including their scope, the data upon which they’re built, the techniques used, the expertise of the practitioners, and whether expectations of the AI implementation are set appropriately. While progress is coming fast and furiously, it does not always transfer well from one use case to another or from one company to another because all things, including the availability and cleanliness of data, are not equal.

Why Quantum Computing Should Be on Your Radar Now

Quantum computer

Boston Consulting Group and Forrester are advising clients to get smart about quantum computing and start experimenting now so they can separate hype from reality.

There’s a lot of chatter about quantum computing, some of it true and some of it false. For example, there’s a misconception that quantum computers are going to replace classical computers for every possible use case, which is false. “Quantum computing” is not necessarily synonymous with “quantum leap.” Instead, quantum computing relies on quantum physics, which makes it fundamentally different from classical, binary computers. Binary computers can only process 1s and 0s; quantum computers can process many more possibilities simultaneously.

If math and physics scare you, a simple analogy (albeit not an entirely correct one) involves a light switch and a dimmer switch, which represent a classical computer and a quantum computer, respectively. The standard light switch has two states: on and off. The dimmer switch provides many more options, including on, off, and a range of states between on and off that are experienced as degrees of brightness and darkness. With a dimmer switch, a light bulb can be on, off, or a combination of both.

If math and physics do not scare you, quantum computing involves quantum superposition, which explains the nuances more eloquently.
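
For those readers, the dimmer analogy maps onto Dirac notation: a qubit’s state is a weighted combination of the two basis states, and the squared magnitudes of the weights give the measurement probabilities.

```latex
% A qubit in superposition of the basis states:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

Measuring the qubit yields 0 with probability |α|² and 1 with probability |β|²; the dimmer’s intermediate settings correspond to the continuum of possible amplitudes α and β.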

One reason quantum computers are not an absolute replacement for classical computers has to do with their physical requirements. Quantum computers require extremely cold conditions in order for quantum bits or qubits to remain “coherent.” For example, much of D-Wave’s Input/Output (I/O) system must function at 15 millikelvin (mK), which is near absolute zero. 15 mK is equivalent to minus 273.135 degrees Celsius or minus 459.643 degrees Fahrenheit. By comparison, the classical computers most individuals own have built-in fans, and they may include heat sinks to dissipate heat. Supercomputers tend to be cooled with circulated water. In other words, the ambient operating environments required by quantum computers and classical computers vary greatly. Naturally, there are efforts that are aimed at achieving quantum coherence in room temperature conditions, one of which is described here.

Quantum computers and classical computers are fundamentally different tools. In a recent report, Brian Hopkins, vice president and principal analyst at Forrester explained, “Quantum computing is a class of emerging hardware and software that exploits subatomic phenomenon to solve computationally hard problems.”

What to expect, when

There’s a lot of confusion about the current state of quantum computing, which industry research firms Boston Consulting Group (BCG) and Forrester are attempting to clarify.

In the Forrester report, Hopkins estimates that quantum computing is in the early stages of commercialization, a stage that will persist through 2025 to 2030. The growth stage will begin at the end of that period and continue through the end of the forecast period, which is 2050.

A recent BCG report estimates that quantum computing will become a $263 billion to $295 billion market under two different forecasting scenarios, both of which span 2025 to 2050. BCG also reasons that the quantum computing market will advance in three distinct phases:

  1. The first generation will be specific to applications that are quantum in nature, similar to what D-Wave is doing.
  2. The second generation will unlock what report co-author and BCG senior partner Massimo Russo calls “more interesting use cases.”
  3. In the third generation, quantum computers will have achieved the number of logical qubits required to achieve Quantum Supremacy. (Note: Quantum Supremacy and logical qubits versus physical qubits are important concepts addressed below.)

“If you consider the number of logical qubits [required for problem-solving], it’s going to take a while to figure out what use cases we haven’t identified yet,” said BCG’s Russo. “Molecular simulation is closer. Pharma company interest is higher than in other industries.”

Life sciences, developing new materials, manufacturing, and some logistics problems are ideal for quantum computers for a couple of possible reasons:

  • A quantum machine is more adept at solving quantum mechanics problems than a classical computer, even when the classical computer is able to simulate a quantum computer.
  • The nature of the problem is so difficult that it can’t be solved using classical computers at all, or it can’t be solved using classical computers within a reasonable amount of time, at a reasonable cost.

There are also hybrid use cases in which parts of a problem are best solved by classical computers and other parts of the problem are best solved by quantum computers. In this scenario, the classical computer breaks the problem apart, communicates with the quantum computer via an API, receives the result(s) from the quantum computer and then assembles a final answer to the problem, according to BCG’s Russo.

“Think of it as a coprocessor that will address problems in a quantum way,” he said.
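
In code, the coprocessor pattern typically looks like a classical optimization loop that farms only the cost evaluation out to the quantum service. The sketch below is schematic: evaluate_on_qpu is a hypothetical stand-in for a vendor API call, simulated classically here so the example runs.

```python
# Schematic of the hybrid pattern: a classical loop drives the search and
# only the cost evaluation is sent to the quantum "coprocessor."

def evaluate_on_qpu(parameters):
    """Hypothetical stand-in for a quantum service API call.
    Simulated classically here so the sketch actually runs."""
    return sum((p - 1.0) ** 2 for p in parameters)

def hybrid_solve(initial, step=0.1, iterations=50):
    """Naive coordinate descent: the classical side proposes parameters,
    the quantum side scores them."""
    params, best = list(initial), evaluate_on_qpu(initial)
    for _ in range(iterations):
        for i in range(len(params)):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                cost = evaluate_on_qpu(trial)  # the "quantum" call
                if cost < best:
                    params, best = trial, cost
    return params, best

print(hybrid_solve([0.0, 0.0]))  # converges toward the minimum at (1, 1)
```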

While there is a flurry of quantum computing announcements at present, practically speaking, it may take a decade to see the commercial fruits of some efforts and multiple decades to realize the value of others.

Logical versus physical qubits

Not all qubits are equal, which is true in two regards. First, there’s an important difference between logical qubits and physical qubits. Second, the large vendors are approaching quantum computing differently, so their “qubits” may differ.

When people talk about quantum computers or semiconductors that have X number of qubits, they’re referring to physical qubits. The number of qubits matters because computational power grows exponentially with each additional qubit. According to Microsoft, a calculator is more powerful than a single qubit, and “simulating a 50-qubit quantum computation would arguably push the limits of existing supercomputers.”
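
Microsoft’s 50-qubit remark follows from straightforward arithmetic: a classical simulator must store 2^n complex amplitudes for an n-qubit system. A back-of-the-envelope check, assuming 16 bytes per amplitude:

```python
# Back-of-the-envelope: memory needed to store the full state vector of an
# n-qubit system, assuming 16 bytes per complex amplitude (complex128).
for n in (30, 40, 50):
    gib = (2 ** n * 16) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 50 qubits works out to roughly 16 million GiB (about 16 PiB) -- far more
# memory than existing supercomputers offer.
```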

BCG’s Russo said that for semiconductors, the ratio of physical qubits required to create a single logical qubit can be as high as 3,000:1. Forrester’s Hopkins said he’s heard numbers ranging from 10,000 to 1 million or more, generally.

“No one’s really sure,” said Hopkins. “Microsoft thinks [it’s] going to be able to achieve a 5X reduction in the number of physical qubits it takes to produce a logical qubit.”  

The difference between physical qubits and logical qubits is extremely important because physical qubits are so unstable they need the additional qubits to ensure error correction and fault tolerance.

Get a grip on Quantum Supremacy

Quantum Supremacy does not signal the death of classical computers for the reasons stated above. Google cites this definition: “A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy.”

“We’re not going to achieve Quantum Supremacy overnight, and we’re not going to achieve it across the board,” said Forrester’s Hopkins. “Supremacy is a stepping stone to delivering a solution. Quantum Supremacy is going to be achieved domain by domain, so we’re going to achieve Quantum Supremacy, which Google is advancing, and then Quantum Value, which IBM is advancing, in quantum chemistry or molecular simulation or portfolio risk management or financial arbitrage.”

The fallacy is believing that Quantum Supremacy means that quantum computers will be better at solving all problems, ergo classical computers are doomed.

Given the proper definition of the term, Google is attempting to achieve Quantum Supremacy with its 72-qubit quantum processor, Bristlecone.

How to get started now

First, understand the fundamental differences between quantum computers and classical computers. This article is merely introductory, given its length.

Next (and in parallel with the next piece of advice), find out what others are attempting to do with quantum computers and quantum simulations and consider what use cases might apply to your organization. Do not limit your thinking to what others are doing. Based on a fundamental understanding of quantum computing and your company’s business domain, imagine what might be possible, whether the end result might be a minor percentage optimization that would give your company a competitive advantage or a disruptive innovation such as a new material.

Experimentation is also critical, not only to test hypotheses, but also to better understand how quantum computing actually works. The experimentation may inspire new ideas, and it will help refine existing ideas. From a business standpoint, don’t forget to consider the potential value that might result from your work.

Meanwhile, if you want to get hands-on experience with a real quantum computer, try IBM Q. The “IBM Q Experience” includes user guides, interactive demos, the Quantum Composer, which enables the creation of algorithms that run on real quantum computing hardware, and the QISKit software development kit.
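
As a taste of what that hands-on experience looks like, here is a minimal Bell-state circuit in QISKit, run on a local simulator rather than real hardware. QISKit’s API has changed over time, so treat the exact calls as illustrative.

```python
# Illustrative QISKit sketch: build a two-qubit Bell-state circuit and run
# it on a local simulator. QISKit's API has evolved, so treat the exact
# calls as indicative rather than definitive.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

result = AerSimulator().run(qc, shots=1024).result()
print(result.get_counts())   # expect roughly half '00' and half '11'
```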

Also check out the Quantum Computing Playground, a browser-based WebGL Chrome experiment that features a GPU-accelerated quantum computer with a simple IDE interface and its own scripting language with debugging and 3D quantum state visualization features.

In addition, the Microsoft Quantum Development Kit Preview is available now. It includes the Q# language and compiler, the Q# standard library, a local quantum machine simulator, a trace quantum simulator that estimates the resources required to run a quantum program, and a Visual Studio extension.