Strategic Insights and Clickworthy Content Development

Category: Software Development

Why Enterprises Struggle with Hybrid Cloud and DevOps

Cloud

More enterprises are moving to the cloud and implementing DevOps, containers and microservices, but their efforts are falling short of expectations. A recent study from the Ponemon Institute identifies some of the core challenges they face.

Organizations implementing cloud, DevOps, containers, and microservices are often surprised when the outcomes don’t match expectations. A recent survey by the Ponemon Institute, sponsored by hybrid cloud management platform provider Embotics, revealed that 74% of the 600 survey respondents who are responsible for cloud management believe DevOps enablement capabilities are essential, very important, or important for their organization. But only one-third believe their organization has the ability to deliver those capabilities.

Eighty percent believe that microservices and container enablement are essential, very important, or important, but only a quarter believe their organization can quickly deliver those capabilities. The lagging DevOps and microservices enablement costs the average enterprise $34 million per year, which is 23% of their average annual cloud management budget of $147 million.
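
The study's cost figures are internally consistent; a quick sanity check of the article's numbers:

```python
# Sanity-check the cited figures: a $34M annual cost of lagging DevOps and
# microservices enablement against a $147M average cloud management budget.
ANNUAL_ENABLEMENT_GAP = 34_000_000   # dollars per year
AVG_CLOUD_BUDGET = 147_000_000       # dollars per year

def share_of_budget(cost: float, budget: float) -> int:
    """Return cost as a whole-number percentage of budget."""
    return round(100 * cost / budget)

# 34 / 147 rounds to 23, matching the 23% the study reports.
```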

“There are so many things that result in the loss of information assets and infrastructure issues that can be very costly for organizations to deal with. This rush to the cloud has created all sorts of issues for organizations,” said Larry Ponemon, chairman and founder of the Ponemon Institute. “The way organizations have implemented DevOps facilitates a rush-to-release mentality that doesn’t necessarily improve the state of security.”

Organizations assume that implementing a DevOps framework will necessarily result in software quality improvement and risk reduction because they can build in security early in the software development lifecycle (SDLC).

“That’s all theoretically true, but organizations are pretty sloppy about the way they’ve implemented software,” said Ponemon. “In other Ponemon studies, we’ve seen that there’s a difference between the DevOps that you want versus the DevOps that you get.”

Why hybrid cloud implementations are so tough

Shadow IT is one reason why cloud implementations are more costly and less effective than they could be.

“Companies develop silos where different groups of people have different tools that aren’t necessarily compatible,” said Ponemon. “A lot of organizations think a cloud platform tool is fungible whether you buy it from Vendor A, B, or C and you’re going to get the same outcome, but that’s not true at all.”

Forty-six percent of survey respondents said their organizations’ consumption model is “cloud direct,” meaning that end users are bypassing IT and cloud management technologies. Instead, they’re communicating directly with clouds such as AWS or Microsoft Azure via native APIs or their own public cloud accounts.

The patchwork model of cloud adoption results in complex environments that provide little visibility and are difficult to manage. In fact, 70% of survey participants said they have no visibility into the purpose or ownership of the virtual machines in their cloud environment.

Interestingly, one oft-touted cloud benefit is that users don’t have to understand where resources reside.

“You would expect if you’re operating in a cloud environment [that] it’s going to help you gain visibility because you don’t have to look very far to find your data, but it basically doesn’t work that way,” said Ponemon. “A lot of organizations don’t have the tools to understand where their machines are and where the data is located. They don’t have much control over the network layer and sometimes no control over the application layer so the end result becomes kind of messy.”

The lack of visibility and control exposes an enterprise to several risks, including negligence and cyberattacks, creating “a perfect storm” for bad actors. Fifty-seven percent of survey participants believe users are increasing business risk by violating policies about where digital assets can reside.

“A lot of CSOs are angry at the lines of business because they can’t necessarily implement the security protocols they need,” said Ponemon. “The whole idea of getting things out quickly can be more important when a little bit slower delivery can result in a higher quality level.”

Ideally, organizations should have a single user interface that enables them to view their entire environment, although 68% of the survey respondents lack that capability. 

“Having a single user interface is really smart because you have one place where you can do a lot of cool things and have more control and visibility, but not all platforms work very well,” said Ponemon. “Platforms have to be implemented like software. It’s not like an appliance that you plug in. It requires a lot of stuff to make it work the right way.”

When a hybrid cloud management platform is implemented correctly, it can enable a successful DevOps environment because it provides greater control over the hybrid cloud.

Hybrid cloud management is evolving

Hybrid cloud management maturity has three phases, according to Ponemon. In the first phase, hybrid IT develops organically because lines of business are procuring their own cloud solutions, typically without the involvement of IT or security.

“I think this whole cloud shadow issue is a real issue because it creates turf wars and silos,” said Ponemon. “Dealing with that can be very tough and that’s why we need to have management with an iron fist. There’s too much at risk so some people need to be more influential than others in the decision-making processes rather than running it organically.”

Companies that have matured to the second phase use a cloud management platform (CMP 1.0), although they tend to experience disparities between the platform’s capabilities and their actual DevOps requirements. The Ponemon Institute defines CMP 1.0 as “solutions providing provisioning, automation, workflow orchestration, self-service, cloud governance, single-pane-of-glass visibility, capacity rightsizing and cost management across public, private and hybrid clouds.”

By comparison, the newer CMP 2.0 platforms provide those benefits and they reduce the friction and complexity associated with microservices, containers, cloud-native applications, and DevOps. CMP 2.0 also enables corporate governance and compliance without impeding development speed and agility. Sixty-three percent of survey respondents believe CMP 2.0 capabilities would reduce hybrid cloud management costs by an average of $34 million per year. 

Will Containers Replace VMs?

While an across-the-board migration from virtual machines to containers isn’t likely, there are issues developers and operations personnel should consider to ensure the best solution for the enterprise.

Chances are, your company uses virtual machines on premises and in the cloud. Meanwhile, the developers are likely considering containers, if they haven’t adopted them already, to simplify development and deployment, improve application scalability and increase application delivery speed.

VMs and containers have enough architectural similarities that some question the long-term survival of VMs, especially since VMs are already a couple of decades old. There are also serverless cloud options available now that dynamically manage the allocation of machine resources so humans don’t have to.

“If you embrace [a serverless] service, then you truly don’t have to worry about the virtual machines or the IaaS instances under the hood. You just turn your containers over to the service and they take care of all the provisioning underneath to run those containers,” said Tony Iams, research director at Gartner. “We’re far away from being able to do that on-premises, but if you’re doing this in the cloud, you no longer worry about [virtual machine] instances in the cloud.”

Most enterprises have been using VMs for quite some time and that will continue to be true for the foreseeable future.

“Most of the container infrastructure deployments are going to be on virtual machines,” said Iams. “If you look at where container runtimes are deployed as well as container orchestration systems such as Kubernetes, more often than not, it’s on virtual infrastructure and there’s a very important reason for that. In most cases, especially in today’s enterprise environments, the basic provisioning processes for infrastructure are going to be based on virtual machines, so even if you wanted to switch to provisioning on bare metal, it’s quite possible that you wouldn’t have any processes in place to do that.”

As organizations graduate to more sophisticated container deployments using orchestration platforms like Kubernetes, they can face significant provisioning challenges.

“Kubernetes has to be upgraded pretty frequently given the rapid pace of development, so you need to have solid provisioning processes in place. Most likely that’s going to be based on virtual machines,” said Iams. “Our guidance is not to make any changes there, to continue using the tools and processes you have in place, which is likely based on virtual machines, and focus on achieving the benefits of Kubernetes that are achieved higher up in the stack.”

Mitch Pirtle, principal at Space Monkey Labs and the creator of Joomla, an open source content management system, agrees that VMs will continue to provide value, although what’s considered a VM will likely change over time.

“The main benefit of the VM is that you can deploy that entire stack fully configured and ready-to-roll. My biggest issue with VMs is that if you want to deploy a stack that has unique scaling needs, you’re going to be in a world of hurt,” said Pirtle. “Containers, on the other hand, have a huge upside, especially in the enterprise space. You can scale different parts of your platform as needed, independent of each other.”

Container interest is fueled by developers

Developers are under constant pressure to deliver higher quality software at lower costs in ever-faster time frames. Containers enable applications to run in different environments, and even across environments, without a lot of extra work. That way, developers can write an application once and make minor modifications to it rather than writing the same application time and again. In addition, containers help facilitate DevOps efforts.

“One of the most important benefits containers provide is that once you have a containerized application, it runs in exactly the same environment at every stage of the lifecycle, from initial development through testing and deployment, so you get mobility of a workload at every stage of its lifecycle,” said Iams. “In the past, you would develop an application and turn it over to production. Any environment they would be running it in would run into problems, so they’d kick it back to developers and you’d have to try to recreate the environment that it was running in. A lot of those issues go away once you containerize a workload.”
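
Iams’s point about workload mobility is easiest to see in a container image definition: everything the application needs is declared in one file, so the same environment travels from a developer’s laptop through test and into production. A minimal, illustrative sketch (the base image, file names, and entry point are placeholders, not a prescription):

```dockerfile
# Illustrative only: pinning the base image and dependencies means every
# lifecycle stage (dev, test, production) runs the identical stack.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Because the image is built once and promoted unchanged from stage to stage, the "works on my machine, fails in production" cycle Iams describes largely disappears.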

Mike Nims, Director Advisory, Workday at KPMG, said containers have greatly simplified his job because they abstract so much complexity.

“When I was a DBA, everybody knew I was on the Homer Simpson server, my instance lived on Homer Simpson, and my instance was Bart. In a container situation, I don’t even know where I sit,” said Nims. “Taking that technical view away is extremely beneficial [because] it also allows the developer to focus more on the code, integration or UI they’re working on as opposed to the hardware or the server.”

Containers aren’t a panacea

The use case tends to define whether VMs need to be used in conjunction with containers. Businesses operating in highly regulated environments, and others that place a high premium on security, want multitenancy capabilities so virtualized workloads can run in isolation.

“With containers, you don’t quite get the same level of isolation,” said Iams. “However, you do get some isolation that may be sufficient for internal use cases.”

One concern is whether workloads running across containers are operating in the same trust domain. If not, extra steps must be taken to ensure isolation.

“That’s what’s underlying some of these new [sandboxed container runtime] initiatives you hear about, such as gVisor. [D]evelopers are trying to come up with new virtualization mechanisms that give you the same kind of isolation you would have between virtual machines and the efficiency and low resource consumption of containers,” said Iams. “That’s early-stage, for the time being. If you want to have that kind of isolation, you need virtual machines.”

However, containers are helping to eliminate application-level friction for end users and IT.

“My developers and I don’t ever think about this. A cloud container is really extending that user experience into an area of IT that for the longest time has been controlled by technical gearheads,” said KPMG’s Nims. “My wife, who doesn’t work in IT, could go to Amazon Cloud and set up a MySQL environment to do database work if she wanted to. That’s a container.”

Meanwhile, he thinks enterprise architects and DBAs should consider how cloud containers can be used to manage applications at scale.

“As a DBA, one [of my] biggest pain points was the ancillary applications I was responsible for managing that weren’t necessarily tied to my database. When I did it, it was five to one or 10 to one databases per DBA. Then, each of these customers would have another 20 to 30 applications. I couldn’t manage that so we would hire SAs,” said Nims. “If I have a MySQL environment in a cloud container, I can patch it across all of my instances. Back in the day, I would have to individually patch each instance. That’s huge in terms of scalability and management because now you can have a control room managing hundreds of thousands of applications, potentially, with a couple of guys.”
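
Nims’s patch-once observation is essentially what image-based deployment buys: the patch is applied to one image, and every instance is repointed at it. A hypothetical sketch (deployment and image names invented):

```python
# Hypothetical sketch: patch by updating one image tag across a fleet,
# instead of logging into each instance and patching it by hand.
def roll_out_patch(deployments: dict, image: str, patched_tag: str) -> dict:
    """Repoint every deployment running `image` at the patched tag."""
    return {
        name: f"{image}:{patched_tag}" if ref.startswith(image + ":") else ref
        for name, ref in deployments.items()
    }

fleet = {
    "billing-db": "mysql:8.0.33",
    "orders-db": "mysql:8.0.33",
    "cache": "redis:7.2",
}
patched = roll_out_patch(fleet, "mysql", "8.0.36")
# Both MySQL deployments now reference mysql:8.0.36; the cache is untouched.
```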

Developers and operations have to work together

Operations teams have spent the last two decades configuring and provisioning VMs. Now, they need to get familiar with containers and container orchestration platforms, if they’re not already, given the popularity of containers among developers.

“It will take some time before both sides of this initiative come to terms and arrive at consistent operational processes,” said Iams. “If you’re doing this on-premises, it doesn’t get set up in a vacuum. Operations still has to configure storage and networking, and there may be some authentication and security-type mechanisms that have to be configured to make sure that this great new containerized infrastructure works well. Ops can’t do that in isolation.”

Of course, getting container infrastructure to work well isn’t an event, it’s a process, especially given the furious pace of container-related innovation.

“Getting the pieces to work well together is a challenge,” said Iams. “There are upgrades that are going to happen at multiple levels of the stack that are going to require processes to be in place that reflect the priorities of both groups.”

Bottom line

VMs and containers will continue to coexist for some time, mainly because businesses require the benefits of both. Even if containers could replace VMs for every conceivable use case, a mainstream shift wouldn’t happen overnight because today’s businesses are heavily dependent on and extremely familiar with VMs.

One Point the Equifax Breach Drives Home

Today’s developers use more third-party and open-source components, libraries and frameworks than ever to deliver and update products in ever-shrinking delivery cycles. In the race to get to market, it’s easy to overlook or ignore details that can lead to a security breach.

For example, Equifax blamed its recent security breach on an Apache Struts vulnerability (CVE-2017-5638) and later admitted it failed to install a patch. That patch had been available for six months, according to The Apache Software Foundation.

“The Equifax hack is so interesting, mostly because their response to the hack has been so poor. Blaming a hack on any sort of software issue – open source or proprietary – is simply part of their inadequate response. It’s a ‘the dog ate my paper’ excuse,” said James Stanger, chief technology evangelist at CompTIA. “That’s not much of an explanation, especially considering that Equifax disclosed this problem on September 7 after knowing about it since July 29.”

What if the software you built was compromised and you discovered that the root cause was a third-party building block you used? You didn’t build that piece of software, after all, so perhaps that party should be liable for any damages that piece of software caused.

Practically speaking, good luck with that argument.

Little or No Third-Party Liability

If you’re using third-party building blocks in your software, which you likely are, the buck stops with you. Sure, someone else’s code may have caused a catastrophic failure, but did you read the fine print in the license agreement?  Third-party developers have several ways of dealing with the matter contractually.

“There may be disclaimers, especially in the open source community, that say, ‘This component is [provided] as-is’ and you as the licensee are responsible for its testing and usage in another system,” said Roy Hadley, Jr., co-chair of the Privacy & Cybersecurity team at law firm Thompson Hine. “If you choose to use it in a nuclear facility or the space shuttle, that’s on you.”


Those who use third-party software in their products are ultimately responsible because the provider can’t foresee how its software will be used or configured by others. So, the licensor protects itself using an “as-is” clause or a limitation of liability. Alternatively, the licensor may require indemnity from the licensee, which means that if something goes wrong and the provider of the component gets sued, you, the licensee, bear the liability.

What Software Developers Should Do

Test, test, test. Ideally, developers should take the time to understand every piece of third-party software they’re using to make sure it does what it’s supposed to do and that it’s been tested for security vulnerabilities. They should also have a mechanism to ensure that the associated updates and patches are up-to-date.
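
One simple mechanism for staying on top of patches is to compare pinned component versions against the minimum fixed versions published in security advisories. A sketch under that assumption (component names and versions are illustrative; Struts 2.3.32 was a release that fixed a known remote-code-execution flaw):

```python
# Illustrative sketch: flag pinned components that are older than the
# minimum patched version listed in a security advisory feed.
def parse_version(v: str) -> tuple:
    """'2.3.31' -> (2, 3, 31) for numeric comparison."""
    return tuple(int(part) for part in v.split("."))

def outdated_components(pinned: dict, minimum_patched: dict) -> list:
    """Return names of components pinned below their patched version."""
    return sorted(
        name
        for name, version in pinned.items()
        if name in minimum_patched
        and parse_version(version) < parse_version(minimum_patched[name])
    )

pinned = {"struts": "2.3.31", "commons-io": "2.11.0"}
advisories = {"struts": "2.3.32"}  # minimum fixed version per advisory
# outdated_components(pinned, advisories) flags "struts" for patching.
```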

“I think you have an absolute responsibility to make sure that third-party components work, work together and work the way they’re supposed to,” said Jason Wacha, an attorney and founder of WS Law Offices, which specializes in software licensing. “One of the things about the open source community is you hear [about a software vulnerability], they announce it and everybody jumps on it and tries to fix it. Certainly this was true for the Struts project. One of the things about proprietary software is if someone discovers a vulnerability, it’s not going to get out there and people aren’t going to talk about it.”

The obvious constraint is time. There just isn’t enough time to test everything.

“The issues we keep confronting or not confronting in the IT industry are ignoring or omitting key steps of the Software Development Lifecycle (SDLC) and then mismanaging how that resulting software is deployed,” said CompTIA’s Stanger. “One of the primary reasons why software issues get missed by the good guys and exploited by the bad guys is because companies, individuals and groups that develop software tend to rush software to market.”

There are also challenges with the way software is configured and deployed.

“Many IT pros and even security pros still tend to think, ‘If I obtain the code from a secure source and run the hash check, I’ll be fine. I’ll just update the code as necessary.’ Plus, relatively few companies actually test the software properly in a production environment by ‘throwing rocks’ at the code and doing proper threat hunting,” said CompTIA’s Stanger. “Fewer still are able to update software adequately, because updates often break custom code. Imagine how security issues can propagate when you combine installed and cloud solutions.”
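
The hash check Stanger mentions is easy to perform, and his caveat is worth making concrete: a matching digest proves the download wasn’t corrupted or tampered with in transit, not that the code is free of vulnerabilities. A minimal example using Python’s standard library:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of data as lowercase hex."""
    return hashlib.sha256(data).hexdigest()

def matches_published_digest(data: bytes, expected_hex: str) -> bool:
    """True if the downloaded bytes match the digest the project published.

    This establishes integrity only. A perfectly intact download can still
    contain an exploitable flaw, which is Stanger's point: the hash check
    is necessary, not sufficient.
    """
    return sha256_hex(data) == expected_hex.lower()
```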

While developers should verify that the third-party software they use has been adequately tested by whomever built it, they need to retest it in the context of their own product.

“The reality of the current world we live in is that any business must undertake extreme caution and implement a thorough due diligence process when vetting any vendor that impacts their supply chain or is processing or storing any information on its behalf,” said Rob Rosenzweig, vice president and national cyber practice leader for insurance brokerage Risk Strategies Company. “While there is significant upside to the utilization of outsourced vendors in managing expense, obtaining a higher level of security and realizing operational efficiencies; the flipside is that organizations lose control and still retain all of the risk.”

Lesson Learned

The Equifax breach underscores the need for vigilance because hackers are constantly experimenting to find and exploit vulnerabilities, particularly when sensitive information is involved. When a vulnerability is found, it needs to be addressed in a timely fashion, which Equifax failed to do. As a result, the Federal Trade Commission (FTC) is now investigating Equifax.

As is evident, the failure to implement one patch can have devastating consequences.

DevOps Not Working? Here’s Why.

DevOps can help organizations get better software to market faster, when it’s working. When it’s not, development and operations teams aren’t working as a cohesive unit; they’re operating as distinct phases of a software development lifecycle.

Part of the problem may involve tools. Either the tools still operate as silos or they don’t provide the kind of cross-functional visibility that DevOps teams require. However, a bigger task may be getting development and operations working together.

What makes DevOps even more challenging is that there’s no one right way to do it. Of course, there are better and worse ways to approach it, so here are a few suggestions to consider.

Think before automating. Automation is part of DevOps, but it’s not synonymous with DevOps. While it’s true that automating tasks saves time, automation also accelerates the rate at which mistakes can be propagated.

“If you just automate things and you haven’t built the skills to handle high speed, you’re putting yourself in a place where friction and accidents can happen,” said Sean Regan, head of growth, software teams at software development tool provider Atlassian. “Before you automate everything, start with a culture. You’ll have happier developers, happier customers, and better software.”

Test automation is essential for DevOps, and to do that well, developers need to test their software in a production environment.

“DevOps is founded in automation. One of the first things organizations recognize is they need a dynamic infrastructure, which most people think is cloud,” said Nathen Harvey, vice president of community development at DevOps workflow platform provider Chef Software. “It doesn’t have to be cloud; it means you have compute resources available to developers and the people who are running your production organization.”

With the help of automation and developer access to production environments, DevOps teams are delivering software in days or weeks instead of months.

Cultivate a DevOps culture. Software teams that have gone through an Agile transformation remember they had to change their culture for it to succeed. The same is true for DevOps.

“You need to get your teams collaborating in a way they haven’t done before,” said Harvey. “It becomes much less about a hand-off and more about understanding the common goals we’re working towards.”

One indication of DevOps maturity is whether the shipment of software is considered an end or a beginning. Atlassian used to celebrate after a product shipped, which used to be common for software companies. Now Atlassian celebrates milestones hit after the release, such as the number of customers using a new feature within a given time frame.

Take a hint from web giants. A decade ago, web companies were embracing DevOps and figuring out how infrastructure could be managed as code. Meanwhile, other companies were operating in business-as-usual mode.

“If you’re coming from a more traditional organization, the idea of managing infrastructure as code may still be new,” said Chef Software’s Harvey. “I think the best way to achieve success is to pull together a cross-functional team that cares about driving a particular business outcome, such as how to deliver this one change out to our customer.”
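
At its core, managing infrastructure as code means declaring a desired state and letting tooling converge the running environment toward it, which is the model behind tools like Chef. A toy sketch of that reconciliation step (resource names invented):

```python
# Toy model of desired-state reconciliation: declare what should exist,
# diff it against what does exist, and apply only the differences.
def plan(desired: dict, actual: dict) -> dict:
    """Compute the changes needed to converge actual onto desired."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
        "update": sorted(
            name for name in set(desired) & set(actual)
            if desired[name] != actual[name]
        ),
    }

desired = {"web": {"instances": 3}, "db": {"instances": 1}}
actual = {"web": {"instances": 2}, "cache": {"instances": 1}}
# plan(desired, actual): create "db", delete "cache", update "web".
```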

Cheat. Companies spend lots of time reinventing what works at other companies. Atlassian memorialized a lot of what it has learned in self-assessments and playbooks, so DevOps teams can identify and address the challenges they face.

“Customers are coming to us saying, ‘Give us playbooks, give us patterns, give us specific, actionable ways to move toward DevOps,’” said Regan. “If you’re moving to DevOps, there’s usually an early stage where you wonder if you’re doing it right.”

…But the bugs remain

As seen in SD Times.

Software teams are under pressure to deliver higher-quality software faster, but as high-profile failures and lackluster app ratings indicate, it’s easier said than done. With the tremendous growth of agile development, finding bugs earlier in the development cycle has become an imperative, but not all organizations are succeeding equally well.

“Developers realize they need better tools to investigate problems, but we need to make sure we’re not creating problems in the first place,” said Gil Zilberfeld, product manager of unit testing solution provider Typemock.

Software teams are using all kinds of tools, including bug and defect trackers, SCM tools, testing suites, and ALM suites, and yet software quality has not improved generally, according to William Nichols, a senior member of the technical staff at the Software Engineering Institute.

“The data don’t suggest that the software being produced is any better than it was a decade or 20 years ago, whether you measure it by lines of code or function points and defects,” he said. “We’re seeing one to seven defects per 1,000 lines of code. We’re making the same mistakes, and the same mistakes cause the same problems.”
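
The range Nichols cites is a defect density, defects per thousand lines of code (KLOC); scaled up, it shows why the numbers matter:

```python
def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    """Defect density: defects per thousand lines of code."""
    return defects / (lines_of_code / 1000)

# At Nichols's one-to-seven range, a 250,000-line system would carry
# somewhere between 250 and 1,750 defects.
low_estimate = 1 * (250_000 // 1000)    # 250 defects
high_estimate = 7 * (250_000 // 1000)   # 1,750 defects
```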

One problem is focusing too much on the speed of software delivery rather than software quality. Nichols said this is a symptom of unrealistic management expectations. Tieren Zhou, founder and CEO of testing and ALM solution provider TechExcel, considered it a matter of attention: what’s sexy versus what matters.

“Bug fixing is less interesting than building features,” said Zhou. “In the interest of acquiring new customers, you may be losing old customers who are not happy with your products.”

While software failures are often blamed on coding and inadequate testing, there are many other reasons why software quality isn’t what it should be, as evidenced by defects injected at various points within the software life cycle.

Bug and defect tracking go agile

Bug and defect tracking is becoming less of a siloed practice as organizations embrace agile practices. Because agile teams are cross-functional and collaborative, tools are evolving to better align with their needs.

“We’re moving from isolation to transparency,” said Paula Rome, senior project manager at Seapine, a provider of testing and ALM solutions. “It makes no sense to have critical decision-making information trapped in systems.”

Since software teams no longer have weeks to dedicate to QA, developers and testers are working more closely together than ever. While pair programming and test-driven development practices can help improve the quality of code, not every team is taking advantage of those and other means that can help find and fix defects earlier in the life cycle.

“There’s a need to find problems earlier, more often and faster, but what you’re seeing are .01 releases that fix a patch, or software teams using their customer base as a bug-tracking and bug-finding system,” said Archie Roboostoff, experience director for the Borland portfolio at Micro Focus, a software-quality tool provider.

Atlassian, maker of the JIRA issue and bug tracker, is helping teams get more insight into bugs with its latest 6.2 release. Instead of viewing bugs in “open” and “closed” terms, users can now see how many commits have been made, whether peer reviews were successful, and whether the code has been checked into production.

“The process of fixing a bug is a multi-stage process,” said Dan Chuparkoff, head of JIRA product marketing at Atlassian. “Developers check things out of the master branch, write some code, submit their code for peer reviews, peers comment on their code, the developers make some adjustments, check it into the master branch, and roll it up into production. Those steps are completely invisible in most bug systems so the stakeholders have trouble seeing whether something’s close to being finished or not.”
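
The multi-stage flow Chuparkoff describes is what makes “percent complete” computable where a binary open/closed flag is not. A sketch with invented stage names:

```python
# Sketch: track a fix through an ordered pipeline of stages rather than
# a binary open/closed flag. Stage names here are illustrative.
STAGES = [
    "branched", "code-written", "in-review",
    "review-approved", "merged", "in-production",
]

def progress(last_completed_stage: str) -> float:
    """Fraction of the pipeline completed, from 0.0 to 1.0."""
    return (STAGES.index(last_completed_stage) + 1) / len(STAGES)

# A fix that is merged but not yet deployed is 5/6 done, a nuance that
# "open vs. closed" cannot express to stakeholders.
```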

uTest (soon to be known as Applause) offers “in the wild” testing, which is a crowdsourced approach to quality assurance that enables organizations to find issues in production before their customers do.

Software teams are using the service to supplement their lab tests, although some, especially those doing three builds a week, are running lab and in-the-wild tests in parallel.

“In an agile world and in a continuous world, you want to make sure things are thoroughly tested and want to accelerate your sprints,” said Matt Johnston, chief strategy officer of uTest. “We help them catch things that were missed in lab testing, and we’re helping them find things they can’t reproduce.”

To keep pace with faster software release velocities, Hamid Shojaee, CEO of Scrum software provider Axosoft, is focusing on usability so individuals and teams can do a better job of resolving defects in less time.

“The custom pieces of information associated with each tracked bug are different for every team,” he said. “Creating custom fields has been a time-consuming and difficult thing to do when you’re customizing a bug-tracking tool. We have an intuitive user interface, so what would have taken you 20 to 30 minutes takes seconds.”

AccuRev is also enabling teams to spend less time using tools and more time problem-solving.

“Defect tracking can be cumbersome,” said Joy Darby, a director of engineering at AccuRev. “By the time software gets to QA, they have to ask questions or reference e-mails or look at a white board. With a central repository, you have instant access to all the artifacts, all the tests that were done, the build results, and any sort of complex code analysis you may have done.”

While more tools are evolving to support continuous integration and deployment, organizational cultures are not moving as quickly.

“While we’re all off iterating, the business is off waterfalling,” said Jeff Dalton, a Standard CMMI Appraisal Method for Process Improvement lead appraiser and CMMI instructor. “Software teams are accelerating their delivery cycles while the rest of the business still views software in terms of phases, releases, large planning efforts, large requirements, and 12-month delivery cycles.”

The disconnect between agile and traditional ways of working can work against software quality when funding is not tied to the outcome of sprints, for example.

Adopting a life-cycle view of quality

As software teams become more agile, discrete workflows become collaborative ones that require a life-cycle view of software assets and interoperable tools. Even with the greater level of visibility life-cycle approaches provide, the root cause of problems nevertheless may be overlooked in the interest of finding and fixing specific bugs.

“We’ve inflated processes and tools in order to support something that could have been figured out earlier in the process if we had defined a better spec,” said Typemock’s Zilberfeld. “Instead, we spend five hours talking about something the customer doesn’t care about.”

Most ALM tools are open enough to support other tools whose capabilities equal or surpass what is in the ALM suite. Conversely, narrower tool providers are looking at bugs and defects in a broader sense because customers want to use the tools in a broader context than they have in the past.

“Software is no longer an isolated venture. It really affects all parts of the business,” said Atlassian’s Chuparkoff. “Modern issue trackers have REST APIs that allow you to easily connect your issue tracker to the entire product life cycle. We wanted to make sure JIRA can integrate with your proprietary tool and other tools via REST APIs or plug-ins from our marketplace. We realize people aren’t going to use JIRA in a silo by itself.”
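Chuparkoff’s point about REST integration can be sketched in a few lines. The snippet below is a minimal illustration, not Atlassian’s own client code: it builds the JSON body that JIRA’s documented issue-creation endpoint (`/rest/api/2/issue`) expects and posts it with the standard library. The `base_url` and `auth_header` values, and the helper names, are placeholders for a real instance.

```python
import json
import urllib.request

def build_issue_payload(project_key, summary, description, issue_type="Bug"):
    """Build the JSON body JIRA's REST API expects when creating an issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": issue_type},
        }
    }

def create_issue(base_url, auth_header, payload):
    """POST the payload to JIRA's issue-creation endpoint."""
    req = urllib.request.Request(
        url=f"{base_url}/rest/api/2/issue",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The response includes the new issue's key, e.g. "WEB-123".
        return json.load(resp)
```

The same shape works in reverse: a proprietary tool can poll or subscribe to JIRA issues over the same API, which is what lets an issue tracker participate in the whole product life cycle rather than sit in a silo.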

Octo Consulting Group, which provides technology and management consulting services to federal agencies, is one of many organizations that are using JIRA in tandem with ALM solutions.

“Bug and defect tracking is part of ALM,” said Octo Consulting Group CTO Ashok Nare. “While we use JIRA, and there are a lot of good ALM products like Rally, VersionOne and CollabNet…the tools are really there to facilitate a process.”

Despite the broader life-cycle views, software quality efforts often focus on development and testing even though many defects are caused by ill-defined requirements or user stories.

“Philosophically, we didn’t use to think about bugs and defects in terms of requirements problems or customer problems or management problems, so we focused on code,” said CMMI’s Dalton. “But what we found was the code did what it was supposed to do, but didn’t do what the customer wanted it to do. It’s important to understand where the defect is injected into the process, because if we know that, we can change the process to fix it.”

Dalton prefers the process-model approach, which includes prototypes, mockups and wireframes as part of the requirements and design process, because they surface problems in the early stages, when they are least costly to fix.

“Every time there’s an assumption or something fuzzy in the requirements it leads to defects,” said Adam Sandman, director at Inflectra (a maker of test-management software). “If you can’t define it, you can’t build it well.”

Inflectra, TechExcel, Seapine and the other ALM solution providers tie requirements, development, testing and other life-cycle stages together so that, among other things, defects can be identified, fixed and prevented from coming back in future iterations or releases.

“We’re connecting the dots, making it possible to have transparency between the silos so you get the data you need when you need it,” said Seapine’s Rome.

In addition to providing solutions, TechExcel is trying to help software teams deliver better products by promoting the concept of “QA floaters” who, as part of an agile team, help developers define test cases and run test cases in parallel with developers.

“When developers and QA floaters are both testing, you have a built-in process that helps you find and fix bugs earlier so the developer can satisfy a requirement or story,” said TechExcel’s Zhou. “When you tie in total traceability, you tie requirements, development and testing together in a way that improves productivity and software quality.”

Who owns software quality?
Software quality has become everyone’s job, but not everyone sees it that way, which is one reason why defects continue to fall through organizational cracks.

“When you separate the accountability and resources, that’s where disaster always starts,” said Andreas Kuehlmann, SVP of research and development at testing solution provider Coverity. “A lot of teams have gotten to the point where the developers are doing a little bit of testing, but the rest is tossed over the fence to QA who can’t even start the executable.”

Coverity offers three products that move testing into development: Quality Advisor, which uses deep semantic and static analysis to identify bugs in code when the code is compiling; Security Advisor, which uses the same technology to find security vulnerabilities; and Test Advisor which identifies the most risky code.

“Moving testing into development requires a lot to be done from a workflow perspective,” said Kuehlmann. “You have to have tests running 24×7, you have to have the tools and infrastructure in place, and you have to change developers’ mindsets. That’s really hard. The role of QA is evolving into more like a sign-off check.”

The dynamics between coders and testers are changing, but not in a uniform way. A minority of organizations are collapsing coding and testing into a single function, while the majority are leveraging the skill sets of both developers and QA to optimize delivery speed and quality.

“Developers are really good at solving problems, and test engineers are good at finding vulnerabilities,” said Atlassian’s Chuparkoff. “If a developer can run an automated test after he finishes his code, he can fix the bug immediately while he’s in the thinking mode of fixing it. It’s a lot more efficient than fixing it four days later after someone gave you the issue.”

Annotated screen shots help speed up issue resolution, which is why Axosoft, Atlassian and Seapine have added the capability to their tools.

“You have to make sure people are taking the time to put the proper reproduction steps in to make sure those bugs are fixed,” said Axosoft’s Shojaee.

Not everyone on the team may be responsible for fixing defects, but many have the potential to inject them. Because software is increasingly the face of businesses, organizations are starting to realize that software quality isn’t a technical problem; it’s a business problem. For example, uTest’s Johnston recently met with the CIO of a major media company who considers software quality the CEO’s responsibility since a major portion of the company’s revenue is driven by digital experiences.

“If that sentiment can win the day, a lot more companies will be successful in the app economy,” said Johnston.

The complexity paradox
The software landscape is becoming more complex even as tools and approaches to software development grow more abstract, and both trends can make finding and fixing defects more difficult.

“It’s not about Windows and Linux anymore,” said Inflectra’s Sandman. “Now you have all these mobile devices and frameworks, and you’re seeing constant updates to browsers. If you’re building systems with frameworks and jQuery plug-ins and something goes wrong, do you fix it, ask the vendor to fix it, or ask the open-source community to fix it? Inevitably the bugs may not be in your application but in the infrastructure you’re relying on.”

Micro Focus’ Roboostoff agreed. “If users see quality as something that works, and your product doesn’t work, then it’s hugely defective,” he said.

“When I had my Web server, application server and database server sitting in my office, I could rest assured a problem was somewhere in the closet and I’d find it eventually. Now, I might have some REST services sitting in Amazon, some service-based message in Azure, six CDNs around the world, and A/B testing for optimizing linking going on, and then on Monday morning half of my customers say something is slow.”

Because there is so much complexity and because the landscape is changing so fast at so many levels, edge-case testing is becoming more important.

“When you consider there are about 160,000 combinations of devices, browsers and platforms you have to test for, most customers aren’t coming close to where they should be,” said Roboostoff. “Since it isn’t practical, you pick the biggest screen and the smallest screen, the newest devices and the oldest devices to lower that risk profile.”

The fragmentation that is continuing to occur at so many levels can cause errors that are difficult to identify and rectify.

“One brand may have four to 10 different codebases, four to 10 product road maps, varying skill sets to accomplish all that, and a multitude of platforms and devices they are building software for that they have to test against,” said Johnston. “Meanwhile, users expect things to operate like a light switch.”

The U.S. government established a standardized approach to security assessment called the Federal Risk and Authorization Management Program (FedRAMP), which is benefiting software developers and consultants who are responsible for their software quality but do not control the cloud infrastructure. Octo Consulting Group’s Nare said that FedRAMP certification simplifies the testing he would otherwise have to do.

“As the level of abstraction goes up, if you’re only testing the top layer, you have to assume that the lower layers underneath like the infrastructure in the cloud and the PaaS are fundamentally sound so that everything is working the way it’s supposed to,” he said. “When we do security testing today and we test our applications, we don’t certify the whole stack anymore because the cloud service providers have already been certified. Otherwise you might have to write tests at the infrastructure or PaaS level.”

Meanwhile, most organizations are trying to wrap their arms around the breadth of testing and defect resolution practices necessary to deliver Web and mobile applications that provide the scalability, performance, and security customers expect.

“If you’re going to build better quality software faster, you need to make sure that the build actually works,” said Andreas Grabner, technology strategist at Compuware (an IT services company). “The software I write is more complex because it is interacting with things I can’t control.”

And that’s just the current state of Web and mobile development. With the Internet of Things looming, some tool providers expect that mainstream developers will have to write applications for devices other than smartphones, and as a result, system complexity and the related bug- and defect-tracking challenges will increase.

“If you think about the Web and the fragmentation of mobile devices, the complexity has increased by an order of magnitude,” said Johnston. “If you think about wearables or automobiles or smart appliances or smartwatches, it’s going to get exponentially worse.”

There’s no excuse for bad quality

There are many reasons why software quality falls short of user expectations, but users don’t want to hear them. Even though every user complaint won’t make it to the top of a backlog, what customers consider “bugs” and “defects” have a nasty habit of making headlines, resulting in scathing customer reviews and lost revenue.

“It’s unacceptable to tell users that you can’t reproduce a bug. These days they have all the cards,” said Johnston. “We live in a world where app quality—functional quality, usability quality, performance quality and security quality—are differentiators, and yet quality is still thought of as a cost center.”

Bug and defect tracking is all about problem-solving, but despite impressive tool advancements, some of the lingering problems aren’t being addressed because organizations change more slowly than technology does.

“I can have a product that’s completely bug free and has a great user experience, but if you get no value out of it, the quality is bad,” said Micro Focus’ Roboostoff. “People need to understand quality. It’s not about function; it’s about the customer perception of your product, your brand, and your company.”

 

Hadoop Is Now a General-Purpose Platform

As seen in SD Times

Apache Hadoop adoption is accelerating among enterprises and advanced computing environments as the project, related projects, and the surrounding ecosystem continue to expand. While there were valid reasons to avoid the 1.x versions, skeptics are reconsidering since Hadoop 2 (particularly the latest 2.2.0 release) provides a viable choice for a wider range of users and uses.

“The Hadoop 1.x generation was not easy to deploy or easy to manage,” said Juergen Urbanski, former chief technologist of T-Systems, the IT consulting division of Deutsche Telekom. “The many moving parts that make up a Hadoop cluster were difficult for users to configure. Fortunately, Hadoop 2 fills in many of the gaps. Manageability is a key expectation, particularly for the more critical business use cases.”

Hadoop 2.2.0 adds the YARN resource-management framework to the core set of Hadoop modules, which include the Hadoop Common set of utilities, the Hadoop Distributed File System (HDFS), and Hadoop MapReduce for parallel processing. Other improvements include enhancements to HDFS, binary compatibility for Map/Reduce applications built on Hadoop 1.x, and support for running Hadoop on Windows.

Meanwhile, Hadoop-related projects and commercial products are proliferating along with the ecosystem. Collectively, the new Hadoop capabilities provide a more palatable and workable solution, not only for enterprise developers, business analysts and IT, but also a larger community of data scientists.

“There are many technologies that are helping Hadoop realize its potential as being a more general-purpose platform for computing,” said Doug Cutting, co-creator of Hadoop. “We started out as a batch processing system. People used it to do computations on large data sets that they couldn’t do before, and they could do it affordably. Now there’s an ever-increasing amount of data processing that organizations can do using this one platform.”

YARN expands the possibilities
The limitations of Map/Reduce were the genesis of Apache Hadoop NextGen MapReduce (a.k.a. YARN), according to Arun Murthy, release manager for Hadoop 2.

“It was apparent as early as 2008 that Map/Reduce was going to become a limiting factor because it’s just one algorithm,” he said. “If you’re trying to do things like machine learning and modeling, Map/Reduce is not the right algorithm to do it.”

Rather than replace Map/Reduce altogether, the Hadoop community supplemented it with YARN, which provides resource management and fault tolerance as base primitives of the platform while allowing end users to process and track data in different ways.

“The architecture had to be more general-purpose than Map/Reduce,” said Murthy. “We kept the good parts of Map/Reduce, such as scale and simple APIs, but we had to allow other things to coexist on the same platform.”

The original Hadoop MapReduce was based on the Google Map/Reduce paper, while Hadoop HDFS was based on the Google File System paper. HDFS provides a mechanism to store huge amounts of heterogeneous data cheaply; Map/Reduce enables highly efficient parallel processing.

“Map/Reduce is a mature concept that comes from LISP and functional programming,” said Murthy. “Google scaled Map/Reduce out in a massive way while keeping a real simple interface for the end user so the end user does not have to deal with the nitty-gritty details of scheduling, resource management, fault tolerance, network partitions, and other crazy stuff. It allowed the end user to just deal with the business logic.”

Because YARN is an open framework, users are free to use algorithms other than Map/Reduce. In addition, applications can run on and integrate with it.

“The scientific and security computing communities depend on Open MPI technologies, which weren’t even an option in Hadoop 1,” said Edmon Begoli, CTO of analytics consulting firm PYA Analytics. “The architecture of Hadoop 2 and YARN allows you to plug in your own resource manager and your own parallel processing algorithms. People in the high-performance computing community have been talking about YARN enthusiastically for years.”

HDFS: Aspirin for other headaches
Some CIOs have been reluctant to bring Hadoop into the enterprise because there have been too many barriers to entry, although Hadoop 2 improvements are turning the tide.

“I think two of the deal breakers were NameNode federation and the Quorum Journal Manager, which is basically a failover for the HDFS NameNode,” said Jonathan Ellis, project chair for Apache Cassandra. “Historically, if your NameNode went down, you were basically screwed because you’d lose some amount of data.”

Hadoop 2 introduces the Quorum Journal Manager, where changes to the NameNodes are recorded to replicated machines to avoid data loss, he said. NameNode federation allows a pool of NameNodes to share responsibility for an HDFS cluster.

“NameNode federation is a bit of a hack because each NameNode still only knows about the file set it owns, so at the client level you have to somehow teach the client to look for some files on one NameNode and other files on another NameNode,” said Ellis.

HDFS is nevertheless an economically feasible way to store terabytes or even petabytes of data. Facebook has a single cluster that stores more than 100PB on Hadoop, according to Murthy.

“It’s amazing how much data you can store on Hadoop,” he said. “But you have to interact with the data, interrogate it, and come up with insights. That’s where YARN comes in. Now you have a general-purpose data operating system, and on top of it you can run applications like Apache Storm.”

John Haddad, senior director of product marketing at Informatica, said the Hadoop 2 improvements allow his organization to run more types of applications and workloads.

“Various teams can run a variety of different applications on the cluster concurrently,” he said. “Hadoop 1 lacked some of the security, high availability and flexibility necessary to have different applications, different types of workloads, and more than one organization or team submitting jobs to the cluster.”

Gearing up for prime time
The number and types of Hadoop open-source projects and commercial offerings are expanding rapidly. Hadoop-related projects include HBase, a highly scalable distributed database; the Hive data warehouse infrastructure; the Pig language and framework for parallel computing; and Ambari, which provisions, manages and monitors Apache Hadoop clusters.

“It seems like we’ve got 20 or 30 new projects every week,” said Cutting. “We have all these separate, independent projects that work together, so they’re interdependent but under separate control so the ecosystem can evolve.”

Meanwhile, solution providers are building products for or integrating their products with Hadoop. Collectively, Hadoop improvements, open-source projects and compatible commercial products are allowing organizations to tailor it to their needs, rather than having to shoehorn what they are doing into a limited set of capabilities. And the results are impressive.

For example, Oak Ridge National Laboratory used Hadoop to help the Center for Medicare and Medicaid Services identify tens of millions of dollars in overpayments and fraudulent transactions in just three weeks.

“Using only two or three engineers, we were able to approach and understand the data from different angles using Hive on Hadoop because it allowed us to write SQL-like queries and use a machine-learning library or run straight Map/Reduce queries,” said PYA Analytics’ Begoli. “In the traditional warehousing world, the same project would have taken months unless you had a very expensive data warehouse platform and very expensive technology consulting resources to help you.”

The groundswell of innovation is enabling Hadoop to move beyond its batch-processing roots to include real-time and near-real-time analytics.

 

Skeptics are doing a double take
Hadoop 2 is converting more skeptics than Hadoop 1 because it’s more mature, it’s easier (but not necessarily easy) to implement, it has more options, and its community is robust.

“You can bring Hadoop into your organization and not worry about vendor lock-in or what happens if the provider disappears,” said Murthy. “We have contributions from about 2,000 people at this point.”

There are also significant competitive pressures at work. Organizations that have adopted Hadoop are improving the effectiveness of things like fraud detection, portfolio management, ad targeting, search, and customer behavior by combining structured and unstructured data from internal and external sources that commonly include social networks, mobile devices and sensors.

“We’re seeing organizations start off with basic things like data warehouse optimization, and then move on to other cool and interesting things that can drive more revenue from the company,” said Informatica’s Haddad.

For example, Yahoo has been deploying YARN in production for a year, and the throughput of the YARN clusters has more than doubled. According to Murthy, Yahoo’s 35,000-node cluster now processes 130 to 150 jobs per day versus 50 to 60 before YARN.

“When you’ve got 2x over 35,000 to 40,000 nodes, that’s phenomenal,” he said. “It’s a pretty compelling story to tell a CIO that if you just upgrade your software from Hadoop 1 to Hadoop 2, you’ll see 2x throughput improvements in your jobs.”

Of course, Hadoop 2.2.0 isn’t perfect. Nothing is. And some question what Hadoop will become as it continues to evolve.

Hadoop co-creator Cutting said the beauty of Hadoop as an open-source project is that new things can replace old things naturally. That prospect somewhat concerns PYA Analytics’ Begoli, however.

“I’m concerned about the explosion of frameworks because it happened with Java and it’s happening with JavaScript,” he said. “When everyone is contributing something, it can be too much or the original vision can be diluted. On the other hand, a lot of brilliant teams are contributing to Hadoop. There are management tools, SQL tools, third-party tools and a lot of other things that are being incubated to deliver advanced capabilities.”

While Hadoop’s full impact has yet to be realized, Hadoop 2 is a major step forward.

Well-known Hadoop implementations

Amazon Web Services: Amazon Elastic MapReduce uses Hadoop to provide a quick, easy and cost-effective way to distribute and process large amounts of data across a resizable cluster of Amazon EC2 instances. It can be used to analyze click-stream data, process vast amounts of genomic data and other large scientific data sets, and process logs generated by Web and mobile applications.