Strategic Insights and Clickworthy Content Development

How to Prepare for the Machine-Aided Future

Intelligent automation is going to impact companies and individuals in profound ways, some of which are not yet foreseeable. Unlike traditional automation, which lacks an AI element, intelligent automation will automate more kinds of tasks, at all levels of an organization.

As history has shown, rote, repetitive tasks are ripe for automation. Machines can do them faster and more accurately than humans 24/7/365 without getting bored, distracted or fatigued.

When AI and automation are combined for intelligent automation, the picture changes dramatically. With AI, automated systems are not just capable of doing things; they’re also capable of making decisions. Unlike manufacturing automation, which replaced factory-floor workers with robots, intelligent automation can impact highly skilled, highly educated specialists as well as their less-skilled, less-educated counterparts.

Intelligent automation will affect everyone

The non-linear impact of intelligent automation should serve as a wake-up call to everyone in an organization from the C-suite down. Here’s why: If the impact of intelligent automation were linear, then the tasks requiring the least skill and education would be automated first and tasks requiring the most skill and education would be automated last. Business leaders could easily understand the trajectory and plan for it accordingly.

However, intelligent automation is impacting industries in a non-linear fashion. For example, legal AI platform provider LawGeex conducted an experiment, vetted by law professors from Duke University and Stanford University along with an independent attorney, to determine which could review contracts more accurately: AI or lawyers. In the experiment, 20 lawyers took an average of 92 minutes to review five non-disclosure agreements (NDAs) in which there were 30 legal issues to spot. The average accuracy rating was 85%. The AI completed the same task in 26 seconds with a 94% accuracy level. Similar results were achieved in a study conducted by researchers at the University of California, San Francisco (UCSF). That experiment involved board-certified echocardiographers. In both cases, AI was better than trained experts at pattern recognition.

Interestingly, most jobs involve some rote, repetitive tasks and pattern recognition. CEOs may consider themselves exempt from intelligent automation, but Jack Ma, billionaire founder and executive chairman of ecommerce platform Alibaba, disagrees: “AI remembers better than you, it counts faster than you, and it won’t be angry with competitors.”

What the C-Suite Should Consider

Intelligent automation isn’t something that will only affect other people. It will affect you directly and indirectly. How you handle the intelligently automated future will matter to your career and the health of your organization.

You can approach the matter tactically if you choose. If you take this path, you’ll probably set a goal of using automation to reduce the workforce by XX%.

A strategic approach considers the bigger picture, including the potential competitive effects, the economic impact of a divided workforce, what “optimized” business processes might look like, and the ramifications for human capital (e.g., job reassignment, new roles, reimagined roles, upskilling).

The latter approach is more constructive because work automation is not an end in itself. The reason business leaders need to think about intelligent automation now is underscored by a recent McKinsey study, which suggested that 30% of the tasks performed in 6 out of 10 jobs could be automated today.

Tomorrow, there will be even more opportunities for intelligent automation as the technology advances, so business leaders should consider its potential impacts on their organizations.

For argument’s sake, if 30% of every job in your organization could be automated today, what tasks do you consider ripe for automation? If those tasks were automated, how would it affect the organization’s structure, operations and value proposition? How would intelligent automation impact specific roles and departments? How might you lead the workforce differently and how might your expectations of the workforce change? What ongoing training are you prepared to provide so your workforce can adapt as more types of tasks are automated?

Granted, business leaders have little spare time to ponder what-if questions, but these aren’t what-if questions; they’re what-when questions. You can either anticipate the impact, then observe and adjust, or ignore the trend and react after the fact.

The latter strategy didn’t work so well for brick-and-mortar retailers when the ecommerce tidal wave hit…

What Managers Should Consider

The C-suite should set the tone for what the intelligently automated future looks like for the company and its people. Your job will be to manage the day-to-day aspects of the transition.

As a manager, you’re constantly dealing with people issues. In this case, some people will regard automation as a threat even if the C-suite is approaching it responsibly and with compassion. Others will naturally evolve as the people-machine partnership evolves.

The question for managers is how automation might impact their teams. How might the division of labor shift? What parts of which jobs do you think are ripe for automation? If those tasks were automated, how would people’s roles change? How would your group change? Likely, new roles would be created, but what would they be? What sort of training would your people need to succeed in their new positions?

You likely haven’t taken the time to ponder these and related questions, perhaps because they haven’t occurred to you yet. As a team leader, you owe it to yourself and your team to think about how the various scenarios might play out, as well as the recommendations you’d have for your people and the C-suite.

What Employees Should Consider

Everyone should consider how automation might affect their jobs, including managers and members of the C-suite, because everyone will be impacted by it somehow.

In this case, think about your current position and allow yourself to imagine what part of your job could be automated. Start with the boring routine stuff you do over and over, the kinds of things you wish you didn’t have to do. Likely, those things could be automated.

Next, consider the parts of your job that require pattern recognition. If your job entails contract review and contract review is automated, what would you do in addition to overseeing the automated system’s work? As the LawGeex experiment showed, AI is highly accurate, but it isn’t perfect.

Your choice is fight or flight. You can give in to the fear that you may be automated out of existence and act accordingly, which will likely result in a self-fulfilling prophecy. Alternatively, consider what parts of your job could be automated and reimagine your future. If you no longer had to do X, what would Y be? What might your job title be, and what might your scope of responsibilities be?

If you consider how intelligent automation may impact your career, you’ll be in a better position to evolve as things change and you’ll be better prepared to discuss the matter with your superiors.

The Bottom Line

The intelligently automated future is already taking shape. While the future impacts aren’t entirely clear yet, business leaders, managers and professionals can help shape their own future and the future of their companies by understanding what’s possible and how that might affect the business, departments and individual careers. Everyone will have to work together to make intelligent automation work well for the company and its people.

The worst course of action is to ignore it, because it isn’t going away.

Ethical Tech: Myth or Reality?

New technologies continue to shape society, albeit at an accelerating rate. Decades ago, societal change lagged behind tech innovation by many years, a decade or more. Now, change is occurring much faster as evidenced by the impact of disrupters including Uber and Airbnb.

Central to much of the change is the data being collected, stored and analyzed for various reasons, not all of which are transparent. As the pace of technology innovation and tech-driven societal change accelerate, businesses are wise to think harder about the longer-term impacts of what they’re doing, both good and bad.

Why contemplate ethics?

Technology in all its forms is just a tool that can be used for good or evil. While businesses do not tend to think in those terms, there is some acknowledgement of what is “right” and “wrong.” Doing the right thing tends to be reflected in corporate responsibility programs designed to benefit people, animals, and the environment. Doing the wrong thing often involves irresponsible or inadvertent actions that are harmful to people, whether it’s invading their privacy or exposing their personal data.

While corporate responsibility programs in their current form are “good” on some level, ethics on a societal scale tends to be missing.

In the tech industry, for example, innovators are constantly doing things because they’re possible without considering whether they’re ethical. A blatant recent example is the human-sheep hybrid. Closer to home in high tech are fears about AI gone awry.

Why ethics is a difficult concept

The definition of ethics is simple. According to Merriam-Webster, it is “the discipline dealing with what is good and bad and with moral duty and obligation.”

In practical application, particularly in relation to technology, “good” and “bad” coexist. Airbnb is just one example. On one hand, homeowners are able to take advantage of another income stream. However, hotels and motels now face new competition and the residents living next to or near Airbnb properties often face negative quality-of-life impacts.

According to Gartner research, organizations at the beginning stages of a digital strategy rank ethics a Number 7 priority. Organizations establishing a digital strategy rank it Number 5 and organizations that are executing a digital strategy rank it Number 3.

“The [CIOs] who tend to be more enlightened are the ones in regulated environments, such as financial services and public sector, where trust is important,” said Frank Buytendijk, a Gartner research vice president and Gartner fellow.

Today’s organizations tend to approach ethics from a risk avoidance perspective; specifically, for regulatory compliance purposes and to avoid the consequences of operating an unethical business. On the positive side, some view ethics as a competitive differentiator or better yet, the right thing to do.

“Unfortunately, it’s regulatory compliance pressure and risk because of all the scandals you see with AI, big data [and] social media, but hey, I’ll take it,” said Buytendijk. “With big data there was a discussion about privacy but too little, too late. We’re hopeful with robotics and the emergence of AI, as there is active discussion about the ethical use of those technologies, not only by academics, but by the engineers themselves.”

IEEE ethics group emerges

In 2016, the IEEE launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Its goal is to ensure that those involved in the design and development of autonomous and intelligent systems are educated, trained, and empowered to prioritize ethical considerations so that technologies are advanced for the benefit of humanity.

From a business perspective, the idea is to align corporate values with the values of customers.

“Ethics is the new green,” said Raja Chatila, Executive Committee Member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “People value their health so they value products that do not endanger their health. People want to buy technology that respects the values they cherish.”

However, the overarching goal is to serve society in a positive way, not just individuals. Examples of that tend to include education, health, employment and safety.

“As an industry, we could do a better job of being responsible for the technology we’re developing,” said Chatila.

At the present time, 13 different committees involved in the initiative are contemplating ethics from different technological perspectives, including personal data and individual access control, ethical research and design, autonomous weapons, classical ethics in AI, and mixed reality. In December 2017, the group released “Ethically Aligned Design volume 2,” a 266-page document available for public comment. It includes the participation of all 13 committees.

In addition, the initiative has proposed 11 IEEE standards, all of which have been accepted. The standards address transparency, data privacy, algorithmic bias, and more. Approximately 250 individuals are now participating in the initiative.

Society must demand ethics for its own good

Groups within society tend to react to technology innovation differently due to generational differences, cultural differences, and other factors. Generally speaking, early adopters tend to be more interested in a new technology’s capabilities than its potential negative effects. Conversely, laggards are more risk averse. Nevertheless, people in general tend to use services, apps, and websites without bothering to read the associated privacy policies. Society is not protecting itself, in other words. Instead, one individual at a time is acquiescing to the collection, storage and use of data about them without understanding to what they are acquiescing.

“I think the practical aspect comes down to transparency and honesty,” said Bill Franks, chief analytics officer at the International Institute for Analytics (IIA). “However, individuals should be aware of what companies are doing with their data when they sign up, because a lot of the analytics, both the data and analysis, could be harmful to you if they got into the wrong hands and were misused.”

Right now, the societal impacts of technology tend to be recognized after the fact, rather than contemplated from the beginning. Arguably, not all impacts are necessarily foreseeable, but with the pace of technology innovation constantly accelerating, the innovators themselves need to put more thought into the positive and negative consequences of bringing their technology to market.

Meanwhile, individuals have a responsibility to themselves to become more informed than they are today.

“Until the public actually sees the need for ethics, and demands it, I just don’t know that it would ever necessarily go mainstream,” said Franks. “Why would you put a lot of time and money into following policies that add overhead to manage and maintain when your customers don’t seem to care? That’s the dilemma.”

Businesses, individuals, and groups need to put more thought into the ethics of technology for their own good and for the good of all. More disruptions are coming in the form of machine intelligence, automation, and digital transformation, which will impact society somehow. “How” is the question.

How Valuable Is Your Company’s Data?

Companies are amassing tremendous volumes of data, which they consider their greatest asset, or at least one of their greatest assets. Yet, few business leaders can articulate what their company’s data is worth.

Successful data-driven digital natives understand the value of their data, and their valuations depend on sound applications of that data. Increasingly, venture capitalists, financial analysts and board members will expect the leaders of startups, public companies and other organizations to explain the value of their data in terms of opportunities, top-line growth, bottom-line improvement and risks.

For example, venture capital firm Mercury Fund recently analyzed SaaS startup valuations based on market data that its team has observed. According to Managing Director Aziz Gilani, the team confirmed that SaaS company valuations, which range from 5x to 11x revenue, depend on the underlying metrics of the company. The variable that determines whether those companies land in the top or bottom half of the spectrum is the company’s annual recurring revenue (ARR) growth rate, which reflects how well a company understands its customers.
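
To make the multiple arithmetic concrete, here is a minimal sketch of how a revenue multiple translates into a valuation. Only the 5x-to-11x band comes from Mercury Fund; the ARR figure is a hypothetical assumption.

```python
# Hypothetical illustration of revenue-multiple valuation for a SaaS company.
# The 5x-11x band is from the article; the ARR figure is made up.

def saas_valuation(arr_millions: float, multiple: float) -> float:
    """Valuation = annual recurring revenue (ARR) x revenue multiple."""
    return arr_millions * multiple

arr = 10.0  # $10M ARR (assumed)
low, high = saas_valuation(arr, 5), saas_valuation(arr, 11)
print(f"Valuation range: ${low:.0f}M (slower ARR growth) to ${high:.0f}M (faster ARR growth)")
```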

Mercury Fund’s most successful companies scrutinize their unit economics “under a microscope” to optimize customer interactions in a capital-efficient manner and maximize their revenue growth rates.

For other companies, the calculus is not so straightforward and, in fact, it’s very complicated.

Direct value

When business leaders and managers ponder the value of data, their first thought is direct monetization, which means selling the data they have.

“[I]t’s a question of the holy grail because we know we have a lot of data,” said David Schatsky, managing director at Deloitte. “[The first thought is] let’s go off and monetize it, but they have to ask themselves the fundamental questions right now of how they’re going to use it: How much data do they have? Can they get at it? And, can they use it in the way they have in mind?”

Data-driven digital natives have a better handle on the value of their data than the typical enterprise because their business models depend on collecting data, analyzing that data and then monetizing it. Usually, considerable testing is involved to understand the market’s perception of value, although a shortcut is to observe how similar companies are pricing their data.

“As best as I can tell, there’s no manual on how to value data but there are indirect methods. For example, if you’re doing deep learning and you need labeled training data, you might go to a company like CrowdFlower and they’d create the labeled dataset and then you’d get some idea of how much that type of data is worth,” said Ben Lorica, chief data officer at O’Reilly Media. “The other thing to look at is the valuation of startups that are valued highly because of their data.”

Observation can be especially misleading for those who fail to consider the differences between their organization and the organizations they’re observing. The business models may differ, the audiences may differ, and the amount of data the organization has and the usefulness of that data may differ. Yet, a common mistake is to assume that because Facebook or Amazon did something, what they did is a generally applicable template for success.

However, there’s no one magic formula for valuing data because not all data is equally valuable, usable or available.

“The first thing I look at is the data [a client has] that could be turned into data-as-a-service and, if they did that, what opportunity value [it would offer] for that business,” said Sanjay Srivastava, chief digital officer at global professional services firm Genpact.

Automation value

More rote and repeatable tasks are being automated using chatbots, robotic process automation (RPA) and AI. The question is, what is the value of the work employees do in the absence of automation and what would the value of their work be if parts of their jobs were automated and they had more time to do higher-value tasks?

“That’s another shortcut to valuing the data that you already have,” said O’Reilly’s Lorica.
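
A back-of-the-envelope way to frame that question numerically might look like the sketch below; every input (hours, rates, the uplift multiplier) is an assumption for illustration, not a figure from the article.

```python
# Rough sketch of "automation value": the value freed up when part of a job
# is automated and the time shifts to higher-value work. All inputs are
# hypothetical assumptions.

hours_per_week = 40
automatable_share = 0.30       # share of tasks automated (cf. McKinsey's 30%)
loaded_hourly_rate = 75.0      # fully loaded cost of the employee, $/hour
uplift = 1.5                   # assumed value multiplier of higher-value work

hours_freed = hours_per_week * automatable_share
baseline_value = hours_freed * loaded_hourly_rate
redeployed_value = baseline_value * uplift

print(f"Hours freed per week: {hours_freed:.0f}")
print(f"Weekly value of redeployed time: ${redeployed_value:,.0f} "
      f"(vs ${baseline_value:,.0f} spent on rote work)")
```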

Recombinant value

Genpact also advances the concept of “derivative opportunity value” which means creating an opportunity or an entirely new business model by combining a company’s data with external data.

For example, weather data by zip code can be combined with data about prevalent weeds by zip code and the available core seed attributes by zip code. Agri-food companies use such data to determine which pesticides to use and to optimize crops in a specific region.

“The idea is it’s not just selling weather data as a service, that’s a direct opportunity,” said Srivastava. “The derivative opportunity value is about enhancing the value of agriculture and what value we can drive.”

It is also possible to do an A/B test with and without a new dataset to determine the value before and after the new data was added to the mix.
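
A minimal sketch of such an A/B test appears below: train the same model with and without the candidate dataset and treat the metric lift as a proxy for the data’s value. The data and model here are synthetic and generic, not any vendor’s method.

```python
# A/B test for dataset value: compare model quality with and without the new
# data. Synthetic data for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
base = rng.normal(size=(n, 5))   # features the company already has
new = rng.normal(size=(n, 3))    # the candidate external dataset
y = ((base[:, 0] + new[:, 0] + rng.normal(scale=1.5, size=n)) > 0).astype(int)

def auc_with(features):
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

auc_a = auc_with(base)                       # "A": without the new data
auc_b = auc_with(np.hstack([base, new]))     # "B": with the new data
print(f"AUC without: {auc_a:.3f}, with: {auc_b:.3f}, lift: {auc_b - auc_a:.3f}")
```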

Algorithmic value

Netflix and Amazon use recommendation engines to drive value. For example, Netflix increases its revenue and stickiness by matching content with a customer’s tastes and viewing habits. Similarly, Amazon recommends products, including those that others have also viewed or purchased. In doing so, Amazon successfully increases average order values through cross-selling and upselling.

“Algorithmic value modeling is the most exciting,” said Srivastava. “For example, the more labeled data I can provide on rooftops that have been damaged by Florida hurricanes, the more pictures I have of the damage caused by the hurricanes and the more information I have about claim settlements, the better my data engine will be.”

For that use case, the trained AI system can automatically provide an insurance claim value based on a photograph associated with a particular claim.

Risk-of-loss value

If a company using an external data source were to lose access to that data source, what economic impact would it have? Further, given the very real possibility of cyberattacks and cyberterrorism, what would the value of lost or corrupted data be? Points to consider include the financial impact, which may span actual loss, opportunity cost, regulatory fines and litigation settlement values. If the company has cybersecurity insurance, there’s a coverage limit on the policy, which may differ from the actual claim settlement value and the overall cost to the company.
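
A rough sketch of that arithmetic follows; every dollar figure is a hypothetical assumption.

```python
# Risk-of-loss sketch: total up the components named above and net out the
# insurance coverage cap. All figures are hypothetical assumptions.

actual_loss      = 2_000_000   # direct financial loss from the incident
opportunity_cost = 1_500_000   # revenue forgone while data was unavailable
regulatory_fines =   750_000
litigation       = 1_250_000
coverage_limit   = 3_000_000   # cybersecurity insurance policy cap

total_impact = actual_loss + opportunity_cost + regulatory_fines + litigation
insured = min(total_impact, coverage_limit)
print(f"Total impact: ${total_impact:,}; insured: ${insured:,}; "
      f"uncovered exposure: ${total_impact - insured:,}")
```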

A bigger risk than data loss is the failure to use data to drive value, according to Genpact’s Srivastava.

There’s no silver bullet

No single equation can accurately assess the value of a company’s data. The value of data depends on several factors, including the usability, accessibility and cleanliness of the data. Other considerations are how the data is applied to business problems and what the value of the data would be if it were directly monetized, combined with other data, or used in machine learning to improve outcomes.

Further, business leaders should consider not only what the value of their company’s data is today, but the potential value of new services, business models or businesses that could be created by aggregating data, using internal data or, more likely, using a combination of internal and external data. In addition, business leaders should contemplate the risk of data loss, corruption or misuse.

While there’s no standard playbook for valuing data, expect data valuation and the inability to value data to have a direct impact on startup, public company, and merger and acquisition target valuations.

Why Operationalizing Analytics is So Difficult

Today’s businesses are applying analytics to a growing number of use cases, but analytics for analytics’ sake has little, if any, value. The most analytically astute companies have operationalized analytics, but many of them, particularly the non-digital natives, have faced several challenges along the way in getting the people, processes and technology aligned in a way that drives value for the business.

Here are some of the hurdles that an analytics initiative might encounter.

Analytics is considered a technology problem

Some organizations consider analytics a technology problem, and then they wonder why the ROI of their efforts is so poor. While having the right technology in place matters, successful initiatives require more.

“The first key challenge is designing how and in what way an analytics solution would affect the outcome of the business,” said Bill Waid, general manager of Decision Management at FICO. “We start by modeling the business problem and then filling in the analytic pieces that address that business problem. More often than not, there’s a business process or business decision that needs to be incorporated into the model as we build the solution.”

Framing the business problem is essential, because if the analytics don’t provide any business value, they won’t get used.

“Better than 80% of analytics never end up being used. A lot of that stems from the fact that an analysis gets built and it might make sense given the dataset but it’s not used to make something happen,” said Waid. “That’s probably the hardest element.”

Placing analytics in the hands of the business requires access to the right data, but governance must also be in place.

“[T]he technical aspects are becoming easier to solve and there are many more options for solving them, so the people and the process challenges that you’ll face obviously have to come along,” said Bill Franks, chief analytics officer at the International Institute for Analytics (IIA). “In a non-digital-native company, the people and process progress does not match the technology progress.”

Operationalizing analytics lacks buy-in

Many analytics initiatives have struggled to get the executive and organizational support they need to be successful. Operationalizing analytics requires the same thing.

“When you operationalize analytics, you’re automating a lot of decisions, so the buy-in you require from all of the various stakeholders has to be high,” said IIA’s Franks. “If you’re a digital native, this is what you do for a living so people are used to it. When you’re a large, legacy company dipping your toe into this, the first couple of attempts will be painful.”

For example, if an organization is automating what used to be batch processes, there need to be more safety checks, data checks, and accuracy checks. Chances are high that everything won’t be done right the first time, so people have to get comfortable with the concept of iteration, which is just part of the learning process.
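
As a sketch of what one such pre-decision data check might look like (the field names and thresholds are assumptions, not any particular company’s rules):

```python
# Minimal data-quality gate in front of an automated decision: records that
# fail any check are routed to a human instead of being auto-processed.

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record may proceed."""
    problems = []
    if record.get("account_id") is None:
        problems.append("missing account_id")
    amount = record.get("amount")
    if amount is None or not (0 < amount < 1_000_000):
        problems.append(f"amount out of expected range: {amount}")
    if record.get("currency") not in {"USD", "EUR", "GBP"}:
        problems.append(f"unexpected currency: {record.get('currency')}")
    return problems

record = {"account_id": "A-123", "amount": 2_500_000, "currency": "USD"}
issues = validate_record(record)
if issues:
    print("Route to a human reviewer:", issues)  # fail safe, don't auto-decide
```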

Analytical results are not transparent

If your company operates in a regulated environment, you need to be able to explain an analytical result. Even if you’re not in a regulated industry, business leaders, investors and potential M&A partners may ask for an explanation.

“We refer to it as ‘reasoning code’ or ‘the outcomes,’ but in AI it’s a form of explainable AI where you can explain to a business owner or a business user why the analytics came to the conclusion it came to,” said FICO’s Waid. “The second thing that you need to provide the business person with is some kind of dashboard for them to be able to change, adjust or accommodate different directions.”
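
One simple way to produce reason codes is sketched below using a linear scoring model, where each feature’s contribution is its coefficient times its value. This is a generic pattern with hypothetical features and weights, not FICO’s implementation.

```python
# Reason codes from a linear scoring model: the largest negative
# contributions become the explanation for a lowered score.
import numpy as np

features = ["utilization", "late_payments", "account_age_years"]
weights = np.array([-1.2, -2.5, 0.8])   # hypothetical trained coefficients
x = np.array([0.9, 2.0, 1.5])           # one applicant's (scaled) values

contributions = weights * x
order = np.argsort(contributions)        # most negative first
print("Top reasons the score was lowered:")
for i in order[:2]:
    print(f"  {features[i]}: contribution {contributions[i]:+.2f}")
```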

4 Ways Companies Impede Their Analytics Efforts

Businesses in the race to become “data-driven” or “insights-driven” often face several disconnects between their vision of an initiative and their execution of it. Of course, everyone wants to be competitive, but there are several things that differentiate the leaders from the laggards. Part of it is weathering the growing pains that companies tend to experience, some of which are easier to change than others. These are some of the stumbling blocks.

Business objectives and analytics are not aligned

Analytics still takes place in pockets within the majority of organizations. The good news is that various functions are now able to operate more effectively and efficiently as a result of applying analytics. However, there is greater power in aligning efforts with the strategic goals of the business.

In a recent research note, Gartner stated, “Internally, the integrative, connected, real-time nature of digital business requires collaboration between historically independent organizational units. To make this collaboration happen, business and IT must work together on vision, strategy, roles and metrics. Everyone is going to have to change, and everyone is going to have to learn.”

All of that requires cultural adjustment, which can be the most difficult challenge of all.

There’s insight but no action

It’s one thing to get an insight and quite another to put that insight into action. To be effective, analytics need to be operationalized, which means weaving analytics into business processes so that insights can be turned into meaningful actions. Prescriptive analytics is part of it, but fundamentally, business processes need to be updated to include analytics. A point often missed is that decisions and actions are not ends in themselves. They, too, need to be analyzed to determine their effectiveness.

An EY presentation stresses the need to operationalize analytics. Specifically, it says, “The key to operationalizing analytics is to appreciate the analytics value chain.”

Interestingly, when most of us think about “the analytics value chain,” we think of data, analytics, insights, decisions and optimized outcomes. While that’s the way work flows, EY says our thought process should run in reverse: to optimize a process, one must first understand what that process is supposed to achieve (e.g., thwart fraud, improve customer experience, reduce churn).

They’re not looking ahead

Less analytically mature companies haven’t moved beyond descriptive analytics yet. They’re still generating reports, albeit faster than they used to because IT and lines of business tend to agree that self-service reporting is better for everyone. Gartner says “the BI and analytics market is in the final stages of a multiyear shift from IT-led, system-of-record reporting to business-led, self-service analytics. As a result, the modern business intelligence and analytics platform has emerged to meet new organizational requirements for accessibility, agility and deeper analytical insight.”

Still, organizations can only get so far with descriptive analytics. If they want to up their competitive game, they need to move to predictive and prescriptive analytics.

Poor data quality prevents accurate analytics

If you don’t have good data or a critical mass of the right data, your analytical outcomes are going to fall short. Just about any multichannel (and sometimes even single-channel) communication experience with a bank, a telephone company, a credit card company, or a vendor support organization will prove data quality is still a huge issue. Never mind that some of these companies are big brands that invest staggering amounts of money in technology, including data and analytics technologies.

In a typical telephone scenario, a bot asks the customer to enter an account number or a customer number. If the customer needs to be transferred to a live customer service representative (CSR), chances are the CSR will ask the customer to repeat the number because it doesn’t come up on their screen automatically. If the CSR can’t resolve the issue, then the call is usually transferred to a supervisor or different department. What was your name and number again? It’s a frustrating problem that’s all too common.

The underlying problem is that the customer’s information is stored in different systems for different reasons, such as sales, CRM and finance.

I spoke with someone recently who said a company he worked with had gone through nearly 20 acquisitions. Not surprisingly, data quality was a huge issue. The most difficult part was dealing with the limited fields in a legacy system. Because the system did not contain enough of the appropriate fields in which to enter data, users made up their own workarounds.

These are just a few of the challenges organizations face on their journey.

How SaaS Strategies Are Evolving

Enterprises are subscribing to more SaaS services than ever, with considerable procurement happening at the departmental level. Specialized SaaS providers target problems that those departments want solved quickly. Because SaaS software tends to be easy to set up and use, there appears to be no need for IT’s involvement, until something goes wrong.

According to the Harvey Nash/KPMG 2017 CIO Survey, 91% of the nearly 4,500 CIOs and IT leaders who responded expect to make moderate or significant SaaS investments, up from 82% in 2016. The report also states that 40% of SaaS product procurement now happens outside IT.

“IT needs a new operating model,” said Gianna D’Angelo, principal of KPMG CIO Advisory. “CIOs must respond by continuing to focus on operational excellence while adopting a new operating model for IT to drive innovation and value in these changing times.”

Some IT shops are reacting to shadow IT like they reacted to “bring your own device” (BYOD), meaning if you can’t stop it, you have to enable it with governance in mind. However, issues remain.

“In the last three years, we’ve put policies and some governance in place, but it doesn’t matter. You pull out your credit card, you buy an open source application and I have a virus on my network,” said Todd Reynolds, CTO of WEX Health, which provides a platform for benefit management and healthcare-related financial management. “I don’t even know about it until there’s an issue.”

How SaaS pricing is changing

KPMG’s D’Angelo said most SaaS pricing is based on users or on revenue, and that the contract timeframe is three to five years. There has been some movement to shorter timeframes, some as short as two years.

Sanjay Srivastava, chief digital officer of Genpact, a global professional services company, said his firm sees a shift from user-based pricing to usage-based pricing, which in Genpact’s case takes the form of a per-item charge for a document or balance sheet, for example.

Regardless of what the SaaS pricing model is, SaaS providers are facing downward pricing pressure. According to Gartner, “Vendors are becoming more creative with their SaaS business models to reflect a need to stand out in the fast-growing subscription economy.”

For its part, WEX Health is responding with new services that drive additional revenue. It has also put some usage-based pricing in place for customers that require elastic compute capabilities. “Mobile is killing us,” said WEX Health’s Reynolds. “You’ve given somebody an application to use on their phone 24/7, so they’re starting to leverage that usage so much more. It’s good people are using [our software] more often, but it requires us to have more storage.”

Longer-term thinking is wise

When departments purchase SaaS software, they usually are seeking relief from some sort of business problem, such as multichannel marketing attribution – studying the set of actions that users take in various environments. What business people often miss is the longer-term requirement to share data across disparate systems.

“If you have half on-premises and half in different clouds, you might have a private cloud, some in Azure and some in Amazon because the technology stack is beneficial to the apps,” said WEX Health’s Reynolds. “Pulling all of that together and making it safe and accessible is the biggest challenge from an operational perspective on the IT side.”

While SaaS systems tend to have APIs that help with data exchange, most enterprises have hybrid environments that include legacy systems, some of which do not have APIs. In the older systems, the data dictionaries may not be up to date and Master Data Management (MDM) may not have been maintained. So enterprises often face substantial data quality issues that negatively impact the value they’re getting from their investments.

“If you really want to get value out of [SaaS] — if you want Salesforce to run CRM and you want it to run sales, integrated, and it still has to be connected to ERP — each thing has to be connected,” said Genpact’s Srivastava. “There’s a lot of back and forth. Planning for that back and forth, and planning well, is really critical.”

Part of that back-and-forth is ensuring that the right governance, compliance and security controls are in place.

Bottom line

There’s more to SaaS investments than may be obvious to the people procuring them. At the same time, IT departments can no longer be the sole gatekeepers of all things tech.

“The challenge for CIOs is enormous, the stakes are large and change efforts of this magnitude take years, but transforming the IT operating model can be done,” said KPMG’s D’Angelo. “Complicating the effort is that IT must continue to support the existing portfolios, including retained infrastructure and legacy applications, during the transformation.”

This means that, for a period of time, IT will have to use a hybrid model comprising both the project-oriented, plan-build-run approach and the next-generation, broker-integrate-orchestrate approach, D’Angelo added.

Tips for Ensuring Winning SaaS Strategies

SaaS software is not a one-size-fits-all proposition. Costs and benefits vary greatly, as do the short-term and long-term trade-offs. Following are a few things you can do along the way to ease the transition.

If you’re just starting out, chances are that most if not all of the software you procure will be SaaS because that’s the way things are going. In addition, SaaS allows for an economic shift to relatively low-cost subscriptions that include upgrades and maintenance (an operational expenditure). This is instead of substantial up-front, on-premises software investments that require subsequent maintenance investments and IT’s help (a capital expenditure). Regardless of what type of software you choose, though, it’s wise to think beyond today’s requirements so you have a better chance of avoiding unforeseen challenges and costs in the future.

If you’re piloting a new type of software, SaaS is probably the way to go because you can usually experiment without a long-term commitment. However, be mindful of the potential integration, security and governance challenges you may encounter as you attempt to connect different data sources.

If you’re in production, you’ll want to continuously assess your requirements in terms of software models, integration, compliance, governance and security. As you continue your move into the cloud, understand what’s holding you back. Finance and HR, for instance, may still hesitate to store their sensitive data anywhere but on-premises. For the foreseeable future, you’ll probably have a hybrid strategy that becomes more cloud-based with time.

At each stage, it’s wise to understand the potential risks and rewards beyond what’s obvious today.

Deloitte: 5 Trends That Will Drive Machine Learning Adoption

Companies across industries are experimenting with and using machine learning, but actual adoption rates are lower than one might think. According to a 2017 SAP Digital Transformation Study, fewer than 10% of 3,100 executives from small, medium and large companies said their organizations were investing in machine learning. That will change dramatically in the coming years, according to a new Deloitte report, because researchers and vendors are making progress in five key areas that may make machine learning more practical for businesses of all sizes.

1. Automating data science

There is a lot of debate about whether data scientists will or won’t be automated out of a job. It turns out that machines are far better than humans at rote tasks such as data wrangling, doing them faster and more reliably.

“The automation of data science will likely be widely adopted and speak to this issue of the shortage of data scientists, so I think in the near term this could have a lot of impact,” said David Schatsky, managing director at Deloitte and one of the authors of Deloitte’s new report.

Industry analysts are bullish about the prospect of automating data science tasks, since data scientists can spend an inordinate amount of time collecting data and preparing it for analysis. For example, Gartner estimates that 40% of a data scientist’s job will be automated by 2020.

Data scientists aren’t so sure about that, and to be fair, few people, regardless of their position, have considered which parts of their job are ripe for automation.

2. Reducing the need for training data

Machine learning tends to require a lot of data. According to the Deloitte report, training a machine learning model might require millions of data elements. While machine learning requirements vary based on the use case, “acquiring and labeling data can be time-consuming and costly.”

One way to address that challenge is to use synthetic data. Using synthetic data, Deloitte was able to reduce the actual amount of data required for training by 80%. In other words, 20% of the data was actual data and the remaining 80% was synthetic data.
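
As a toy illustration of that 20/80 mix, one could fit a simple distribution to a small real sample and draw the rest synthetically; this is a stand-in for whatever generator Deloitte actually used.

```python
# Toy synthetic-data sketch: fit a Gaussian to the real sample, then draw
# synthetic points from it to fill out the training set.
import numpy as np

rng = np.random.default_rng(42)
real = rng.normal(loc=5.0, scale=2.0, size=200)        # the 20% "real" data

mu, sigma = real.mean(), real.std()                    # fit a Gaussian
synthetic = rng.normal(loc=mu, scale=sigma, size=800)  # the 80% synthetic data

training_set = np.concatenate([real, synthetic])
print(f"{len(real)} real + {len(synthetic)} synthetic = "
      f"{len(training_set)} training examples")
```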

“How far we can go in reducing the need for training data has two kinds of question marks: How far can you reduce the need for training data and what characteristics of data are most likely minimized and which require massive datasets?” said Schatsky.

3. Accelerating training

Massive amounts of data and heavy computation can take considerable time. Chip manufacturers are addressing this issue with various types of chips, including GPUs and application-specific integrated circuits (ASICs). The end result is faster training of machine learning models.

“I have no doubt that with the new processor architectures, execution is going to get faster,” said Schatsky. “[The chips] are important and necessary, but not sufficient to drive significant adoption on their own.”

4. Explaining results

Many machine learning models spit out a result, but they don’t provide the reasoning behind the result. As Deloitte points out, business leaders often hesitate to place blind faith in a result that can’t be explained, and some regulations require an explanation.

In the future, we’ll likely see machine learning models that are more accurate and transparent, which should open the door for greater use in regulated industries.

“No one knows how far you can go yet in terms of making an arbitrary neural network-based model interpretable,” said Schatsky. “We could end up hitting some limits identifying a fairly narrow set of cases where you can turn a black box model into an open book for certain kinds of models and situations, but there will be other scenarios where they work well but you can’t use them in certain situations.”

5. Deploying locally

Right now, machine learning typically requires a lot of data and training can be time-consuming. All of that requires a lot of memory and a lot of processing power, more than mobile and smart sensors can handle, at least for now.

In its report, Deloitte points out there is research in this area too, some of which has reduced the size of models by an order of magnitude or more using compression.
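
A toy sketch of one such technique, magnitude pruning, is below; real compression pipelines (quantization, pruning, distillation) are more involved, and the layer here is hypothetical.

```python
# Magnitude pruning sketch: zero out the smallest weights of a layer and
# keep only the largest 10%, shrinking what needs to be stored.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(512, 512)).astype(np.float32)  # a dense layer

threshold = np.quantile(np.abs(weights), 0.90)  # keep the top 10% by magnitude
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

kept = np.count_nonzero(pruned)
print(f"Kept {kept} of {weights.size} weights "
      f"(~{weights.size / kept:.0f}x fewer nonzero parameters to store)")
```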

The bottom line

Machine learning is having profound effects in different industries ranging from TV pilots to medical diagnoses. It seems somewhat magical and somewhat scary to the uninitiated, though the barriers to adoption are falling. As machine learning becomes more practical for mainstream use, more businesses will use it whether they realize it or not.

“[The five] things [we identified in the report] are converging to put machine learning on a path toward mainstream adoption,” said Schatsky.  “If companies have been sitting it out waiting for this to get easier and more relevant, they should sit up instead and start getting involved.”

Why Privacy Is a Corporate Responsibility Issue

Many organizations have Corporate Responsibility programs that focus on social issues and philanthropy. Especially in today’s Big Data era, why is privacy not part of the program?

Today’s companies are promising to lower their carbon footprints and save endangered species. They’re donating to people in developing countries who have far less than we do, which is also noble. But what about the fact that American citizens’ information is a product that is bought, sold, and obtained without consent? In light of recent events, perhaps privacy policies deserve more consideration than just two linked words at the bottom of a website home page.

“Privacy is a big issue for a host of reasons — legal, ethical, brand protection and moral,” said Mark Cohen, chief strategy officer at consultancy and technology service provider Elevate. “[Privacy] is an element of corporate culture, [so what goes into a privacy policy depends on] your values and priorities.”

Problems with Privacy Policies

There are three big problems with privacy policies, at least in the US: what’s in them, how they’re written, and how they’re ignored.

One might think that privacy policies are tailored to a particular company and its audience. However, such documents are not necessarily original. Rather than penning a privacy policy from scratch, some companies literally cut and paste entire privacy policies, regardless of their contents. In fact, the people grabbing another company’s privacy policy might not even bother to read it before using it.

The boilerplate language is also a problem. In-house counsel often uses freely available forms to put together a privacy policy. They may use one form or a combination of forms available to lawyers, but again, they’re not thinking about what should be in the document.

In addition, the documents are written in legalese, which is difficult for the average person to read. Businesses are counting on that because if you don’t know what’s in a privacy policy, what you’re giving away and what they intend to do with your information, you’ll probably just hope for the best. Even better, you’ll click an “I agree” button without knowing what clicking that button actually means. It’s a common practice, so you’re not alone if that’s the case.

Oh, and what’s stated in the documents may or may not be true, either because the company changed the policy since you last read it or they’re ignoring the document itself.

“After May 2018, when the new GDPR [General Data Protection Regulation] goes into effect, it’s going to force many companies to look at their privacy policies, their privacy statements and consents, and make them more transparent,” said Sheila Fitzpatrick, data governance and privacy counsel and chief privacy officer at NetApp, a hybrid cloud data services company. “They’re going to have to be easily understandable and readable.”

Businesses Confuse Privacy with Security

Privacy and security go hand in hand, but they’re not the same thing. However, a common assumption is that if you’re encrypting data, then you’re protecting privacy.

“Every company focuses on risk, export control, trade compliance, security, but rarely do you find companies focused on privacy,” said Fitzpatrick. “That’s changing with GDPR because it’s extraterritorial. It’s forcing companies to start really addressing areas around privacy.”

It’s entirely possible to have all kinds of security and still not address privacy issues. OK, so the data is being locked down, but are you legally allowed to have it in the first place? Perhaps not.

“Before you lock down that data, you need the legal right to have it,” said Fitzpatrick. “That’s the part that organizations still aren’t comprehending because they think they need the data to manage the relationship. In the past organizations thought they need the data to manage employment, customer or prospect relationships, but they were never really transparent about what they’re doing with that data, and they haven’t obtained the consent from the individual.”

In the US, users are opted in by default. In countries with restrictive privacy laws, the default is opt-out, and companies must obtain consent.

The Data Lake Mentality Problem

We hear a lot about data lakes and data swamps. In a lot of cases, companies are just throwing every piece of data into a data lake, hoping it will have value in the future. After all, cloud storage is dirt cheap.

“Companies need to think about the data they absolutely need to support a relationship. If they’re an organization that designs technology, what problem are they trying to solve and what data do they need to solve the problem?” said Fitzpatrick.

Instead of collecting massive amounts of information that’s totally irrelevant, they should consider data minimization if they want to lower privacy-related risks and comply with the EU’s GDPR.

“Companies also need to think about how long they’re maintaining this data, because they have a tendency to want to keep data forever even if it has no value,” said Fitzpatrick. “Under data protection laws, not just the GDPR, data should only be maintained for the purpose it was given and only for the time period for which it was relevant.”

The Effect of GDPR

Under the GDPR, consent has to be freely given, not forced or implied. That means companies can’t pre-check an opt-in box or force people to trade personal data for the use or continued use of a service.

“Some data is needed. If you’re buying a new car they need financial information, but they’d only be using it for the purpose of the purchase, not 19 other things they want to use it for including sales and marketing purposes,” said Fitzpatrick.

Privacy may well become the new competitive advantage as people become more aware of privacy policies and what they mean and don’t mean.

“Especially Europeans, Canadians, and those who live in Asia-Pacific countries that have restrictive privacy laws, part of their vetting process will be looking at your privacy program,” said Fitzpatrick. “If you have a strong privacy program and can answer a privacy question with a privacy answer as opposed to answering a privacy question with a security answer, [you’ll have an advantage].”

On the flip side, sanctions from foreign regulators can destroy a company from reputational, brand and financial points of view. Sanctions under the new GDPR can be as high as 4% of a company’s annual turnover.

Quantum Computing Brings Promise and Threats

Digital computing has some serious limitations. While the technology advances made over the past few decades are impressive (smaller footprints, faster processors, better UIs, and more memory and storage), some problems could be solved better by quantum computers.

For one thing, quantum computers are faster than classical (traditional) computers for certain tasks. They are also able to solve problems that classical computers can’t solve well or can’t solve within a reasonable amount of time.

“Quantum computing exploits fundamental laws of physics to solve complex computing problems in new ways, problems like discovering how diseases develop and creating more effective drugs to battle them,” said Jim Clarke, director of quantum hardware at Intel Labs. “Once quantum systems are available commercially, they can be used to simulate nature to advance research in chemistry, materials science and molecular modeling. For instance, they can be used to help create a new catalyst to sequester carbon dioxide or a room temperature superconductor.”

Quantum computing will also drive new levels of business optimization, benefit machine learning and artificial intelligence, and change the cryptography landscape.

David Schatsky, managing director at Deloitte, said the common thread is optimization problems where there are multiple probable answers and the task is to find the right one. Examples include investment management, portfolio management, risk mitigation and the design of communication systems and transportation systems. Logistics companies are already exploring route optimization while the defense industry is considering communications applications.

“A year ago [quantum computing] was thought of more as a physics experiment, [but] the perception has changed quickly,” said Schatsky. “In the last three months there has been a flurry of breakthroughs, including fundamental engineering breakthroughs and commercial product announcements.”

Test drive a quantum computer today

It’s probably safe to say that none of us will have a quantum computer sitting on our desks anytime soon, but just about anyone with a browser can get access to IBM’s 5- and 16-qubit (quantum bit) computers via the cloud. Earlier this year, the company announced IBM Q, an initiative intended to result in commercially available quantum computing systems. IBM also announced that it had built and tested two quantum computing processors: a 16-qubit processor open for public use and a 17-qubit commercial processor for customers.

According to an IBM paper in Nature, scientists successfully used a seven-qubit quantum processor to address a molecular structure problem for beryllium hydride (BeH2), the largest molecule simulated on a quantum computer to date.

“It is early days, but it’s going to scale rapidly,” said Scott Crowder, vice president and CTO, Quantum Computing, Technical Strategy & Transformation at IBM Systems. “When you start talking about hundreds or low thousands of qubits, you can start exploring business value problems that [can’t be addressed well using] classical computers such as quantum chemistry [and] certain types of optimization problems that are also exponential problems.”

An exponential problem is one that scales exponentially with the number of elements in it. For example, planning a route involving 50 locations could be optimized in a number of ways depending on the objective, such as identifying the fastest route. That seemingly simple problem actually involves one quadrillion different possibilities, which is too many possibilities for a classical computer to handle, Crowder said.
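
The underlying combinatorics are easy to check: the number of possible orderings of n stops grows factorially, which is why exhaustive classical search breaks down long before 50 locations.

```python
# Factorial growth of route orderings for a tour of n locations.
from math import factorial

for n in (5, 10, 15, 50):
    print(f"{n} locations -> {factorial(n):.3e} possible routes")
```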

Intel is making progress too

Intel teamed up with QuTech, an academic partner in the Netherlands, in 2015. Since then, Intel has achieved milestones such as demonstrating key circuit blocks for an integrated cryogenic-CMOS control system, developing a spin qubit fabrication flow on Intel’s 300mm process technology and developing a unique packaging solution for superconducting qubits, which it demonstrated in the 17-qubit superconducting test chip introduced on October 10, 2017. A week later, at the Wall Street Journal D.Live conference in Laguna Beach, Calif., Intel CEO Brian Krzanich said he expects Intel to deliver a 49-qubit quantum chip by the end of 2017.

“Ultimately the goal is to develop a commercially relevant quantum computer, one that is relevant for many applications and one that impacts Intel’s bottom line,” said Intel’s Clarke.

Toward that end, Intel’s work with QuTech spans the entire quantum stack from the qubit devices to the overall hardware architecture, software architecture, applications and complementary electronics that workable quantum systems will require.

“Quantum computing, in essence, is the ultimate in parallel computing, with the potential to tackle problems conventional computers can’t handle,” said Clarke. “But, realizing the promise of quantum computing will require a combination of excellent science, advanced engineering and the continued development of classical computing technologies, which Intel is working towards through our various partnerships and R&D programs.”

Decryption and other threats

There is a debate about whether quantum computers will render current encryption methods obsolete. Take a brute force attack, for example. In a brute force attack, hackers continually guess passwords and use computers to accelerate that work. Quantum computing would accelerate such an attack even further.
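
The usual yardstick for that speedup is Grover’s algorithm, which searches an unstructured space of 2^n keys in roughly 2^(n/2) steps, effectively halving a symmetric key’s bit strength. A quick sketch of the arithmetic:

```python
# Grover's algorithm gives a quadratic speedup on brute-force key search,
# so an n-bit key offers roughly n/2 bits of security against it.
for bits in (128, 256):
    print(f"{bits}-bit key: ~2^{bits} classical guesses, ~2^{bits // 2} quantum steps")
```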

“Virtually all security protocols that are used and deployed today are vulnerable to an attack by a quantum computer,” said William “Whurley” Hurley, chair of the Quantum Standards Working Group at the IEEE. “Quantum information allows us to secure information in ways that are completely unbreakable even against a quantum attack.”

Along those lines, there are efforts to develop a new type of security protocol that doesn’t necessarily leverage quantum mechanics. Hurley said these efforts rely on extremely difficult mathematical problems that even quantum computers won’t be able to solve, an approach referred to as “quantum-safe cryptography” or “post-quantum cryptography.”

The IEEE Quantum Standards Working Group is working on other quantum technologies as well, including quantum sensors and quantum materials. The group has brought together physicists, chemists, engineers, mathematicians and computer scientists to ensure that it can adapt rapidly to change.

Deloitte’s Schatsky said synthetic biology and gene editing are also potentially dangerous, mainly because capabilities can be developed faster than one’s ability to understand how to apply such technologies wisely. The same could be said for many emerging technologies.

Quantum computing should be on your radar

Quantum computing is advancing rapidly now, so it’s wise to ponder how its capabilities might benefit your business. The reality is that no one knows all the ways quantum computing can be used, but it will eventually impact businesses in many different industries.

Will quantum computers overtake classical computers, following the same evolutionary path we’ve seen over the past several decades, or will the two co-exist? For the foreseeable future, co-existence is the answer, because binary and quantum computers each solve different kinds of problems better than the other.

Your Data Is Biased. Here’s Why.

Bias is everywhere, including in your data. A little skew here and there may be fine if the ramifications are minimal, but bias can negatively affect your company and its customers if left unchecked, so you should make an effort to understand how, where and why it happens.

“Many [business leaders] trust the technical experts but I would argue that they’re ultimately responsible if one of these models has unexpected results or causes harm to people’s lives in some way,” said Steve Mills, a principal and director of machine intelligence at technology and management consulting firm Booz Allen Hamilton.

In the financial industry, for example, biased data may produce results that violate the Equal Credit Opportunity Act (fair lending). That law, enacted in 1974, prohibits credit discrimination based on race, color, religion, national origin, sex, marital status, age or source of income. While lenders will take steps not to include such data in a loan decision, it may be possible to infer race in some cases using a zip code, for example.
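
One practical safeguard is a proxy check: before using a feature such as zip code, measure how well it predicts the protected attribute on its own. A minimal sketch with synthetic data follows (the column meanings are assumptions).

```python
# Proxy-variable check: if an allowed feature predicts a protected attribute
# well above the base rate, a model using it could discriminate indirectly.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 2000
zip_feature = rng.normal(size=(n, 1))   # stand-in for an encoded zip code
protected = (zip_feature[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

score = cross_val_score(LogisticRegression(), zip_feature, protected, cv=5).mean()
print(f"Protected attribute predicted from zip feature with accuracy {score:.2f}")
# Anything well above the base rate warrants scrutiny before using the feature.
```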

“The best example of [bias in data] is the 2008 crash, in which the models were trained on a dataset,” said Shervin Khodabandeh, a partner and managing director at Boston Consulting Group (BCG) Los Angeles, a management consulting company. “Everything looked good, but the datasets changed and the models were not able to pick that up, [so] the model collapsed and the financial system collapsed.”

What Causes Bias in Data

A considerable amount of data has been generated by humans, whether it’s the diagnosis of a patient’s condition or the facts associated with an automobile accident.  Quite often, individual biases are evident in the data, so when such data is used for machine learning training purposes, the machine intelligence reflects that bias.  A prime example of that was Microsoft’s infamous AI bot, Tay, which in less than 24 hours adopted the biases of certain Twitter members. The results were a string of shocking, offensive and racist posts.

“There’s a famous case in Broward County, Florida, that showed racial bias,” said Mills. “What appears to have happened is there was historically racial bias in sentencing so when you base a model on that data, bias flows into the model. At times, bias can be extremely hard to detect and it may take as much work as building the original model to tease out whether that bias exists or not.”

What Needs to Happen

Business leaders need to be aware of bias and the unintended consequences biased data may cause.  In the longer-term view, data-related bias is a governance issue that needs to be addressed with the appropriate checks and balances which include awareness, mitigation and a game plan should matters go awry.

“You need a formal process in place, especially when you’re impacting people’s lives,” said Booz Allen Hamilton’s Mills. “If there’s no formal process in place, it’s a really bad situation. Too many times we’ve seen these cases where issues are pointed out, and rather than the original people who did the work stepping up and saying, ‘I see what you’re seeing, let’s talk about this,’ they get very defensive and defend their approach so I think we need to have a much more open dialog on this.”

As a matter of policy, business leaders need to consider which decisions they’re comfortable allowing algorithms to make, the safeguards which ensure the algorithms remain accurate over time, and model transparency, meaning that the reasoning behind an automated decision or recommendation can be explained.  That’s not always possible, but still, business leaders should endeavor to understand the reasoning behind decisions and recommendations.

“The tough part is not knowing where the biases are there and not taking the initiative to do adequate testing to find out if something is wrong,” said Kevin Petrasic, a partner at law firm White & Case.  “If you have a situation where certain results are being kicked out by a program, it’s incumbent on the folks monitoring the programs to do periodic testing to make sure there’s appropriate alignment so there’s not fair lending issues or other issues that could be problematic because of key datasets or the training or the structure of the program.”

Data scientists know how to compensate for bias, but they often have trouble explaining what they did and why they did it, or the output of a model in simple terms. To bridge that gap, BCG’s Khodabandeh uses two models: one that’s used to make decisions and a simpler model that explains the basics in a way that clients can understand.
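
A minimal sketch of that two-model pattern: a complex model makes the decisions, and a simple surrogate trained on its predictions supplies the plain-language explanation. This is a common explainability technique, not necessarily BCG’s exact setup.

```python
# Surrogate explanation: train a shallow tree to mimic a complex model's
# decisions, then print the tree's rules as the human-readable explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 4))
y = ((X[:, 0] + X[:, 1] ** 2) > 1).astype(int)

complex_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, complex_model.predict(X))   # learn to mimic the decisions

print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```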

BCG also uses two models to identify and mitigate bias: one is the original model; the other is used to test extreme scenarios.

“We have models with an opposite hypothesis in mind which forces the model to go to extremes,” said Khodabandeh. “We also force models to go to extremes. That didn’t happen in the 2008 collapse. They did not test extreme scenarios. If they had tested extreme scenarios, there would have been indicators coming in in 2007 and 2008 that would allow the model to realize it needs to adjust itself.”
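
A minimal sketch of that kind of stress test appears below: feed a model inputs shifted far outside its training distribution and watch whether its behavior collapses. The model and shift sizes are illustrative assumptions.

```python
# Extreme-scenario stress test: shift the inputs progressively further from
# the training regime and monitor the model's output distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

for shift in (0, 2, 5, 10):          # 0 = normal conditions
    stressed = X + shift             # crude stand-in for a regime change
    positive_rate = model.predict(stressed).mean()
    print(f"shift={shift:>2}: predicted positive rate {positive_rate:.2f}")
# A rate collapsing to 0.0 or 1.0 under shift signals a model that can't adapt.
```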

A smart assumption is that bias is present in data, regardless.  What the bias is, where it stems from, what can be done about it and what the potential outcomes of it may be are all things to ponder.

Conclusion

All organizations have biased data.  The questions are whether the bias can be identified, what effect that bias may have, and what the organization is going to do about it.

To minimize the negative effects of bias, business leaders should make a point of understanding the various types and how they can impact data, analysis and decisions. They should also ensure there’s a formal process in place for identifying and dealing with bias, which is likely best executed as a formal part of data governance.

Finally, the risks associated with data bias vary greatly, depending on the circumstances. While it’s prudent to ponder all the positive things machine learning and AI can do for an organization, business leaders are wise to understand the weaknesses also, one of which is data bias.
