Strategic Insights and Clickworthy Content Development


Ethical Tech: Myth or Reality?

New technologies continue to shape society, and at an accelerating rate. Decades ago, societal change lagged behind tech innovation by a decade or more. Now, change is occurring much faster, as evidenced by the impact of disruptors such as Uber and Airbnb.

Central to much of the change is the data being collected, stored and analyzed for various reasons, not all of which are transparent. As the pace of technology innovation and tech-driven societal change accelerate, businesses are wise to think harder about the longer-term impacts of what they’re doing, both good and bad.

Why contemplate ethics?

Technology in all its forms is just a tool that can be used for good or evil. While businesses do not tend to think in those terms, there is some acknowledgement of what is “right” and “wrong.” Doing the right thing tends to be reflected in corporate responsibility programs designed to benefit people, animals, and the environment. Doing the wrong thing often involves irresponsible or inadvertent actions that are harmful to people, whether it’s invading their privacy or exposing their personal data.

While corporate responsibility programs in their current form are “good” on some level, ethics on a societal scale tends to be missing.

In the tech industry, for example, innovators are constantly doing things because they’re possible without considering whether they’re ethical. A blatant recent example is the human-sheep hybrid. Closer to home in high tech are fears about AI gone awry.

Why ethics is a difficult concept

The definition of ethics is simple. According to Merriam-Webster, it is “the discipline dealing with what is good and bad and with moral duty and obligation.”

In practical application, particularly in relation to technology, “good” and “bad” coexist. Airbnb is just one example. On one hand, homeowners are able to take advantage of another income stream. However, hotels and motels now face new competition and the residents living next to or near Airbnb properties often face negative quality-of-life impacts.

According to Gartner research, organizations at the beginning stages of a digital strategy rank ethics as their No. 7 priority. Organizations establishing a digital strategy rank it No. 5, and organizations executing a digital strategy rank it No. 3.

“The [CIOs] who tend to be more enlightened are the ones in regulated environments, such as financial services and public sector, where trust is important,” said Frank Buytendijk, a Gartner research vice president and Gartner fellow.

Today’s organizations tend to approach ethics from a risk avoidance perspective; specifically, for regulatory compliance purposes and to avoid the consequences of operating an unethical business. On the positive side, some view ethics as a competitive differentiator or better yet, the right thing to do.

“Unfortunately, it’s regulatory compliance pressure and risk because of all the scandals you see with AI, big data [and] social media, but hey, I’ll take it,” said Buytendijk. “With big data there was a discussion about privacy but too little, too late. We’re hopeful with robotics and the emergence of AI, as there is active discussion about the ethical use of those technologies, not only by academics, but by the engineers themselves.”

IEEE ethics group emerges

In 2016, the IEEE launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Its goal is to ensure that those involved in the design and development of autonomous and intelligent systems are educated, trained, and empowered to prioritize ethical considerations so that technologies are advanced for the benefit of humanity.

From a business perspective, the idea is to align corporate values with the values of customers.

“Ethics is the new green,” said Raja Chatila, Executive Committee Member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “People value their health so they value products that do not endanger their health. People want to buy technology that respects the values they cherish.”

However, the overarching goal is to serve society in a positive way, not just individuals. Examples of that tend to include education, health, employment and safety.

“As an industry, we could do a better job of being responsible for the technology we’re developing,” said Chatila.

At the present time, 13 different committees involved in the initiative are contemplating ethics from different technological perspectives, including personal data and individual access control, ethical research and design, autonomous weapons, classical ethics in AI, and mixed reality. In December 2017, the group released “Ethically Aligned Design volume 2,” a 266-page document available for public comment. It includes the participation of all 13 committees.

In addition, the initiative has proposed 11 IEEE standards, all of which have been accepted. The standards address transparency, data privacy, algorithmic bias, and more. Approximately 250 individuals are now participating in the initiative.

Society must demand ethics for its own good

Groups within society tend to react to technology innovation differently due to generational differences, cultural differences, and other factors. Generally speaking, early adopters tend to be more interested in a new technology’s capabilities than its potential negative effects. Conversely, laggards are more risk averse. Nevertheless, people in general tend to use services, apps, and websites without bothering to read the associated privacy policies. Society is not protecting itself, in other words. Instead, one individual at a time is acquiescing to the collection, storage and use of data about them without understanding what they are agreeing to.

“I think the practical aspect comes down to transparency and honesty,” said Bill Franks, chief analytics officer at the International Institute for Analytics (IIA). “However, individuals should be aware of what companies are doing with their data when they sign up, because a lot of the analytics, both the data and analysis, could be harmful to you if they got into the wrong hands and were misused.”

Right now, the societal impacts of technology tend to be recognized after the fact, rather than contemplated from the beginning. Arguably, not all impacts are necessarily foreseeable, but with the pace of technology innovation constantly accelerating, the innovators themselves need to put more thought into the positive and negative consequences of bringing their technology to market.

Meanwhile, individuals have a responsibility to themselves to become more informed than they are today.

“Until the public actually sees the need for ethics, and demands it, I just don’t know that it would ever necessarily go mainstream,” said Franks. “Why would you put a lot of time and money into following policies that add overhead to manage and maintain when your customers don’t seem to care? That’s the dilemma.”

Businesses, individuals, and groups need to put more thought into the ethics of technology for their own good and for the good of all. More disruptions are coming in the form of machine intelligence, automation, and digital transformation which will impact society somehow. “How” is the question.

The Trouble with Data About Data

Two people looking at the same analytical result can come to different conclusions. The same goes for the collection of data and its presentation. A couple of experiences underscore how the data about data — even from authoritative sources — may not be as accurate as the people working on the project or the audience believe. You guessed it: Bias can turn a well-meaning, “objective” exercise into a subjective one. In my experience, the most nefarious thing about bias is the lack of awareness or acknowledgement of it.

The Trouble with Research

I can’t speak for all types of research, but I’m very familiar with what happens in the high-tech industry. Some of it involves considerable primary and secondary research, and some of it involves one or the other.

Let’s say we’re doing research about analytics. The scope of our research will include a massive survey of a target audience (because higher numbers seem to indicate statistical significance). The target respondents will be a subset of subscribers to a mailing list or individuals chosen from multiple databases based on pre-defined criteria. Our errors here most likely will include sampling bias (a non-random sample) and selection bias (aka cherry-picking).

The survey respondents will receive a set of questions that someone has to define and structure. That someone may have a personal agenda (confirmation bias), may be privy to an employer’s agenda (funding bias), and/or may choose a subset of the original questions (potentially selection bias).

The survey will be supplemented with interviews of analytics professionals who represent the audience we survey, demographically speaking. However, they will have certain unique attributes, such as a high profile or employment at a high-profile company (selection bias). We likely won’t be able to use all of what a person says so we’ll omit some stuff — selection bias and confirmation bias combined.

We’ll also do some secondary research that bolsters our position — selection bias and confirmation bias, again.

Then, we’ll combine the results of the survey, the interviews, and the secondary research. Not all of it will be usable because it’s too voluminous, irrelevant, or contradicts our position. Rather than stating any of that as part of the research, we’ll just omit those pieces — selection bias and confirmation bias again. We can also structure the data visualizations in the report so they underscore our points (and misrepresent the data).
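
To see how quickly these biases compound, consider a minimal simulation of the sampling step alone. Every number below is invented for illustration; the point is that drawing respondents from a convenient frame, such as a vendor’s mailing list, can badly distort an estimate before a single question is even asked:

```python
import random

random.seed(42)

# Hypothetical population of 100,000 professionals; 30% have
# adopted predictive analytics (the "ground truth" we want).
population = []
for _ in range(100_000):
    adopted = random.random() < 0.30
    # Adopters are assumed far more likely to be on a vendor's
    # mailing list, the frame we conveniently sample from.
    on_list = random.random() < (0.40 if adopted else 0.10)
    population.append({"adopted": adopted, "on_list": on_list})

frame = [p for p in population if p["on_list"]]
sample = random.sample(frame, 1_000)  # our "massive survey"

true_rate = sum(p["adopted"] for p in population) / len(population)
observed = sum(p["adopted"] for p in sample) / len(sample)
print(f"True adoption rate:    {true_rate:.1%}")  # ~30%
print(f"Mailing-list estimate: {observed:.1%}")   # ~63%: wildly off
```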

Bias is not something that happens to other people. It happens to everyone because it is natural, whether consciously or unconsciously. Rather than dismiss it, it’s prudent to acknowledge the tendency and attempt to identify what types of bias may be involved, why, and rectify them, if possible.

I recently worked on a project for which I did some interviews. Before I began, someone in power said, “This point is [this] and I doubt anyone will say different.” Really? I couldn’t believe my ears. Personally, I find assumptions to be a bad thing because unlike hypotheses, there’s no room for disproof or differing opinions.

Meanwhile, I received a research report. One takeaway was that vendors are failing to deliver “what end customers want most.” The accompanying infographic shows, on average, that 15.5% of end customers want what 59% of vendors don’t provide. The information raised more questions than it answered on several levels, at least for me, and I know I won’t get access to the raw data.

My overarching point is that bias is rampant and burying our heads in the sand only makes matters worse. Ethically speaking, I think as an industry, we need to do more.

Analytics Leaders and Laggards: Which Fits Your Company?

Different companies and industries are at different levels of analytical maturity. There are still businesses that don’t use analytics at all and businesses that are masters by today’s standards. Most organizations are somewhere in between.

So, who are the leaders and laggards anyway? The International Institute for Analytics (IIA) asked that question in 2016 and found that digital natives are the most mature and the insurance industry is the least mature.

How Industries and Sectors Stack Up

IIA’s research included 11 different industries and sectors, in addition to digital natives. The poster children included Google, Facebook, Amazon, and Netflix. From Day 1, data has been their business and analytics has been critical to their success.

The report shows the descending order of industries in terms of analytical maturity, with insurance falling behind because its IT and finance analytics are the weakest of all.

Another report, from business and technology consultancy West Monroe Partners, found that only 11% of the 122 insurance executives surveyed think their companies are realizing the full benefits of advanced analytics. “Advanced analytics” in this report is defined as identifying new revenue opportunities, improving customer and agent experience, performing operational diagnostics, and improving control mechanisms.

Two of the reasons West Monroe cited for the immaturity of the insurance industry are the inability to quantify the ROI and poor data quality.

Maturity is a Journey

Different organizations and individuals have different opinions about what an analytics maturity model looks like. IIA defines five stages ranging from “analytically impaired” (organizations that make decisions by gut feel) to “analytical nirvana” (using enterprise analytics).

“Data-first companies haven’t had to invest in becoming data-driven since they are, but for the companies that aren’t data-first, understanding the multi-faceted nature of the journey is a good thing,” said Daniel Magestro, research director at IIA. “There’s no free lunch, no way to circumvent this. The C-suite can’t just say that we’re going to be data-driven in 2017.”

Others look at the types of analytics companies are doing: descriptive, predictive, and prescriptive. However, looking at the type of analytics doesn’t tell the entire story.

What’s interesting is that different companies at different stages of maturity are stumped by different questions: Do you think you need analytics? If the answer is no, then it’s going to be a long and winding road.

Why do you think you need analytics? What would you use analytics to improve? Those two related questions require serious thought. Scope and priorities are challenges here.

How would you define success? That can be a tough question because the answers have to be quantified and realistic to be effective. “Increase sales” doesn’t cut it. How much and when are missing.

One indicator of maturity is what companies are doing with their analytics. The first thing everyone says is, “make better business decisions,” which is always important. However, progressive companies are also using analytics to identify risks and opportunities that weren’t apparent before.

The degree to which analytics are siloed in an organization also impacts maturity, as can the user experience. Dashboards can be so complicated that they’re ineffective, or simple enough to prioritize and expedite decision-making.

Time is another element. IT-created reports have fallen out of favor. Self-service is where it’s at. At the same time, it makes no sense to pull the same information in the same format again and again, such as weekly sales reports. That should simply be automated and pushed to the user.

The other time element — timeliness whether real-time, near real-time, or batch — is not an indication of maturity in my mind because what’s timely depends on what’s actually necessary.

How Valuable Is Your Company’s Data?

Companies are amassing tremendous volumes of data, which they consider their greatest asset, or at least one of their greatest assets. Yet, few business leaders can articulate what their company’s data is worth.

Successful data-driven digital natives understand the value of their data, and their valuations depend on sound applications of that data. Increasingly, venture capitalists, financial analysts and board members will expect startup, public company and other organizational leaders to explain the value of their data in terms of opportunities, top-line growth, bottom-line improvement and risks.

For example, venture capital firm Mercury Fund recently analyzed SaaS startup valuations based on market data that its team has observed. According to Managing Director Aziz Gilani, the team confirmed that SaaS company valuations, which range from 5x to 11x revenue, depend on the underlying metrics of the company. The variable that determines whether those companies land in the top or bottom half of the spectrum is the company’s annual recurring revenue (ARR) growth rate, which reflects how well a company understands its customers.

Mercury Fund’s most successful companies scrutinize their unit economics “under a microscope” to optimize customer interactions in a capital-efficient manner and maximize their revenue growth rates.
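
To make the math concrete, here’s a toy sketch of the valuation logic described above. The 5x-to-11x multiple range comes from the reporting; the linear mapping from ARR growth rate to multiple is my own simplification, not Mercury Fund’s model:

```python
def saas_valuation(arr: float, growth_rate: float) -> float:
    """Toy model: map ARR growth onto the 5x-11x revenue multiple
    range cited above. The linear interpolation across an assumed
    0%-100% growth band is an illustrative simplification."""
    low_multiple, high_multiple = 5.0, 11.0
    weight = min(max(growth_rate, 0.0), 1.0)  # clamp to [0, 1]
    return arr * (low_multiple + (high_multiple - low_multiple) * weight)

# A hypothetical startup with $10M ARR growing 80% year over year:
print(f"${saas_valuation(10_000_000, 0.80):,.0f}")  # $98,000,000
```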

For other companies, the calculus is not so straightforward and, in fact, it’s very complicated.

Direct value

When business leaders and managers ponder the value of data, their first thought is direct monetization, which means selling the data they have.

“[I]t’s a question of the holy grail because we know we have a lot of data,” said David Schatsky, managing director at Deloitte. “[The first thought is] let’s go off and monetize it, but they have to ask themselves the fundamental questions right now of how they’re going to use it: How much data do they have? Can they get at it? And, can they use it in the way they have in mind?”

Data-driven digital natives have a better handle on the value of their data than the typical enterprise because their business models depend on collecting data, analyzing that data and then monetizing it. Usually, considerable testing is involved to understand the market’s perception of value, although a shortcut is to observe how similar companies are pricing their data.

“As best as I can tell, there’s no manual on how to value data but there are indirect methods. For example, if you’re doing deep learning and you need labeled training data, you might go to a company like CrowdFlower and they’d create the labeled dataset and then you’d get some idea of how much that type of data is worth,” said Ben Lorica, chief data officer at O’Reilly Media. “The other thing to look at is the valuation of startups that are valued highly because of their data.”

Observation can be especially misleading for those who fail to consider the differences between their organization and the organizations they’re observing. The business models may differ, the audiences may differ, and the amount of data the organization has and the usefulness of that data may differ. Yet, a common mistake is to assume that because Facebook or Amazon did something, what they did is a generally-applicable template for success.

However, there’s no one magic formula for valuing data because not all data is equally valuable, usable or available.

“The first thing I look at is the data [a client has] that could be turned into data-as-a-service and if they did that, what is the opportunity the value [offers] for that business,” said Sanjay Srivastava, chief digital officer at global professional services firm Genpact.

Automation value

More rote and repeatable tasks are being automated using chatbots, robotic process automation (RPA) and AI. The question is, what is the value of the work employees do in the absence of automation and what would the value of their work be if parts of their jobs were automated and they had more time to do higher-value tasks?

“That’s another shortcut to valuing the data that you already have,” said O’Reilly’s Lorica.

Recombinant value

Genpact also advances the concept of “derivative opportunity value” which means creating an opportunity or an entirely new business model by combining a company’s data with external data.

For example, weather data by zip code can be combined with data about prevalent weeds by zip code and the available core seed attributes by zip code. Agri-food companies use such data to determine which pesticides to use and to optimize crops in a specific region.

“The idea is it’s not just selling weather data as a service, that’s a direct opportunity,” said Srivastava. “The derivative opportunity value is about enhancing the value of agriculture and what value we can drive.”

It is also possible to do an A/B test with and without a new dataset to determine the value before and after the new data was added to the mix.
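
Here’s a minimal sketch of that idea using scikit-learn on synthetic data. The split between “internal” and “external” feature columns is invented; the point is simply the A/B comparison of model quality with and without the new dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in: columns 0-5 play the "internal" data,
# columns 6-9 the newly acquired "external" dataset.
X, y = make_classification(n_samples=5_000, n_features=10,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

def accuracy(cols):
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_tr[:, cols], y_tr)
    return model.score(X_te[:, cols], y_te)

a = accuracy(list(range(6)))    # "A": internal data only
b = accuracy(list(range(10)))   # "B": internal + external data
print(f"Accuracy lift from the new dataset: {b - a:+.3f}")
```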

Algorithmic value

Netflix and Amazon use recommendation engines to drive value. For example, Netflix increases its revenue and stickiness by matching content with a customer’s tastes and viewing habits. Similarly, Amazon recommends products, including those that others have also viewed or purchased. In doing so, Amazon successfully increases average order values through cross-selling and upselling.
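
A minimal item-to-item sketch shows the mechanics. The toy orders below are invented, and production recommenders are far more sophisticated, but the co-occurrence counting at their core looks something like this:

```python
from collections import Counter
from itertools import combinations

# Toy order history; in practice this comes from the transaction log.
orders = [
    {"shirt", "tie", "belt"},
    {"shirt", "tie"},
    {"eggs", "orange juice"},
    {"shirt", "belt"},
    {"eggs", "orange juice", "bread"},
]

# Count how often each pair of items appears in the same order.
pair_counts = Counter()
for order in orders:
    pair_counts.update(combinations(sorted(order), 2))

def also_bought(item, top_n=3):
    """Items most frequently co-purchased with `item`."""
    related = Counter()
    for (a, b), n in pair_counts.items():
        if item == a:
            related[b] += n
        elif item == b:
            related[a] += n
    return related.most_common(top_n)

print(also_bought("shirt"))  # [('belt', 2), ('tie', 2)]
```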

“Algorithmic value modeling is the most exciting,” said Srivastava. “For example, the more labeled data I can provide on rooftops that have been damaged by Florida hurricanes, the more pictures I have of the damage caused by the hurricanes and the more information I have about claim settlements, the better my data engine will be.”

For that use case, the trained AI system can automatically provide an insurance claim value based on a photograph associated with a particular claim.

Risk-of-loss value

If a company using an external data source were to lose access to that data source, what economic impact would it have? Further, given the very real possibility of cyberattacks and cyberterrorism, what would the value of lost or corrupted data be? Points to consider would be the financial impact, which may include actual loss, opportunity cost, regulatory fines and litigation settlement values. If the company has cybersecurity insurance, there’s a coverage limit on the policy, which may differ from the actual claim settlement value and the overall cost to the company.
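
A back-of-the-envelope sketch of that calculus might look like the following. Every figure is hypothetical; only the structure of the calculation (sum the loss categories, then net out the policy limit) is the point:

```python
# Back-of-the-envelope risk-of-loss estimate. Every figure below is
# hypothetical; only the structure of the calculation matters.
actual_loss           = 2_000_000  # direct revenue impact
opportunity_cost      = 1_500_000  # business lost during the outage
regulatory_fines      =   750_000
litigation_settlement = 1_250_000

gross_exposure = (actual_loss + opportunity_cost +
                  regulatory_fines + litigation_settlement)

# Cyber insurance rarely covers everything: the policy limit may sit
# well below the actual claim and overall cost to the company.
policy_limit = 3_000_000
net_exposure = gross_exposure - min(gross_exposure, policy_limit)

print(f"Gross exposure: ${gross_exposure:,}")  # $5,500,000
print(f"Net exposure:   ${net_exposure:,}")    # $2,500,000
```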

A bigger risk than data loss is the failure to use data to drive value, according to Genpact’s Srivastava.

There’s no silver bullet

No single equation can accurately assess the value of a company’s data. The value of data depends on several factors, including the usability, accessibility and cleanliness of the data. Other considerations are how the data is applied to business problems and what the value of the data would be if it were directly monetized, combined with other data, or used in machine learning to improve outcomes.

Further, business leaders should consider not only what the value of their company’s data is today, but the potential value of new services, business models or businesses that could be created by aggregating data, using internal data or, more likely, using a combination of internal and external data. In addition, business leaders should contemplate the risk of data loss, corruption or misuse.

While there’s no standard playbook for valuing data, expect data valuation and the inability to value data to have a direct impact on startup, public company, and merger and acquisition target valuations.

Why Operationalizing Analytics is So Difficult

Today’s businesses are applying analytics to a growing number of use cases, but analytics for analytics’ sake has little, if any, value. The most analytically astute companies have operationalized analytics, but many of them, particularly the non-digital natives, have faced several challenges along the way getting the people, processes and technology aligned in a way that drives value for the business.

Here are some of the hurdles that an analytics initiative might encounter.

Analytics is considered a technology problem

Some organizations consider analytics a technology problem, and then they wonder why the ROI of their efforts is so poor. While having the right technology in place matters, successful initiatives require more.

“The first key challenge is designing how and in what way an analytics solution would affect the outcome of the business,” said Bill Waid, general manager of Decision Management at FICO. “We start by modeling the business problem and then filling in the analytic pieces that address that business problem. More often than not, there’s a business process or business decision that needs to be incorporated into the model as we build the solution.”

Framing the business problem is essential, because if the analytics don’t provide any business value, they won’t get used.

“Better than 80% of analytics never end up being used. A lot of that stems from the fact that an analysis gets built and it might make sense given the dataset but it’s not used to make something happen,” said Waid. “That’s probably the hardest element.”

Placing analytics in the hands of the business requires access to the right data, but governance must also be in place.

“[T]he technical aspects are becoming easier to solve and there are many more options for solving them, so the people and the process challenges that you’ll face obviously have to come along,” said Bill Franks, chief analytics officer at the International Institute for Analytics (IIA). “In a non-digital-native company, the people and process progress does not match the technology progress.”

Operationalizing analytics lacks buy-in

Many analytics initiatives have struggled to get the executive and organizational support they need to be successful. Operationalizing analytics requires the same thing.

“When you operationalize analytics, you’re automating a lot of decisions, so the buy-in you require from all of the various stakeholders has to be high,” said IIA’s Franks. “If you’re a digital native, this is what you do for a living so people are used to it. When you’re a large, legacy company dipping your toe into this, the first couple of attempts will be painful.”

For example, if an organization is automating what used to be batch processes, there need to be more safety checks, data checks, and accuracy checks. Chances are high that everything won’t be done right the first time, so people have to get comfortable with the concept of iteration, which is just part of the learning process.

Analytical results are not transparent

If your company operates in a regulated environment, you need to be able to explain an analytical result. Even if you’re not in a regulated industry, business leaders, investors and potential M&A partners may ask for an explanation.

“We refer to it as ‘reasoning code’ or ‘the outcomes,’ but in AI it’s a form of explainable AI where you can explain to a business owner or a business user why the analytics came to the conclusion it came to,” said FICO’s Waid. “The second thing that you need to provide the business person with is some kind of dashboard for them to be able to change, adjust or accommodate different directions.”

4 Ways Companies Impede Their Analytics Efforts

Businesses in the race to become “data-driven” or “insights-driven” often face several disconnects between their vision of an initiative and their execution of it. Of course, everyone wants to be competitive, but there are several things that differentiate the leaders from the laggards. Part of it is weathering the growing pains that companies tend to experience, some of which are easier to change than others. These are some of the stumbling blocks.

Business objectives and analytics are not aligned

Analytics still takes place in pockets within the majority of organizations. The good news is that various functions are now able to operate more effectively and efficiently as a result of applying analytics. However, there is greater power in aligning efforts with the strategic goals of the business.

In a recent research note, Gartner stated, “Internally, the integrative, connected, real-time nature of digital business requires collaboration between historically independent organizational units. To make this collaboration happen, business and IT must work together on vision, strategy, roles and metrics. Everyone is going to have to change, and everyone is going to have to learn.”

All of that requires cultural adjustment, which can be the most difficult challenge of all.

There’s insight but no action

It’s one thing to get an insight and quite another to put that insight into action. To be effective, analytics need to be operationalized, which means weaving analytics into business processes so that insights can be turned into meaningful actions. Prescriptive analytics is part of it, but fundamentally, business processes need to be updated to include analytics. A point often missed is that decisions and actions are not ends in themselves. They, too, need to be analyzed to determine their effectiveness.

An EY presentation stresses the need to operationalize analytics. Specifically, it says, “The key to operationalizing analytics is to appreciate the analytics value chain.”

Interestingly, when most of us think about “the analytics value chain” we think of data, analytics, insights, decisions and optimizing outcomes. While that’s the way work flows, EY says our thought process should be the reverse. Similarly, to optimize a process, one must understand what that process is supposed to achieve (e.g., thwart fraud, improve customer experience, reduce churn).

They’re not looking ahead

Less analytically mature companies haven’t moved beyond descriptive analytics yet. They’re still generating reports, albeit faster than they used to because IT and lines of business tend to agree that self-service reporting is better for everyone. Gartner says “the BI and analytics market is in the final stages of a multiyear shift from IT-led, system-of-record reporting to business-led, self-service analytics. As a result, the modern business intelligence and analytics platform has emerged to meet new organizational requirements for accessibility, agility and deeper analytical insight.”

Still, organizations can only get so far with descriptive analytics. If they want to up their competitive game, they need to move to predictive and prescriptive analytics.

Poor data quality prevents accurate analytics

If you don’t have good data or a critical mass of the right data, your analytical outcomes are going to fall short. Just about any multichannel (and sometimes even single-channel) communication experience with a bank, a telephone company, a credit card company, or a vendor support organization will prove data quality is still a huge issue. Never mind the fact that some of these companies are big-brand companies that invest staggering amounts of money in technology, including data and analytics technologies.

In a typical telephone scenario, a bot asks the customer to enter an account number or a customer number. If the customer needs to be transferred to a live customer service representative (CSR), chances are the CSR will ask the customer to repeat the number because it doesn’t come up on their screen automatically. If the CSR can’t resolve the issue, then the call is usually transferred to a supervisor or different department. What was your name and number again? It’s a frustrating problem that’s all too common.

The underlying problem is that customer information is stored in different systems for different reasons, such as sales, CRM and finance.
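
Conceptually, the fix is record linkage across those systems. Here’s a deliberately optimistic sketch that assumes a clean shared key, a normalized email address; in practice, the absence of such a key is exactly why the CSR keeps asking for your number:

```python
import pandas as pd

# Hypothetical extracts from three siloed systems.
sales = pd.DataFrame({"cust_id": [101, 102],
                      "email": ["ann@example.com", "bob@example.com"],
                      "last_order": ["2017-11-02", "2017-12-15"]})
crm = pd.DataFrame({"crm_id": ["A-7", "A-9"],
                    "email": ["ann@example.com", "bob@example.com"],
                    "open_ticket": [True, False]})
finance = pd.DataFrame({"acct": ["F77", "F88"],
                        "email": ["ann@example.com", "bob@example.com"],
                        "balance": [0.0, 125.40]})

# With a shared key, the CSR's screen can show one unified record
# instead of three partial ones.
unified = sales.merge(crm, on="email").merge(finance, on="email")
print(unified[["email", "last_order", "open_ticket", "balance"]])
```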

I spoke with someone recently who said a company he worked with had gone through nearly 20 acquisitions. Not surprisingly, data quality was a huge issue. The most difficult part was dealing with the limited fields in a legacy system. Because the system did not contain enough of the appropriate fields in which to enter data, users made up their own workarounds.

These are just a few of the challenges organizations face on their journey.

Why Surveys Should Be Structured Differently

If you’re anything like me, you’re often asked to participate in surveys. Some of them are short and simple. Others are very long, very complicated, or both.

You may also design and implement surveys from time to time like I do. If you want some insight into the effectiveness of your survey designs and their outcomes, pay attention to the responses you get.

Notice the Drop-off Points

Complicated surveys that take 15 or 20 minutes to complete tend to have drop-off points at which the respondents decided that the time investment required wasn’t worth whatever incentive was offered. After all, not everyone actually cares about survey results or a 1-in-1,000 chance of winning the latest iPad, for example. If there’s no incentive whatsoever, long and complicated surveys may be even less successful, even if you’re pinging your own database.

A magazine publisher recently ran such a survey, and boy, was it hairy. It started out like similar surveys, asking questions about the respondent’s title, affiliation, company revenue and size. It also asked about purchasing habits (who approves, who specifies, who recommends, etc.) for different kinds of technologies. Then it asked what the respondent’s content preferences are for learning about tech (several drop-down menus), using tech (several drop-down menus), purchasing tech (several drop-down menus), and I can’t remember what else. At that point, one was about 6% done with the survey. So much for “10 – 15 minutes.” It took about 10 or 15 minutes just to wade through the first single-digit percent of it. One would have to really want that slim chance of winning the incentive to complete such a survey.

In short, the quest to learn everything about everything in one very long and complex survey may end in more knowledge about who took the survey than how people feel about important issues.

On the flip side are very simple surveys that take a minute or two to answer. Those types of surveys tend to focus on whether a customer is satisfied or dissatisfied with customer service, rather than delving into the details of opinions about several complicated matters.

Survey design is really important. Complex fishing expeditions can and often do reflect a lack of focus on the survey designer’s part.

Complex Surveys May Skew Results

Overly complicated surveys may also yield spurious results. For example, let’s say 500 people agree to take a survey we just launched that happens to be very long and very complex. Not all of the respondents will get past the who-are-you questions because those too are complicated. Then, as the survey goes on, more people drop, then more.

The result is that X% of the survey responses at the end of the survey are not the same as X% earlier in the survey. What I mean by that is 500 people started, maybe 400 get past the qualification portion, and the numbers continue to fall as yet more complicated questions arise but the “progress bar” shows little forward movement. By the end of the survey, far fewer than 500 have participated, maybe 200 or 100.

Of course, no one outside the survey team knows this, including the people in the company who are presented with the survey results.  They only know that 500 people participated in the survey and X% said this or that.

However, had all 500 people answered all the questions, the results of some of the questions would likely look slightly or considerably different, which may be very important.

Let’s say 150 people completed our survey and the last question asked whether they planned to purchase an iPhone 7 within the next three months. 40% of them, or 60 respondents, said yes. If all 500 survey respondents had answered that same question, I can almost guarantee you the answer would not be 40%. It might be close to 40%, or it might not be close at all.
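
A quick simulation makes the danger visible. All the numbers below are invented, including the assumption that attrition is correlated with the answer, but that kind of correlation is precisely what drop-off can introduce:

```python
import random

random.seed(7)

# 500 hypothetical respondents. Each has a true answer to the final
# iPhone question and a "patience" level that decides how far into
# the survey they get.
people = [{"would_buy": random.random() < 0.25,
           "patience": random.random()} for _ in range(500)]

# Assume (purely for illustration) that would-be buyers are more
# patient, i.e., attrition is correlated with the answer.
for p in people:
    if p["would_buy"]:
        p["patience"] = min(1.0, p["patience"] + 0.3)

finishers = [p for p in people if p["patience"] > 0.7]

all_rate = sum(p["would_buy"] for p in people) / len(people)
fin_rate = sum(p["would_buy"] for p in finishers) / len(finishers)
print(f"{len(finishers)} of 500 finished the survey")
print(f"True rate among all 500:   {all_rate:.0%}")  # ~25%
print(f"Rate among finishers only: {fin_rate:.0%}")  # ~40%
```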

So, if you genuinely care about divining some sort of “truth” from surveys, you need to be mindful about how you define and structure the survey, and aware that the data you see may not be telling you the entire story, or even an accurate story.

The point about accuracy is very important and one that people without some kind of statistical background likely haven’t even considered because they’re viewing all aggregate numbers as having equal weight and equal accuracy.

I, for one, think that survey “best practices” are going to evolve in the coming years with the help of data science.  While the average business person knows little about data science now, in the future it will likely seem cavalier not to consider the quality of the data you’re getting and what you can do to improve the quality of that data.  Your credibility and perhaps your job may depend on it.

In the meantime, try not to shift the burden of thinking entirely to your survey audience because it won’t do either of you much good.  Think about what you want to achieve, structure your questions in a way that gives you insight into your audience and their motivations (avoid leading questions!), and be mindful that not all aggregate answers are equally accurate or representative, even within the same survey.

What Retailers Know About You

Retailers now have access to more information than ever. They’re using loyalty cards, cameras, POS transaction data, GPS data, and third-party data in an effort to get shoppers to visit more often and buy more. The focus is to provide better shopping experiences on a more personalized level. Operationally speaking, they’re trying to reduce waste, optimize inventory selection, and improve merchandising.

The barrier to personalized experiences is PII (personally identifiable information), of course.

“[What retailers know about you] is still largely transaction-based,” said Dave Harvey, VP of Thought Leadership at branding and retail services provider Daymon. “Information about lifestyles, behaviors and attitudes is hard for retailers to get themselves, so they partner with companies and providers that have those kinds of panels, especially getting information about how people are reacting through social media and what they’re buying online.”

Transactions Are Driving Insights

The most powerful asset is a retailer’s transactional database. How they segment data is critical, whether it’s transaction-based reach, frequency, lifestyle behaviors, or product groupings. Retailers can identify how you live your life based on the products you buy.
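
A classic way to segment a transaction database is RFM (recency, frequency, monetary value). The sketch below uses made-up loyalty-card data and an arbitrary segmentation rule, but it shows the mechanics:

```python
from datetime import date

# Toy loyalty-card transactions: (customer, date, basket total).
txns = [
    ("carol", date(2018, 1, 3),  82.50),
    ("carol", date(2018, 1, 17), 45.10),
    ("carol", date(2018, 2, 2),  60.00),
    ("dave",  date(2017, 9, 12), 25.00),
    ("dave",  date(2017, 12, 1), 19.99),
]
today = date(2018, 2, 15)

# Roll up recency, frequency, and monetary value per customer.
rfm = {}
for cust, d, amount in txns:
    r = rfm.setdefault(cust, {"last": d, "freq": 0, "spend": 0.0})
    r["last"] = max(r["last"], d)
    r["freq"] += 1
    r["spend"] += amount

for cust, r in rfm.items():
    recency = (today - r["last"]).days
    segment = ("frequent/recent" if r["freq"] >= 3 and recency <= 30
               else "lapsing")
    print(cust, segment, f"recency={recency}d freq={r['freq']} "
          f"spend=${r['spend']:.2f}")
```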

“The biggest Achilles’ heel of transaction data, no matter how much you’re segmenting it and how much you’re mining it, is you’re not seeing what your competitors are doing,” said Harvey. “Looking at transaction data across your competition becomes critical.”

As consumers we see the results of that in offers, which may show up in an app, email, flyer or coupons generated at the POS.

Social media scraping has also become popular, not only to gauge consumer sentiment about brands and products, but also to provide additional lifestyle insight.

Some retailers are using predictive and prescriptive analytics to optimize pricing, promotions and inventory. They also have a lot of information about where their customers are coming from, based on credit card transactions. In addition, they’re using third-party data to understand customer demographics, including the median incomes of the zip codes in which customers live.

They’re Watching Your Buying Patterns

Retailers monitor what shoppers buy over time, including items they tend to buy together, such as shirts and ties or eggs and orange juice. The information helps them organize shelves, aisles, and end caps.

“There’s a lot of implications for meal solutions and category adjacencies, how people are shopping in the store, and how that might lead a retailer to test ways to offer the right combination of products to create a solution somewhere in the store,” said Harvey. “You can’t be everything to everyone, so how can the information help you prioritize where to focus? The information you can mine from your transaction database and your loyalty card database can help you become more efficient.”

Information about buying patterns and price elasticity allows retailers to micro-target so effectively that shoppers visit the store more often and spend more money.
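
Price elasticity itself can be estimated straight from transaction history. Here’s a minimal sketch using an invented price/volume series and the standard log-log regression, where the slope is the elasticity:

```python
import numpy as np

# Hypothetical weekly (price, units sold) observations for one SKU.
prices = np.array([1.99, 2.49, 2.99, 3.49, 3.99])
units  = np.array([1200,  950,  780,  610,  500])

# Fit log(units) = a + b*log(price); the slope b is the elasticity.
b, a = np.polyfit(np.log(prices), np.log(units), 1)
print(f"Estimated elasticity: {b:.2f}")  # about -1.3 (elastic demand)

# Micro-targeting logic: a 10% price cut should lift unit sales by
# more than 10% whenever |elasticity| > 1.
print(f"Predicted lift from a 10% cut: {0.9 ** b - 1:+.1%}")
```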

They May Know How You Shop the Store

Shopping carts and baskets are a necessary convenience for customers, and the latest ones include sensors that track customers’ paths as they navigate through the store.

“They can get the path data and purchase data about how much time you spend at stations, and they can use it to redesign the store and get you to move through the store much more, because they know the more you move through the store, the more you buy,” said PK Kannan, a professor of marketing science at the University of Maryland’s Robert H. Smith School of Business.

Retailers also use cameras to optimize merchandising and to better understand customer behavior, including where shoppers go and how long they stay. Now they’re also analyzing facial expressions to determine a shopper’s state of mind.

Driving Business Value from Analytics

Different kinds of analytics result in different ROI. If a retailer is just starting out, Kannan recommends starting with loyalty cards, since other types of data capture and analysis can be prohibitively expensive and the analysis can be cumbersome.

“The ROI on loyalty cards is pretty good,” said Kannan. “The initial ROI is going to be high, and then as you go into more of these cart or visual data, video data, your ROI is going to level off.”

Strategies also differ among types of retailers. For example, a specialty retailer will want data that provides deep insight into the category and shoppers of that category versus a store such as Walmart that carries items in many different categories.

“If you’re a retailer trying to sell a ton of categories you want to understand how people are talking about their shopping experience,” said Harvey. “There’s still a lot of untapped opportunity in understanding social media as it relates to doing better analysis with retailers.”

They’re Innovating

Retailers are working hard to understand their customers, so they can provide better shopping experiences. While personalization techniques are getting more sophisticated, there’s only so far they can go legally in many jurisdictions.

Kannan said a way of getting around this is to take all the informational content, remove any PII, and then extract the resulting information out of the data.

“It’s like I’m taking the kernel from this thing because I don’t have the space to store it and keeping it is not a good policy, so I am going to keep some of the sufficient statistics with me and as new data comes in, I’m going to combine the old data with new data and use it for targeting purposes,” said Kannan. “That’s becoming more of a possibility now, and also it’s a reality because data volumes are increasing like crazy. That way I don’t have to store all the data in a data lake.”

Deloitte: 5 Trends That Will Drive Machine Learning Adoption

Companies across industries are experimenting with and using machine learning, but actual adoption rates are lower than one might expect. According to a 2017 SAP Digital Transformation Study, fewer than 10% of 3,100 executives from small, medium and large companies said their organizations were investing in machine learning. That will change dramatically in the coming years, according to a new Deloitte report, because researchers and vendors are making progress in five key areas that may make machine learning more practical for businesses of all sizes.

1. Automating data science

There is a lot of debate about whether data scientists will or won’t be automated out of a job. It turns out that machines are far better than humans at rote tasks such as data wrangling, performing them faster and more reliably.

“The automation of data science will likely be widely adopted and speak to this issue of the shortage of data scientists, so I think in the near term this could have a lot of impact,” said David Schatsky, managing director at Deloitte and one of the authors of Deloitte’s new report.

Industry analysts are bullish about the prospect of automating data science tasks, since data scientists can spend an inordinate amount of time collecting data and preparing it for analysis. For example, Gartner estimates that 40% of a data scientist’s job will be automated by 2020.

Data scientists aren’t so sure about that, and to be fair, few people, regardless of their position, have considered which parts of their job are ripe for automation.

2. Reducing the need for training data

Machine learning tends to require a lot of data. According to the Deloitte report, training a machine learning model might require millions of data elements. While machine learning requirements vary based on the use case, “acquiring and labeling data can be time-consuming and costly.”

One way to address that challenge is to use synthetic data. Using synthetic data, Deloitte was able to reduce the actual amount of data required for training by 80%. In other words, 20% of the data was actual data and the remaining 80% was synthetic data.
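
Deloitte doesn’t publish its method, but the simplest way to picture a 20/80 real-to-synthetic mix is something like the following sketch, where synthetic examples are generated by jittering real ones. Real generators, such as simulators or GANs, are far more careful; this only illustrates the ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small pool of real, labeled examples (features are arbitrary).
real_X = rng.normal(size=(200, 5))
real_y = (real_X[:, 0] + real_X[:, 1] > 0).astype(int)

def synthesize(X, y, n):
    """Naive synthesis: resample real rows and add small jitter.
    This illustrates the real/synthetic mix, not Deloitte's method."""
    idx = rng.integers(0, len(X), size=n)
    noise = rng.normal(scale=0.05, size=(n, X.shape[1]))
    return X[idx] + noise, y[idx]

# 20% real plus 80% synthetic, matching the ratio Deloitte cites.
syn_X, syn_y = synthesize(real_X, real_y, n=800)
train_X = np.vstack([real_X, syn_X])
train_y = np.concatenate([real_y, syn_y])
print(train_X.shape, train_y.shape)  # (1000, 5) (1000,)
```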

“How far we can go in reducing the need for training data has two kinds of question marks: How far can you reduce the need for training data and what characteristics of data are most likely minimized and which require massive datasets?” said Schatsky.

3. Accelerating training

Massive amounts of data and heavy computation can take considerable time. Chip manufacturers are addressing this issue with various types of chips, including GPUs and application-specific integrated circuits (ASICs). The end result is faster training of machine learning models.

“I have no doubt that with the new processor architectures, execution is going to get faster,” said Schatsky. “[The chips] are important and necessary, but not sufficient to drive significant adoption on their own.”

4. Explaining results

Many machine learning models spit out a result, but they don’t provide the reasoning behind the result. As Deloitte points out, business leaders often hesitate to place blind faith in a result that can’t be explained, and some regulations require an explanation.

In the future, we’ll likely see machine learning models that are more accurate and transparent, which should open the door for greater use in regulated industries.


“No one knows how far you can go yet in terms of making an arbitrary neural network-based model interpretable,” said Schatsky. “We could end up hitting some limits identifying a fairly narrow set of cases where you can turn a black box model into an open book for certain kinds of models and situations, but there will be other scenarios where they work well but you can’t use them in certain situations.”

5. Deploying locally

Right now, machine learning typically requires a lot of data and training can be time-consuming. All of that requires a lot of memory and a lot of processing power, more than mobile and smart sensors can handle, at least for now.

In its report, Deloitte points out there is research in this area too, some of which has reduced the size of models by an order of magnitude or more using compression.
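
Two of the most common compression techniques are magnitude pruning and quantization. The toy sketch below applies both to a random “layer”; the exact savings depend on the storage format, which is why production schemes add steps such as entropy coding to reach an order of magnitude:

```python
import numpy as np

rng = np.random.default_rng(1)
layer = rng.normal(size=(512, 512)).astype(np.float32)  # toy weights

# Magnitude pruning: drop the 90% of weights closest to zero.
threshold = np.quantile(np.abs(layer), 0.90)
mask = np.abs(layer) >= threshold

# Quantize the survivors to 8 bits and store them sparsely
# (two int32 coordinates plus one int8 value per weight).
rows, cols = np.nonzero(mask)
scale = np.abs(layer[mask]).max() / 127.0
values = np.round(layer[mask] / scale).astype(np.int8)

dense_bytes = layer.nbytes
sparse_bytes = (values.nbytes + rows.astype(np.int32).nbytes
                + cols.astype(np.int32).nbytes)
print(f"{dense_bytes / sparse_bytes:.1f}x smaller")  # ~4.4x here;
# real schemes add entropy coding and retraining to go much further.
```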

The bottom line

Machine learning is having profound effects in different industries ranging from TV pilots to medical diagnoses. It seems somewhat magical and somewhat scary to the uninitiated, though the barriers to adoption are falling. As machine learning becomes more practical for mainstream use, more businesses will use it whether they realize it or not.

“[The five] things [we identified in the report] are converging to put machine learning on a path toward mainstream adoption,” said Schatsky.  “If companies have been sitting it out waiting for this to get easier and more relevant, they should sit up instead and start getting involved.”

What Data Analysts Want to See in 2018

The demand for data analysts is at an all-time high, but organizations don’t always get the value they expect, mainly because the organization, or parts of it, are getting in the way.

Being an analyst can be a frustrating job if your position isn’t getting what it needs in terms of data, tools and organizational support. Are you getting what you need? Here are some of the things your contemporaries are saying.

More Data

Despite the glut of data companies have, analysts don’t always get the data they need, often because the data owners are concerned about privacy, security, losing control of their data or some combination of those things.

“The problem of data ownership and data sharing is universal,” said Sam Ruchlewicz, director of Digital Strategy & Data Analytics at advertising, digital, PR and brand agency Warschawski. “For analytics professionals, these artificial barriers hinder the creation of comprehensive, whole-organization analyses that can provide real, tangible value and serve as a catalyst for the creation (and funding) of additional analytics programs.”

Jesse Tutt, program lead of the IT Analytics Center of Excellence at Alberta Health Services, said getting access to the data he needs takes a lot of time because he has to work with the data repository owners to get their approval and then work with the technologists to get access to the systems. He also has to work with the vendors and the data repository subject matter experts.

“We’ve worked really hard getting access to the data sets, correlating the different datasets using correlation tables and cleaning up the data within the source systems,” he said. “If you ask a specific data repository what something is, it can tell you, but if you snapshot it on a monthly basis, you can see a trend. If you correlate that across other systems, you can find more value. In our case, the highest value is connecting the systems and creating the capability in a data warehouse, where reporting can correlate across the systems.”

Four years ago, people at Alberta Health Services wanted to see trend data instead of just snapshots, so one system was connected to another. Now, 60 data sources are connected, with 60 more planned by the end of 2017. The organization has a total of about 1,600 data sources, many of which will be connected in the next couple of years.

More Respect

The most effective data analytics align with business objectives, but what happens when your data analysts aren’t informed? Warschawski’s Ruchlewicz recently had dinner with the CEO of a large, international agency who spent millions of dollars on a marketing campaign that failed simply because the executive didn’t want to listen to “the analytics kids.” Never mind the fact that the analytics team had identified a major issue the target audience had with the client’s brand.

“[The CEO] dismissed them as analytics kids who didn’t know what they were talking about and proceeded to launch the campaign,” said Ruchlewicz. “Only later, after millions of dollars in spending (with no results to show for it), did the CEO allow them to make their case and implement their recommendations.”

Ultimately, their recommendations turned the campaign around, Ruchlewicz said.

“I wish this was a one-off story. It’s not. I wish this was confined to ‘old school’ companies. It’s not,” said Ruchlewicz. “Until analytics teams are given a seat at the table where decisions are made, analytics will continue to be undervalued and underappreciated across the entire organization.”

Analysts have to earn respect like anyone else, however. That requires communicating to business professionals in business terms.

“Executives and investors today are hyper-focused on the bottom line, and most that I’ve interacted with perceive analytics as a line item expenditure,” said Ruchlewicz. “[A]nalytics professionals need to take the first step toward resolution. There are several methods that allow the creation of a rigorous, defensible first approximation, which is sufficient to get the conversation started (and usually, some data shared).”

To help turn the tide, analytics practitioners are well-advised to present information and construct business cases around their activities.

More Consistency

If everyone in the organization used the same terminology for everything, always had the right database fields accessible, and always entered data correctly and in the same manner, some enterprise data would be much cleaner than it is today. However, the problem doesn’t stop there.

“If a person says, ‘I want an analytical tool,’ how do you group that and do trending on it when a person may call it one of 100 different analytical tool names, or they’ll say I need to do analysis on data? The words they submit are often different from what they actually want,” said Alberta Health Services’ Tutt.

Tutt and his team are endeavoring to better understand what people are requesting in service desk tickets so the company can manage its software investments more effectively. Now that his team has access to the different systems, they know who’s using a product and when they used it. They’re looking at the problem from a Robotic Process Automation (RPA) perspective, so software can be automatically removed if it hasn’t been used in a certain time period.
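
The removal rule itself can be simple. Here’s a sketch with invented usage records and an arbitrary 180-day threshold, the kind of check an RPA bot could run on a schedule:

```python
from datetime import date, timedelta

# Hypothetical last-use records pulled from usage-tracking systems.
last_used = {
    ("j.smith", "Tableau Desktop"): date(2017, 2, 1),
    ("j.smith", "Visio"):           date(2017, 11, 20),
    ("a.jones", "Tableau Desktop"): date(2017, 12, 2),
}

UNUSED_AFTER = timedelta(days=180)  # arbitrary threshold
today = date(2017, 12, 15)

# Licenses an RPA bot could queue for automatic removal.
to_reclaim = [(user, app) for (user, app), last in last_used.items()
              if today - last > UNUSED_AFTER]
print(to_reclaim)  # [('j.smith', 'Tableau Desktop')]
```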

More Power to Effect Change

Industry analysts are pushing back on “data-driven” mantras because they think companies should be “insight-driven.” While they have a valid point, insights without action have little value.

For example, a large U.S. health provider has a massive analytics team that’s generating highly actionable insights, but those insights are not being acted upon by the business. They can meet with a functional unit such as risk or compliance and show them insights. The operating unit will say, “That’s interesting,” but there’s no way to connect insights and action.

“The data teams are frustrated because they’re not getting the operational support they need,” said Adam Nathan, CEO and Founder of analytics strategy firm The Bartlett System. “The data teams don’t know how to drive that, except to get frustrated and quiet and get more value elsewhere. I think the tipping point will come when the company realizes it’s falling behind competitors. They’ll realize the company isn’t getting the value it could from analytics and that will put pressure on them to do something with those insights.”
