In this special feature, Daniel Erickson, Founder and CEO of Viable, discusses how to master the use of qualitative data to drive sales. Dan brings over 15 years of industry experience as a self-taught coder, including positions as CTO of Getable, VP of Engineering at Eaze, and Principal Engineer at Yammer, Inc. Dan believes in building systems and tools that help teams achieve their goals.

Companies have always collected data. From notches on sticks to marks on clay tablets and eventually writing on paper, the act of recording information to inform future actions is as old as commerce itself. Of course, data collection today is on a very different scale. We have reached a point where so much data flows into businesses that merely trying to manage it can harm, rather than increase, its value to an organization.

We’ve all seen countless headlines claiming, “Data is the new gold!” or “Data is the new oil!” And, eager not to miss any chance of taking advantage of it, companies have become veritable data collection machines. However, and I am not the first to point this out, data is like gold and oil in that its value is only realized after it has been processed. With data, it’s this processing, or what happens to the data after it’s collected, that most organizations have yet to get to grips with.

According to Forrester, between 60% and 73% of all data in a company goes unused for analytics. This alone illustrates the lack of fluency I am referring to.

Yet even if we were to use 100% of the available data for analytics, we tend to rely too heavily on quantitative assessments. Organizations are quick to produce hard numbers like units sold and conversion rates. Meanwhile, they leave qualitative data on the table which, if properly analyzed, could reveal a wealth of valuable information.

According to Statista, market research companies generated more than $47 billion in revenue in 2019 in the United States alone. Nearly two-thirds of this spending was on quantitative methods such as online surveys and telephone interviews. Customer satisfaction surveys were the largest category of these expenses.

With a quantitative customer satisfaction survey, you’ll likely find out things like how many customers are happy with the product and whether they would recommend it to others. Now imagine that you are part of the product team. You learn that 60% of customers are satisfied with the product, but only 30% would recommend it to others. Now what? You know you may be doing something right, but then again, maybe not. Not very useful.

Qualitative surveys allow you to go beyond yes and no and ask the most important question: why? In the previous example, you might find that customers love the product, but it’s the after-sales support that falls short of expectations. You now have information that tells you what to do to fix the problem, or more specifically, that you need to work with customer support to resolve the issue.

Adjusting a purely yes/no survey to incorporate a few open-ended questions is quite easy. The challenge – and the reason why so many organizations don’t even try – is parsing these unstructured text-based responses. And that’s just one survey. Imagine all the information organizations are already sitting on: social media mentions, call center transcripts, support emails, chat logs, and more. If analyzed, it could provide game-changing insights. But when it’s just sitting in a data warehouse somewhere, it’s useless.
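To make that concrete, here is a minimal sketch of one way to start pulling themes out of open-ended responses. The sample responses and the choice of scikit-learn are illustrative assumptions, not a prescription; in practice you would point something like this at your own survey exports, support emails, or chat logs, and likely use a more capable NLP pipeline.

```python
# Minimal sketch: group open-ended survey responses into rough themes.
# Assumes responses are already collected as plain strings. The responses
# below are invented examples; scikit-learn (>= 1.0) is just one option.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "Love the product, but support took a week to answer my email",
    "Great features, terrible after-sales support",
    "Setup was confusing and the documentation didn't help",
    "The onboarding flow is confusing for new team members",
    "Support never replied to my ticket",
    "Easy to use once you get past the confusing setup",
]

# Turn free text into TF-IDF vectors, dropping common English stop words.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

# Cluster the responses into a small number of candidate themes.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Print the top terms per cluster so a human can label each theme,
# e.g. "support delays" vs. "confusing setup".
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = [terms[j] for j in center.argsort()[::-1][:5]]
    print(f"Theme {i}: {', '.join(top)}")
```

Even a rough pass like this turns a pile of free text into a short list of labeled themes that a product or support team can act on, which is the point of the “why” questions above.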

The second area where a lack of fluency is consistently demonstrated is how organizations share (or, more accurately, don’t share) data across the organization. What we see most often is closer to “whoever collects it keeps it.” For example, the product team might have usage metrics while the marketing team has app store reviews and the communications team has social media mentions. While each of these divisions may derive some minimal value from its own information, that information would be exponentially more valuable if it could be properly shared and used to inform larger cross-functional decisions and strategies.

While the value of becoming masters of how data is used is clear, the reasons why organizations aren’t striving for it are less so. In the past, this type of analysis would have been prohibitively expensive and/or labor intensive. Today, that is not true. With advances in natural language processing (NLP) and greater availability of low-cost technologies that automate previously manual tasks, it is possible to achieve fluency. Here are three steps you can take to drive the necessary changes within your organization that will set it up for successful data use:

  1. Proactively incorporate qualitative data into every stage of decision-making: from the research phase to product development and post-launch, be sure to ask your customers the kinds of “why” questions that will yield real insights. For example, why are customers drawn to your product? Why do users think the latest update is confusing? Why are reviews overwhelmingly positive for one segment but disappointing for another? Of course, responses are worthless if you can’t understand them, so build in, from the start, the processes you’ll use to analyze responses and act on them.
  2. Present the internal case for automation: a lack of analysis is often a side effect of a lack of resources. But with the increasing availability of cost-effective automation tools, that should be a thing of the past. For example, reading and analyzing data manually can take 30 to 45 seconds per data point. If your organization has 7,000 data points, it would take around 60-90 human hours to process them (see the back-of-envelope sketch after this list). With automation, you can reduce that to minutes. In a cost-benefit analysis, the winner should be clear.
  3. Give everyone in your organization access to qualitative data insights: Data analytics that only the technically skilled user can access is less valuable than data analytics that every business team can use. Find ways to proactively share qualitative data analytics with every type of user, regardless of their technical abilities. Building habits and processes (point 1 of this framework) and embracing automation (point 2) go a long way toward making this possible.
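The 60-90 hour figure in point 2 is simple arithmetic. Here is the back-of-envelope version, using the article’s assumed per-response reading time rather than any measured data:

```python
# Back-of-envelope estimate behind point 2: manually reading one data point
# takes roughly 30-45 seconds, so 7,000 data points cost roughly 58-88
# person-hours. The inputs are the article's assumptions, not measurements.
data_points = 7_000
seconds_per_point = (30, 45)

for s in seconds_per_point:
    hours = data_points * s / 3600
    print(f"{s} s per data point -> {hours:.0f} person-hours")

# 30 s per data point -> 58 person-hours
# 45 s per data point -> 88 person-hours
```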

Although data is not gold, mastering its use can have an alchemy-like effect, bringing immense value to something that previously had little.
