From $3.1 trillion in annual costs for US businesses to ruined customer experiences, bad data quality affects most companies, and only those that take proactive steps will prevent failure and stay afloat.
We have compiled a few golden kernels about the causes and impacts of bad data quality. We hope they help motivate your team and company to start prioritizing good data quality.
In this PDF, you will find info on:
Garbage in, Garbage out.
George Fuechsel is generally credited with coining the term in the late '50s while working as an IBM 305 RAMAC technician.
He was reminding programmers that their machines would not transform bad or incomplete data into valuable results.
If bad data is input, then bad results will be produced.
GIGO is just as prevalent in the world of Big Data and Machine Learning as back then.
Did you know that the No. 1 cause of CRM failure is bad data quality?
Data is estimated to decay at a rate of 30% annually, which could mean thousands of outdated records accumulating each year.
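To put that decay rate in concrete terms, here is a back-of-the-envelope sketch. The starting database size and the three-year horizon are illustrative assumptions, not figures from the research; only the ~30% annual rate comes from the text above.

```python
# Back-of-the-envelope: how many CRM records stay current if ~30% decay each year.
# The starting record count and the time horizon are illustrative assumptions.
records = 100_000          # hypothetical CRM database size
decay_rate = 0.30          # ~30% of records become outdated per year

fresh = records
for year in range(1, 4):
    fresh = fresh * (1 - decay_rate)   # compounding: decay applies to what's left
    stale = records - fresh
    print(f"Year {year}: ~{fresh:,.0f} records still current, ~{stale:,.0f} outdated")
```

After just three years of doing nothing, roughly two-thirds of the original records are stale, which is why "fix it later" gets more expensive the longer it waits.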
Data comes largely from two main sources: machines and humans.
Machine sources include anything from sensors to websites and IoT devices. Machine-generated data is, for the most part, very accurate and easily cleaned, refined, and analyzed.
People-generated data (data entry, text, voice, etc.) is more problematic and prone to error, and requires more attention to improve its quality. The answer is human augmentation.
The impact of bad data on a company is far-reaching, from debilitating sales efficiency to ruining brand reputation.
With 85% of companies believing AI will help them sustain or obtain new competitive advantages, the first step should be to invest in improving data quality; otherwise, AI/ML investments meant to improve business effectiveness are not going to succeed.
According to IBM, bad data quality costs US businesses $3.1 trillion annually.
Data is the foundation of business today, and only businesses built on good foundations will remain strong and competitive.
There are many characteristics of data that influence quality, but we will focus on six major buckets:
There isn't a hierarchy among these dimensions in terms of which has a bigger impact on data quality, but each presents unique challenges for companies depending on their growth and maturity stage.
Businesses are challenged on a daily basis when it comes to each one of these data characteristics.
Companies could be struggling with:
The key is to define what data needs to be collected, how it's stored and what procedures need to exist to maintain high quality.
Continuing to do business and data collection as usual will not improve data quality. In fact, doing nothing or waiting to fix it later will increase costs exponentially.
Bad data affects the entire business from marketing to sales and customer success in the form of:
Again, we are not talking about bad data produced by machines, such as the faulty sensor data that can cause Boeing 737 crashes, but about data from human-involved processes.
We are not equating loss of life to business loss; however, bad data quality can have severe impacts on the health of a business.
Let’s take a deeper dive into the specific impacts.
Poor quality data in your CRM can mislead your marketing and sales teams’ approach to landing and converting opportunities into successful deals.
Bad proposals may be created and delivered as a result of the poor data that served as their foundation, or potential opportunities may be overlooked entirely.
Accurate and relevant opportunities are secured more often when high-quality data fuels the identification, nurturing, and conversion processes.
Business leaders need good data in order to make good decisions (think GIGO). When marketing and sales teams have inaccurate information or outdated data in their databases, they are unable to make efficient decisions and ultimately waste resources.
For example, Forrester conducted research on how bad data quality affects marketing teams; its findings suggest that 21 cents of every media dollar was wasted due to poor data (about $16.5 million in average annual losses for enterprises).
Additional cost estimates:
The result of bad decisions is mistakes, and more mistakes require more time to fix. It becomes a tedious and painful process for the organization when people are unable to trust the data and have to spend resources correcting it.
All teams suffer productivity losses when an organization has poor data quality: 32% of marketing teams' time is spent managing data quality, and 26% of campaigns on average suffer from poor data quality.
When data is incorrectly assessed as accurate, teams can make decisions that have negative consequences such as bad customer support and compliance issues. Sending products to the wrong address or having one customer’s buying and support records split across duplicate contacts can create bad touch points with the brand.
Not sure about you, but I find it completely frustrating when customer service teams don’t have a clear understanding of my purchase history of their products and/or services, and I have to explain to them when and where those events occurred. The customer in these instances should be remunerated for helping the customer success team update their CRM! These bad touch points lead to a poor customer experience and damage the brand’s reputation.
The answer can be found in the 1-10-100 rule: fixing data quality retroactively costs the business 10X more, and potentially 100X more depending on the severity of the situation, than investing in prevention.
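As a rough sketch of the rule's arithmetic: only the 1:10:100 ratios come from the rule itself; the $1 per-record baseline and the record count below are illustrative assumptions.

```python
# 1-10-100 rule sketch: relative cost per bad record at each stage.
# The $1 baseline and record count are illustrative assumptions; only the
# 1:10:100 ratios come from the rule.
PREVENTION_COST = 1      # verify a record at the point of entry
CORRECTION_COST = 10     # clean it up later in batch remediation
FAILURE_COST = 100       # absorb the downstream damage of acting on it

bad_records = 5_000      # hypothetical number of bad records per year
for label, unit_cost in [("prevention", PREVENTION_COST),
                         ("correction", CORRECTION_COST),
                         ("failure", FAILURE_COST)]:
    print(f"{label}: ${bad_records * unit_cost:,}")
```

Under these assumptions, the same 5,000 records cost $5,000 to verify up front but $500,000 once the business has already acted on them.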
The golden ticket that will protect higher profit margins for businesses is to begin proactively improving data quality.
Research suggests 30% of CRM data becomes obsolete annually.
Instead of waiting until your CRM is tiptoeing on the precipice of failure and then throwing one-time savior funds at pulling it back from the edge, businesses should implement processes that keep their databases from approaching the edge in the first place.
The key is to figure out how to build a firewall that ensures only good data is being saved and proactively updated in the company’s database.
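A minimal sketch of what such a firewall could look like at the point of entry, assuming a CRM-style contact record; the field names (`email`, `company`) and validation rules are hypothetical, not part of any specific CRM's API:

```python
import re

# Minimal validation "firewall": reject or quarantine records before they
# reach the database. Field names and rules are illustrative assumptions.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_contact(record: dict) -> list:
    """Return a list of problems; an empty list means the record may be saved."""
    problems = []
    if not EMAIL_RE.match(record.get("email", "")):
        problems.append("invalid or missing email")
    if not record.get("company", "").strip():
        problems.append("missing company name")
    return problems

def firewall(records):
    """Split incoming records into accepted and quarantined batches."""
    accepted, quarantined = [], []
    for rec in records:
        (accepted if not validate_contact(rec) else quarantined).append(rec)
    return accepted, quarantined
```

Quarantining rather than silently dropping bad records lets a human review and repair them, which is the human-augmentation step described earlier.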