If asked to estimate the amount of data used on a daily basis, we’d struggle to come up with an accurate answer. And therein lies the problem. With so much data available, the real challenge lies in finding realistic ways to manage the sheer volume of information.
Many businesses now use this abundance of data to make critical decisions as a matter of standard practice. But God really is in the detail. A very small mistake in the information used can lead to major faults in the decisions it influences.
One such small error recently caused an upset for the Australian Bureau of Statistics (ABS), the banking industry and the government as a whole, when it came to light that data on first homebuyers in Australia was incorrect.
Prior to the ABS analysis, a number of banks changed the way they classified their mortgage customers. Unaware of the change in measurement, the ABS, which collected and used this information, continued to report the figures under its old metric definition.
Based on numbers supplied by the ABS, politicians became alarmed over an apparent looming crisis in first-home ownership, spurring hundreds of public servants to review and reset their policies. This led the government to invest millions of dollars in correcting a problem that didn’t exist, all thanks to a single metric definition error.
The lesson to be learned here is two-fold. The first is that big mistakes can arise from very small oversights. The fact that a government body like the ABS is capable of making such a simple error should highlight just how vulnerable any company can be.
Secondly, this example highlights why using a single metric to inform business decisions is fraught with danger. Best practice is to use different sources of information across a number of market factors to create indices.
As an organisation, we consistently apply our ‘Ten Measurement Tests’ (see below) to ensure we’re covering all aspects of a business question and filtering through to what’s really important for any business.
Our 10 measurement tests for any business data
- The truth test. Are we really measuring what we set out to measure?
- The focus test. Are we only measuring what we set out to measure?
- The relevancy test. Is it the right measure to measure the performance we want to track?
- The consistency test. Will the data always be collected in the same way whoever measures it?
- The access test. Is it easy to locate and capture the data needed to make the measurement?
- The clarity test. Is any ambiguity possible in interpreting the results?
- The so-what test. Can and will the data be acted upon, i.e. is it actionable?
- The timeliness test. Can the data be accessed rapidly and frequently enough for action?
- The cost test. Is the measure worth the cost of measurement?
- The gaming test. Is the measure likely to encourage undesirable or inappropriate behaviours?
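For teams that want to make this review repeatable, the ten tests above can be encoded as a simple checklist that flags which tests a proposed metric fails. This is a minimal sketch, not the author’s tooling; the `review_metric` function and all names in it are hypothetical, for illustration only. Note that for the clarity and gaming tests, “no” is the desirable answer, so each test records which answer counts as a pass.

```python
# Hypothetical sketch: the ten measurement tests as a reusable checklist.
# Each entry is (name, question, pass_on_yes) -- pass_on_yes is False for
# the clarity and gaming tests, where a "yes" answer signals a problem.
TESTS = [
    ("truth", "Are we really measuring what we set out to measure?", True),
    ("focus", "Are we only measuring what we set out to measure?", True),
    ("relevancy", "Is it the right measure for the performance we want to track?", True),
    ("consistency", "Will the data always be collected in the same way, whoever measures it?", True),
    ("access", "Is it easy to locate and capture the data needed?", True),
    ("clarity", "Is any ambiguity possible in interpreting the results?", False),
    ("so-what", "Can and will the data be acted upon?", True),
    ("timeliness", "Can the data be accessed rapidly and frequently enough for action?", True),
    ("cost", "Is the measure worth the cost of measurement?", True),
    ("gaming", "Is the measure likely to encourage undesirable behaviours?", False),
]

def review_metric(answers):
    """Given {test_name: True/False} answers, return the names of failed tests.

    A test fails if it was not answered or the answer is not the passing one.
    """
    failed = []
    for name, question, pass_on_yes in TESTS:
        answer = answers.get(name)
        if answer is None or answer != pass_on_yes:
            failed.append(name)
    return failed
```

A metric only goes forward if `review_metric` returns an empty list; anything it returns is a specific question to resolve before the measure is trusted.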
At the end of the day, effective use of data takes more than just reporting the numbers you get. Ensuring your measurement reflects reality gives you truthful insights and the information you need to take the right action at the right time.
Article originally appeared on CMO, guest written by Luke Brown