Data. An endless supply of information wanted by everyone in the business. But are we ensuring that we receive (or provide) data in the correct format, in the required volume, and that the data itself is correct?
These types of questions occur regularly in businesses with over 250 employees, which now (since April 2018) have to report on their Gender Pay Gap on an annual basis. They must monitor the gap each year and report the steps they are taking to reduce it, requiring a significant amount of data gathering and analysis. In the companies we have worked with, data capture, data sources and outputs were sore points that required quick review to ensure that the core data was of high quality and could be presented back in the most effective way. As a result, regular exception reports and dashboards have become embedded in the clients we work with.
In addition to Gender Pay Gap reporting, there is a constant requirement for data across businesses to enable better informed decisions, which means those who hold the keys to the data must consider the quality of the information they receive for analysis. More often than not, poor quality data is the downfall.
With so much important data required, could poor quality data ever be a good thing?
Identifying an anomaly or error provides an opportunity to implement a process for catching these types of issue in future. There should also be a series of checks, such as the key points we look at when working with our clients:
Data Collection and Input – cataloguing processes that have direct user input or data transferred between systems. Are automatic updates from one system to another failing to occur, or is incorrect data being replicated? Are users trained sufficiently on how to use the system and on the information they should be entering? This presents an opportunity to increase the skills of the workforce while also looking for opportunities to streamline data entry.
Validity of Information – analysing if and how validation occurs. Is there an approval process before information is locked in, is the approver the correct person, and are they dedicating the correct amount of time to the process? The answers to these questions can identify key areas where data quality is poor.
Assumptions – are there scenarios where assumptions have been made about data that is automatically calculated or populated by rules applied some time in the past? Listing these rules allows us to review them regularly and ensure they remain correct.
Timescales – assessing the end-to-end process in terms of timing. By looking at the timescales set for each area of the process, we can start to see whether any areas need their timescales extended or reduced. This helps us avoid situations where data is entered hurriedly at the last minute and may not be accurate.
Knowing the data is wrong is half the battle; finding the cause of poor quality data is a demanding task. To avoid data issues, we work with our clients to implement exception reporting, usually in the form of report packs or dashboards, specifically to find poor quality data before it is released.
Having a set of exception reports, or dashboards with real-time information, detailing the source of your data, the processes it passes through and all the systems that draw on it can reveal the weak links in the business intelligence chain. This can ultimately help you overcome any issues with poor data, so that you can implement the right systems for your business and the outputs you require.
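As an illustration, the kind of exception check described above can be sketched in a few lines of code. This is a minimal sketch only; the dataset, field names and quality rules below are hypothetical and not taken from any client system:

```python
# Minimal sketch of an exception report: flag records that fail basic
# data quality checks before the data is released for reporting.
# The field names (employee_id, gender, hourly_rate) are illustrative.

records = [
    {"employee_id": "E001", "gender": "F", "hourly_rate": 18.50},
    {"employee_id": "E002", "gender": "",  "hourly_rate": 21.00},  # missing gender
    {"employee_id": "E003", "gender": "M", "hourly_rate": -5.00},  # invalid rate
    {"employee_id": "E003", "gender": "M", "hourly_rate": 19.75},  # duplicate id
]

def exception_report(rows):
    """Return a list of (employee_id, issue) pairs for failing records."""
    exceptions = []
    seen_ids = set()
    for row in rows:
        if not row["gender"]:
            exceptions.append((row["employee_id"], "missing gender"))
        if row["hourly_rate"] <= 0:
            exceptions.append((row["employee_id"], "non-positive hourly rate"))
        if row["employee_id"] in seen_ids:
            exceptions.append((row["employee_id"], "duplicate employee id"))
        seen_ids.add(row["employee_id"])
    return exceptions

for emp_id, issue in exception_report(records):
    print(f"{emp_id}: {issue}")
```

In practice the same idea scales up: the checks become the catalogue of validation rules built during the review, and the output feeds a report pack or dashboard rather than being printed.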
So perhaps the old saying is true after all: sometimes you do have to be wrong to be right, or at least being wrong can help you get to the right solution.
For more information please contact Stephen Lawie (Stephen.email@example.com) or your usual AAB contact.
To find out more about the Systems and Process reviews team, click here.