What would your home look like if you let dirt and mess accumulate for years? It would be a health hazard, and it would make it impossible to find what you need when you need it most. Eventually, you would reach a point where the problem simply couldn’t be overlooked. This is the situation many plant managers face after accumulating huge quantities of manufacturing data over the years. Here Neil Ballinger, head of EMEA at automation parts supplier EU Automation, explains the concept of data cleansing.

By implementing a data-driven company culture, manufacturers can improve virtually every aspect of production. Big data can be used, among other things, to maximise energy efficiency, improve the business’s predictive maintenance strategy, and prevent downtime caused by equipment failure. To do this, manufacturers need accurate and reliable data.

But when data is collected and accumulated for several years, its quality can start to decline. Dirty or rogue data is data affected by issues such as duplicates, inaccuracies, inconsistencies, and out-of-date information. When plants reach this point, it’s time for a good clean-up.

Not the exception

Dirty data is the norm, not the exception. As companies evolve, the data they collect grows in both volume and complexity. High employee turnover, the use of different enterprise resource planning (ERP) solutions across several departments, and a lack of standard guidelines for data entry complicate the situation. For these reasons, achieving perfect data is almost impossible, especially in large organisations.

Data cleansing, or cleaning, is the process of detecting and correcting or eliminating incomplete, inaccurate, out-of-date or irrelevant data. It differs from data validation in that validation is performed automatically by the system at the time of data entry, whereas cleansing is done later, on batches of data that have become unreliable.
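To make the distinction concrete, here is a minimal sketch of an entry-time validation check in Python; the field names and limits are purely illustrative. Validation rejects bad input the moment it is entered, whereas cleansing corrects data that has already slipped through.

```python
def validate_entry(machine_id: str, temperature_c: float) -> None:
    """Hypothetical entry-time validation: reject bad input immediately."""
    if not machine_id:
        raise ValueError("machine_id is mandatory")
    if not -40 <= temperature_c <= 200:  # illustrative plausible sensor range
        raise ValueError("temperature_c outside plausible sensor range")
```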

Many data cleansing tools are available, such as Trifacta, Openprise, WinPure and OpenRefine. It’s also possible to use libraries such as pandas for Python or dplyr for R. The variety of solutions on the market means that manufacturers may want to consult a data analyst to choose the best one for their business case.
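As a taste of what library-based cleansing looks like, here is a minimal pass with pandas; the file name, column names and five-year cut-off are assumptions made for illustration, not part of any particular plant’s setup.

```python
import pandas as pd

# Hypothetical export of maintenance records; column names are illustrative
df = pd.read_csv("maintenance_records.csv", parse_dates=["last_service"])

df = df.drop_duplicates()                                    # remove exact duplicates
df["machine_id"] = df["machine_id"].str.strip().str.upper()  # fix inconsistent IDs
df = df.dropna(subset=["machine_id", "last_service"])        # drop incomplete rows

# Flag out-of-date records, e.g. machines not serviced in the last five years
cutoff = pd.Timestamp.now() - pd.DateOffset(years=5)
df = df[df["last_service"] >= cutoff]

df.to_csv("maintenance_records_clean.csv", index=False)
```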

How dirty, exactly?

Regardless of the solution employed and the type of data being cleansed, the first step is to assess the quality of the existing data. In this phase, a data analyst will review the company’s needs and establish specific KPIs for clean data. Legacy data is then audited using statistical and database methods to reveal anomalies and inconsistencies.

This can be done using commercial software that allows the user to specify various constraints. The existing data will be uploaded and tested against these constraints, and data that doesn’t pass the test should be cleansed. 
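A minimal sketch of such a constraint test, again assuming pandas and illustrative column names and ranges; commercial tools express the same checks through configuration rather than code.

```python
import pandas as pd

def audit(df: pd.DataFrame) -> pd.DataFrame:
    """Return the rows that violate any of the illustrative constraints."""
    violations = (
        ~df["temperature_c"].between(-40, 200)               # out-of-range reading
        | df["machine_id"].isna()                            # missing mandatory field
        | df.duplicated(subset=["machine_id", "timestamp"])  # duplicate record
    )
    return df[violations]

# Rows returned by audit() fail the test and are candidates for cleansing
```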

During this phase, manufacturers should establish which input fields must be standardised across the company. Standardisation rules can help businesses prevent the build-up of dirty data in that they minimise inconsistencies and facilitate the uploading of clean data into a common ERP.
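One way to encode such standardisation rules is a simple mapping from the free-text values each department uses to a single canonical value; the field names and mappings below are hypothetical.

```python
import pandas as pd

# Illustrative rules: map inconsistent free-text entries to canonical values
STATUS_MAP = {"ok": "operational", "running": "operational",
              "down": "faulty", "broken": "faulty"}

def standardise(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    status = df["status"].str.strip().str.lower()    # normalise case and spacing
    df["status"] = status.map(STATUS_MAP).fillna(status)
    # Enforce one date format across departments before loading into the ERP
    df["report_date"] = pd.to_datetime(df["report_date"], dayfirst=True).dt.date
    return df
```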

Keep it clean

After the audit, the cleaning process can begin. Data will pass through a series of automated software programs that discard whatever is not compliant with the specified KPIs. The result is then tested for correctness, and incomplete data is amended manually where possible. A final quality control phase ensures that the output data is clean enough to be seamlessly uploaded into the chosen ERP.
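Pulling the pieces together, the whole batch process might be sketched as follows, reusing the hypothetical audit() and standardise() helpers from the earlier examples; records that fail the KPIs are routed to a manual amendment queue rather than silently discarded.

```python
import pandas as pd

def cleanse(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Split a batch into KPI-compliant records and ones needing manual review."""
    df = standardise(df)          # apply the standardisation rules
    bad = audit(df)               # rows that violate the constraints
    clean = df.drop(bad.index)
    # Final quality control: no mandatory field may be missing in the output
    assert clean["machine_id"].notna().all()
    return clean, bad             # 'bad' goes to a manual amendment queue
```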

However, just as with cleaning our homes, a big clean-up every now and then is not enough. The best approach is to implement a culture of continuous data improvement, distributing responsibilities across the team. Developing practices that support ongoing data hygiene is the key to success.

For more information on how to use big data to optimise your business, visit www.euautomation.com.