How to refine data capture and collection for implementing predictive maintenance in aviation?

According to the latest MRO Survey conducted by Oliver Wyman (2018), many operators face the challenges of rising material costs, pressure on labour productivity, and future labour shortages. Predictive maintenance is seen as a means to combat these challenges, and more than 70% of respondents want to implement it within the coming three years.

However, one crucial thing to consider is that for predictive maintenance to be effective, data capture and collection need to be more refined (Oliver Wyman, 2017). Airlines are collecting large amounts of data, but the data is often ‘dirty’, disconnected, and/or fragmented, requiring considerable additional preparation before it can be turned into useful information. Another general challenge seems to be the fact that airlines do not have the time, available resources, and knowledge needed to benefit from the data in a meaningful and fully integrated way. Finally, some still have to interact with a ‘dumb’ system and thus cannot achieve the desired operational impact (Oliver Wyman, 2017). Thus, today we would like to focus on how airlines can tackle two of these challenges:

The ‘dumb’ system

‘Dirty’ data and proper data collection

Tackling the first challenge: The 'dumb' system

Starting with the system, one can either upgrade the current system or replace it entirely. In both cases, it is important to consider a few parameters so that the system works effectively with big data and predictive maintenance.

In our experience, it is unrealistic to expect aviation MRO / ERP vendors to focus on implementing complex algorithms for predictive maintenance and analytics within their ERP framework. ERP systems are designed to integrate and coordinate functions within the Aviation Maintenance Ecosystem, not necessarily to enhance the analysis of the data being captured. They continue to evolve by designing and implementing features that reduce the manual labour associated with the data capture and entry process, and rightfully so.

The following factors should be considered when upgrading or implementing a system:

  • Ease of data capture (cluttered screens with data fields versus workflow-driven systems) – helps compare productivity (time spent doing work versus time spent updating records and forms)

  • Functional breadth (more often than not, in our experience, we find most MRO software falling short in Engineering & Inventory-related functions – e.g. ease of recording and assessing mass effectivity, impact and compliance of complex SBs and MODs, AD / SB impact on components, inventory valuation methods – LIFO vs FIFO vs weighted averages)

  • Scalability – modular upgrade based on expansion in scope and size of your airline / MRO

  • Conformity to ATA standards for data exchange (in and out) – this is key when data is being shared with your supply chain vendors (e.g. procurement and warranty information) or maintenance service providers (work pack compliance, components removed and installed, etc.)

  • Dashboards and Alerts for faster decision making (a simple alert-rule sketch follows this list)

  • Pre-built reports repository (Airworthiness, Inventory, Finance, Maintenance, etc.)

  • Pre-built interfaces with external Flight Operations, Finance / HR / SMS / QA systems (e.g. ability to read and process ACARS data) and standard Vendor Portals (Aeroxchange, Locatory, PartsBase, etc.)

  • Interfacing with OEM portals (Boeing Toolbox, Airbus Airman)

  • Electronic Signatures (these do not eliminate the need for DFPs, as a lot of CAAs still insist on them) as a way of moving towards a paperless environment

  • Cross-platform access (mobiles, tablets, desktops, PCs) with role-based security

  • Technology support – Bar Coding (printing and reading stock labels), RFID (parts / tools tracking), etc.
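
As an illustration of the 'Dashboards and Alerts' factor above, the sketch below shows how a simple due-list alert rule could flag tasks approaching their calendar, flight-hour or flight-cycle limits. The task data, field names and alert margins are illustrative assumptions, not taken from any particular MRO/ERP system.

```python
# Minimal sketch of a due-list alert rule for a maintenance dashboard.
# Task names, field names and margins are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date

@dataclass
class MaintenanceTask:
    task_id: str
    description: str
    next_due_date: date   # calendar limit
    next_due_fh: float    # flight-hour limit
    next_due_fc: int      # flight-cycle limit

def is_alert(task: MaintenanceTask, today: date, current_fh: float, current_fc: int,
             days_margin: int = 30, fh_margin: float = 100.0, fc_margin: int = 50) -> bool:
    """Flag a task when any of its limits falls inside the alert margin."""
    return (
        (task.next_due_date - today).days <= days_margin
        or task.next_due_fh - current_fh <= fh_margin
        or task.next_due_fc - current_fc <= fc_margin
    )

if __name__ == "__main__":
    task = MaintenanceTask("32-100-01", "MLG shock strut servicing",
                           date(2024, 7, 1), 24500.0, 18200)
    # True: both the calendar and flight-hour limits are within their margins
    print(is_alert(task, date(2024, 6, 15), 24480.0, 18000))
```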

There is no one-size-fits-all solution, and choosing a system is a balance between how well it fits into your processes and how far the organisation wants to move towards industry best practices.

Tackling the second challenge: 'Dirty' data & data collection

The second challenge is ‘dirty’ data and proper data collection. Here we focus on the following questions:

What steps do airlines need to take to improve their data collection?

How can you clean your ‘dirty data’?

What steps should be taken to keep data quality high?

Data Collection

Data collection issues and inconsistencies in data quality arise for various reasons, ranging from an IT landscape consisting of multiple systems / data sources to inadequate training of end users in using the system. The most obvious reason is manual data entry: systems designed to inherently auto-fill data (based on pre-defined drop-downs, bar code label reading, etc.) see fewer data inconsistencies than systems with numerous fields requiring manual input. User bases are rarely uniform, and the greater the number of users accessing the system, the more inconsistent the captured data points become. A simple example is that of systems supporting standard coding of ATA chapters, associated defect descriptions and rectification actions: instead of keying in most of this data, users choose it from pre-defined templates, which prevents far-reaching consequences from mismatches between defects and their respective ATA chapters. Such inconsistencies are one of the reasons why, for example, a lot of reliability data has to be extracted out of the system, sanitized and only then reported on – a process that can take weeks. More serious consequences include inconsistencies in the calculation of last-done and next-due values, which can call into question the very airworthiness status of the aircraft.
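
To illustrate the template idea, the following minimal sketch checks a manually entered defect record against a pre-defined set of ATA chapter keywords and flags likely mis-codings. The chapter keyword lists and record fields are assumptions made purely for demonstration.

```python
# Illustrative check of a manually entered defect record against pre-defined
# ATA chapter keywords. Keywords and record fields are demonstration assumptions.
ATA_KEYWORDS = {
    "21": {"pack", "bleed", "cabin temperature", "air conditioning"},
    "29": {"hydraulic", "reservoir", "pump"},
    "32": {"landing gear", "brake", "tyre", "shock strut"},
}

def flag_ata_mismatch(record: dict) -> bool:
    """Return True when the defect text contains no keyword of its recorded ATA chapter."""
    keywords = ATA_KEYWORDS.get(record["ata_chapter"], set())
    text = record["defect_description"].lower()
    return not any(keyword in text for keyword in keywords)

records = [
    {"ata_chapter": "32", "defect_description": "Right brake wear indicator at limit"},
    {"ata_chapter": "21", "defect_description": "Hydraulic reservoir quantity low"},  # likely mis-coded
]
for r in records:
    if flag_ata_mismatch(r):
        print("Review ATA coding:", r)
```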

Alternatively, data captured from external sources (e.g. aircraft phase-ins, component removals and installations from a 3rd party MRO that performed a C check on one of your aircraft, or procurement information from supply chain vendors) can be standardized to conform to ATA specifications. While the industry offers numerous data standards across most of the key areas within an airline, adoption has been painstakingly slow. There is significant benefit in adopting these industry standards to improve overall data collection: OEM data requires less customisation and can therefore be imported directly. Furthermore, standards promote better, and more automated, data exchange with OEMs, vendors/suppliers, partners and other airlines, reducing the need to manually re-enter order information, for example, and thus reducing the overall risk of human error.
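
As a simplified illustration of this point, the sketch below normalises component removal records received in two hypothetical vendor formats into one internal schema before import; a real exchange would follow ATA Spec 2000-style definitions rather than the invented field names shown here.

```python
# Simplified sketch of normalising externally received component removal records
# into one internal schema before import. The vendor formats and field names are
# hypothetical; real exchanges would follow ATA Spec 2000-style definitions.
from datetime import datetime

FIELD_MAP = {
    "vendor_a": {"pn": "part_number", "sn": "serial_number", "rem_date": "removal_date"},
    "vendor_b": {"PartNo": "part_number", "SerialNo": "serial_number", "RemovalDate": "removal_date"},
}

def normalise(record: dict, source: str) -> dict:
    """Rename source-specific fields and parse dates into the internal format."""
    mapping = FIELD_MAP[source]
    out = {internal: record[external] for external, internal in mapping.items()}
    out["removal_date"] = datetime.strptime(out["removal_date"], "%Y-%m-%d").date()
    return out

print(normalise({"pn": "3214556-3", "sn": "XYZ123", "rem_date": "2024-05-02"}, "vendor_a"))
```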

Data Cleansing

Cleansing dirty data starts with identifying what is actually “dirty data” and what is “good data”. Bulk (automated) data verifications should be able to make a rough segregation between the two. Once the actual problems in the data are identified, methods can be defined for the actual cleansing. An important consideration here is how much of the historical data is still relevant to the current operation; this historical data is often the main culprit behind “dirty data”. Defining hard rules on which historical data remains in scope focuses the cleansing efforts and significantly reduces the overall scope of work.
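
A minimal sketch of such a bulk verification pass is shown below; the check rules and field names are illustrative assumptions and would need to be tailored to the actual system.

```python
# Minimal sketch of a bulk verification pass that roughly segregates "good" from
# "dirty" records. Check rules and field names are illustrative assumptions.
from datetime import date

def checks(rec: dict) -> list[str]:
    """Return a list of problems found in one record (empty list means 'good')."""
    problems = []
    if not rec.get("serial_number"):
        problems.append("missing serial number")
    if rec.get("last_done") and rec.get("next_due") and rec["next_due"] <= rec["last_done"]:
        problems.append("next-due not after last-done")
    if rec.get("last_done") and rec["last_done"] > date.today():
        problems.append("last-done date in the future")
    return problems

records = [
    {"serial_number": "SN001", "last_done": date(2024, 1, 10), "next_due": date(2025, 1, 10)},
    {"serial_number": "", "last_done": date(2030, 3, 1), "next_due": date(2024, 3, 1)},
]
good = [r for r in records if not checks(r)]
dirty = [(r, checks(r)) for r in records if checks(r)]
print(len(good), "good records;", len(dirty), "flagged for cleansing")
```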

In general, there is no right or wrong way of approaching this, but every user of the system must take responsibility for the data being captured and entered into the system. In the world of aviation, where time equals money, stressful days and environments inadvertently lead to hurried data entry, which increases the chance of errors.

Periodic data audits, driven internally by the Quality team or externally by consultants, must be implemented. It is important that the team involved in cleaning the data not only possesses knowledge of the fleet type, but also, to an extent, knowledge of the systems in place. The larger the size and scope of the data being captured, the higher the audit frequency must be. These audits also need to prioritize airworthiness-critical data over non-airworthiness data.

Usually this process also involves the implementation of data quality / cleansing tools that bridge with the systems in place and act as a framework to clean the data using advanced machine learning techniques (e.g. identification of inconsistencies between ATA chapter and defect description, part classification when receiving parts, etc.).
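
As a rough sketch of one such machine-learning check, the example below trains a simple text classifier (assuming scikit-learn and a labelled history of defect descriptions) and flags records whose recorded ATA chapter disagrees with the predicted one; the training data is purely illustrative.

```python
# Rough sketch of an ATA-vs-description consistency check using a simple text
# classifier. Assumes scikit-learn and a labelled history; data is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

history_texts = [
    "cabin temperature high, pack trip",        # ATA 21
    "hydraulic reservoir quantity low",         # ATA 29
    "brake wear indicator at limit",            # ATA 32
    "air conditioning pack fault light",        # ATA 21
    "hydraulic pump low pressure",              # ATA 29
    "tyre pressure low, nose landing gear",     # ATA 32
]
history_ata = ["21", "29", "32", "21", "29", "32"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(history_texts, history_ata)

new_record = {"ata_chapter": "21", "defect_description": "left hydraulic pump leaking"}
predicted = model.predict([new_record["defect_description"]])[0]
if predicted != new_record["ata_chapter"]:
    print(f"Possible mis-coding: recorded ATA {new_record['ata_chapter']}, "
          f"description looks like ATA {predicted}")
```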

Keeping data quality high

Implementing a combination of systems requiring minimal manual data input and scheduling periodic data audits is the best approach to keeping data quality high. Refresher training on system usage is another must to ensure users follow best practices while recording transactions in the system. External audits additionally tend to remove biases that internal reviews can easily overlook.

However, it has to be noted that the most significant improvement will always come from a culture change within the organisation that promotes data discipline and regular checks/clean-ups. During the implementation of data-driven systems, it is important to involve and get the full support of the end users in the organisation. In the long term, these users need to feel ownership of the system, which will automatically induce better data discipline and quality.

Ultimately, data quality is the cornerstone of achieving reliable predictive maintenance capabilities. Since predictive maintenance feeds this data into algorithms, pouring low-quality data in will result in low-quality predictions. Conversely, if accurate and complete data is fed into predictive maintenance algorithms, you will receive reliable and accurate outcomes from your prediction models.
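
As a minimal illustration of this 'garbage in, garbage out' point, the sketch below gates a predictive pipeline on a simple completeness score before any data reaches a model; the required fields and threshold are assumptions, not an industry standard.

```python
# Illustrative quality gate: compute a simple completeness score and only pass
# the data to a predictive model if it meets a threshold. The required fields
# and the 95% threshold are assumptions, not an industry standard.
REQUIRED_FIELDS = ["part_number", "serial_number", "removal_date", "flight_hours"]

def completeness(records: list[dict]) -> float:
    """Share of required fields that are actually populated across all records."""
    filled = sum(1 for r in records for f in REQUIRED_FIELDS if r.get(f) not in (None, ""))
    return filled / (len(records) * len(REQUIRED_FIELDS))

def ready_for_prediction(records: list[dict], threshold: float = 0.95) -> bool:
    return completeness(records) >= threshold

records = [{"part_number": "3214556-3", "serial_number": "", "removal_date": None, "flight_hours": 12450}]
print(ready_for_prediction(records))  # False: too many gaps to trust the model output
```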

Are you also faced with some of the above challenges and would you like to receive more detailed advice?
