
In-field crack validation data: can we trust it?

Published by the Senior Editor,
World Pipelines,

Tom Oldfield, Service Manager – Field Verification, ROSEN, UK, finds the most efficient way to understand the quality of field verification while validating ILI system performance.


Validating inline inspection (ILI) system performance is a critical element of any ILI campaign. It ensures that the integrity decision-making based on the data collected is realistic, considering both the classification and the sizing tolerances stated in the inspection system’s performance specification. Nowhere is this task more important than in validating crack sizing, due to the difficulty of predicting the behaviour of crack-like defects and the challenges in measuring these features.

The ILI crack sizing technologies of ultrasonic (UT) and electromagnetic acoustic transducer (EMAT) are validated and qualified technologies, but the industry is continuously learning and gaining experience. The quality of results across different vendors can therefore vary due to the level of experience with particular feature types, the robustness of the processes, technological limitations and numerous other factors. Therefore, from a safety perspective, it is essential that ILI system validation is performed considering pipeline specifics in order to understand performance on a run-by-run basis and thus confirm that the ILI system performance is within its specification. This validation is now a regulatory requirement in the US, under legislation that explicitly references API 1163. Validation can be done in a number of ways, including benchmarking against previous data, cut-outs and validation spools. The most common, however, is in-field investigation of features reported by ILI. The question we will look to answer is: can we trust the data we get from the field to validate ILI system performance?

Contrary to popular belief, the feature sizes measured in the field are not absolute and do come with a level of tolerance – either known or, more worryingly, unknown. The accuracy and repeatability of measurement techniques used for sizing cracks are heavily dependent on user knowledge and skill, meaning that two technicians using the same equipment, working to the same procedure, can get two very different results depending on their level of experience and competence. Understanding uncertainty in field measurement accuracy is a fundamental challenge to ILI system validation and the quantification of ILI system performance.

In-field crack sizing

The term ‘crack sizing’ encompasses numerous cracking mechanisms with different surface and subsurface morphologies. As a result, and due to the limitations of the various techniques, there is no single field-deployable technology that is appropriate for all cracking types. Further complicating matters, results collected in-field are heavily reliant on the skill and experience of the technician taking the measurements.

Combining these factors means that there can be considerable variation in in-field sizing accuracies. If these measurements are then used without understanding their tolerance or accuracy to prove or disprove ILI performance, this can clearly cause challenges in understanding the real problem.

ILI system performance

The published ILI system performance specification of a given ILI service provides a statistical basis for its detection, classification and sizing capability. ILI service performance specifications are created and refined through extensive testing of representative anomaly samples in small-scale (laboratory trials), full-scale (pull tests) and real-pipeline environments. There are, however, only a finite number of scenarios that can be considered as part of a development programme. Real-life variations in run condition, line cleanliness and defect morphology, to name a few, can all affect the ability to meet the detection and sizing performance stated in the ILI service specification.
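To make the statistical nature of such a specification concrete, the sketch below shows how a stated sizing tolerance at a given certainty can be interpreted as an error distribution. The figures used (a hypothetical ±1.0 mm tolerance at 80% certainty) are illustrative assumptions, not values from any vendor's specification, and the calculation assumes normally distributed, zero-bias sizing errors.

```python
from statistics import NormalDist

# Hypothetical spec: sizing tolerance of +/-1.0 mm at 80% certainty.
tol_mm = 1.0
certainty = 0.80

# Assuming normally distributed sizing errors with zero bias, the z-score
# that bounds the central 80% of the distribution is inv_cdf(0.90).
z = NormalDist().inv_cdf(0.5 + certainty / 2)  # ~1.2816
sigma = tol_mm / z

print(f"implied standard deviation: {sigma:.3f} mm")  # ~0.780 mm
```

Read this way, a specification is a statement about the spread of sizing errors expected across many features, which is why validating it requires a population of field comparisons rather than a single dig.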

API 1163

API 1163 gives guidance on how to use field results to validate ILI system performance. It states, “ignoring field measurement inaccuracies is generally conservative but may be overly conservative when evaluating ILI sizing performance”. This phrase alludes to the fact that the tolerance of field measurement is understood to be an issue, but that it can be difficult to quantify. Ignoring the tolerance of the field measurement may well indicate that...
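The effect described above can be sketched numerically. The example below compares ILI-reported and field-measured crack depths in two ways: treating the field measurement as absolute truth (the conservative reading), and widening the acceptance band by combining the ILI and field tolerances in quadrature, one common way of accounting for field uncertainty. All depths and tolerances here are invented for illustration; this is not a prescribed API 1163 procedure, only a sketch of the underlying idea.

```python
import math

# Illustrative (hypothetical) data: ILI-reported vs field-measured crack depths, mm.
ili_depth   = [2.1, 3.4, 1.8, 4.0, 2.9]
field_depth = [2.6, 2.3, 2.5, 2.9, 3.5]

TOL_ILI   = 1.0  # assumed ILI sizing tolerance, mm
TOL_FIELD = 0.6  # assumed field (NDE) measurement tolerance, mm

def within(ili, field, tol):
    """True if the ILI-vs-field discrepancy falls inside the tolerance band."""
    return abs(ili - field) <= tol

# Conservative check: treat the field measurement as absolute ground truth.
naive = sum(within(i, f, TOL_ILI) for i, f in zip(ili_depth, field_depth))

# Combined check: widen the band by the field tolerance (root-sum-square).
tol_combined = math.hypot(TOL_ILI, TOL_FIELD)
combined = sum(within(i, f, tol_combined) for i, f in zip(ili_depth, field_depth))

print(f"within ILI tolerance alone: {naive}/{len(ili_depth)}")    # 3/5
print(f"within combined tolerance:  {combined}/{len(ili_depth)}")  # 5/5
```

In this invented sample, two features that would appear to fail the ILI specification when the field reading is taken at face value fall comfortably inside the band once the field tolerance is acknowledged, which is exactly the over-conservatism the standard warns about.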
