Automated Crack Width Measurement in 3D Models: A Photogrammetric Approach with Image Selection

Zappa, Emanuele
2025-01-01

Abstract

Structural cracks can critically undermine infrastructure integrity, driving the need for precise, scalable inspection methods beyond conventional visual or 2D image-based approaches. This study presents an automated system integrating photogrammetric 3D reconstruction with deep learning to quantify crack dimensions in a spatial context. Multiple images are processed via Agisoft Metashape to generate high-fidelity 3D meshes. Then, a subset of images is automatically selected based on camera orientation and distance, and a deep learning algorithm is applied to detect cracks in the 2D images. The detected crack edges are projected onto the 3D mesh, enabling width measurements grounded in the structure’s true geometry rather than perspective-distorted 2D approximations. This methodology addresses the key limitations of traditional methods (parallax, occlusion, and surface curvature errors) and shows how these limitations can be mitigated by spatially anchoring measurements to the 3D model. Laboratory validation confirms the system’s robustness, with controlled tests highlighting the importance of near-orthogonal camera angles and ground sample distance (GSD) thresholds to ensure crack detectability. By synthesizing photogrammetry and a convolutional neural network (CNN), the framework eliminates subjectivity in inspections, enhances safety by reducing manual intervention, and provides engineers with dimensionally accurate data for maintenance decisions.
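
To illustrate the image-selection step summarized above, the following is a minimal sketch, assuming a simple pinhole model and placeholder thresholds (the function names, the 30° incidence limit, the 0.5 mm GSD limit, and the camera parameters are illustrative assumptions, not values from the paper), of how candidate views might be filtered by camera orientation and distance before crack detection.

```python
# Minimal sketch (not the paper's implementation) of the image-selection step:
# keep only views that see the crack region near-orthogonally, within the
# field of view, and with a ground sample distance (GSD) fine enough for
# crack detection. All thresholds and camera parameters are assumptions.
import numpy as np


def gsd_mm(distance_mm: float, focal_mm: float, pixel_mm: float) -> float:
    """Footprint of one pixel on the surface (simple pinhole approximation)."""
    return pixel_mm * distance_mm / focal_mm


def select_views(cameras, point, normal,
                 focal_mm=24.0, pixel_mm=0.004,
                 max_incidence_deg=30.0, max_gsd_mm=0.5, half_fov_deg=40.0):
    """Return indices of cameras suitable for measuring a crack at `point`.

    `cameras` is a list of (center, optical_axis) pairs in the model frame,
    in millimetres; `normal` is the local surface normal at `point`.
    """
    normal = normal / np.linalg.norm(normal)
    selected = []
    for i, (center, axis) in enumerate(cameras):
        axis = axis / np.linalg.norm(axis)
        ray = point - center                  # camera-to-point vector
        dist = np.linalg.norm(ray)
        u = ray / dist
        # Camera orientation: the point should lie roughly along the optical axis.
        off_axis = np.degrees(np.arccos(np.clip(axis @ u, -1.0, 1.0)))
        # Incidence angle: 0 deg means a perfectly orthogonal view of the surface.
        incidence = np.degrees(np.arccos(np.clip(normal @ (-u), -1.0, 1.0)))
        if (off_axis <= half_fov_deg
                and incidence <= max_incidence_deg
                and gsd_mm(dist, focal_mm, pixel_mm) <= max_gsd_mm):
            selected.append(i)
    return selected


if __name__ == "__main__":
    point = np.array([0.0, 0.0, 0.0])              # crack location on the mesh
    normal = np.array([0.0, 0.0, 1.0])             # local surface normal
    cameras = [
        (np.array([0.0, 0.0, 1000.0]), np.array([0.0, 0.0, -1.0])),    # frontal
        (np.array([2000.0, 0.0, 200.0]), np.array([-1.0, 0.0, -0.1])),  # grazing
    ]
    print(select_views(cameras, point, normal))    # -> [0]
```

In the study itself, the acceptable angle and GSD limits are established through the laboratory detectability tests mentioned in the abstract; the numerical limits in this sketch are placeholders only.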
Keywords: 3D reconstruction; camera orientation; crack detection; crack segmentation; ground sample distance (GSD); photogrammetry
Files in this record:
information-16-00448.pdf (Publisher’s version, open access, Adobe PDF, 3.4 MB)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1293457