Fuh-Gwo Yuan, Sakib Ashraf Zargar, Qiuyi Chen, Shaohan Wang, "Machine
learning for structural health monitoring: challenges and opportunities," Proc.
SPIE 11379, Sensors and Smart Structures Technologies for Civil,
Mechanical, and Aerospace Systems 2020, 1137903 (23 April 2020); doi:
10.1117/12.2561610
Department of Mechanical and Aerospace Engineering, North Carolina State University, Raleigh,
North Carolina 27606, USA
*Corresponding authors, email: yuan@ncsu.edu, szargar@ncsu.edu
ABSTRACT
A physics-based approach to structural health monitoring (SHM) has practical shortcomings that restrict its suitability to
simple structures under well-controlled environments. With the advances in information and sensing technology (sensors and
sensor networks), it has become feasible to monitor a large and diverse number of parameters in complex real-world structures,
either continuously or intermittently, by employing large in-situ (wireless) sensor networks. The availability of this historical data has
engendered a lot of interest in a data-driven approach as a natural and more viable option for realizing the goal of SHM in
such structures. However, the lack of sensor data corresponding to different damage scenarios continues to remain a
challenge. Most of the supervised machine-learning/deep-learning techniques, when trained using this inherently limited
data, lack robustness and generalizability. Physics-informed learning, which involves the integration of domain knowledge
into the learning process, is presented here as a potential remedy to this challenge. As a step towards the goal of automated
damage detection (mathematically an inverse problem), preliminary results are presented from dynamic modelling of beam
structures using physics-informed artificial neural networks. Forward and inverse problems involving partial differential
equations are solved, and comparisons reveal a clear superiority of the physics-informed approach over one that is purely data-
driven vis-à-vis overfitting/generalization. Other ways of incorporating domain knowledge into the machine learning
pipeline are then presented through case studies on various aspects of NDI/SHM (visual inspection, impact diagnosis).
Lastly, as the final attribute of an optimal SHM approach, a sensing paradigm for non-contact full-field measurements for
damage diagnosis is presented.
Keywords: Machine learning, artificial neural networks, physics-informed learning, visual inspection, augmented reality,
impact diagnosis, damage diagnosis, structural health monitoring.
Structural health monitoring (SHM) is a transdisciplinary field of engineering devoted to ensuring the structural integrity
and operational safety of a component or structure. This is achieved by facilitating the (in-service) detection and
characterization of damage in a structure that may adversely affect its ability to fully and safely perform its intended
function. As such, the problem of damage detection lies at the heart of SHM and the goal is to identify the damage at the
earliest possible stage (in near real-time). This is indispensable for ensuring timely corrective action in order to minimize
the system downtime, overall operational and maintenance costs, and to reduce the risk of catastrophic failure. Broadly
speaking, as can be seen in Fig. 1, two approaches have traditionally been adopted for automated damage detection in SHM:
physics-based and data-driven.
Physics-based methods, in one form or another, rely primarily on the physical laws governing the structural behavior in
order to extract meaningful information about the damage and its evolution from the measured sensor data. However,
difficulty in the modelling of complex real-world structures, considerations of multiple sensing modalities, material and/or
geometric non-linearity, and uncertainty in material properties, boundary conditions, environmental/operational variations
are some of the factors that make this exclusive reliance on the system physics impractical for complex real-world
structures. This limits the application of such methods to the health monitoring of rather simple structures with pre-defined
boundary conditions and well-controlled environments [1-3]. As the underlying system complexity increases, such
an approach becomes much less dependable. Owing to the advances in information and sensing technologies in recent times,
it has now become feasible to monitor a large number of parameters in-situ in large/complex real-world structures on either a
continuous or sporadic basis. This motivates the use of a data-driven approach for SHM, wherein damage assessment is
treated, at least at the lower levels, primarily as a statistical pattern-recognition problem, thus circumventing
some of the major challenges associated with the physics-based approach.
During the last few decades, machine learning (ML) techniques have been extensively employed by researchers both for
vibration based and ultrasonic guided wave based damage detection [4-14]. Machine learning in SHM aims at building
models or representations for mapping input patterns in measured sensor data to output targets for damage assessment at
different levels, as identified by Rytter [15]. Conventional machine learning techniques are, however, limited in their ability to process the
large amounts of measured sensor data in their raw form. As such, careful engineering and considerable domain knowledge
are required to extract damage-sensitive features from the raw data which are then fed into a suitable ML model.
Historically, the choice of these hand-crafted damage-sensitive features has stemmed from the wealth of well-developed
literature on physics-based SHM techniques including but not limited to modal assurance criterion (MAC) and Coordinate
MAC [16, 17], modal strain energy (MSE) [18, 19], modal curvature (MC) [20, 21], modal flexibility (MF) [22-24], damage
locating vector (DLV) [25], wavelet transform [26-30], Hilbert-Huang transform [31], probabilistic reconstruction
algorithm (PRA) [32], zero-lag cross-correlation (ZLCC) [33], etc. Multi-layer feedforward artificial neural networks (ANNs), also referred to as multi-
layer perceptrons (MLPs), have been the most well-known and widely used ML technique for automated damage detection
[34-37]. Apart from the algorithm itself, the overall performance of such a damage detection system depends on the choice
of the damage-sensitive features used. The problem with hand-crafted features is that not only may they be sub-optimal even
for the structure under consideration, but there is also no guarantee that the same set of features can be adopted for other structures.
In order to overcome this shortcoming, deep learning (DL) methods have attracted much attention in recent years. These
allow data to be used in their raw form and features are learned automatically from data using a general-purpose learning
procedure. Deep learning automatically discovers intricate features in high-dimensional data which has led to its widespread
adoption across many application domains [38-41]. DL will almost certainly continue to flourish in the future, and this
progress will only be accelerated by advances in computational capabilities through the development of high-performance central
processing units (CPUs)/graphics processing units (GPUs), the availability of large amounts of data (big data), and the development
of new learning algorithms.
While supervised learning is the most common and well developed form of learning, a fundamental challenge in many
SHM applications is that damage detection must often be performed in an unsupervised manner. This is because, for real-
world structures, it is highly unlikely to have data corresponding to different damage scenarios. Among the four levels of
damage assessment identified by Rytter [15], the lowest level (i.e., establishing the presence of damage) has been achieved
in the past through unsupervised learning by what is referred to as novelty (outlier) detection [42-44]. Higher levels of
damage assessment in real-world structures require either a mechanism for augmenting the insufficient/incomplete training
data by incorporating some form of prior knowledge into the learning/training process and/or the development of more
comprehensive data acquisition systems. These will henceforth be referred to as the two aspects of an optimal SHM system.
This paper introduces these two aspects through a discussion on physics-informed learning in Sections 2, 3 and 4 and non-
contact full-field measurements using state-of-the-art high-speed digital cameras for vision-based SHM in Section 5.
Summary, conclusions and future prospects are finally presented in Section 6.
Mathematically speaking, the problem of automated damage detection from measured sensor data is an inverse problem. In
conventional machine learning/deep learning, it is usually formulated as a minimization problem with a purely data-based
loss function. However, in most SHM applications, not only is the cost of data acquisition prohibitive, but it is highly
unlikely to have sufficient data capturing different damage scenarios, especially for structures that have been in service for
relatively short periods of time. When trained using this inherently limited data, the models tend to overfit the given data
which eventually leads to poor generalization. This gives rise to the need for some form of model regularization which
primarily entails guiding the training process towards an optimal solution in one way or the other. Accomplishing this can
result in what may be thought of as data-efficient function approximators.
Based on Euler-Bernoulli beam theory, the vibration of a simply supported beam under transverse load q(x,t) is governed
by Eq. (1) with boundary conditions (BCs) described by Eq. (2):

\mu \frac{\partial^4 w}{\partial x^4} + \frac{\partial^2 w}{\partial t^2} = \frac{q(x,t)}{\rho A}, \qquad \mu = \frac{EI}{\rho A}    (1)

w(0,t) = w(l,t) = 0, \qquad w_{xx}(0,t) = w_{xx}(l,t) = 0    (2)

For free vibration of the beam (q = 0), let us suppose the initial conditions (ICs) are given by

w(x,0) = w_0(x), \qquad w_t(x,0) = \dot{w}_0(x)    (3)

where E is the elastic modulus of the beam's material, I is the area moment of inertia, \rho is the density of the
material, A is the cross-sectional area, l is its length, and w is the transverse displacement. Collocation, which entails
choosing a number of points (called collocation points) in the spatial and temporal domains, is adopted, and the loss function
for solving the PDE using an ANN consists of three terms and can be expressed as follows:
L = \frac{\lambda_I}{N_I} \sum_{x_i \in \Omega} \left[ \big(\hat{w}(x_i,0) - w_0(x_i)\big)^2 + \big(\hat{w}_t(x_i,0) - \dot{w}_0(x_i)\big)^2 \right]
  + \frac{\lambda_B}{N_B} \sum_{t_j \in T} \left[ \hat{w}(0,t_j)^2 + \hat{w}(l,t_j)^2 + \hat{w}_{xx}(0,t_j)^2 + \hat{w}_{xx}(l,t_j)^2 \right]
  + \frac{\lambda_G}{N_G} \sum_{(x_k,t_k) \in \Omega \times T} \left[ \mu\,\hat{w}_{xxxx}(x_k,t_k) + \hat{w}_{tt}(x_k,t_k) \right]^2    (4)
where \hat{w}(x,t) is the predicted displacement; the first term in Eq. (4) is the mean squared error (MSE) term associated with
the initial conditions, the second term corresponds to the boundary conditions, and the third term to the governing equation.
N_I, N_B, and N_G stand for the number of collocation points selected from \Omega, T, and \Omega \times T, respectively. The weights \lambda_I, \lambda_B,
and \lambda_G for the three terms are determined heuristically to be 1, 1, and 2×10^-4, respectively. For free vibration of the simply
supported beam with \mu = 1, l = 1, and

w_0(x) = \sin(3\pi x), \qquad \dot{w}_0(x) = 0, \qquad \Omega = [0,1], \qquad T = [0, 0.15],

the exact solution for the transverse displacement can be easily obtained as w(x,t) = \sin(3\pi x)\cos(9\pi^2 t). This standing-wave solution
can also be considered as a superposition of two transient waves with identical amplitude, propagating in opposite directions,
as w(x,t) = \tfrac{1}{2}\{\sin[3\pi(x - 3\pi t)] + \sin[3\pi(x + 3\pi t)]\}.
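The quoted solution and its traveling-wave decomposition are easy to verify numerically; the sketch below (not code from the paper) assumes the free-vibration form \mu w_{xxxx} + w_{tt} = 0 with \mu = 1 and l = 1:

```python
import numpy as np

# Numerical check of the closed-form solution above, assuming the
# free-vibration beam equation mu * w_xxxx + w_tt = 0 with mu = 1, l = 1
# (a verification sketch, not code from the paper).
mu = 1.0
x = np.linspace(0.0, 1.0, 201)
t = np.linspace(0.0, 0.15, 151)
X, T = np.meshgrid(x, t)

# Standing-wave form of the exact solution.
w = np.sin(3 * np.pi * X) * np.cos(9 * np.pi ** 2 * T)

# PDE residual, using the analytic derivatives of w.
w_xxxx = (3 * np.pi) ** 4 * np.sin(3 * np.pi * X) * np.cos(9 * np.pi ** 2 * T)
w_tt = -(9 * np.pi ** 2) ** 2 * np.sin(3 * np.pi * X) * np.cos(9 * np.pi ** 2 * T)
residual = mu * w_xxxx + w_tt
print(np.max(np.abs(residual)))  # ~0 up to round-off

# The same field as a superposition of two counter-propagating waves.
w_trav = 0.5 * (np.sin(3 * np.pi * (X - 3 * np.pi * T))
                + np.sin(3 * np.pi * (X + 3 * np.pi * T)))
print(np.max(np.abs(w - w_trav)))  # ~0: the two forms agree
```

Both residuals vanish to machine precision, confirming that the standing-wave and traveling-wave forms describe the same field.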
The collocation method implemented via the ANN shows good convergence as long as the time duration is short; for long
time durations, however, convergence problems are encountered and the solutions often converge to the trivial solution. To
circumvent this problem, the entire time duration is segmented into a finite number of short time-windows and solved
sequentially. By invoking Huygens' principle, which states that every point on a wavefront is itself the source of secondary
wavelets, the solution at the end of a particular time-window can be treated as the initial condition for the subsequent time-
window. For this example, the time domain is divided into six time windows (to satisfy the Nyquist criterion). For each
window, an ANN with eight hidden layers, each consisting of sixty-four neurons, is used to approximate the full
displacement field. For expediting the process and ensuring convergence, the training process in each case is divided into
two phases. The first phase primarily focuses on fitting the ICs and BCs on a data-driven basis, thus the last term in the loss
function is excluded. A gradient-descent method is performed first (number of collocation points N_I = N_B = 50), followed
by a quasi-Newton algorithm (L-BFGS-B, a limited-memory quasi-Newton code for bound-constrained optimization) (N_I =
N_B = 150) to guide the solution closer to the global minimum in a coarse but speedy manner. In the second phase,
the term corresponding to the governing equation is added back to the loss function, and L-BFGS-B is employed to
finely optimize the parameters of the ANN. The collocation points are selected from a uniform 150 × 150 grid such that
N_I = N_B = 150 and N_G = 22500. The results displayed in Fig. 4 show good agreement between the exact solution w(x,t)
and the predicted solution \hat{w}(x,t).
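The two-phase strategy can be illustrated with a drastically simplified stand-in for the ANN: a two-parameter modal ansatz fitted with SciPy's L-BFGS-B. Everything below (the ansatz, collocation counts, and starting values) is an illustrative assumption rather than the paper's setup; note how the IC term rules out the trivial solution a = 0 that plagues long time windows:

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration of the two-phase training strategy, with a two-parameter
# modal ansatz w_hat(x, t) = a * sin(3*pi*x) * cos(omega*t) standing in for
# the ANN (ansatz, point counts, and starting values are illustrative).
mu = 1.0
xc = np.linspace(0.0, 1.0, 50)     # spatial collocation points (ICs)
tc = np.linspace(0.0, 0.15, 50)    # temporal collocation points
Xg, Tg = np.meshgrid(xc, tc)       # grid for the governing-equation term

def loss(p, with_pde):
    a, omega = p
    # Initial-condition misfit: w_hat(x, 0) vs. w0(x) = sin(3*pi*x).
    # This term is what rules out the trivial solution a = 0.
    ic = np.mean((a * np.sin(3 * np.pi * xc) - np.sin(3 * np.pi * xc)) ** 2)
    if not with_pde:
        return ic
    # Governing-equation residual mu * w_xxxx + w_tt for the ansatz,
    # weighted by 2e-4 as in the text.
    shape = np.sin(3 * np.pi * Xg) * np.cos(omega * Tg)
    res = a * (mu * (3 * np.pi) ** 4 - omega ** 2) * shape
    return ic + 2e-4 * np.mean(res ** 2)

# Phase 1: fit the ICs only (omega is not yet identifiable here).
p1 = minimize(loss, x0=[0.5, 80.0], args=(False,), method="L-BFGS-B").x
# Phase 2: add the governing-equation term; omega -> 9*pi^2 ~ 88.83.
p2 = minimize(loss, x0=p1, args=(True,), method="L-BFGS-B").x
print(p2)
```

Phase 1 pins a = 1 from the initial condition alone; only the governing-equation term added in phase 2 identifies the natural frequency omega.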
For the inverse problem, where sparse (and possibly noisy) displacement measurements w(x_i, t_i) are available, the loss function becomes

L = \frac{\lambda_D}{N_D} \sum_{(x_i,t_i)} \big(\hat{w}(x_i,t_i) - w(x_i,t_i)\big)^2
  + \frac{\lambda_{B1}}{N_B} \sum_{t_i \in T} \left[ \hat{w}(0,t_i)^2 + \hat{w}(l,t_i)^2 \right]
  + \frac{\lambda_G}{N_G} \sum_{(x_k,t_k) \in \Omega \times T} \left[ \mu\,\hat{w}_{xxxx}(x_k,t_k) + \hat{w}_{tt}(x_k,t_k) \right]^2
  + \frac{\lambda_{B2}}{N_B} \sum_{t_i \in T} \left[ \hat{w}_{xx}(0,t_i)^2 + \hat{w}_{xx}(l,t_i)^2 \right]    (5)
where the first term is the purely data-driven loss, the second term is for the Dirichlet boundary conditions, the third term
is for the governing equation, and the fourth term is for the higher-order boundary conditions. The weights \lambda_D, \lambda_{B1}, \lambda_G, and \lambda_{B2}
for the four terms are determined heuristically to be 5000, 25000, 1, and 1, respectively. For the simply supported beam
shown in Fig. 5 with w(x,t) = \sin(x)\cos(2t) and \mu = 4, in the first phase, the parameters of the ANN are trained with the Adam
optimizer with a learning rate of 0.01 for 10000 iterations. During each iteration, while all the N_D measurements are used,
the collocation points are sampled from the defined domain. To be specific, N_B = 50 points are sampled for the boundary
conditions and N_G = 2500 points are sampled for the governing equation. After that, the parameters are further optimized
with L-BFGS-B on a larger number of static collocation points until the loss converges. The collocation points for the L-BFGS-
B optimizer are taken from an equally spaced 150×150 grid over the entire domain, so that N_B = 150 and N_G = 22500. Fig.
6(a) shows the comparison between the results obtained with physics-informed learning and those obtained using the sparse
data only, and Fig. 6(b) shows the error for the two approaches as the sparsity of the data and the SNR change. Clearly, physics-
informed learning outperforms traditional data-driven learning vis-à-vis overfitting when the data are sparse and/or noisy. Table
1 presents the comparison in tabulated form. The maximum relative error metric used for the comparison is defined as
max|\hat{w} - w| / max|w|.
Figure 6: (a) Comparison of the results for the displacement field reconstruction using physics-informed approach and purely data-
driven approach (b) Variation of the error for physics-informed learning and data-driven learning as SNR value and data sparsity
changes
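The maximum relative error metric used for the comparison above is a one-liner; a minimal sketch on an illustrative toy field:

```python
import numpy as np

# The comparison metric from Table 1: max|w_hat - w| / max|w|
# (the toy field below is an illustrative stand-in).
def max_relative_error(w_hat, w):
    return np.max(np.abs(w_hat - w)) / np.max(np.abs(w))

x = np.linspace(0.0, 1.0, 101)
w = np.sin(np.pi * x)                   # reference field
w_hat = w + 0.01 * np.cos(np.pi * x)    # prediction with a small error
print(max_relative_error(w_hat, w))     # 0.01 for this toy field
```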
If the system parameter \mu is unknown and is to be identified, it can be set as a trainable parameter and inversely determined
through training. For instance, a displacement field

w(x,t) = \sum_{n=1}^{6} \alpha_n \sin(n\pi x) \cos(n^2 \pi^2 t / 8)

is constructed by linearly combining six mode shapes, where the coefficients \alpha_n are randomly sampled from [-1, 1]. The system parameter for generating
this displacement field is \mu = 0.0156 but is assumed unknown. Noisy measurements extracted from this field are then provided
to an MLP for it to reconstruct this displacement field and identify the unknown parameter at the same time. The same training
strategy as above is used, and the result is shown in Fig. 7. \mu is determined to be 0.01555, with a relative error of only 0.32%.
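The intuition behind this inverse identification can be sketched without a network: minimizing the squared residual \sum(\mu \hat{w}_{xxxx} + \hat{w}_{tt})^2 over \mu alone has a closed form. The sketch below uses analytic derivatives of the six-mode field (the grid and noise level are illustrative assumptions; the paper trains an MLP on noisy displacement measurements instead):

```python
import numpy as np

# Sketch of identifying mu by least squares on the residual
# mu * w_xxxx + w_tt = 0, using the six-mode field from the text.
# Analytic derivatives are used for simplicity; grid sizes and noise
# level are illustrative assumptions.
rng = np.random.default_rng(0)
alpha = rng.uniform(-1.0, 1.0, 6)          # random modal coefficients
x = np.linspace(0.0, 1.0, 60)
t = np.linspace(0.0, 1.0, 60)
X, T = np.meshgrid(x, t)

w_xxxx = np.zeros_like(X)
w_tt = np.zeros_like(X)
for n in range(1, 7):
    mode = np.sin(n * np.pi * X) * np.cos(n ** 2 * np.pi ** 2 * T / 8.0)
    w_xxxx += alpha[n - 1] * (n * np.pi) ** 4 * mode
    w_tt -= alpha[n - 1] * (n ** 2 * np.pi ** 2 / 8.0) ** 2 * mode

# Contaminate the "measured" accelerations with 5% noise.
w_tt_meas = w_tt + 0.05 * np.std(w_tt) * rng.standard_normal(w_tt.shape)

# Minimizing sum((mu * w_xxxx + w_tt)^2) over mu has a closed form:
mu_hat = -np.sum(w_xxxx * w_tt_meas) / np.sum(w_xxxx ** 2)
print(mu_hat)  # ~0.0156, i.e. 1/64
```

Because the noise is uncorrelated with the residual basis, the estimate stays within a fraction of a percent of the true value.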
To summarize, in this section, initial promise has been demonstrated for the use of ANNs for solving PDEs in forward
problems and simple inverse dynamic problems for beams. This can be regarded as a promising first step towards the
ultimate goal of solving more complicated inverse problems for automated damage detection in SHM. This logic is
analogous to the historical evolution of conventional numerical methods for inverse problems through the development of
different regularization schemes after methods for solving forward problems were well matured. Next, two case studies are
presented to demonstrate how knowledge in other forms could be utilized for augmenting the ML/DL models. This is done
through cases directly falling within the context of NDI/SHM. It is worth mentioning that while the discussion in Section 3
dealt primarily with discrete sensor-array data, Section 4 shifts the focus to vision-based health monitoring, with Section
4.1 being a static problem (still images as input) and Section 4.2 being a dynamic problem (wavefield videos as input).
In this Section, a physics-informed deep learning (DL) approach for enhanced visual inspection via augmented reality is
first presented as a demonstration of how domain knowledge in the form of expert human feedback and historical NDI/SHM
records can be used to augment the training data. This is followed by a demonstration of impact diagnosis using a physics-
informed deep learning model.
The first direct demonstration of the role of domain knowledge in augmenting the ML/DL models for tasks directly related
to NDI/SHM is done through what is referred to as enhanced visual inspection. In the aerospace industry, for safety critical
structures, visual inspection (VI) is usually the first line of defense and given its various advantages, it accounts for at least
80% of all aircraft inspections according to the FAA [81]. Apart from being considerably flexible, it is the most direct, intuitive,
straightforward, and economical method for assessing the condition of a structure [82]. However, when performed in its
traditional form (by a human inspector without any external aids), VI has certain inherent shortcomings: it is error-prone,
labor-intensive, tedious, and subjective/inconsistent [83-85]. An inspector is required to first acquire relevant training
over a period of time and then to mentally bridge the gap between the acquired knowledge and the physical systems in the
field, which limits his/her ability to make the best use of the wealth of information available about the inspection task
being performed.
The use of augmented reality (AR) as an information delivery paradigm has been proposed in order to overcome some of
the major shortcomings of traditional VI. AR encompasses a set of technologies that superimpose digital data and images on
the physical world thereby enhancing the user’s perception of reality [86]. It has the potential to bridge the disconnect between
the physical and digital worlds, thus enabling humans to make efficient use of the information available and consequently make
more informed decisions. As such, it can act as a tool to enhance the traditional visual inspection process. A proof-of-concept
for such an enhanced visual inspection system using a head mounted AR device (AR glasses) is presented here. The primary
focus in the study was the detection of the most commonly occurring defects in metallic structures: corrosion, fatigue cracks,
and/or a combination of the two. This was achieved by deploying a deep learning (DL) based computer vision model on
the AR device.
Oftentimes, considerations of system downtime and accessibility prohibit a detailed inspection of the entire system. Under
such circumstances, while traditional VI may give rise to high rates of false-positives (false alarms) and/or false-negatives
(misdetections), using a purely data-driven approach to search for damages/anomalies over the entire region of interest
(ROI) can be computationally very expensive, making real-time detection almost impractical with the current state-of-
the-art AR devices. Also, depending on the distance and/or angle between the inspector and the area being inspected, a
damage may or may not be easily discernible. As such, with the aim of improving the probability of detection (POD) and
enhancing the overall efficiency of the detection process, prior domain knowledge was incorporated into the ML pipeline.
Fig 8 gives an overview of the type of domain knowledge available and its integration into the machine learning pipeline.
The details are explained next.
Rather than searching only for damages (in this case cracks/corrosion) in the actual ROI, regions that are prone to damage
are also categorized and grouped. The information about damage prone areas can come either from expert knowledge about
the system or from historical data. As an example, areas prone to cracking may include regions with similar types of
fasteners like bolts, rivets, etc. In this way, a preliminary analysis of the ROI may lead to the detection of prominent damages
and/or damage prone regions. If a damage prone region is detected, the inspector is prompted to zoom into the region in
order to carry out a more detailed analysis with the aim of detecting less prominent damages i.e., damages that are otherwise
below the detectability threshold of the algorithm at the original scale of view. This can be achieved either by using the
zoom-in function in the AR device camera or by simply walking closer to the region if there are no accessibility issues. In
this way, analyzing the ROI at multiple scales (prompted by expert/domain knowledge about the system) has the potential
to greatly increase the POD while reducing false alarms. It should also be noted that since a detailed analysis is carried out
only of the highlighted damage prone regions, this region-based detection paradigm has the potential to make the overall
inspection process much more efficient than carrying out a detailed inspection of the entire system. A schematic of the
proposed physics-informed methodology is shown in Fig. 9 and some representative results are shown in Fig 10. The results
were obtained using Epson BT-300 smart glasses (AR/Developer edition) with a pre-trained MobileNet (SSD MobileNet
V2) and k-means clustering was used to group the detected fasteners together. More details about the DL algorithms used
in the study and the details of the dataset used for training can be found in [87].
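The fastener-grouping step can be sketched with a minimal k-means over detected bounding-box centers (the coordinates and cluster count below are illustrative assumptions; see [87] for the actual pipeline):

```python
import numpy as np

# Minimal k-means over detected fastener centers, standing in for the
# clustering step that groups fasteners into damage-prone regions to
# zoom into. The coordinates and k below are illustrative assumptions;
# the study clustered SSD MobileNet V2 detections.
def kmeans(points, k, iters=50):
    # Deterministic init: evenly strided points as the starting centers.
    centers = points[:: max(1, len(points) // k)][:k].astype(float)
    for _ in range(iters):
        # Assign each point to the nearest center, then recompute centers.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two illustrative rows of rivets detected in an image (pixel coordinates).
row1 = np.column_stack([np.linspace(100, 300, 5), np.full(5, 120.0)])
row2 = np.column_stack([np.linspace(100, 300, 5), np.full(5, 480.0)])
labels, centers = kmeans(np.vstack([row1, row2]), k=2)
print(labels)  # the two rivet rows fall into two separate groups
```

Each resulting cluster centroid then defines a candidate region for the inspector to zoom into.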
While the identification of visible damage is an important aspect of monitoring the health of structures, the real challenge
in most SHM applications is posed by hidden damage. This is discussed in the next section.
Figure 10: (a) Original ROI; (b) fasteners detected in the ROI, no cracks detected; (c) regions identified based on the fasteners;
after zooming into the identified regions, the cracks become more prominent, thereby increasing the overall POD. Each detection
in the figure has its associated confidence level.
Effective, reliable, and robust identification of impact events that have the potential to damage a structure is an aspect of
SHM that is of prime importance in the aerospace industry. This is commonly referred to as impact diagnosis and its
importance stems from the fact that low-velocity impact events (like tool drops, runway debris, bird strikes etc.) can lead
to what is called barely visible impact damage (BVID) in composites, which has been found to be the most prominent cause
of in-service damage to aerospace structures. The existence and/or extent of this damage can be correlated with the impact
energy (which is the area underneath the impact force time-history curve), with no damage being assumed below a certain
energy threshold. Impact diagnosis, which entails the identification of the impact location and reconstruction of the impact
force time-history, can quantify this impact energy and in this way can serve as an indirect method for the detection of an
otherwise invisible damage right at its inception [88].
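As a toy illustration of the energy-threshold idea, the area under an idealized half-sine force pulse (a common low-velocity impact idealization; the peak force and contact time below are assumed values) can be computed by simple quadrature:

```python
import numpy as np

# Area under a reconstructed impact-force time history, the quantity the
# text correlates with damage via an energy threshold. The half-sine
# pulse and its parameters below are illustrative assumptions.
f_max, tau = 500.0, 2e-3                  # peak force [N], contact time [s]
t = np.linspace(0.0, tau, 2001)
force = f_max * np.sin(np.pi * t / tau)   # half-sine force pulse

# Trapezoidal integration; the closed form is 2 * f_max * tau / pi.
area = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(t))
print(area)  # ~0.6366 N*s
```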
Low-velocity impact on a structure emanates an elastic wave that propagates through the structure carrying a wealth of
information about the impact event. Most of the classical impact diagnosis methods (both physics-based [89, 90] and data-
driven [91-94]) capture information about the wave propagation phenomenon only at discrete locations by employing a
limited number of sensors spatially distributed on the structure. Both of these approaches are limited in their practicality as
only a limited amount of information about the scattered wavefield can be captured by the discrete sensor array. This
becomes especially problematic in complex real-world structures with geometric features like joints, stiffeners, etc., where
the discrete sensor signals fail to provide sufficient insights as they are often corrupted by high levels of coherent noise
from multiple wave reflections, wave mode conversions etc. In order to address this issue of limited wavefield data, it was
proposed to use the full wavefield for impact diagnosis. A proof-of-concept for a DL based approach for analyzing full
wavefields for impact diagnosis is presented here. For this study, while simulated wavefields were employed, the feasibility
of an integrated high-speed camera system for non-contact full-field wavefield capture is discussed in the next section.
The high spatio-temporal dimensionality of the wavefield mandates the use of deep learning for analysis. Also, the nature of
the impact diagnosis problem requires the capturing of context from the wavefield evolution which necessitates learning
across multiple time-frames of the wavefield simultaneously rather than focusing independently on each frame. Fig 11
shows an end-to-end trainable CNN (convolutional neural network)–RNN (recurrent neural network) network architecture
employed for the
Figure 11: A unified CNN-RNN network architecture for spatio-temporal analysis of the impact generated wavefield
spatio-temporal analysis of the scattered wavefield. The model was trained on simulated wavefield data and care was
exercised to ensure that the training data did not deviate significantly from the data the model is expected to encounter in
the real world. This was assured by incorporating different noise levels and location biases in the training data and by testing
the model on simulated wavefields in response to real impact force profiles generated using an impact hammer. Overall,
this lies under the paradigm of letting DL models learn in a simulated environment before transferring their knowledge to
the real world.
Impact diagnosis clearly represents a case that is extremely data-intensive and for which collecting sufficient real data is
hard; as such, it is an ideal candidate for incorporating prior domain knowledge into the ML pipeline. Fig. 12 gives an overview
of the physics-informed DL-based methodology.
Figure 13: Results for impact diagnosis including impact location and impact force reconstruction using the physics-informed
DL-based methodology
Computer vision techniques for automated damage assessment have been engendering a lot of interest in recent times in
civil infrastructures, especially during post-disaster inspections, when the number of structures to be inspected is far beyond
the capability of available inspectors [98-103]. In the field of civil infrastructure assessment, safety-critical damage is
usually big enough to be visible to the naked eye, as such, static images captured using ordinary digital cameras suffice for
the purpose of automated damage assessment using state-of-the-art computer vision algorithms. For instance, a fusion CNN
was proposed by Dr. Li’s group [104-105] to successfully detect distributed cracks on steel girders of actual bridges.
However, for aerospace structures, the most critical type of damage is barely visible to the naked eye. In such (plate-like)
structures, hidden details of the structure, including information about the location of the damage, if present, and its
characteristics can be unearthed using ultrasonic guided wave based techniques, which entail investigating the scattered
guided waves in the structure using appropriate signal/image processing algorithms. These scattered waves can be sensed
using either contact (e.g., a network of piezoelectric (PZT) sensor arrays) [106] or fully non-contact means (e.g., air-coupled
transducers, laser Doppler vibrometer (LDV)) [107-112]. Because of the multitude of advantages associated with non-contact
full-field sensing, the scanning LDV has attracted a lot of interest in recent times; however, it is limited to point-by-point
measurement/sensing (i.e., one point at a time) [113].
In order to overcome these limitations, a sensing paradigm capable of taking all the measurements simultaneously is
desirable. Digital image correlation (DIC), which is a non-interferometric optical technique, extracts surface deformations
by comparing digital images of the structure’s surface captured before and after deformation [114-118]. Advances in
computers and digital imaging technology have already enabled 2-D/3-D DIC methods to extend from static or quasi-static
to dynamic applications, such as low-frequency vibration (in the range of kHz) and full-field modal analysis for
characterizing the behavior of structures [119-121]. In order to expand DIC applications into the ultrasonic range (> 20
kHz), for example, in the area of ultrasonic guided-wave based damage imaging, image acquisition systems with extremely
high spatial/temporal resolution are required.
High-speed digital cameras, though not yet fully mature for this purpose, provide the most viable means for accomplishing this. Not only do
they enable simultaneous data acquisition at all the points in the region-of-interest (ROI) but the reconstructed wavefield
using the sensed data provides the most spatially continuous information about the wave propagation phenomena as each
pixel is considered to effectively act as a sensor. The requirement of extremely high spatial/temporal resolution, however,
continues to remain a bottleneck in the adoption of this technology for ultrasonic guided wavefield reconstruction. The
problem, however, is primarily one of processing speed and will ease with the development of computer technology.
Furthermore, as IC technology rapidly advances following Moore's law, new high-speed cameras show great promise
for capturing propagating guided waves on the surfaces of structures.
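To make the resolution requirement concrete, a back-of-the-envelope Nyquist estimate can be sketched as follows (the excitation frequency, phase velocity, and field of view are illustrative assumptions, not values from the study):

```python
import numpy as np

# Back-of-the-envelope sampling requirements for camera-based capture of
# an ultrasonic guided wave. All values below are illustrative
# assumptions, not parameters from the study.
f_exc = 50e3       # excitation frequency [Hz]
c_phase = 300.0    # assumed phase velocity in the plate [m/s]
fov = 0.2          # field of view along the propagation path [m]

wavelength = c_phase / f_exc                       # 6 mm here
min_frame_rate = 2 * f_exc                         # temporal Nyquist
min_pixels = int(np.ceil(2 * fov / wavelength))    # >= 2 samples/wavelength
print(min_frame_rate, min_pixels)  # 100000.0 67
```

Even this modest scenario already demands a 100 kfps frame rate, which illustrates why temporal resolution remains the bottleneck.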
Fig. 14 shows a proof-of-concept schematic of an integrated high-speed camera system for non-contact full-field sensing of the scattered wavefield in structures [122], together with the damage image generated from the captured wavefield using a modified wavenumber index (WI) imaging condition [123]. It should be noted that, since only a single high-speed camera was used, only the in-plane scattered wavefield could be captured. To circumvent the current hardware limitations associated with using a single high-speed camera for the acquisition of ultrasonic guided in-plane wavefields, three strategies were adopted. First, to improve the spatial/temporal resolution, sample-interleaving and image-stitching techniques were used to achieve an effective data acquisition rate nearly 250 times the camera's native rate. Second, the concept was demonstrated on a thin, flat, low-modulus high-density polyethylene (HDPE) plate, a flexible polymer sheet with less than one percent of the stiffness of typical carbon fiber reinforced polymers (CFRPs). The low modulus and high density of the plate allow large deformations and low wave speeds, which make it possible to detect the in-plane wavefields using the affordable in-house high-speed camera (Photron Fastcam Mini AX200). Third, a large PZT actuator was used in conjunction with a power amplifier to generate sufficient energy to excite measurable guided waves in the plate, and the excitation was chosen to be a continuous harmonic wave at a carefully selected frequency such that not only the fundamental A0 and S0 modes exist, but a strong fundamental SH0 wave mode also persists (due to multiple reflections from the damage and plate boundaries).
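The sample-interleaving strategy mentioned above is, in essence, equivalent-time sampling: because the excitation is repeatable, the measurement can be repeated several times with the camera trigger delayed by a sub-frame offset each run, and interleaving the runs multiplies the effective frame rate. A minimal sketch of the principle follows, using 1-D signals in place of image frames; all numbers are illustrative assumptions, not the rates used in the study.

```python
import numpy as np

# Equivalent-time (interleaved) sampling: repeat a periodic measurement M
# times, delaying the trigger by a sub-frame offset each run, then interleave.
fs_cam = 1000.0            # camera's native frame rate [Hz] (assumption)
M = 4                      # number of phase-shifted repetitions (assumption)
fs_eff = M * fs_cam        # effective rate after interleaving

f_sig = 300.0              # periodic signal to capture [Hz] (assumption)
n_per_run = 50             # frames captured per run

signal = lambda t: np.sin(2 * np.pi * f_sig * t)

# Run m is triggered m/(M*fs_cam) seconds later than run 0.
runs = [signal(np.arange(n_per_run) / fs_cam + m / fs_eff) for m in range(M)]

# Interleave: sample order is run0[0], run1[0], ..., runM-1[0], run0[1], ...
interleaved = np.stack(runs, axis=1).ravel()   # length M * n_per_run

# The interleaved record equals direct sampling at the effective rate.
direct = signal(np.arange(M * n_per_run) / fs_eff)
print(np.allclose(interleaved, direct))   # True
```

The same idea applied to full image frames, combined with stitching of sub-regions, is what yields the roughly 250-fold gain in effective acquisition rate reported above.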
While the concept could so far be demonstrated only on the HDPE sheet, owing to the hardware limitations associated with using a single high-speed camera, the study shows significant initial promise for the adoption of this technology for non-contact full-field measurements, with the ultimate goal of visualizing hidden damage such as barely visible impact damage (BVID) in complex flat/curved composite structures. To continue development of this concept beyond this successful feasibility study and begin optimizing it for practical applications, several tasks are planned, the first of which is the use of multiple high-speed cameras.
CONCLUSION

When the structures being monitored (either continuously or intermittently) are significantly large or complex, and/or when variations in environmental/operational conditions cannot be neglected, a physics-based approach to assessing structural health becomes highly unreliable. In recent times, the number of parameters being monitored in situ in real-world structures has been steadily increasing through the use of miniaturized wireless sensor networks, leading to an abundance of historical data for these structures. The heterogeneous nature of this data (due to multiple sensing modalities) and the emphasis on developing long-term SHM systems have generated considerable interest in a data-driven approach to SHM. For these real-world structures, however, it is highly impractical to collect sufficient data (corresponding to different damage scenarios) to train a robust and generalizable model for automated damage detection and characterization. Consequently, an optimal approach to SHM must encompass, on the one hand, some means of augmenting the collected sensor data to compensate for its incompleteness and, on the other, comprehensive sensing mechanisms with a framework for the subsequent fusion and processing of the large volumes of collected data.
In this paper, these aspects of such an optimal SHM paradigm were addressed individually through: (1) the use of physics-informed machine learning for the analysis of the measured sensor data; and (2) the use of high-speed digital cameras for capturing ultrasonic guided wavefields in structures, as a means of providing the most spatially continuous information about the propagating ultrasonic guided waves within the structure. Owing to the attention it has attracted in recent times, physics-informed machine learning was discussed in some detail with the eventual goal of SHM in mind. By demonstrating the applicability of ANNs to solving forward and inverse problems involving PDEs, initial promise was shown towards solving the more complicated inverse problems that arise in automated damage detection for SHM. It was shown how the overfitting/generalization issues generally associated with insufficient training data can be mitigated by incorporating physical laws/domain knowledge to guide back-propagation during the training process. Other ways of incorporating domain knowledge into the ML/DL pipeline were also discussed through case studies related to different aspects of SHM, i.e., enhanced visual inspection and impact diagnosis.
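The composite-loss idea at the heart of physics-informed training can be illustrated on the Euler-Bernoulli beam that figures in the paper's modelling examples. In the sketch below, finite differences stand in for the automatic differentiation a real PINN would use, and EI, q, L, the grid size, and the physics weight are all illustrative assumptions: the total loss adds a residual term for the beam equation EI w'''' = q to the ordinary data misfit, so a candidate solution that fits sparse measurements but violates the physics is penalized.

```python
import numpy as np

# Physics-informed composite loss for a simply supported Euler-Bernoulli beam
# under uniform load: EI * w''''(x) = q.  All values are illustrative.
EI, q, L = 1.0, 1.0, 1.0
n = 101
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Exact static deflection: w = q/(24 EI) * (x^4 - 2 L x^3 + L^3 x)
w_exact = q / (24 * EI) * (x**4 - 2 * L * x**3 + L**3 * x)

def physics_loss(w):
    # Central 4th-difference stencil [1, -4, 6, -4, 1]/h^4 approximates w''''.
    w4 = (w[:-4] - 4 * w[1:-3] + 6 * w[2:-2] - 4 * w[3:-1] + w[4:]) / h**4
    return np.mean((EI * w4 - q) ** 2)       # residual of EI w'''' = q

def data_loss(w, idx, w_meas):
    return np.mean((w[idx] - w_meas) ** 2)   # ordinary misfit at sensor points

# Sparse "measurements" at a few hypothetical sensor locations.
idx = np.array([10, 35, 60, 85])
w_meas = w_exact[idx]

lam = 1.0                                    # physics weight (assumption)
total = lambda w: data_loss(w, idx, w_meas) + lam * physics_loss(w)

print(total(w_exact))                        # ≈ 0: satisfies both terms
print(total(0.9 * w_exact) > total(w_exact)) # True: unphysical guess penalized
```

In an actual PINN, `w` would be the output of a neural network and both loss terms would be differentiated with respect to its weights, so the PDE residual steers back-propagation exactly as described above.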
The Internet of Things (IoT) has recently attracted great attention due to its promising potential and its capacity to be integrated into complex systems [124, 125]. With rapid advances in information and sensing technologies such as wireless communication, cloud computing, and wireless sensor networks (WSNs), the IoT is emerging as an enabling technology for the development of low-cost, efficient, reliable, and scalable SHM systems. Recent advances in deep learning (both supervised and unsupervised) and breakthroughs in the integration of system physics/domain knowledge into ML/DL models have had a catalytic effect on the adoption of a physics-informed data-driven approach to learning in many application domains. It is envisioned that as robust non-contact full-field sensing mechanisms are developed and further success in physics-informed learning is achieved, especially for solving complex inverse problems in SHM, the goal of in-situ, real-time, vision-based health monitoring can be realized in the near future.
Acknowledgement
One of the authors, Mr. Sakib A. Zargar, would like to express his sincere thanks to Dr. Jeremy Yagle of NASA Langley Research Center for the support provided through the National Institute of Aerospace (NIA).