Feature Extraction - Shape and Texture

The document discusses using image processing techniques to detect and monitor landslides. It describes extracting texture features from images using edge detection and analyzing morphological changes. The key aspects are identifying cracks at the trailing edge of landslides, using image recognition for feature extraction, and preprocessing images including color space conversion and histogram correction.


Texture Feature Extraction and Morphological Analysis of Landslide Based on Image Edge Detection

1. Introduction
As a common kind of geological disaster [1], landslides are increasing in frequency year by year. According to the National Geological Disaster Bulletin, 9,710 geological disasters occurred in China in 2020, of which 7,403 were landslides, accounting for 76.2% of the total. Therefore, implementing landslide monitoring and alarming is very important for disaster prevention and mitigation [2, 3].

Intermittent cracks appear at the trailing edge of a slope. If the crack length remains unchanged, it indicates that the slope is beginning to slide; if cracks at the trailing edge appear continuously and their length tends to grow, it means the sliding is gradually intensifying [4]. Therefore, monitoring the trend of crack change at the sliding edge and reflecting the sliding displacement trajectory in time is of great significance for landslide monitoring and alarming. Independent component analysis has been used for feature extraction, with basis functions serving as pattern templates for natural-image feature detection [5]; this method has been applied successfully to edge detection and texture segmentation. Remote sensing technology is also used to monitor disasters, improving efficiency. In this paper, CNNs and texture change are used to detect landslides intelligently [6].

At present, landslide monitoring methods are mainly divided into displacement monitoring, physical field monitoring, groundwater monitoring, and external trigger factor monitoring. Surface displacement is an important basis for judging the stability of a slope, as well as an important indicator for studying the evolution of a landslide and managing hidden-danger areas, so the accuracy and effectiveness of displacement data are particularly important. The technical schemes for landslide disaster monitoring are gradually developing toward greater accuracy, intelligence, and real-time operation.

Landslide monitoring still faces difficulties: sensors suffer from signal errors and non-real-time behavior. Using image processing technology on vegetation-covered hillsides therefore offers better recognition, shorter recognition time, and a higher recognition rate. Landslide movement is highly complex and affected by many factors, so it is difficult at present to thoroughly understand the internal characteristics of each landslide, but monitoring helps to grasp and analyze the evolution and characteristics of the landslide body. With the continuous progress of landslide monitoring technology, a universal, easy-to-install landslide early-warning system is needed in order to obtain more detailed landslide data and understand landslides more deeply.
2. Identification of Crack Curve at the Trailing Edge of Landslide

2.1. Characteristics of Cracks at the Trailing Edge of Landslide

After a landslide disaster [7], soil and rock mass that were originally part of the mountain leave the main body of the mountain due to gravity. The main characteristics of the cracks formed between the trailing edge of the landslide and the immobile mountain are shown in Table 1.
Table 1

Characteristics of cracks at the trailing edge of landslide.

2.2. Image Recognition

Image processing technology processes images through operations such as noise removal [8], enhancement, restoration, segmentation, and feature extraction. The main contents of image processing are shown in Table 2.
Table 2

Main contents of image processing.

Image processing technology is mature, is widely used across industries, and greatly improves work efficiency [9]:
(1) In road detection, image processing is used to quickly detect the position and width of road cracks [10].
(2) In residential buildings, it is used to detect crack information in concrete [11].
(3) In bridge monitoring, it is used to monitor whether there are cracks at the bottom of the bridge [12].
(4) In tunnels, it is used to detect cracks and deformation of the tunnel lining.
(5) In slope monitoring, it is used to monitor changes of the mountain and obtain them in real time. However, it is difficult to record and compare the changes of the whole mountain; for example, finer changes of the mountain are hard to analyze using methods such as segmentation and enhancement, so accurate and comprehensive monitoring data are difficult to obtain [13].
(6) Image processing can also be used to measure small displacement motion.

2.3. Color Model

2.3.1. RGB Model

Because the RGB model quantitatively represents the brightness of the three basic colors red, green, and blue, it is also called the additive color mixing model [14]. When the brightness values of the three basic colors are at their lowest (0), the result is black; when the brightness values of the three primaries reach the highest value (255), the result is white [15].

This is the basic method of mixing colors by addition: red + green = yellow, green + blue = cyan, red + green + blue = white.

The color matching equation of the additive mixing model expresses a color C as a weighted sum of the three primaries:

C = r(R) + g(G) + b(B),

where r, g, and b are the amounts of the primary stimuli (R), (G), and (B).

2.3.2. HSV Model

HSV is created based on three visual property parameters of color: hue (H), saturation (S), and value (V) [16]. Hue H: hue is measured on a 360° color wheel, increasing counterclockwise from 0°; common colors are red at 0°, green at 120°, and blue at 240°. Saturation S: saturation is defined relative to the spectral colors; the closer a color is to a spectral color, the higher its saturation, and conversely, the lower. Value V: value indicates the brightness of the color and depends on the brightness of the light source.

In the landslide monitoring system, images with different RGB color ratios show different effects, and analyzing the monitoring images can improve image processing quality and analysis accuracy. Hue and saturation together are commonly called chromaticity, which expresses the category and depth of a color. Because human vision is more sensitive to brightness than to shades of color, the HSI color space is often used to facilitate color processing and recognition; it matches human visual characteristics better than the RGB color space. In image processing and computer vision, a large number of algorithms can work in the HSI color space, and its components can be processed separately and independently of each other. Therefore, the workload of image analysis and processing can be greatly simplified in the HSI color space.
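As an illustration of the RGB/HSV relationship described above, Python's standard colorsys module converts between the two spaces. This is a minimal sketch; the 0-255 channel range and the scaling of hue to degrees on the 360° wheel are conventions of this example, not part of the text.

```python
import colorsys

def rgb_to_hsv_degrees(r, g, b):
    # colorsys works on floats in [0, 1]; hue comes back in [0, 1)
    # and is scaled here to degrees on the 360-degree color wheel.
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

# Pure red sits at 0 degrees on the wheel, green near 120, blue near 240.
print(rgb_to_hsv_degrees(255, 0, 0))   # (0.0, 1.0, 1.0)
print(rgb_to_hsv_degrees(0, 255, 0))
```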

2.4. Image Preprocessing

2.4.1. Image Digitization

The purpose of image digitization is to convert continuous analog images into discrete digital images [17]. Sampling, quantization, and coding are usually used to convert the original continuous space and brightness into discrete space and brightness.

As shown in the digital image sampling diagram of Figure 1(a), the original image is divided in two-dimensional space into an M × N array of fixed unit size, generating a "dot" image (Figure 1(b)) whose resolution is determined by the number of sampling points. The output efficiency is high, and the images are easily exchanged among various system platforms.
Figure 1

Image sampling schematic. (a) Original drawings. (b) Sampled images.

2.4.2. Domains and Connected Domains

Since the pixels of a digital image are arranged in a two-dimensional array, a pixel of interest P with coordinates (x, y) has four directly adjacent pixels (upper, lower, left, and right, its 4-neighborhood) and four diagonal pixels (its D-neighborhood); together these form its nearest eight pixels, as shown in Figure 2.
Figure 2

Neighborhood diagram.

Two pixels on the image are connected when they are adjacent and their gray values satisfy a specific similarity criterion; when the gray values are moreover equal, the two pixels are said to be in a connected relationship. As shown in Figure 3, p and q constitute a connected region.
Figure 3

Schematic diagram of the connected domain.

2.4.3. Grayscale

Because the CCD image is a color image, computing and processing it takes considerable time, so the CCD image is usually converted into a corresponding gray image.

(1) Component Method. One of the three channel components of the color image is used as the gray value of the corresponding gray image:

Gray(x, y) = R(x, y), or G(x, y), or B(x, y).

We choose the best conversion method according to the conversion effect.

(2) Maximum Value Method. The maximum of the three channel components of the color image is used as the gray value of the corresponding gray image:

Gray(x, y) = max(R(x, y), G(x, y), B(x, y)).

(3) Average Method. The average of the three channel components of the color image is used as the gray value of the corresponding gray image:

Gray(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3.

(4) Weighted Average Method. The weighted average of the three channel components of the color image is used as the gray value of the corresponding gray image; a common weighting is

Gray(x, y) = 0.299 R(x, y) + 0.587 G(x, y) + 0.114 B(x, y).
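The four conversion methods above can be sketched in a few lines of NumPy. This is a minimal illustration; the function name to_gray and the use of the BT.601 weights (0.299, 0.587, 0.114) for the weighted method are assumptions of this sketch.

```python
import numpy as np

def to_gray(img, method="weighted"):
    # img is an H x W x 3 array with channels in R, G, B order.
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    if method == "component":      # pick a single channel, e.g. green
        gray = g
    elif method == "maximum":      # maximum of the three channels
        gray = np.maximum(np.maximum(r, g), b)
    elif method == "average":      # arithmetic mean of the channels
        gray = (r + g + b) / 3.0
    else:                          # weighted average (BT.601 weights)
        gray = 0.299 * r + 0.587 * g + 0.114 * b
    return gray.astype(np.uint8)

pixel = np.array([[[100, 150, 200]]], dtype=np.uint8)
print(to_gray(pixel, "maximum"))   # [[200]]
print(to_gray(pixel, "average"))   # [[150]]
```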
2.4.4. Histogram Correction

Histogram correction belongs to the category of image enhancement; its essence is to widen the gray interval or make the gray distribution uniform. Histogram equalization and histogram specification are the two commonly used correction methods.

(1) Histogram Equalization. Histogram equalization processes the original image pixels with a single gray mapping function so that the gray probability distribution of the processed image is uniform:
(1) Let the gray levels of the original image and the target image be r and s, respectively, with 0 ≤ r, s ≤ 1 and s = T(r); then T is the gray mapping function.
(2) When the gray level s of the target image is uniformly distributed, its probability density function is f(s) = 1.
(3) The gray mapping function is the cumulative distribution s = T(r) = ∫_0^r p_r(w) dw.
(4) The derivative of s in the mapping function is ds/dr = p_r(r).
(5) Substituting the derivative into f(s) = p_r(r)·(dr/ds) gives f(s) = 1, confirming the uniform distribution.

(2) Histogram Specification. Histogram specification transforms the gray histogram within a specific gray range into a target histogram by using a gray mapping function:
(1) Integrate the gray probability density function p_r(r) of the original image and the gray probability density function p_z(z) of the target image to obtain the cumulative distributions T(r) and G(z).
(2) The inverse transformation of the target image's cumulative distribution is z = G^(-1)(v).
(3) Replacing v in the inverse transformation with the mapped gray level of the original image gives z = G^(-1)(T(r)).
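The equalization steps above can be sketched for an 8-bit image: the cumulative distribution of gray levels serves as the mapping function T(r). This is a minimal sketch; the function name equalize_histogram and the toy 2 × 2 image are assumptions of this example.

```python
import numpy as np

def equalize_histogram(gray):
    # Histogram of the 256 gray levels, then its cumulative distribution,
    # which plays the role of the mapping function T(r).
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                      # normalize to [0, 1]
    mapping = np.round(cdf * 255).astype(np.uint8)
    return mapping[gray]                     # remap every pixel

# A low-contrast image occupying only gray levels 100-103 is
# stretched toward the full 0-255 range.
img = np.array([[100, 101], [102, 103]], dtype=np.uint8)
print(equalize_histogram(img))
```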

2.4.5. Image Noise Reduction Processing

Due to the limitation of photography conditions, outdoor landslide images usually contain various kinds of noise. Recognizing these images directly may not achieve the desired effect.

Image noise reduction processing can improve image recognition and image quality. General
noise reduction schemes are average filtering and median filtering in Figure 4.
Figure 4

Schematic diagram of median filtering.

Generally, assume that the discrete sequence input to the filter is {X_0, X_1, …, X_8} and that the corresponding non-negative integer weights are {W_0, W_1, …, W_8}. Weighted median filtering is defined as Y = Med{W_0 ◇ X_0, W_1 ◇ X_1, …, W_8 ◇ X_8}, where Y is the filtered output, ◇ denotes duplication (the sample X_i is repeated W_i times), and Med denotes taking the median of the expanded sequence.
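The weighted median just defined can be sketched directly: each sample is repeated according to its weight and the median of the expanded sequence is taken. The function name, the toy window values, and the choice of weighting the center pixel are assumptions of this sketch; with all weights equal to 1 it reduces to the plain median filter.

```python
import numpy as np

def weighted_median(samples, weights):
    # Repeat each sample X_i exactly W_i times (the duplication
    # operator), then take the median of the expanded sequence.
    expanded = np.repeat(samples, weights)
    return int(np.median(expanded))

window = [10, 12, 11, 200, 13, 12, 11, 10, 12]   # 200 is impulse noise
weights = [1, 1, 1, 1, 2, 1, 1, 1, 1]            # favor the center pixel
print(weighted_median(window, weights))          # 12: the impulse is rejected
```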

2.4.6. Image Binarization Processing

In image binarization, a pixel whose gray value is below the preset threshold is set to 0, and otherwise to 255, so every pixel becomes either black or white and the foreground region can be distinguished from the background region.

Let n_i be the number of pixels with gray value i. The total number of pixels in the image is

N = n_0 + n_1 + … + n_255.

The frequencies of pixels with different gray values are also different. The frequency of gray value i is

p_i = n_i / N.

Set a threshold T to divide the image into foreground and background. The frequencies (weights) of background and foreground are

w_0(T) = Σ_{i<T} p_i,  w_1(T) = Σ_{i≥T} p_i = 1 − w_0(T).

The averages of the background and foreground gray values are

μ_0(T) = Σ_{i<T} i·p_i / w_0(T),  μ_1(T) = Σ_{i≥T} i·p_i / w_1(T).

The average of the overall gray value of the image pixels is

μ = w_0(T)·μ_0(T) + w_1(T)·μ_1(T).

The maximum variance between foreground and background is

σ²(T) = w_0(T)·(μ_0(T) − μ)² + w_1(T)·(μ_1(T) − μ)².

Here w_0, w_1, μ_0, μ_1, and σ² are all functions of the threshold T; the T that maximizes σ²(T) is the optimal threshold:

T* = arg max_T σ²(T).
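This maximum-between-class-variance criterion (Otsu's method) can be sketched by sweeping every candidate threshold. The function name and the synthetic bimodal test image are assumptions of this sketch.

```python
import numpy as np

def otsu_threshold(gray):
    # Gray-level frequencies p_i over the 256 levels.
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    mu_total = np.dot(np.arange(256), p)       # overall mean gray value
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = p[:t].sum()                       # background weight
        w1 = 1.0 - w0                          # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = np.dot(np.arange(t), p[:t]) / w0
        mu1 = np.dot(np.arange(t, 256), p[t:]) / w1
        # Between-class variance sigma^2(T) from the derivation above.
        var = w0 * (mu0 - mu_total) ** 2 + w1 * (mu1 - mu_total) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal image: dark background around 20, bright foreground around 200.
img = np.concatenate([np.full(50, 20), np.full(50, 200)]).astype(np.uint8)
print(otsu_threshold(img))
```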
2.5. Image Morphological Processing

Non-morphological algorithms obtain their processing effect through function modeling, convolution transformation, and other methods, and play an active role in correcting the pixels of images, for example by correcting abnormal points through the consistency of functions. Morphological processing instead operates with small pixel units, referred to as structural elements; a structural element is typically a relatively small set of pixel points.

The two basic operations of mathematical morphology are erosion and dilation, and the morphological algorithms derived from them include the open operation and the closed operation.

2.5.1. Dilation

Let the original image be defined on Z². In morphological dilation, the structuring element D is scanned over the pixel set X; every position z at which the reflected structuring element, translated to z, intersects X belongs to the dilation result set:

X ⊕ D = { z | (D̂)_z ∩ X ≠ ∅ }.

Dilation enlarges the bright part (white area) of the image: all background points in contact with the foreground region are merged into the object, filling cavities and narrow gaps in the foreground region and connecting intermittent parts of the image.

2.5.2. Corrosion

If the original image is Z2, in the mathematical form of etching treatment, if the etching structural
element E scans the pixel group X and the etching structural element E belongs to the pixel
group X after moving Z in parallel, the group is considered as the etching result group:

2.5.3. Open Operation

The open operation erodes the image first and then dilates it. The arithmetic expression is

X ∘ B = (X ⊖ B) ⊕ B.

The operation filters out protruding details smaller than the structuring element B, cuts slender connections, and smooths object boundaries. However, because the crack at the sliding trailing edge is itself a fine structure, the open operation does not readily preserve it, so the structuring element must be chosen with care.
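A minimal NumPy sketch of erosion, dilation, and the open operation with a 3 × 3 square structuring element follows; the function names and the toy image (an isolated speck plus a solid block) are illustrative assumptions.

```python
import numpy as np

def erode(x):
    # A pixel survives only if its whole 3x3 neighborhood is foreground.
    p = np.pad(x, 1)
    return np.min([p[i:i + x.shape[0], j:j + x.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def dilate(x):
    # A pixel turns on if any pixel in its 3x3 neighborhood is foreground.
    p = np.pad(x, 1)
    return np.max([p[i:i + x.shape[0], j:j + x.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def open_op(x):
    # Opening = erosion followed by dilation.
    return dilate(erode(x))

img = np.zeros((7, 7), dtype=np.uint8)
img[1, 1] = 1                 # 1-pixel speck: smaller than the element
img[3:6, 3:6] = 1             # 3x3 block: survives opening
print(open_op(img).sum())     # 9: only the block remains
```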

2.6. Boundary Detection

In order to obtain the crack curve of the sliding trailing edge, the crack must be extracted at the junction between the sliding trailing edge (rock and soil) and the immovable mountain (green vegetation). Because of the difference in color and gray level between the sliding trailing edge and the immovable mountain, the gray gradient between edge pixels and nearby pixels in the image is large, so an edge detection algorithm can be used to extract the trailing-edge crack curve.

Global-search methods focus on the calculation of edge strength, using the gradient magnitude to represent it and the gradient direction to approximate the local direction of the edge. The Roberts operator and the Sobel operator are first-order global-search edge detectors.

2.6.1. Roberts Edge Detection Algorithm

The Roberts operator was proposed by Lawrence Roberts in 1963. It uses a local difference operator to find edges, as shown in Figure 5.
Figure 5

Schematic diagram of coordinate points of digital image pixels.

The vertical and horizontal differences approximating the gradient in the image are

G_x = f(x, y) − f(x+1, y+1),  G_y = f(x+1, y) − f(x, y+1).

The cross-difference approximate gradient magnitude of f(x, y) in the image is

G(x, y) = |G_x| + |G_y|.

When G(x, y) is greater than a preset threshold, the point (x, y) is regarded as an edge point.
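The Roberts cross difference can be sketched with array slicing; the function name, the threshold value, and the synthetic step-edge image are assumptions of this example.

```python
import numpy as np

def roberts_edges(f, threshold):
    # Gradient magnitude |f(x,y) - f(x+1,y+1)| + |f(x+1,y) - f(x,y+1)|
    # computed for every pixel at once via shifted slices.
    f = f.astype(np.int32)
    g = (np.abs(f[:-1, :-1] - f[1:, 1:]) +
         np.abs(f[1:, :-1] - f[:-1, 1:]))
    return g > threshold          # edge points exceed the preset threshold

# A vertical step edge between a dark left half and a bright right half.
img = np.zeros((4, 4), dtype=np.uint8)
img[:, 2:] = 255
print(roberts_edges(img, 100).astype(int))
```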

2.6.2. Sobel Edge Detection Algorithm

The Sobel operator was proposed by Irwin Sobel in 1968. As a weighted-average edge detection operator, the Sobel operator assumes that nearby pixels do not influence the current pixel equally, so pixels at different distances are given different weights.

Sobel's convolution kernels are

G_x = [ −1 0 1 ; −2 0 2 ; −1 0 1 ],  G_y = [ −1 −2 −1 ; 0 0 0 ; 1 2 1 ].

The operator consists of two 3 × 3 matrices, which represent the transverse and longitudinal directions, respectively. Convolution is performed over the 3 × 3 neighborhood centered on f(x, y) to calculate the derivatives in the x and y directions.

Let the image be I and the threshold be T. The gradient magnitude is

G(x, y) = |G_x ∗ I| + |G_y ∗ I|.

f(x, y) is regarded as an edge point when G(x, y) is greater than the threshold T.
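The Sobel kernels and the |Gx| + |Gy| threshold test can be sketched as follows. The helper applies the kernels by sliding-window correlation (a kernel flip would give true convolution, which only changes the sign and does not affect the magnitude); the function names and the synthetic step-edge image are assumptions of this sketch.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # transverse kernel
KY = KX.T                                             # longitudinal kernel

def apply3x3(f, k):
    # Slide the 3x3 kernel over the image (correlation); the output
    # shrinks by the 1-pixel border where the window does not fit.
    f = f.astype(np.int32)
    out = np.zeros((f.shape[0] - 2, f.shape[1] - 2), dtype=np.int32)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * f[i:i + out.shape[0], j:j + out.shape[1]]
    return out

def sobel_edges(f, threshold):
    gx, gy = apply3x3(f, KX), apply3x3(f, KY)
    return (np.abs(gx) + np.abs(gy)) > threshold

img = np.zeros((5, 5), dtype=np.uint8)
img[:, 3:] = 255      # vertical step edge
print(sobel_edges(img, 200).astype(int))
```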

2.6.3. Gauss–Laplacian Edge Detection Algorithm

If only a first derivative is taken, the gradient reaches a local extremum at an edge, so the exact edge position cannot be judged from it alone; therefore the second derivative of the first derivative is examined, whose zero crossing marks the edge. Before and after the zero crossing there are a peak and a trough.

Smoothing with a Gaussian and then applying the Laplacian finally gives the Gauss–Laplacian (LoG) operator (up to a normalization constant):

∇²G(x, y) = ((x² + y² − 2σ²) / σ⁴) · exp(−(x² + y²) / (2σ²)).

2.6.4. Canny Edge Detection Algorithm

The Canny operator was proposed by John F. Canny in 1986. As the most common edge detection method, its steps are as follows.

(1) Noise Removal. As with the Gauss–Laplacian transform, in order to reduce the interference that noise and the like would cause in the processing result, noise removal is applied to the target image first.

A 3 × 3 neighborhood around the coordinate point (x, y) is used, with the center point at (0, 0); x and y are integers, and the index i takes values 0–8. The larger the standard deviation σ, the stronger the smoothing effect. The two-dimensional Gaussian function is

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)).

Let the gray values of the 3 × 3 region be Z_0–Z_8 and weight each of them by the corresponding Gaussian kernel coefficient; adding the nine weighted values gives the Gaussian blur value of the center point Z_0.

(2) The Amplitude and Direction of the Gradient Are Calculated by Finite Differences of First Partial Derivatives. An image edge has two parameter attributes: direction and amplitude. Along the moving direction of the edge, the gray value changes slowly; perpendicular to the moving direction of the edge, the gray value changes strongly.

In the Gaussian-filtered image, a 2 × 2 region is used, and the two gradients in the x-direction and y-direction are calculated by first-order finite-difference approximation, as shown in Figure 6.
Figure 6

Gradient amplitude and direction.

The gradients in the x and y directions are

G_x ≈ [f(x+1, y) − f(x, y) + f(x+1, y+1) − f(x, y+1)] / 2,
G_y ≈ [f(x, y+1) − f(x, y) + f(x+1, y+1) − f(x+1, y)] / 2.

Thus, the amplitude and direction of the gradient at the point are obtained:

M(x, y) = √(G_x² + G_y²),  θ(x, y) = arctan(G_y / G_x).

(3) Suppressing Non-Maximum Values of the Gradient Amplitude. For the 8 neighbors of a 3 × 3 region, as shown in Figure 7, the gradient direction is quantized to the four directions 0°, 45°, 90°, and 135°, and a pixel is retained as an edge candidate only if its amplitude is a local maximum along its gradient direction.
Figure 7

Gradient direction value.

(4) Edge Connections Are Detected by a Double-Threshold Algorithm. Two thresholds T1 and T2 are set such that T2 = 2T1, producing two threshold edge images N1 and N2. N1, obtained with the low threshold, contains a number of false edges; N2, obtained with the high threshold, is intermittent (its edges have breaks). For edge connection, whenever a break point N2[x, y] of an edge appears in the N2 image, the algorithm searches the eight neighboring positions in the N1 image for points that can connect the break, continuing until the edge in the N2 image is closed. The flowchart is shown in Figure 8.

Figure 8

Flowchart of double threshold algorithm.
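The double-threshold linking step can be sketched as hysteresis over the gradient magnitude: strong edges are kept outright, and weak edges are kept only if they connect, through 8-neighborhoods, to a strong edge. The function name and the toy magnitude array are assumptions of this sketch.

```python
import numpy as np

def hysteresis(mag, low, high):
    strong = mag >= high                 # kept unconditionally
    weak = (mag >= low) & ~strong        # kept only if linked to strong
    out = strong.copy()
    changed = True
    while changed:                       # grow strong edges into weak ones
        changed = False
        ys, xs = np.nonzero(weak & ~out)
        for y, x in zip(ys, xs):
            y0, y1 = max(y - 1, 0), min(y + 2, mag.shape[0])
            x0, x1 = max(x - 1, 0), min(x + 2, mag.shape[1])
            if out[y0:y1, x0:x1].any():  # an 8-neighbor is already an edge
                out[y, x] = True
                changed = True
    return out

mag = np.array([[0,  40, 0],
                [0,  60, 0],
                [0, 120, 0]])
# With low=30, high=100: 120 is a strong edge; 60 and 40 join via linking.
print(hysteresis(mag, 30, 100).astype(int))
```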

2.6.5. Comparison of Edge Detection Operators

Table 3 shows a comparison of the advantages and disadvantages of each edge detection
operator.
Table 3

Comparison of edge detection operators.

Here, the image used in the edge detection step is a binary image, and the positions of edge points need to be determined correctly. Weighing the advantages and disadvantages of these operators, the Canny operator is used for edge detection.

2.7. Feature Parameter Setting

An image feature is a property that distinguishes one class of objects from other classes. Here, the feature parameter is the proportion of the image source that a candidate region occupies, with values ranging from 0 to 1.

Because the connected regions delineated by the Canny edge detection operator include both interference regions and the crack curve, feature parameters must be set to remove the interference regions formed by branches and boulders from the connected regions.

The length of a foreground region projected onto the image width, as a proportion of the entire image width, is its projection coefficient. When this coefficient is larger than a preset feature parameter, the connected region is considered to contain the sliding trailing-edge crack curve.

As shown in Figure 9, the width and length of the image are X and Y, respectively, and rectangular boxes bound the two connected regions A and B and their edges.
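The projection-coefficient test can be sketched as follows: the horizontal extent of a region's column projection is divided by the image width, and regions below the feature parameter are discarded as interference. The function name, the 0.5 feature parameter, and the toy mask are assumptions of this sketch.

```python
import numpy as np

def projection_coefficient(mask):
    # Columns the region touches when projected onto the width axis.
    cols = np.nonzero(mask.any(axis=0))[0]
    if cols.size == 0:
        return 0.0
    # Horizontal extent of the projection over the full image width.
    return (cols[-1] - cols[0] + 1) / mask.shape[1]

img = np.zeros((10, 100), dtype=bool)
img[5, 10:90] = True            # crack-like region spanning 80 columns
feature_param = 0.5             # preset feature parameter
coeff = projection_coefficient(img)
print(coeff)                    # 0.8
print(coeff > feature_param)    # True: kept as a candidate crack curve
```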
