
Signal, Image and Video Processing (2024) 18:7789–7800

https://doi.org/10.1007/s11760-024-03428-2

ORIGINAL PAPER

Object detection based deinterleaving of radar signals using deep learning for cognitive EW

Mehmet Burak Kocamış1,2 · Adnan Orduyılmaz1 · Selçuk Taşcıoğlu2

Received: 12 April 2024 / Revised: 2 July 2024 / Accepted: 5 July 2024 / Published online: 16 July 2024
© The Author(s) 2024

Abstract
In a real-world environment, multifunction radars (MFRs) pose major challenges to the electronic support system that is a part
of cognitive electronic warfare. One of the main problems is deinterleaving of signals belonging to MFRs. The ability of MFRs
to change their carrier frequency, pulse width, and pulse repetition interval (PRI) from pulse to pulse makes the deinterleaving
task challenging. In this paper, an object detection based deinterleaving approach exploiting amplitude patterns caused by radar
beam motions, which are determined based on radar antenna scan types, is proposed to deinterleave MFR signals. Amplitude
patterns are created in two-dimensional images using amplitude (AMP) and time of arrival (TOA) parameters obtained from
radar pulses. Deinterleaving is performed using a deep learning algorithm applied to these images. To the best of the authors’
knowledge, object detection based deinterleaving of radar signals using AMP-TOA images has not been considered so far.
Contrary to the common approaches, in which searching for PRI patterns is required, amplitude patterns formed by the radar
beam motions on the electronic support system are used in the proposed method. This enables robust identification of signals
of MFRs having PRI, carrier frequency, and pulse width agility. With the proposed method, the intention of the radar, i.e.,
search or tracking, can be acquired in the deinterleaving stage, which ensures earlier situational awareness. The performance
of the method is evaluated through simulations for one to five simultaneous radar signals, in scenarios with missing pulses at
different rates. Simulation results demonstrate that, on average, more than 0.98 mean average precision (mAP50) is achieved
at a 30% missing pulse rate. The performance for search radar signals is slightly lower than that for tracking radar signals due
to the lower number of pulses received by the system; nevertheless, more than 0.93 AP50 is achieved for search radar signals
in the presence of five radar signals with different missing pulse rates. In addition, real-time performance can be achieved
using the proposed method on a GPU platform.

Keywords Antenna scan type · Cognitive EW · Deep learning · Deinterleaving · Multifunction radars · Object detection

Adnan Orduyılmaz, Selçuk Taşcıoğlu have equally contributed to this work.

Corresponding author: Mehmet Burak Kocamış, burak.kocamis@tubitak.gov.tr
Adnan Orduyılmaz, adnan.orduyilmaz@tubitak.gov.tr
Selçuk Taşcıoğlu, selcuk.tascioglu@eng.ankara.edu.tr

1 Bilgem İltaren, The Scientific and Technological Research Council of Türkiye, 06800 Ankara, Turkey
2 Department of Electrical and Electronics Engineering, Ankara University, 06830 Ankara, Turkey

1 Introduction

In electronic support (ES) systems, deinterleaving of multifunction radar signals is a challenging problem due to the agility of these radars in terms of carrier frequency (RF) [1], pulse repetition interval [2], pulse width (PW) [3], and modulation on pulse (MOP) [4, 5]. ES systems consist of two subsystems: Radar Warning Receiver (RWR) and Electronic Intelligence (ELINT). Deinterleaving of MFR signals is error prone in RWR systems since these systems rely on the prior information that is difficult to obtain due to parameter agility [6]. Hence, ELINT systems, which do not use prior information for deinterleaving, are widely preferred for the identification of MFR signals [2]. In this study, the deinterleaving problem for ELINT systems is considered within the scope of cognitive electronic warfare (EW) [7].


These systems process incoming radar pulses to generate pulse description words (PDWs) consisting of pulse parameters called AOA (angle of arrival), RF, PW, AMP, TOA, and MOP.

In a conventional ELINT system given in Fig. 1 (left), deinterleaving of radar signals is performed using TOA parameter of the pulses. Several approaches for TOA based deinterleaving have been developed so far. Histogram based, e.g., CDIF [8, 9], SDIF [10], PRI Transform [11–15], and Wavelet Transform based [16–18] algorithms have been extensively studied in the literature. In these algorithms, determining a threshold level for varying environmental conditions poses a problem. Improper setting of the threshold could lead to the prediction of false PRI values or missed detection. For deinterleaving, PRI types were also discussed in recent studies [19–21]. In [19, 20], aperiodic PRI types, e.g., jitter PRI, are not considered, which prevents achieving a complete solution. The method in [21] allows for the recognition of aperiodic PRI types to a certain extent. For TOA based deinterleaving, deep learning models such as autoencoders and recurrent neural networks have also been employed [22–27]. The networks in [22, 23, 26] dealt with only periodic PRI types. They are not suitable for the detection of aperiodic PRI types. Wavelet [24] and PRI [25] transforms were used as preprocessing tasks in order to obtain PRI-TOA images. Besides, MOP based deinterleaving was studied by applying deep learning models [4, 5, 28–30]. This approach does not suggest a complete solution since radar signals without any modulation cannot be classified.

MFRs can have a wide range of PRI values and change these values from pulse to pulse. In addition, the number of pulses received from search radars may not be sufficient to extract the true PRI type and values due to the short illumination time. Hence, TOA based deinterleaving may yield unsatisfactory results for some modes of MFRs. In [31], the amplitude parameter, in addition to the PRI parameter, was used to deinterleave the signals of mechanical scanning radars; however, the deinterleaving problem for tracking radars and electronic scan radars was not discussed. In [32], the amplitude parameter was utilized in an unsupervised blind source separation method based on discrete wavelet transform (DWT) for deinterleaving of radar signals. Deinterleaving of search radar signals with sidelobes was not considered and was identified as a research topic.

In this paper, we propose a novel deep learning based object detection method utilizing the amplitude parameter along with the TOA parameter for deinterleaving of search and tracking radar signals. The overall signal processing flowchart, including the proposed deep learning based deinterleaving method, is shown in Fig. 1 (right). Our study focuses on the deinterleaving task in this signal processing flow. The tasks in the other blocks of the flowchart are out of the scope of this study. The main differences between the conventional and proposed flowcharts are as follows: Clustering is performed considering multiple parameters (AOA, RF, PW, AMP) in the conventional flowchart [33, 34], whereas only AOA clustering is carried out in the proposed approach. This reduces the computational burden of the clustering stage. Pulses are extracted based on PRI searching in the conventional flow. On the other hand, this task is performed by detecting the pixels belonging to objects in the proposed approach, in which the exhaustive PRI search process is not required. Moreover, the proposed methodology simplifies the antenna scan type recognition by reducing the possible scan types to be considered. This is because the antenna scan type is recognized only by considering the scan types in the class to which the detected object belongs. On the other hand, antenna scan type and period estimations are carried out after some data has been captured and stored for conventional methods considering all scan types [35–37].

The main contributions of this study are as follows: First, the proposed deinterleaving method utilizes amplitude patterns formed by radar beam motion on ES systems, contrary to the common approach in which searching for PRI patterns or values is required. Hence, the method can be used in the case of any PRI agility from pulse to pulse. Second, the intention of the radar, i.e., search or tracking, can be acquired in the deinterleaving stage using the proposed method. This ensures earlier situational awareness in ES systems. Third, the proposed method utilizes a deep learning based object detection algorithm that can be implemented on a GPU platform, enabling real-time performance. To the best of our knowledge, deinterleaving using deep learning based object detection, in which amplitude patterns associated with radar antenna scan types are exploited, is considered for the first time. Lastly, the proposed method provides high accuracy even in the presence of missing pulses at high rates.

The rest of the paper is organized as follows. In Sect. 2, amplitude patterns received by ELINT systems are analyzed for search and tracking radars. In Sect. 3, the proposed method is presented, and its training procedure is explained. The performance evaluation results are presented in Sect. 4. Finally, Sect. 5 concludes the paper.


Fig. 1 Signal processing flowchart for ELINT systems: conventional flowchart (left), proposed flowchart including the proposed method (right)

2 Received amplitude patterns

The tasks of radars are to search or track the target of interest. The amplitude patterns observed in ELINT system for different kinds of search and tracking radars are shown in Fig. 2. For search radars, e.g., circular, bidirectional sector, and raster scan radars, the amplitude pattern is generally observed as a sinc function in ELINT systems due to the beam motion of these radars. Since the amplitude patterns caused by unidirectional sector and helical scans are similar to those of circular and raster scans, respectively, they are not represented in Fig. 2. For tracking radars, e.g., LORO, COSRO, electronic, lobe switch, and conical scan radars, the amplitude pattern is usually observed as a constant or sinusoidal pattern at the receiver.

The constant amplitude pattern observed in ELINT system is modeled as follows:

AMP_n = S(TOA_n) + W_n,  (1)

where AMP_n and TOA_n denote the amplitude and time of arrival values of nth pulse, respectively. The maximum value of n is N, i.e., the total number of pulses for each emitter. S(TOA_n) is an offset value indicating the actual amplitude value at the time instant TOA_n. W_n denotes Gaussian measurement error with zero mean and standard deviation σ. In (1), S(TOA_n) is a constant for LORO, COSRO, lobe switch, and electronic scans.


Fig. 2 Amplitude patterns observed in ELINT system in association with different radar antenna scan types


The sinusoidal amplitude pattern caused by conical scan [2] is formulated as follows:

AMP_n = S(TOA_n) + D sin(2π TOA_n / P + φ) + W_n,  (2)

where S(TOA_n) is an offset value for sinusoid with a period P and peak value D. The phase of the sinusoid φ is assumed to be uniformly distributed in the interval [0, 2π).

Lastly, the sinc amplitude pattern [2] caused by circular, sector, helical, and raster scan radars is given by the formula

AMP_n = S(TOA_n) + C |sin(2π TOA_n / T) / (2π TOA_n / T)| − C + W_n,  (3)

where S(TOA_n) is an offset value indicating the maximum amplitude value in the pattern. C is a constant defined to control the sidelobe level of the sinc pattern, and T is the illumination time.
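For readers who want to reproduce these patterns, the following is a minimal NumPy sketch of Eqs. (1)-(3). It is an illustration rather than the authors' code: the offset S(TOA_n) is taken as a constant, the sinc peak is placed at TOA = 0 for simplicity, and the example parameter values are arbitrary choices within the intervals used later in Table 2.

```python
# Minimal sketch of Eqs. (1)-(3); parameter names follow the text above.
# Assumptions: S(TOA_n) is a constant offset, and the sinc peak sits at TOA = 0.
import numpy as np

rng = np.random.default_rng(0)

def constant_pattern(toa, offset, sigma):
    """Eq. (1): constant amplitude (LORO, COSRO, lobe switch, electronic scan)."""
    return offset + rng.normal(0.0, sigma, size=toa.shape)

def sinusoidal_pattern(toa, offset, d, p, phi, sigma):
    """Eq. (2): sinusoidal amplitude caused by a conical scan (peak value D, period P)."""
    return offset + d * np.sin(2.0 * np.pi * toa / p + phi) + rng.normal(0.0, sigma, size=toa.shape)

def sinc_pattern(toa, offset, c, t_illum, sigma):
    """Eq. (3): sinc-shaped amplitude caused by circular/sector/helical/raster scans."""
    x = 2.0 * np.pi * toa / t_illum
    lobe = np.abs(np.sinc(x / np.pi))       # np.sinc(y) = sin(pi*y)/(pi*y), so this equals |sin(x)/x|
    return offset + c * lobe - c + rng.normal(0.0, sigma, size=toa.shape)

# Example: a 100 us stable-PRI pulse train observed over roughly one dwell (~100 ms)
toa = np.arange(0.0, 0.1, 100e-6)                                        # seconds
amp = sinc_pattern(toa, offset=-40.0, c=30.0, t_illum=0.05, sigma=0.3)   # dB-scale amplitudes
```

Because the amplitudes are on a dB scale, such synthetic (TOA, AMP) pairs can be written directly into the AMP-TOA images introduced in the next section.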
3 Proposed deinterleaving method

The details of the proposed object detection based deinterleaving method and its training procedure are explained in the following subsections.

3.1 Object detection based deinterleaving method

In this paper, we propose to use an object detection method for deinterleaving of MFR signals, in which AMP and TOA features are used to create 2D images, as shown in Fig. 3. In this image, the background pixels are black, and black pixels corresponding to PDWs are replaced with white pixels according to their TOA and AMP values in the x- and y-axis, respectively. The horizontal size of the image corresponds to the dwell time, defined as the duration of each equally divided part of the total duration during which the receiver is tuned to a specific frequency band. The vertical size of the image corresponds to the instantaneous dynamic range of the receiver.

Three classes, i.e., constant, sinusoidal, and sinc amplitude pattern classes, are defined for the problem considered in this study. Class names, labels, and corresponding radar scan types are given in Table 1. Illustrations of sample objects, as well as detection results, can be seen in Fig. 3. Object detection is performed using the YOLOv8n model. This single-stage algorithm [38, 39] is faster than two-stage object detection algorithms [40–42]. Since the considered images have plain black backgrounds, even the smallest version of the YOLOv8 family yields favorable deinterleaving performance.

Fig. 3 Object detection results for a generated AMP-TOA image containing patterns of three different classes: a constant, b sinusoidal, and c sinc

3.2 Training procedure

In the training process, information about both the coordinates of the bounding boxes and the class labels of the objects in images is required. Bounding boxes are rectangular shapes that enclose the objects of interest. The center, height, and width of the bounding boxes are given as input. A deep neural network learns objects inside the bounding boxes considering their class labels.

The parameter values used in generating the training and validation datasets and the intervals from which these values were chosen are presented in Table 2. The number of images used during training and validation is 3800 and 600, respectively. The size of the image is set to 416×416 pixels with horizontal and vertical resolutions of 240 µs and 0.1 dB, respectively. Consequently, the dwell time and instantaneous dynamic range are 99.84 ms and 41.6 dB, respectively. Training and validation images were generated by using a stable PRI type. PRI values were selected at equidistant points within the interval [2, 2000] µs.

In order to evaluate the robustness of the method against various environmental conditions, the following simulation parameters given in Table 2 were used: Missing pulse analysis, a fundamental environmental condition test extensively discussed in deinterleaving literature, was conducted with different missing pulse rates. Besides, images containing at most five radar signals were generated using different AMP measurement error levels.


Table 1 Classes and their corresponding scan types

Class        Label   Corresponding scan type
Constant     a       LORO, COSRO, electronic, lobe switching
Sinusoidal   b       Conical
Sinc         c       Circular, unidir./bidir. sector, raster, helical

Table 2 Simulation parameters for training and validation dataset

Parameters                      Values/intervals
Image size                      416×416
# of images                     3800 (training) / 600 (validation)
Resolution for x/y axis         240 µs / 0.1 dB
Size of x/y axis                99.84 ms / 41.6 dB
# of radar signals per image    {1, 2, 3, 4, 5}
AMP offset value                [−60, −18.4] dBm
Electronic illumination time    [3, 200] ms
Sinusoidal peak value           [2, 5] dB
Sinusoidal period               [5, 100] ms
# of lobes                      {2, 4}
Sinc illumination time          [3, 200] ms
Sinc sidelobe level             [13, 35] dB
σ for measurement error         [0.05, 0.7]
PRI modulation type             Stable
PRI values                      [2, 2000] µs
Time shift value                [−33, 33] ms
Missing pulse rate              {0, 10, 20, 30} %
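To make the image construction of Sect. 3.1 concrete, the sketch below rasterizes a list of (TOA, AMP) pairs into a 416×416 AMP-TOA image using the resolutions of Table 2. It is a plausible reading of the text rather than the authors' implementation; in particular, the lower edge of the instantaneous dynamic range (AMP_FLOOR) is an assumed value.

```python
# Minimal rasterization sketch (not the authors' code): one AOA cluster -> one AMP-TOA image.
import numpy as np

IMG_SIZE = 416
TOA_RES = 240e-6     # 240 us per pixel -> 416 px = 99.84 ms dwell (Table 2)
AMP_RES = 0.1        # 0.1 dB per pixel -> 416 px = 41.6 dB dynamic range (Table 2)
AMP_FLOOR = -60.0    # assumed lower edge of the instantaneous dynamic range, in dBm

def pdws_to_image(toa, amp):
    """Map pulses to a binary image: black background, white pixels at (TOA, AMP) positions."""
    img = np.zeros((IMG_SIZE, IMG_SIZE), dtype=np.uint8)
    cols = np.floor(toa / TOA_RES).astype(int)
    rows = np.floor((amp - AMP_FLOOR) / AMP_RES).astype(int)
    keep = (cols >= 0) & (cols < IMG_SIZE) & (rows >= 0) & (rows < IMG_SIZE)
    img[IMG_SIZE - 1 - rows[keep], cols[keep]] = 255   # row 0 is the image top, so flip the AMP axis
    return img

# Example: rasterize the sinc pulse train generated in the earlier sketch
# img = pdws_to_image(toa, amp)
```

In a multi-emitter scenario, the PDWs of all emitters in the same AOA cluster are drawn into the same image, and each emitter then appears as one object of class a, b, or c (Table 1).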

AMP measurement errors can significantly change the actual pixel location corresponding to the radar pulse by up to 21 pixels in AMP-TOA images. This value is obtained by taking the maximum AMP measurement error as 3σ corresponding to 2.1 dB for the AMP resolution of 0.1 dB. Including AMP measurement error is essential to evaluate the method's robustness in real-world applications. Measurement error is not considered for the TOA parameter since the TOA resolution value of 240 µs in the generated AMP-TOA image is significantly larger than the TOA measurement error, which is typically in the order of nanoseconds. Hence, measurement errors in the TOA parameter do not cause noticeable pixel-level changes in the AMP-TOA images, and the proposed method is robust to the TOA measurement errors.

In addition, AMP-TOA patterns were shifted at a time shift value along the x-axis, which allows the simulation of various time of arrival values for radar pulses. AMP-TOA patterns were also shifted at an AMP offset value along the y-axis, which simulates the dynamically changing distance between the radar and the receiver. Furthermore, the parameter values for search radars, i.e., sinc illumination time and sidelobe level, and for tracking radars, i.e., electronic illumination time, sinusoidal peak value, sinusoidal period, number of lobes, were selected at equidistant points within the intervals given in Table 2. Sinc sidelobe levels were used to simulate the dynamically changing distance between the radar and the receiver.

The performance of object detection algorithms is generally measured by a metric called mean average precision (mAP). This metric is obtained by taking the mean of the average precision (AP) values, which are calculated based on precision-recall curves for each class. The classification performance is high when the area under the curve, which is defined as AP, is high. Precision and recall are defined as follows:

Precision = TP / (TP + FP)  (4)

Recall = TP / (TP + FN)  (5)

where TP, FP, and FN are true positive, false positive, and false negative, respectively. Precision is the ratio of the number of true positives to the total number of predicted positives, and recall is the ratio of the number of true positives to the total number of actual positives. In object detection, TP and FP are calculated based on a threshold defined for intersection over union, which is the ratio of the overlap and union areas between the predicted and the ground truth bounding boxes.
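The quantities in Eqs. (4) and (5), together with the intersection over union (IoU) test that decides whether a prediction counts as a true positive, can be written in a few lines; the following is a generic sketch, not tied to any particular detection library.

```python
# Precision/recall per Eqs. (4)-(5) and the IoU ratio used to declare a true positive.
def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2); IoU = overlap area / union area."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection of the correct class is a true positive when iou(pred, ground_truth) meets the threshold.
```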


Table 3 Hyperparameters for training

Hyperparameters    Values
Learning rate      0.005
Batch size         16
# of epochs        2000
Optimizer          Stochastic gradient descent
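As a rough illustration of how a YOLOv8n model could be trained with the Table 3 settings, the sketch below uses the Ultralytics Python API [39]. The dataset configuration file and its path are placeholders, not the authors' files, and the exact training script used in the paper is not published.

```python
# Minimal training sketch with the Table 3 hyperparameters (assumed Ultralytics API [39]).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # smallest model of the YOLOv8 family
model.train(
    data="amp_toa.yaml",              # hypothetical dataset config with 3 classes: a (constant), b (sinusoidal), c (sinc)
    imgsz=416,                        # 416x416 AMP-TOA images
    epochs=2000,
    batch=16,
    lr0=0.005,                        # initial learning rate
    optimizer="SGD",
)
metrics = model.val()                 # reports mAP50 on the validation split
```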

Table 4 Simulation parameters for testing dataset

Parameters             Values/intervals
# of images            4000
PRI modulation type    Jitter
Mean PRI values        [2, 2000] µs
PRI jitter deviation   30% around mean value
Missing pulse rate     {0, 15, 25, 30} %
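The jitter PRI and missing pulse conditions of Table 4 can be emulated with a short generator such as the one below. This is an illustrative sketch, not the authors' simulator: the pulse-to-pulse PRI is drawn here uniformly within ±30% of the mean value (the paper does not state the jitter distribution), and pulses are then dropped at the desired missing pulse rate.

```python
# Sketch of a jitter-PRI pulse train with missing pulses (parameters per Table 4).
import numpy as np

rng = np.random.default_rng(1)

def jitter_pri_toas(mean_pri, duration, jitter=0.30):
    """TOA sequence whose pulse-to-pulse PRI varies within +/- 30% of the mean value."""
    toas, t = [], 0.0
    while t < duration:
        toas.append(t)
        t += mean_pri * (1.0 + rng.uniform(-jitter, jitter))
    return np.array(toas)

def drop_pulses(toas, missing_rate):
    """Randomly remove pulses to simulate a given missing pulse rate."""
    return toas[rng.random(toas.size) >= missing_rate]

toa = drop_pulses(jitter_pri_toas(mean_pri=500e-6, duration=99.84e-3), missing_rate=0.30)
```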

In this study, the intersection over union threshold was defined as 0.5, which means that the mAP50 metric was used. mAP50 is calculated by taking the mean of AP50 values of each class.

The hyperparameters for model training are given in Table 3. Various combinations of learning rates and the number of epochs were tested to ensure that both the training and validation loss values exhibited a smooth decline as the number of epochs progressed during the training process. To prevent overfitting, intervals where the validation loss changed from decreasing to increasing and subsequent intervals were excluded. As a result of these tests, the learning rate and the number of epochs were selected as 0.005 and 2000, respectively. The batch size yielding the highest mAP50 value on the validation dataset was 16. The performance of the Adam and Stochastic Gradient Descent (SGD) optimizers was evaluated, and it was found that SGD produced superior results.

4 Performance evaluation

The deinterleaving performance results of the proposed method for the testing dataset generated under various scenarios are presented. Besides, a qualitative comparison with existing deinterleaving methods is given.

4.1 Test results

Test images were generated to assess the deinterleaving performance of the proposed method. The parameter intervals for the testing stage were taken from Table 2, as in the training stage. The testing dataset parameters differing from those used during the training stage are presented in Table 4. The number of images used for testing is 4000. The parameter values in Table 2 were randomly selected from the given intervals for testing, contrary to the selection procedure used in the training stage, in which parameter values were selected at equidistant points within the intervals. In order to test the robustness of the proposed method against PRI agility, the jitter PRI type was used to test the model trained with only the stable PRI type. A significantly large jitter deviation of 30% from the mean value within the range of [2, 2000] µs was used for the testing dataset.
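At test time, deinterleaving amounts to running the trained detector on each AMP-TOA image. A minimal inference sketch, again assuming the Ultralytics API and a hypothetical weights path, is given below; each detected box is one deinterleaved emitter, and its class immediately narrows the candidate antenna scan types per Table 1.

```python
# Minimal inference sketch (assumed Ultralytics API, hypothetical file paths).
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")        # trained YOLOv8n weights
results = model.predict(source="amp_toa_test.png", imgsz=416, conf=0.25)

for box in results[0].boxes:
    label = results[0].names[int(box.cls)]                # 'a' constant, 'b' sinusoidal, 'c' sinc (Table 1)
    x1, y1, x2, y2 = box.xyxy[0].tolist()                 # bounding box in pixel (TOA, AMP) coordinates
    print(f"emitter class {label}, confidence {float(box.conf):.2f}, "
          f"box = ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```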


Moreover, different missing pulse rates were also tested for the trained model.

Fig. 4 Two sample AMP-TOA images containing five (a) and four (b) radar signals: the x- and y-axis represent TOA and AMP of the radar pulses, respectively (labels of axes are intentionally removed to have better visualization). Correctly deinterleaved signals are depicted within solid line bounding boxes, while missed signals are represented within dashed lines. In (a), all of the five radar signals are deinterleaved. In (b), three out of four radar signals are deinterleaved, and one of them is missed

Figure 4 demonstrates the deinterleaving results for two sample images. In Fig. 4a, the image contains five different radar signals corrupted by measurement error with a standard deviation of 0.4 and a missing pulse rate of 0%. Five objects corresponding to five radar signals generated from constant (two signals), sinusoidal (one signal), and sinc (two signals) classes were correctly detected, which corresponds to true positives. This means that these radar signals were deinterleaved correctly. Bounding boxes for detected objects are depicted with solid lines. A predicted class label and confidence score are given for each bounding box. In Fig. 4b, as a more challenging scenario, the image contains four different radar signals corrupted by measurement error with a standard deviation of 0.7 and a missing pulse rate of 30%. Two signals from both the sinusoidal and sinc classes were generated, resulting in a total of four radar signals. Three out of four objects were correctly detected. However, one object belonging to the sinc class was missed, which corresponds to a false negative. A dashed rectangle, which is not a bounding box, is plotted around the missed object. Generally, the number of pixels for a signal belonging to the sinc pattern is small compared to signals belonging to the other two classes since a search radar causing a sinc pattern illuminates the target of interest in a limited time. Therefore, it might be difficult to detect an object belonging to a sinc pattern when the missing pulse rate is high. The signal shown in Fig. 4b, within dashed lines, is an example radar signal for this case.

The performance of the proposed method in terms of the mAP50 metric at different missing pulse rates for a varying number of radar signals to be deinterleaved is presented in Fig. 5. In all scenarios, including the most challenging scenario where the number of signals is five and the missing pulse rate is 30%, mAP50 is greater than 0.95. At a 30% missing pulse rate, the average mAP50 values calculated over varying numbers of radar signals are greater than 0.98. The deviation in mAP50 values due to missing pulse rates for any number of signals is small, indicating that the method is robust to missing pulses. As the number of signals increases, deinterleaving performance slightly decreases due to the higher probability of observing overlapping objects.

Fig. 5 Deinterleaving performance versus the number of radar signals for different missing pulse rates

The deinterleaving performance in terms of the AP50 metric for each class with respect to a varying number of radar signals is shown in Fig. 6. AP50 values were obtained over all scenarios with different missing pulse rates. The proposed method shows similar performance for the sinusoidal and constant classes up to four signals. As the number of signals increases above three, the performance for the sinc class slightly decreases compared to those for the other two classes. The reason for that is that the patterns in the sinc class contain a small number of pixels, as stated in the explanation of Fig. 4b. As the number of signals increases, the number of overlapping objects increases. In this case, the detection rate for sinc class objects with a small number of pixels decreases slightly compared to those for the objects belonging to the other two classes. However, the AP50 value for the sinc class in the presence of five radar signals with different missing pulse rates is still greater than 0.93.

Fig. 6 Deinterleaving performance versus the number of radar signals for each class

Table 5 Confusion matrix for testing dataset

                 Actual
Predicted    a       b       c
a            0.96    0       0
b            0       0.97    0.01
c            0       0.01    0.92
FN           0.04    0.02    0.07

The confusion matrix obtained over all scenarios with different missing pulse rates up to 30% and varying number of radar signals is presented in Table 5.


Table 6 Computation time of the proposed method for a given dwell time of 99.8 ms

GPU     CPU-computer    Raspberry Pi    Raspberry Pi-NCNN
6 ms    38 ms           333 ms          35 ms

The ranking of performance in terms of classification is similar to that of the AP50 metric given in Fig. 6. The correct classification rate for the constant class is close to that for the sinusoidal class, and the constant class is not confused with the other two classes. In this table, FN denotes the false negatives corresponding to missed detection. The missed detection rate was obtained in ascending order for the sinusoidal, constant, and sinc classes.

The computation time of the proposed method was also analyzed. For a dwell time of 99.8 ms, the proposed method was tested in Python on a Tesla V100 server (GPU), an Intel Core i7 computer (CPU), and an ARM Cortex-A76 Raspberry Pi 5 (CPU). The results for deploying the proposed method on different hardware are provided in Table 6. The CPU (computer) and GPU (server) results satisfy the real-time requirement since computation times for both cases are less than the dwell time of 99.8 ms. However, the CPU (Raspberry Pi 5) result does not satisfy the real-time requirement since it has limited processing power. In order to overcome this problem, the proposed computer vision model should be optimized for embedded systems. The NCNN framework was used for the optimization, in which case the real-time requirement was satisfied for Raspberry Pi 5. It is possible to meet the real-time requirement with the CPU; however, the importance of optimization increases when limited hardware is used.

The computation times provided in Table 6 represent the performance of the method for a single image corresponding to a single AOA cluster. However, depending on different hardware configurations, parallel or sequential implementation of the proposed method for multiple AOA clusters should be considered. For parallel computing, in GPUs or CPUs with multiple cores, the number of AOA clusters to be processed can be increased while the total computation time remains the same. For sequential computing, the number of AOA clusters to be processed can be calculated by dividing the dwell time by the processing time for a single AOA cluster.

4.2 Comparison with existing methods

The proposed method is qualitatively compared with three existing deinterleaving techniques in terms of five different aspects in Table 7. MFRs that are extensively used in modern radar systems have PRI agility. Therefore, robustness to PRI agility is a significant property for deinterleaving methods used for MFRs. The methods in [14, 15] are classified to have a moderate robustness level since periodic and aperiodic PRI types can be recognized by these methods, provided that the threshold is adjusted properly. The methods used for comparison in Table 7 except for [32] employ only the TOA parameter for deinterleaving, and eventually, their performance depends on PRI value. In the method in [32], since the amplitude parameter is also used for deinterleaving, the robustness level is considered to be high. Our proposed method utilizes amplitude patterns formed by radar beam motion on ES systems without searching for any PRI patterns or values, which provides high robustness to PRI agility.

In the literature, test signals are generally generated assuming that the pulses with low PRI values are received during the whole dwell time, i.e., a large number of pulses are usually used for testing. That means, for testing purposes, signals from tracking radars that continuously illuminate the target are usually employed. All the compared methods are tested for tracking radar signals, as can be seen from Table 7. For the proposed method, tracking radars are considered within the constant and sinusoidal classes, and it has been shown that the method can be used for deinterleaving of tracking radar signals with high accuracy.

For search radars, the illumination time is generally shorter than the dwell time, and high PRI values are used, which means that the number of received radar pulses is less than that of tracking radars. Hence, it is generally difficult to deinterleave search radar signals. In [32], it has been shown that the method can be used for search radar signals. For our proposed method, search radars are considered in the sinc class. Additionally, it has been demonstrated that the electronic scan signals in the constant class, which can be considered as track-while-scan mode signals, can be successfully deinterleaved by using the proposed method. Signals belonging to this mode have a short duration, and they are generated by MFRs while performing search and tracking missions together.

Missing pulses should be considered to simulate real-world conditions. All the methods given in Table 7 are tested under missing pulse conditions. Real-time performance is another property to be considered. In [15], this property of the algorithm is not mentioned. In [32], the method is expressed to be suitable for real-time implementation; however, a quantitative evaluation is not provided. In [14], it is stated that the algorithm is suitable for real-time implementation based on the observation that the execution time of the algorithm is less than the dwell time. Likewise, as can be seen from Table 6, the proposed method can be implemented in a computation time lower than the dwell time, enabling real-time performance.
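Returning briefly to the deployment figures in Table 6, the sketch below shows one plausible way to obtain an embedded-friendly model and to read the reported per-cluster times as a sequential capacity per dwell. The export call assumes the Ultralytics API and its NCNN export format; the timings are simply those of Table 6, not re-measured here.

```python
# Deployment sketch (assumptions: Ultralytics export API with NCNN support; hypothetical weights path).
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")
model.export(format="ncnn")                      # optimized model for the Raspberry Pi 5 deployment

# Sequential capacity per dwell: AOA clusters per dwell = dwell time / time per cluster (Table 6).
DWELL_MS = 99.8
for platform, per_cluster_ms in {"GPU": 6, "CPU (computer)": 38, "Raspberry Pi 5 + NCNN": 35}.items():
    print(f"{platform}: {int(DWELL_MS // per_cluster_ms)} AOA clusters per dwell")
```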


Table 7 Qualitative comparison of the proposed method with existing deinterleaving techniques

Methods                            Level of robustness to PRI agility    Tracking radar signal    Search radar signal    Effect of missing pulses    Real-time performance
Synthetic alg. [14]                Moderate                              Tested                   Not tested             Reliable                    Suitable
Correlation & PRI transform [15]   Moderate                              Tested                   Not tested             Reliable                    Not mentioned
DWT [32]                           High                                  Tested                   Tested                 Reliable                    Suitable*
Proposed method                    High                                  Tested                   Tested                 Reliable                    Suitable

*Not evaluated quantitatively

5 Conclusion

In this paper, a deinterleaving method for MFRs, which can change their carrier frequency, pulse width, and pulse repetition interval parameters from pulse to pulse, is proposed. Deinterleaving is performed using an object detection based deep learning method applied to two dimensional images constructed from AMP and TOA parameters of received pulses. Each object in the image corresponds to an amplitude pattern formed by the radar beam motion on the ES system. The performance of the method is evaluated for a varying number of radar signals and different missing pulse rates through simulations and measured by the mAP50 metric as well as a confusion matrix. The results show that, on average, more than 0.98 mAP50 is achieved at a 30% missing pulse rate. The computation time of the method is shown to be less than the dwell time, enabling real-time performance.

In this study, deinterleaving performance is analyzed using radar signals from a single angle of arrival (AOA) cluster. However, in real-world scenarios, more than one cluster may exist, in which case the proposed method is implemented for each AOA cluster separately. This increases the actual number of radar signals to be deinterleaved by the factor of K, which is the number of clusters. For example, when there are six AOA clusters, each of which contains at most five radar signals, the performance results provided in this study are valid for up to 30 radar signals. Another advantage is that deinterleaving for each AOA cluster can be carried out in parallel within the dwell time, in which case the total computation time remains the same.

As future work, in addition to the detection of objects, the detection of pixels belonging to objects will also be addressed using instance segmentation. This will provide the necessary input for the pulse extraction stage.

Author contributions Conceptualization, M.B.K., A.O., and S.T.; methodology, M.B.K.; software, M.B.K.; investigation, M.B.K.; data curation, M.B.K.; writing—original draft preparation, M.B.K.; writing—review and editing, M.B.K., A.O., and S.T.; supervision, A.O., S.T. All authors have read and agreed to the published version of the manuscript.

Funding Open access funding provided by the Scientific and Technological Research Council of Türkiye (TÜBİTAK). This research was supported by the Scientific and Technological Research Council of Türkiye (TÜBİTAK).

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


References

1. Huang, T., Shlezinger, N., Xu, X., Ma, D., Liu, Y., Eldar, Y.C.: Multi-carrier agile phased array radar. IEEE Trans. Signal Process. 68, 5706–5721 (2020)
2. Wiley, R.: ELINT: The Interception and Analysis of Radar Signals. Artech (2006)
3. Liu, Z., Ren, L., Sun, Y., Fan, H., Mao, E.: Waveform design of LFM pulse train based on pulse width agility. In: IET International Radar Conference (IET IRC 2020), pp. 1679–1684 (2020)
4. Yar, E., Kocamis, M.B., Orduyilmaz, A., Serin, M., Efe, M.: A complete framework of radar pulse detection and modulation classification for cognitive EW. In: 27th European Signal Processing Conference (EUSIPCO). IEEE, pp. 1–5 (2019)
5. Orduyilmaz, A., Yar, E., Kocamis, M.B., Serin, M., Efe, M.: Machine learning-based radar waveform classification for cognitive EW. Signal, Image Video Process. 15(8), 1653–1662 (2021)
6. Kocamis, M.B., Abacı, H., Akdemir, S.B., Varma, S., Yildirim, A.: Deinterleaving for radar warning receivers with missed pulse consideration. In: European Radar Conference (EuRAD). IEEE, pp. 225–228 (2016)
7. Haigh, K., Andrusenko, J.: Cognitive Electronic Warfare: An Artificial Intelligence Approach. Artech House (2021)
8. Mardia, H.: New techniques for the deinterleaving of repetitive sequences. In: IEE Proceedings F (Radar and Signal Processing), vol. 136, no. 4. IET, pp. 149–154 (1989)
9. Manickchand, K., Strydom, J.J., Mishra, A.K.: Comparative study of TOA based emitter deinterleaving and tracking algorithms. In: IEEE AFRICON. IEEE, pp. 221–226 (2017)
10. Milojević, D., Popović, B.: Improved algorithm for the deinterleaving of radar pulses. In: IEE Proceedings F (Radar and Signal Processing), vol. 139, no. 1. IET, pp. 98–104 (1992)
11. Nishiguchi, K., Kobayashi, M.: Improved algorithm for estimating pulse repetition intervals. IEEE Trans. Aerosp. Electron. Syst. 36(2), 407–421 (2000)
12. Xi, Y., Wu, X., Wu, Y., Cai, Y., Zhao, Y.: A novel algorithm for multi-signals deinterleaving and two-dimensional imaging recognition based on short-time PRI transform. In: Chinese Automation Congress (CAC). IEEE, pp. 4727–4732 (2019)
13. Tian, T., Ni, J., Jiang, Y.: Deinterleaving method of complex staggered PRI radar signals based on EDW fusion. J. Eng. 2019(20), 6818–6822 (2019)
14. Chunjie, Z., Yuchen, L., Weijian, S.: Synthetic algorithm for deinterleaving radar signals in a complex environment. IET Radar, Sonar Navig. 14(12), 1918–1928 (2020)
15. Cheng, W., Zhang, Q., Dong, J., Wang, C., Liu, X., Fang, G.: An enhanced algorithm for deinterleaving mixed radar signals. IEEE Trans. Aerosp. Electron. Syst. 57(6), 3927–3940 (2021)
16. Driscoll, D.E., Howard, S.D.: The detection of radar pulse sequences by means of a continuous wavelet transform. In: 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP99), vol. 3. IEEE, pp. 1389–1392 (1999)
17. Aslan, M.K.: Emitter identification techniques in electronic warfare. Master's thesis, Middle East Technical University (2006)
18. Gencol, K., At, N., Kara, A.: A wavelet-based feature set for recognizing pulse repetition interval modulation patterns. Turk. J. Electr. Eng. Comput. Sci. 24(4), 3078–3090 (2016)
19. Yuan, S., Kang, S.-Q., Shang, W.-X., Liu, Z.-M.: Reconstruction of radar pulse repetition pattern via semantic coding of intercepted pulse trains. IEEE Trans. Aerosp. Electron. Syst. 59(1), 394–403 (2022)
20. Dong, H., Wang, X., Qi, X., Wang, C.: An algorithm for sorting staggered PRI signals based on the congruence transform. Electronics 12(13), 2888 (2023)
21. Guo, Q., Huang, S., Qi, L., Wang, Y., Kaliuzhnyi, M.: A radar pulse train deinterleaving method for missing and short observations. Digit. Signal Process. 141, 104162 (2023)
22. Liu, Z.-M., Philip, S.Y.: Classification, denoising, and deinterleaving of pulse streams with recurrent neural networks. IEEE Trans. Aerosp. Electron. Syst. 55(4), 1624–1639 (2018)
23. Li, X., Liu, Z., Huang, Z.: Deinterleaving of pulse streams with denoising autoencoders. IEEE Trans. Aerosp. Electron. Syst. 56(6), 4767–4778 (2020)
24. Han, J.-W., Park, C.H.: A unified method for deinterleaving and PRI modulation recognition of radar pulses based on deep neural networks. IEEE Access 9, 89360–89375 (2021)
25. Nuhoglu, M.A., Alp, Y.K., Ulusoy, M.E.C., Cirpan, H.A.: Image segmentation for radar signal deinterleaving using deep learning. IEEE Trans. Aerosp. Electron. Syst. 59(1), 541–554 (2022)
26. Al-Malahi, A., Farhan, A., Feng, H., Almaqtari, O., Tang, B.: An intelligent radar signal classification and deinterleaving method with unified residual recurrent neural network. IET Radar, Sonar Navig. 17(8), 1259–1276 (2023)
27. Chen, T., Liu, Y., Guo, L., Lei, Y.: A novel deinterleaving method for radar pulse trains using pulse descriptor word dot matrix images and cascade-recurrent loop network. IET Radar, Sonar Navig. 17(11), 1626–1638 (2023)
28. Qu, Z., Hou, C., Hou, C., Wang, W.: Radar signal intra-pulse modulation recognition based on convolutional neural network and deep Q-learning network. IEEE Access 8, 49125–49136 (2020)
29. Si, W., Wan, C., Deng, Z.: Intra-pulse modulation recognition of dual-component radar signals based on deep convolutional neural network. IEEE Commun. Lett. 25(10), 3305–3309 (2021)
30. Si, W., Luo, J., Deng, Z.: Multi-label hybrid radar signal recognition based on a feature pyramid network and class activation mapping. IET Radar, Sonar Navig. 16(5), 786–798 (2022)
31. Wang, C., Wang, Y., Li, X., Ke, D.: A deinterleaving method for mechanical-scanning radar signals based on deep learning. In: 7th International Conference on Intelligent Computing and Signal Processing (ICSP). IEEE, pp. 138–143 (2022)
32. Dutt, R., Baloria, A., Prasad, V.R.C., Acharyya, A.: Discrete wavelet transform based unsupervised underdetermined blind source separation methodology for radar pulse deinterleaving using antenna scan pattern. IET Radar, Sonar Navig. 13(8), 1350–1358 (2019)
33. Ata'a, A., Abdullah, S.: Deinterleaving of radar signals and PRF identification algorithms. IET Radar, Sonar Navig. 1(5), 340–347 (2007)
34. Gençol, K., Kara, A., At, N.: Improvements on deinterleaving of radar pulses in dynamically varying signal environments. Digit. Signal Process. 69, 86–93 (2017)
35. Barshan, B., Eravci, B.: Automatic radar antenna scan type recognition in electronic warfare. IEEE Trans. Aerosp. Electron. Syst. 48(4), 2908–2931 (2012)
36. Ayazgok, S., Erdem, C., Ozturk, M.T., Orduyilmaz, A., Serin, M.: Automatic antenna scan type classification for next-generation electronic warfare receivers. IET Radar, Sonar Navig. 12(4), 466–474 (2018)
37. Ozmen, E., Ozkazanc, Y.: DeepASTC: antenna scan type classification using deep learning. In: IEEE Radar Conference (RadarConf23). IEEE, pp. 1–6 (2023)
38. Jiang, P., Ergu, D., Liu, F., Cai, Y., Ma, B.: A review of YOLO algorithm developments. Proc. Comput. Sci. 199, 1066–1073 (2022)
39. Ultralytics: YOLOv8 (2023). [Online]. Available: https://github.com/ultralytics/ultralytics



40. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014)
41. Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1440–1448 (2015)
42. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 28, 91–99 (2015)
Publisher’s Note Springer Nature remains neutral with regard to juris-
dictional claims in published maps and institutional affiliations.


