
IEEE COMMUNICATIONS LETTERS, VOL. 24, NO. 10, OCTOBER 2020

UWB NLOS/LOS Classification Using Deep Learning Method


Changhui Jiang, Jichun Shen, Shuai Chen, Yuwei Chen, Di Liu, and Yuming Bo

Abstract— Ultra-Wide-Band (UWB) has been recognized for its great potential in constructing accurate indoor positioning systems (IPS). However, indoor environments are full of complex objects, and the signals may be reflected by obstacles. Compared with the Line-Of-Sight (LOS) signal, the extra propagation delay contained in a Non-Line-Of-Sight (NLOS) signal induces positive distance errors and, in turn, position errors. Before the ranging information from the channels is used to calculate a position, LOS/NLOS classification or identification is therefore necessary for selecting the "clean" channels. In conventional methods, features extracted from the UWB channel impulse response (CIR) or other signal properties are employed as the input vector of machine learning classifiers, e.g. the Support Vector Machine (SVM) or the Multi-Layer Perceptron (MLP). Deep learning methods represented by the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) have shown superior performance in time-series classification. In this letter, a deep learning method, CNN-LSTM, is employed for UWB NLOS/LOS signal classification. The UWB CIR data is input directly to the CNN-LSTM: the CNN explores and extracts the features automatically, and the CNN outputs are then fed into the LSTM for classification. Open-source datasets collected from seven different sites are employed in the experiments, and the classification accuracy of CNN-LSTM with different settings is compared. The results show that CNN-LSTM obtains state-of-the-art classification performance.

Index Terms— UWB, NLOS, CNN, LSTM.

Manuscript received May 20, 2020; accepted June 1, 2020. Date of publication June 4, 2020; date of current version October 9, 2020. The authors acknowledge the support of the National Natural Science Foundation of China (Grant No. 61601225). The associate editor coordinating the review of this article and approving it for publication was S. Bartoletti. (Corresponding authors: Shuai Chen; Yuwei Chen.)
Changhui Jiang, Shuai Chen, and Yuming Bo are with the School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China (e-mail: chagnhui.jiang1992@gmail.com; chenshuai@njust.edu.cn; byming@njust.edu.cn).
Jichun Shen is with Hesai Technology, Building L2-B, Hongqiao World Centre, Shanghai 201702, China (e-mail: s365445689@hotmail.com).
Yuwei Chen is with the Department of Photogrammetry and Remote Sensing, Finnish Geospatial Research Institute, FI-02430 Masala, Finland (e-mail: yuwei.chen@nls.fi).
Di Liu is with the School of Automation, Nanjing Institute of Technology, Nanjing 210094, China (e-mail: liudinust@163.com).
Digital Object Identifier 10.1109/LCOMM.2020.2999904

I. INTRODUCTION

WITH the rapid development of Internet-of-Things (IoT) technology, Location Based Services (LBS) have boomed [1]. Outdoors, the Global Navigation Satellite System (GNSS) can provide reliable navigation solutions while sufficient satellites are visible. However, the GNSS signal is too weak to pass through walls and attenuates heavily indoors, so building an accurate Indoor Positioning System (IPS) remains challenging. Ultra-Wide-Band (UWB) communication technologies have gained wide attention in the community for constructing IPS with decimeter-level accuracy [2]. The UWB signal spectrum consists of multiple sub-bands, which improves the data rate of wireless communications and the ability to propagate through walls [3]. In addition, the baseband pulse of UWB has high time resolution (on the order of nanoseconds), which contributes to accurate distance measurement.

However, indoor environments are usually full of various objects, e.g. walls, desks, chairs and other obstacles; the direct UWB signal may be blocked and only a reflected signal received. Compared with the Line-Of-Sight (LOS) signal, such a reception induces additional ranging bias in the distance measurements. Without Non-Line-Of-Sight (NLOS) signal detection, the ultimate location accuracy is degraded [4], [5]. In a UWB-based IPS, NLOS detection is therefore important for selecting "clean" distance information for location estimation; NLOS-contaminated distances should be excluded or corrected before being employed for position determination.

NLOS detection is the premise of the correction, and many approaches have been presented in past papers to deal with this problem. These methods can be categorized into three different types:

(1) The first is built on the statistical differences of the estimated distance information under LOS/NLOS conditions [6]: the distance noise under the LOS condition follows a zero-mean Gaussian distribution, while under NLOS it can be modeled as a Gaussian distribution with a non-zero mean, the NLOS-induced bias being that mean value. Hypothesis testing can then be carried out based on the differences in distance variance and mean [7]. However, it is hard to determine the detection threshold, which may vary across sites and environments.

(2) The second is based on the signal propagation path loss model or on the study of the channel impulse response (CIR). The main idea behind these solutions is that the energy of the first path is noticeably greater than the energy of the delayed paths. In addition, some non-parametric machine learning methods are utilized for NLOS/LOS classification, e.g. the Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), Decision Tree and other classifiers [8]–[11]. Kurtosis, peak-to-lead delay, mean excess delay, RMS delay spread and other features are employed as the inputs of these classifiers [8], [9]. The effect of NLOS propagation was quantified with Monte Carlo simulations, and features representative of the LOS/NLOS conditions were extracted for classification [10], [11]; an SVM regressor was employed for mitigating the NLOS-induced errors [10], [11]. However, the signal propagation path loss model is affected by many factors, and the manually selected features might not be sufficient for these classifiers.

(3) Different from the previous two methods, which detect NLOS based on signal characteristics, the third method is built on context awareness of the surrounding environment.

In this method, the channel status is identified by observing the previous position of the mobile user together with environment data (e.g., geometries and attenuation factors); signal ray tracing can be carried out with the aid of 2D or 3D maps, so that NLOS signals can be identified and the NLOS-induced errors estimated [5]. However, the 3D map and ray-tracing method requires heavy computation, and a prior position is also necessary.
Inspired by the superior performance of deep learning methods in data classification [12], in this letter we present a novel UWB NLOS detection and classification method based on the Convolutional Neural Network (CNN) and the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN). The CNN is employed to extract non-temporal features from the raw CIR signals, and the CNN outputs are then fed into the LSTM-RNN for classifying LOS/NLOS signals.

The remainder of the letter is organized as follows: Section II describes the NLOS problem and the CIR model; Section III presents the details of the proposed CNN-LSTM; Section IV describes the dataset and gives the experimental results, comparison and analysis, together with a discussion of limitations and future work; Section V concludes the letter.
II. LOS/NLOS PROBLEM STATEMENT

Assuming the UWB signal is transmitted over a channel with additive white Gaussian noise, the UWB transmitted-signal model can be written as [4]:

$s(t) = \sum_{i=-\infty}^{\infty} \sum_{j=0}^{N_c-1} \alpha_j \, q(t - iT_s - jT_c)$  (1)

where $q(t)$ is the single Gaussian pulse with repetition time $T_c$, a symbol duration $T_s$ consists of $N_c$ pulses, and $\alpha_j \in \{-1, +1\}$ is the polarization sequence used for spectrum shaping.

The UWB CIR experienced by the transmitted signal can be given as:

$h(t) = \sum_{l=1}^{L} \beta_l \, \delta(t - \tau_l)$  (2)

where $\beta_l$ is the fading coefficient and $\tau_l$ is the time delay of the $l$-th path. The received signal is the summation of multiple attenuated and delayed replicas of $s(t)$:

$r(t) = \sum_{l=1}^{L} \beta_l \, s(t - \tau_l) + v(t)$  (3)

where $v(t)$ is the additive white Gaussian noise.

Fig. 1. Channel impulse response (LOS/NLOS).

In a UWB-based IPS, distance information from different channels is obtained to calculate the location. Since the Time of Arrival (TOA) method has the advantage of accuracy, it is employed in UWB-based IPS for distance calculation [12]. When the TOA method is utilized, the distance information is extracted from the CIR measurements. CIRs of the LOS and NLOS cases are presented in Figure 1. It can be seen that the magnitude of the LOS CIR is much larger than that of the NLOS CIR, and the curve shapes differ. The CIR can be regarded as a time series, and the NLOS and LOS CIRs differ because of their different propagation paths; in particular, NLOS reception affects the CIR curve heavily. Deep learning methods, e.g. CNN and LSTM, can therefore be employed to perform NLOS/LOS classification directly, using the CIR as the input vector.
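To make the channel model in (2)–(3) concrete, the following minimal NumPy sketch builds a toy multipath CIR and the corresponding received waveform; the pulse shape, path gains and delays are illustrative assumptions, not values from the letter or from the dataset used later.

```python
import numpy as np

fs = 1e9                              # sample rate: 1 ns resolution (assumed)
t = np.arange(0, 200e-9, 1 / fs)

def gaussian_pulse(t, t0, width=2e-9):
    """Single Gaussian pulse q(t - t0); an illustrative pulse shape."""
    return np.exp(-((t - t0) ** 2) / (2 * width ** 2))

# Toy channel taps (beta_l, tau_l): strong first path for LOS,
# attenuated and delayed first path for NLOS (illustrative numbers).
los_taps  = [(1.00, 20e-9), (0.35, 45e-9), (0.20, 70e-9)]
nlos_taps = [(0.30, 60e-9), (0.45, 85e-9), (0.25, 110e-9)]

def received_signal(taps, noise_std=0.02):
    """r(t) = sum_l beta_l * s(t - tau_l) + v(t), as in Eq. (3)."""
    r = np.zeros_like(t)
    for beta, tau in taps:
        r += beta * gaussian_pulse(t, tau)                 # delayed, scaled replica
    return r + np.random.normal(0.0, noise_std, t.shape)   # AWGN term v(t)

r_los, r_nlos = received_signal(los_taps), received_signal(nlos_taps)
print(r_los.max(), r_nlos.max())   # the LOS peak is visibly larger, as in Fig. 1
```

The point of the toy example is only to show why the raw CIR already carries the LOS/NLOS signature that the classifier will learn from.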
III. PROPOSED CNN-LSTM

A brief overview of the CNN-LSTM is presented in Figure 2; the framework contains two basic blocks, a CNN and an LSTM. In the CNN-LSTM, the CIR is employed as the input vector of the CNN. The CNN exploits the non-temporal structure of the input signal by learning the frequency characteristics of the CIR. The CNN outputs are then fed into the LSTM, which learns long-range dependencies of the time series and classifies the NLOS and LOS signals.

Fig. 2. Overview of the CNN-LSTM.

A. Convolutional Neural Network (CNN)

A CNN usually contains multiple hidden layers between the input and output layers; among the hidden layers there are typically convolutional layers, pooling layers and fully-connected layers. A CNN structure is presented in Figure 3.

Fig. 3. CNN structure.

A convolutional layer is designed to extract features. Multiple kernel filters are employed in the convolutional layer to extract features and characteristics from the input data. Assuming the $l$-th convolutional layer has $N_F^{(l)}$ kernel filters, the filters are employed to convolve the input data. Each kernel filter applies the same kernel function to extract the features, and the operation is modelled as [12]:

$y_i^{(l)} = a\!\left( \sum_{r=1}^{N_K^{(l)}} w_r^{(l)} \, x_{r + i \cdot N_S^{(l)}} + b^{(l)} \right)$  (4)

$0 \le i \le \dfrac{N - N_K^{(l)}}{N_S^{(l)}}, \quad l = 1, 2, \ldots, L$  (5)

where $N_K^{(l)}$ is the filter kernel size, $N_S^{(l)}$ is the stride, $N$ is the input length, $a(\cdot)$ is the activation function, and $w_r^{(l)}$ and $b^{(l)}$ are the weight and bias of the kernel.

After the convolution operation with the kernel filters, the outputs are fed into an activation function. Here, the Rectified Linear Unit (ReLU) activation function is selected:

$f_{\mathrm{ReLU}}(x) = \begin{cases} x & \text{if } x > 0 \\ 0 & \text{otherwise} \end{cases}$  (6)

where $x$ is the argument of the ReLU function.

A pooling layer is usually added after the convolutional layer for spatial reduction through down-sampling of the convolutional outputs. The pooling layer reduces the computational load and time complexity by reducing the dimension of the tensors produced by the previous convolutional layer. Here, the MaxPooling function is employed, which selects the maximum value within the current pooling window.

The convolutional and MaxPooling layers work together to extract the features, and a fully-connected layer is then responsible for producing the class probabilities with a SoftMax function; the class with the highest probability is selected as the output of the classifier. In the fully-connected layer, every neuron is connected to all neurons of the previous layer.

The CNN structure, the number of layers and the parameters are explored and adjusted by experiments in Section IV. Training and parameter optimization are important for the CNN, and the overfitting problem should be considered to improve generality and prediction accuracy. Two measures are taken to address this problem.

(1) A dropout rate is introduced after the fully-connected layer, so that subsets of the neurons are ignored and nodes are dropped at the training stage. The dropout regularization rates of the different models are given in Section IV. Dropout also helps to prevent building a site-specific model.

(2) Abundant training data collected from different sites are added to the training stage, which helps guarantee that the neural networks are fully trained without redundancy.
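As an illustration of the CNN block described above (convolution, ReLU, max pooling, a fully-connected layer with SoftMax, and dropout), a minimal Keras sketch might look as follows. The filter count, kernel size and pooling parameters are placeholders rather than the tuned values reported in Section IV; only the 1×1016 input length and the 128-neuron fully-connected layer come from the letter.

```python
from tensorflow.keras import layers, models

def build_cnn_classifier(input_len=1016, num_classes=2):
    """Minimal CNN sketch: Conv1D -> ReLU -> MaxPooling -> Dense softmax."""
    model = models.Sequential([
        layers.Input(shape=(input_len, 1)),               # CIR as a 1-D sequence
        layers.Conv1D(16, kernel_size=8, strides=1,
                      padding="valid", activation="relu"),  # Eq. (4) with ReLU, Eq. (6)
        layers.MaxPooling1D(pool_size=4, strides=4),         # down-sampling window
        layers.Flatten(),
        layers.Dense(128, activation="relu"),                # fully-connected layer
        layers.Dropout(0.5),                                 # dropout regularization
        layers.Dense(num_classes, activation="softmax"),     # class probabilities
    ])
    return model

build_cnn_classifier().summary()
```

The sketch shows the layer order implied by the text; the actual layer counts compared in the experiments are listed in Table I.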
B. Long Short-Term Memory (LSTM)

Different from the conventional RNN, the LSTM-RNN is composed of three "gates" with different functions. The basic structure of the LSTM unit is presented in Figure 4. Specifically, the gates are the "input gate", the "forget gate" and the "output gate" [12], [13].

Fig. 4. Basic structure of the LSTM unit.

As shown in Figure 4, reading from left to right, the first gate is the "forget gate". Its function is to decide the degree of "updating" and to regulate the cell state from the previous epoch. The input vector is fed into a sigmoid function, and the output vector $f_t$ is multiplied with the cell state vector from the last epoch, $C_{t-1}$. The vector $f_t$ is computed as:

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$  (7)

where $\sigma(\cdot)$ is the sigmoid function, $W_f$ is the updating weight matrix and $b_f$ is the bias vector. The vector $f_t$ contains values between 0 and 1, which decide how much of $C_{t-1}$ is kept through the multiplication operation.

After the "forget gate" comes the "input gate", which regulates the input data $x_t$ and the state information passed on from the "forget gate". The input gate is described as:

$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$  (8)

$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$  (9)

where $W_i$, $W_C$, $b_i$ and $b_C$ are parameters determined through training: $W_i$ and $b_i$ are the weights and bias for calculating the vector $i_t$, while $W_C$ and $b_C$ are the weights and bias for calculating the candidate cell state $\tilde{C}_t$ with the $\tanh$ function. The candidate cell state $\tilde{C}_t$ is then multiplied with the vector $i_t$, and the result is used to update the cell state at the current epoch.

The last gate is the "output gate", which decides the outputs of the LSTM. Two different functions (sigmoid and tanh) are employed in this gate:

$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$  (10)

$h_t = o_t \cdot \tanh(C_t)$  (11)

In addition, the cell state vector is updated as:

$C_t = f_t \cdot C_{t-1} + i_t \cdot \tilde{C}_t$  (12)
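For readers who prefer code to gate diagrams, the following NumPy sketch evaluates one LSTM time step exactly as written in (7)–(12); the dimensions and random parameters are illustrative only, and in practice the cell comes from a deep learning library rather than being hand-written.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step implementing Eqs. (7)-(12) on the concatenation [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(p["W_f"] @ z + p["b_f"])         # (7)  forget gate
    i_t = sigmoid(p["W_i"] @ z + p["b_i"])         # (8)  input gate
    c_tilde = np.tanh(p["W_C"] @ z + p["b_C"])     # (9)  candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde             # (12) cell state update
    o_t = sigmoid(p["W_o"] @ z + p["b_o"])         # (10) output gate
    h_t = o_t * np.tanh(c_t)                       # (11) hidden state
    return h_t, c_t

# Illustrative sizes: 4-dimensional input, 3-dimensional hidden state.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
params = {k: rng.standard_normal((n_hid, n_hid + n_in)) for k in ("W_f", "W_i", "W_C", "W_o")}
params.update({k: np.zeros(n_hid) for k in ("b_f", "b_i", "b_C", "b_o")})
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.standard_normal(n_in), h, c, params)
```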
IV. DATASET AND EXPERIMENTAL RESULTS

A. Experimental Setup and Dataset Description

For convenient assessment of the proposed method, open-source datasets were employed in the experiments. NLOS and LOS measurements were collected from seven different indoor locations: Office 1, Office 2, a small apartment, a small workshop, a kitchen with a living room, a bedroom and a boiler room [9]. At each location, 3000 NLOS and 3000 LOS measurements were collected. Collecting the NLOS/LOS samples from different sites aims to avoid producing a location-specific model. When building the classification model, 35000 samples (5000 from each site) were randomly selected from the dataset. The selected samples were randomly shuffled; 25000 samples were employed as the training dataset and the remaining 10000 samples were treated as the testing dataset. Randomly selecting the samples also helped prevent the model from overfitting to a particular site.
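A minimal sketch of this preparation step is given below. It assumes the per-site recordings from the open-source repository linked in the Acknowledgment have already been loaded into `cir_by_site` and `labels_by_site`; the file handling itself is omitted and these variable names are placeholders, not part of the published dataset interface.

```python
import numpy as np

# Assumed inputs: cir_by_site is a list of seven arrays, one per measurement site,
# each of shape (6000, 1016); labels_by_site holds the matching LOS(1)/NLOS(0) labels.
def make_splits(cir_by_site, labels_by_site, per_site=5000, train_size=25000, seed=42):
    """Randomly draw 5000 samples per site, shuffle, and split 25000/10000."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for cir, lab in zip(cir_by_site, labels_by_site):
        idx = rng.choice(len(cir), size=per_site, replace=False)   # 5000 per site
        xs.append(cir[idx])
        ys.append(lab[idx])
    x, y = np.concatenate(xs), np.concatenate(ys)                  # 35000 samples in total
    order = rng.permutation(len(x))                                # random shuffle
    x, y = x[order], y[order]
    return (x[:train_size], y[:train_size]), (x[train_size:], y[train_size:])
```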
B. Classification Accuracy of CNN With Different Numbers of Layers

To select a proper number of convolutional layers for the CNN, CNN-LSTM models with different numbers of convolutional layers were trained and evaluated. The batch size was set to 64 and the number of neurons of the fully-connected layer was set to 128. In the convolutional layers, "valid" padding was selected. A dropout regularization was added after the feature extraction module, and the dropout rate was set to 0.5.
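Putting the pieces together, a hedged Keras sketch of a CNN-LSTM with the settings just listed (1×1016 CIR input, "valid" padding, a 128-neuron fully-connected layer, dropout of 0.5, 32 LSTM units as used later in Section IV-C) might look as follows. The two Conv(p, q) layers use placeholder filter counts and kernel sizes, since the letter reports the notation in Table I rather than fixed values, and the optimizer choice is an assumption.

```python
from tensorflow.keras import layers, models, optimizers

def build_cnn_lstm(input_len=1016, lstm_units=32, dropout_rate=0.5):
    """Hedged CNN-LSTM sketch: two Conv1D blocks feed an LSTM, then a softmax head."""
    model = models.Sequential([
        layers.Input(shape=(input_len, 1)),
        layers.Conv1D(16, 8, padding="valid", activation="relu"),   # Conv(p, q): placeholder p=16, q=8
        layers.MaxPooling1D(pool_size=4, strides=4),
        layers.Conv1D(32, 8, padding="valid", activation="relu"),   # second convolutional layer
        layers.MaxPooling1D(pool_size=4, strides=4),
        layers.Dropout(dropout_rate),              # dropout after the feature extraction module
        layers.LSTM(lstm_units),                   # LSTM over the CNN feature sequence
        layers.Dense(128, activation="relu"),      # fully-connected layer, 128 neurons
        layers.Dense(2, activation="softmax"),     # LOS / NLOS probabilities
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# model = build_cnn_lstm()
# model.fit(x_train[..., None], y_train, batch_size=64, epochs=10,
#           validation_data=(x_test[..., None], y_test))
```

The commented `fit` call mirrors the training setup stated in the text (batch size 64, 10 epochs).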


TABLE I. Accuracy comparison for different numbers of layers.

TABLE II. Accuracy comparison for different methods.

These CNN-LSTM models were trained and tested to search for the best number of layers. The input vector was the CIR data with size 1×1016, and the networks were trained for 10 epochs; the training and testing datasets were as described in Section IV-A.

The classification results of the CNNs with different numbers of convolutional layers are listed in Table I. For the filters, Conv(p, q) denotes a convolutional layer with p filters of kernel size q, and MaxPooling(a, b) denotes a pooling layer with pooling size a and stride b. Among these CNN structures, the CNN with two convolutional layers was superior to the others; when more convolutional layers were included in the model, the classification accuracy actually decreased. Two reasons might account for this phenomenon. The first is that a CNN with more than two convolutional layers might overfit here; the second is that a deeper CNN structure might reduce the learning efficiency, in which case a residual CNN might help improve the classification accuracy. Therefore, the CNN with two convolutional layers was deployed for the NLOS/LOS classification.

C. Classification Accuracy Comparison for Different Methods

After selecting the appropriate CNN, this subsection aims to search for a suitable LSTM architecture. Bidirectional LSTM and stacked LSTM were deployed to classify the NLOS/LOS signals. For a fair comparison, they were incorporated with the same CNN structure selected in Section IV-B, and the datasets for training, testing and validation were the same as those described in Section IV-A. The classification accuracy comparison results are listed in Table II. The plain LSTM performed the worst; however, with the CNN included, the CNN-LSTM obtained a significant improvement compared with the single CNN and the single LSTM. Without the CNN, the redundant information contained in the input data might hinder the LSTM training and its performance. The LSTM hidden size was set to 32, and the learning rate was 0.001. As the classification results in Table II show, the CNN-stacked-LSTM obtained a minor improvement in classification accuracy, whereas the CNN-bidirectional-LSTM performed slightly worse, which might be explained by the fact that the bidirectional LSTM has more parameters to be determined during the training phase.
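For completeness, the two LSTM variants compared in Table II can be sketched in Keras as below; only the recurrent part differs from the CNN-LSTM sketch given earlier, and the 32-unit size follows the hidden-layer setting reported above. These are minimal sketches, not the exact configurations used in the experiments.

```python
from tensorflow.keras import layers

def stacked_lstm_block(units=32):
    """Two LSTM layers on top of each other (CNN-stacked-LSTM variant)."""
    return [
        layers.LSTM(units, return_sequences=True),   # pass the full sequence upward
        layers.LSTM(units),                          # second LSTM returns the last state
    ]

def bidirectional_lstm_block(units=32):
    """Forward and backward pass over the CNN feature sequence (CNN-bidirectional-LSTM)."""
    return [layers.Bidirectional(layers.LSTM(units))]   # roughly twice the recurrent parameters
```

Either list of layers would replace the single `layers.LSTM(lstm_units)` line in the earlier CNN-LSTM sketch.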
D. Limitations and Future Work

Although the above methods obtained satisfying performance in UWB NLOS/LOS classification, the following limitations remain.

(1) The deep learning methods were all implemented in Python with the Keras and TensorFlow libraries, and an Alienware PC with an i7 CPU (3.3 GHz) and 16 GB RAM was employed to run the programs. Each method was trained for 10 epochs, and the procedure always took over one hour. Methods should be considered for reducing the computational load and accelerating the training.

(2) Only NLOS/LOS classification was performed in this letter; the influence of the NLOS/LOS classification accuracy on the location estimation was not investigated.

In the future, we think the following works might promote the development of this method.

(1) In the CNN-LSTM, dropout was employed to prevent overfitting. Batch Normalization (BN) is another method that can accelerate the training procedure, and it would be of significance to explore BN in this application (a minimal sketch of where BN would sit is given after this list).

(2) The training dataset was possibly insufficient for classifier training and modeling, and NLOS/LOS dataset collection is labor-intensive. Other machine learning methods, e.g. few-shot learning, can provide satisfying performance with a limited training dataset, and it would be of great significance to explore them in this application.

(3) We did not discuss diffraction effects in UWB, limited by the dataset and hardware. It would be of great significance to explore diffraction effects in UWB-based IPS.
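As referenced in item (1) above, a hedged sketch of inserting Batch Normalization between a convolution and its activation is shown below; whether this actually speeds up or improves training on the UWB CIR data is exactly the open question raised there.

```python
from tensorflow.keras import layers

def conv_bn_block(filters, kernel_size):
    """Conv1D followed by Batch Normalization, then the ReLU activation."""
    return [
        layers.Conv1D(filters, kernel_size, padding="valid", use_bias=False),
        layers.BatchNormalization(),     # normalizes activations over each mini-batch
        layers.Activation("relu"),
    ]
```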
V. CONCLUSION

In this letter, CNN-LSTM was investigated for UWB NLOS/LOS classification. From the experimental results, we can conclude that CNN-LSTM is effective for UWB NLOS/LOS classification and that the CNN effectively reduces the redundant information. With the aid of the LSTM, the classification accuracy increased further compared with the CNN alone. The CNN-LSTM method could be extended to other indoor systems based on wireless signals, e.g. WLAN or Bluetooth.

ACKNOWLEDGMENT

The authors would like to thank Dr. Klemen Bregar for providing the UWB NLOS/LOS open-source data; the UWB dataset can be downloaded from: https://github.com/ewine-project/UWB-LOS-NLOS-Data-Set.

REFERENCES

[1] W. Wang et al., "Robust weighted least squares method for TOA-based localization under mixed LOS/NLOS conditions," IEEE Commun. Lett., vol. 21, no. 10, pp. 2226–2229, Oct. 2017.


[2] S. Mazuelas et al., "Soft range information for network localization," IEEE Trans. Signal Process., vol. 66, no. 12, pp. 3155–3168, Jun. 2018.
[3] D. Minoli and B. Occhiogrosso, "Ultrawideband (UWB) technology for smart cities IoT applications," in Proc. IEEE Int. Smart Cities Conf. (ISC), Sep. 2018, pp. 1–8.
[4] X. Yang, "NLOS mitigation for UWB localization based on sparse pseudo-input Gaussian process," IEEE Sensors J., vol. 18, no. 10, pp. 4311–4316, May 2018.
[5] S. Wu et al., "NLOS error mitigation for UWB ranging in dense multipath environments," in Proc. IEEE Wireless Commun. Netw. Conf., Oct. 2007, pp. 1565–1570.
[6] J. Khodjaev et al., "Survey of NLOS identification and error mitigation problems in UWB-based positioning algorithms for dense environments," Ann. Telecommun., vol. 65, nos. 5–6, pp. 301–311, Jun. 2010.
[7] A. H. Muqaibel et al., "Practical evaluation of NLOS/LOS parametric classification in UWB channels," in Proc. 1st Int. Conf. Commun., Signal Process., Appl. (ICCSPA), Feb. 2013, pp. 1–6.
[8] K. Bregar et al., "NLOS channel detection with multilayer perceptron in low-rate personal area networks for indoor localization accuracy improvement," in Proc. 8th Jožef Stefan Int. Postgraduate School Students Conf., Ljubljana, Slovenia, vol. 31, May 2016, pp. 1–8.
[9] V. Barral et al., "NLOS identification and mitigation using low-cost UWB devices," Sensors, vol. 19, no. 16, p. 3464, Aug. 2019.
[10] H. Wymeersch et al., "A machine learning approach to ranging error mitigation for UWB localization," IEEE Trans. Commun., vol. 60, no. 6, pp. 1719–1728, Jun. 2012.
[11] S. Marano et al., "NLOS identification and mitigation for localization based on UWB experimental data," IEEE J. Sel. Areas Commun., vol. 28, no. 7, pp. 1026–1035, Sep. 2010.
[12] C. Lu et al., "MIMO channel information feedback using deep recurrent network," IEEE Commun. Lett., vol. 23, no. 1, pp. 188–191, Jan. 2019.
[13] C. Jiang et al., "A MEMS IMU de-noising method using long short term memory recurrent neural networks (LSTM-RNN)," Sensors, vol. 18, no. 10, p. 3470, Oct. 2018.
