Review Article
A New View of Multisensor Data Fusion: Research on
Generalized Fusion
1College of Information Engineering, Southwest University of Science and Technology, Mianyang 621000, China
2Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
Received 16 June 2021; Revised 26 August 2021; Accepted 27 August 2021; Published 15 October 2021
Copyright © 2021 Guo Chen et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The multisensor data generalized fusion algorithm is a symbolic computing model with multiple application objects based on sensor generalized integration, and it is the theoretical basis of numerical fusion. This paper aims to comprehensively review generalized fusion algorithms for multisensor data. Firstly, the development and definition of multisensor data fusion are analyzed, and a definition of multisensor data generalized fusion is given. Secondly, the classification of multisensor data fusion is discussed, and the generalized integration structure of multisensors and its data acquisition and representation are presented, abandoning object-oriented research characteristics. Then, the principle and architecture of multisensor data fusion are analyzed, and a generalized multisensor data fusion model based on the JDL model is presented. Finally, according to the multisensor data generalized fusion architecture, related theories and methods are reviewed, a tensor-based generalized fusion algorithm for multisensor heterogeneous data is proposed, and future work is outlined.
…mechanism of data generation. How to collect relatively accurate and complete data and represent them correctly is very challenging.
…the future development trend and difficulties of research on the generalized fusion algorithm system for multisensor data are summarized.
Figure 1: Heterogeneous multisensor data and their feature information F1, F2, …, Fn: a relational table of student position records (StudentID, longitude, latitude, and time) and an XML document.
(2) According to the attributes of the fusion data, multisensor data fusion can be divided into homogeneous data fusion and heterogeneous data fusion.
① Homogeneous data fusion: the consistent representation (interpretation and description) of the fusion process of homogeneous data collected by multiple identical sensors, also known as multisensor homogeneous data fusion.
② Heterogeneous data fusion: the process of consistent representation (interpretation and description) of heterogeneous data collected by multiple different sensors, also known as multisensor heterogeneous data fusion.
(3) According to the abstraction level of the fusion data, it is divided into signal-level data fusion, feature-level data fusion, and decision-level data fusion. There are essential differences between multisensor data fusion and classical (single-sensor) signal processing. Multisensor data have complex forms and different abstraction levels (signal level, feature level, and decision level).
① Signal-level data fusion: fusion on the original data layer; the original measurement and report data of the various sensors are directly integrated and analyzed without preprocessing. The advantage is that it retains as much field data as possible, so it is richer, more complete, and more reliable than the other fusion levels. The disadvantages are that accurate registration must be performed before pixel-level fusion, the amount of processed data is very large, the processing time is long, and the real-time performance is poor. Pixel-level data fusion is the lowest level of fusion, but it can provide optimal decision-making or optimal recognition. It is often used for multisource image composition, image analysis, and understanding.
② Feature-level data fusion: first, features are extracted from the original data of each sensor (e.g., direction, speed, and edges of the target), and then the feature information is analyzed and processed in an integrated manner; this belongs to middle-level fusion. Feature-level data fusion achieves good information compression and is conducive to real-time processing; the extracted features are related to decision analysis, so the fusion result can provide feature information for decision analysis to the maximum extent. Feature-level data fusion is divided into target state data fusion and target characteristic fusion. Target state data fusion mainly realizes parameter correlation and state vector estimation and is mainly used in the field of multisensor target tracking. Target feature fusion uses the corresponding pattern recognition techniques; joint recognition at the feature layer requires that the features be correlated before fusion, and the feature vectors are classified into meaningful combinations.
③ Decision-level data fusion: a high-level fusion whose result is the basis for command and control decision-making. In this level of fusion, each sensor first establishes a preliminary judgment and conclusion about the same target; then correlation processing is performed on the decisions from the sensors; and finally, decision-level fusion processing is performed to obtain the final joint judgment. Decision-level fusion has good real-time performance and fault tolerance, but its preprocessing cost is high. At present, network-based signal or information processing often adopts this level of data fusion [36, 37].
(4) According to the time vector and space vector of the fusion data, it can be divided into time fusion, space fusion, and space-time fusion.
① Time fusion refers to the fusion processing of the time-domain data of a certain sensor in the system.
② Spatial fusion refers to the fusion processing of the measurement values of the related targets at the same sampling time for each sensor in the system.
③ Spatiotemporal fusion refers to the fusion processing of the measurement values of the relevant targets of the sensors in the system over a period of time.

4. Multisensor Generalized Integration and Data Acquisition and Output

With the continuous development of intelligent industry, a single sensor can no longer meet the needs of society, and different multisensor integrated systems are increasingly urgently needed to match it. A multisensor integration system is generally a nonlinear system. Its sensor attributes, integration mode, data acquisition, and output directly affect the way and quality of multisensor data fusion.
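The decision-level fusion described earlier, in which each sensor first forms its own judgment and the judgments are then combined, can be sketched minimally by majority voting. This is an illustrative example only (the sensor labels are hypothetical, and majority voting is one simple combination criterion among many):

```python
from collections import Counter

def decision_level_fusion(decisions):
    """Fuse per-sensor target declarations by majority vote.

    decisions: list of class labels, one per sensor, each produced
    after that sensor has made its own preliminary judgment.
    Returns the label declared by the most sensors.
    """
    counts = Counter(decisions)
    label, _ = counts.most_common(1)[0]
    return label

# Three sensors independently classify the same target; one is faulty.
print(decision_level_fusion(["aircraft", "aircraft", "bird"]))  # aircraft
```

Note how the vote tolerates a single faulty sensor, which mirrors the fault tolerance attributed to decision-level fusion above.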
Mathematical Problems in Engineering 5
4.1. Multisensor Generalized Integrated System (MGIS)

4.1.1. Multisensor Generalized Integration. According to the attributes and quantity of sensors in the multisensor system, abandoning specific application requirements, Figure 2 shows the generalized integration mode of multiple sensors. In the multisensor generalized integration method, homogeneous multisensor integration and heterogeneous single-sensor integration are the most basic integration methods, and heterogeneous multisensor integration is an organic combination of different subsystems.

4.1.2. Multisensor Generalized Integrated Structure Model. According to the multisensor generalized integration method, the multisensor generalized integration structure can be divided into

① Homogeneous multisensor integration structure (HMI) refers to a system integrated by multiple …

… where CSM.HMI.SC represents the multiattribute homogeneous multisensor integrated subsystem cluster, S represents a sensor, A1, A2, A3, …, AN represent the sensor and integration subsystem attributes, 1, 2, 3, …, n represent the sensor numbers, and IS represents an integration subsystem.

(ii) The single-attribute (heterogeneous single-sensor integrated) subsystem cluster (S.HSI.SC) structural model is as follows:

CSS.HSI.SC = { (s1^A1, s1^A2, s1^A3, …, s1^AN)IS^1,
               (s1^A1, s1^A2, s1^A3, …, s1^AN)IS^2,
               (s1^A1, s1^A2, s1^A3, …, s1^AN)IS^3,
               ⋮
               (s1^A1, s1^A2, s1^A3, …, s1^AN)IS^n }.    (4)

Figure 2: Multisensor generalized integration (sensor integration modes: homogeneous multisensor integration, forming multiattribute homogeneous multisensor integrated subsystem clusters; heterogeneous single-sensor integration, forming single-attribute heterogeneous single-sensor integrated subsystem clusters; and heterogeneous multisensor clusters, each labeled by its subsystem attribute).
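The two basic integration modes of Figure 2 can be sketched in code. This is an illustrative model only: the `Sensor` class and the two predicate names are our own, not notation from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sensor:
    attribute: str  # sensor attribute A1, A2, ..., AN
    index: int      # sensor number 1, 2, ..., n

def is_homogeneous(subsystem):
    """HMI-style subsystem: several sensors sharing one attribute."""
    return len({s.attribute for s in subsystem}) == 1

def is_heterogeneous_single(subsystem):
    """HSI-style subsystem: exactly one sensor per distinct attribute."""
    attrs = [s.attribute for s in subsystem]
    return len(attrs) == len(set(attrs)) and len(set(attrs)) > 1

# (s1^A1, s2^A1, s3^A1): a homogeneous multisensor subsystem.
hmi = [Sensor("A1", i) for i in range(1, 4)]
# (s1^A1, s1^A2, s1^A3): a heterogeneous single-sensor subsystem.
hsi = [Sensor(a, 1) for a in ("A1", "A2", "A3")]
print(is_homogeneous(hmi), is_heterogeneous_single(hsi))  # True True
```

A heterogeneous multisensor system would then be a cluster combining subsystems of both kinds, as the text describes.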
4.2. Multisensor Data Acquisition and Output. Multisensor here refers to a sensor integrated system platform and a distributed fusion sensor subsystem; its main function is to serve as a signal source that collects and outputs data. (1) Data collection: in data science, the data collection performed by physical sensors as sensing equipment in physical space is called data collection, that is, the measurement behavior and process applied to the measured object by the physical sensor in time sequence. (2) Data output: it refers to the recording method and recording result of the measurement results of the multisensor system. The recording method is divided into real-time recording and abnormal recording. Real-time recording refers to the complete recording of sampling results based on the time sequence; abnormal recording refers to recording only abnormal data outside the threshold range, based on the given normal threshold range.

The data collected by a multisensor system generally have the characteristic of heterogeneity, which arises from differences in representation, differences in source, and human factors. Representation difference refers to the diversity of order and dimension (some data are high-order tensors and some are matrices); source difference refers to different types of sensors or different detection purposes; human factors refer to the construction of the data space and the implementation and technology of the data management system [29].

5. Multisensor Data Fusion Principle and Architecture

Humans and animals are born with, and use, extremely natural and reasonable multisensor data fusion capabilities, such as the observation, smelling, and inquiry of traditional Chinese medicine. A bat's judgment of prey is the most primitive multisensor data fusion.

5.1. Principles of Multisensor Data Fusion. In the field of automation research, multisensor data fusion technology is derived from imitating human and animal cognition of the world. It is essentially similar to the way humans or animals obtain information through various senses and then compare and distinguish the acquired information against memory or experience. The basic principle is as follows: just like the comprehensive processing of information by the human brain, make full use of multiple sensors (multiple information sources) integrated in an orderly manner. First, the data collected by each sensor are the observation data, for which a consistent expression is carried out; then the redundant or complementary data of multiple sensors in space or time are combined according to a certain criterion; finally, a consistent explanation or description of the measured object is obtained. The specific expression is as follows [38, 39]:
(1) Orderly integrate n sensors of N different types to collect and observe data related to the target (N, n = 2, 3, 4, …)
(2) Carry out consistent representation of the homogeneous data collected by each sensor
(3) Perform feature extraction on the various heterogeneous data (such as output vectors, imaging data, discrete or continuous time-function data, or a direct attribute description) and extract the feature vector Yi representing the observation data
(4) Perform pattern recognition processing on the feature vector Yi (such as a clustering algorithm, an adaptive neural network, or a tensor expansion operator) to complete the description of the various sensors about the target
(5) Knowledge fusion: group and correlate the description data of the various sensors about the target, and then use the fusion algorithm to synthesize them and obtain a consistent interpretation and description of the target

5.2. Multisensor Data Fusion Architecture. Multisensor data fusion architecture refers to the whole process of multisensor data fusion: the components of the fusion system, the main functions of each part, the relationships between the parts, the relationship between the subsystems and the system, the fusion location, etc. [40, 41].

5.2.1. Typical Multisensor Data Fusion Architecture Models

① Multisensor integrated fusion structure model: in 1988, Luo and Kay proposed the model shown in Figure 3 [42]. The model is composed of four parts: sensor, data fusion, database auxiliary system, and fusion level. The sensor part is composed of n (n ≥ 2) sensors; the data fusion part is described as a progressive fusion method; the database auxiliary system part is described as the intervention in, or impact on, each fusion; and the fusion level part describes the fusion levels for which the model can be used.
② Thomopoulos structural model: in 1990, Thomopoulos proposed the model shown in Figure 4(a) [43]. The model is composed of three parts: sensor, data fusion, and database. The data fusion part is described as three levels of fusion, with each level supporting (or influencing) the others; the data fusion part also supports (or influences) the sensor part and the database part.
③ Waterfall model: in 1998, Harris et al. proposed the model shown in Figure 4(b) [44]. The model consists of three parts: sensors, data fusion, and control. The data fusion part is described as a five-level fusion, a process of incremental feature advancement in which high-level fusion is based on the results of low-level fusion.
④ Mixed model: in 2009, Bedworth and O'Brien proposed the model shown in Figure 5 [45]. The model consists of four parts: observation, orientation, decision-making, and action. Observation includes data collection and processing; the orientation part includes feature extraction and pattern recognition; the decision-making part includes state estimation and decision fusion; the action part includes control and resource allocation.

5.2.2. Multisensor Data Generalized Fusion Model. According to the principle of multisensor data fusion, abandoning application objects and improving the JDL information fusion model (the 1999 Steinberg version) [21], a generalized model of multisensor data fusion is obtained: it consists of the data source, four-level data fusion, human-computer interaction, and data management. Its functions and relationships are shown in Figure 6.

5.2.3. Multisensor Data Fusion Generalized Architecture Model Analysis

① The data source includes (1) the physical sensor integrated system platform (an organic physical system or a sensor-and-integrated-system platform), (2) the distributed fusion sensor subsystem, and (3) reference data, geographic information, supporting databases, etc.
② Human-computer interaction includes (1) manual input of commands, information requests, manual inference and evaluation, manual operator reports, etc., and (2) a mechanism for integrating system alarms and displaying location and identity information to dynamically cover and deliver results geographically. It includes both multimedia methods of human interaction (graphics, sound, tactile interfaces, etc.) and methods to attract human attention and help overcome cognitive limitations.
③ Level 1 (source data fusion): based on pixel-level or signal-level data association and representation, it prepares for the estimation or prediction of the signal/target observable state. The data source signal is compressed while losing as little of the sensor-acquired data as possible, so as to retain the effective information to the maximum extent for higher-level data fusion.
④ Level 2 (feature and state estimation): based on the fusion results of the data sources, it estimates and predicts the state, attribute, feature, event, or action feature vectors of the target related to heterogeneous data; according to the feature vector, it estimates and predicts the relationships between entities (data), the impact of association and perception, and the physical environment, and it constructs the state trend.
⑤ Level 3 (situation fusion): based on the results of feature fusion, it analyzes the advantages and
Figure 4: (a) Thomopoulos structural model; (b) waterfall model.
Figure 5: Mixed model.
Figure 6: Multisensor data generalized fusion model (source data fusion; feature and state fusion; situation fusion; knowledge fusion and process optimization; human-computer interaction; and database management with support database and fusion database).
disadvantages of various plans, actions, and state trends, and it estimates and predicts the interaction between the plans and actions to be taken, their impact on the overall situation, and the possible results. Finally, combined with the support data, the decision data are obtained.
⑥ Level 4 (knowledge fusion and process optimization): knowledge fusion is the fusion of sensor data and supporting database data. Process optimization refers to adaptive data collection and processing; it is responsible for monitoring all links in the entire fusion process and forming a more effective resource allocation plan to support mission goals. It is the feedback part of the whole system, thought of as a process that manages other processes, and is shown outside the fusion process. Its main functions are to (i) monitor the performance of each link in the data fusion process and provide it with real-time and long-term control information, (ii) identify what information is needed to improve the multilevel fusion results (inference, location, identity, etc.), (iii) determine the collection of relevant information from the specific source (which type of sensor, which specific sensor, which database, etc.), and (iv) allocate data, realize knowledge fusion, and complete task goals.
⑦ Data management: it is the most extensive support function required for data fusion processing. This function provides access to and management of the fusion database, including data retrieval, storage, archiving, compression, relational queries, and data protection. Database management in data fusion systems is particularly difficult because the amount of data managed is large and diverse (images, signals, vectors, and textures).
Among them, the two parts of human-computer interaction and process optimization run through the whole process of data fusion; source data fusion belongs to pixel-level fusion; situation fusion belongs to decision-level fusion; the support database refers to soft sensor data; and the fusion database contains fusion rules and fusion results.

6. Multisensor Data Fusion Theory and Algorithm

Multisensor systems are used to obtain a consistent interpretation or description of the measured object, which is mainly realized by data fusion algorithms. At present, the research results on multisensor data fusion are very rich and provide an important reference for research on the multisensor data generalized fusion algorithm. Next, according to the four-level fusion in the multisensor data generalized fusion model, the related theories and algorithms are sorted out step by step in the following sections.

6.1. Source Data Fusion. Source data fusion refers to data collection and output-data sorting with multiple sensors as the signal source; that is, the output data (the homogeneous raw signals output by the same type of sensor) are processed by classification, statistics, compression, and estimation, and a consistent representation of quality data is gained.

6.1.1. Data Representation. The knowledge and rules discovered from the original data are based on data representation. In recent years, many researchers have discussed and described the work of data representation [46, 47]. The most basic data representation methods include ontology representation, graph representation, tensor representation, and matrix representation [48].
① Ontology representation: ontology is the description of specific domain concepts, also known as the set of concepts [49]. Ontology generation includes two steps: first, mapping the real world (such as entities, attributes, and processes) to a set of concepts, and then extracting the relationships between the concepts. It can represent objects as conceptual models at the semantic level. It simplifies the transformation of knowledge and is the mainstream method of data representation [50, 51].
② Graph representation: representing natural data with a matrix has some limitations; a graph is composed of many points, called nodes, which are connected by edges [52]. The most commonly used graph representation matrix is the adjacency matrix [53].
③ Matrix representation: the matrix, also called a bidirectional array, is a parallel description of the time domain and the space domain. Multichannel signals are generally represented by a matrix [29]. The rows of the matrix contain all sensors or channels, the columns contain all measurement times, and the elements represent signal values. In data mining and machine learning, rectangular arrays describe the attributes or observations of samples: each row corresponds to one sample or observation, and each column corresponds to the attributes or observations related to the sample.
④ Tensor representation: a tensor is a sequential expansion of a vector; it is a multidimensional array. Each element has multiple indices, and each index represents a mode or an order. It is a general tool for representing various heterogeneous data [29]. For example, gait video data can be expressed as a fourth-order tensor composed of pixels, angles, motion, and objects [54]; network link data can be expressed as a third-order tensor [55]; and electronic nose data can be expressed as a third-order tensor [56].

6.1.2. Consistency Test of Homogeneous Data. Homogeneous sensors are arranged in different spatial positions, and their monitoring data have some differences. According to the principle of the consistency test, if the difference is greater than the set threshold, the monitoring data are considered abnormal data, and the accuracy will be seriously affected if
it is fused directly [57]. In order to ensure the consistency, continuity, and accuracy of the monitoring data, it is most reasonable to replace the abnormal data with the average value of the normal values in the same period. Therefore, only after the homogeneous monitoring data pass the consistency test can data fusion be carried out and the correct consistent representation be obtained.

Homogeneous multisensor data consistency test principle: suppose n homogeneous sensors measure the same attribute of the monitored object, and the measurement results are X1, X2, …, Xn, written Xi (i = 1, 2, …, n). The consistency test on Xi requires that the difference between two adjacent measurements be no greater than the threshold ε; the specific calculation formula is as follows:

|X2 − X1| ≤ ε,
|X3 − X2| ≤ ε,
⋮
|Xn − Xn−1| ≤ ε.    (5)

6.1.3. Weighted Average Fusion Algorithm [58]. The weighted average method is often used for the fusion of homogeneous data from homogeneous sensor systems monitoring dynamic objects. It is a direct fusion method for data sources and the simplest signal-level fusion method. The homogeneous data of a homogeneous sensor system describe the same attribute. If k sensors are used to measure the target, the weighted average is defined as

x̄ = Σ_{i=1}^{k} w_i x_i, with Σ_{i=1}^{k} w_i = 1,    (6)

where wi represents the weight of the ith sensor. This method is simple and intuitive, but the fusion accuracy is not high; it is suitable for data fusion in homogeneous multisensor systems.

6.1.4. Kalman Filter Fusion Algorithm [59]. This is a data fusion method based on minimum-variance estimation, used to estimate homogeneous data subject to monitoring errors; the goal is to represent the true values as closely as possible. Proposed in the 1960s, it is the most commonly used technique in target tracking and navigation systems [60]. The disadvantage is that each local sensor requires global estimation and two-way communication, negating some of the advantages of parallelization.

…interference exists in the data acquisition process, and the acquired data may be distorted or unrecoverable; therefore, if a single feature is used as the fusion object, the fusion result is unreliable.

6.2.1. Feature Extraction. The premise of feature fusion is feature extraction. Feature extraction refers to the process of performing various mathematical transformations on data to obtain the indirect target characteristics contained in the data. Indirect target characteristics are the recessive features that reflect target features (geometry, movement, statistics, etc.) indirectly [61]. A large number of theories and practices have shown that when the direct features are not obvious, extracting the indirect features and finding the comprehensive characteristics of the target is the key to multisensor data feature fusion, and it is also an important idea of data fusion in the contemporary information technology field.

6.2.2. Data Association. In a distributed multisensor system, judging whether the information from different subsystems represents the same target is data association (interconnection). The purpose of data association is to distinguish different targets and to solve the problem of overlapping sensor spatial coverage areas. The classic data association algorithms are the nearest neighbor method [62], the probabilistic data association algorithm (PDA) [63, 64], the multiple hypothesis method (MHT) [65, 66], and the probabilistic multiple hypothesis algorithm (PMHT) [67, 68].

6.2.3. State Estimation. Multisensor systems are generally nonlinear systems. The optimal solution of nonlinear function filtering can be obtained through Bayesian optimal estimation. Therefore, starting from Bayesian theory, the state estimation of the system can be obtained by approximating the nonlinear function of the system or the probability density function of the nonlinear function.

There are two types of approximation methods for state estimation of nonlinear systems. One is the approximate linearization of the nonlinear links of the system, retaining the low-order terms and ignoring the high-order terms, that is, the direct linear approximation of the nonlinear function; the most widely used examples are the extended Kalman filter algorithm (EKF) [69] and the divided difference filter (DDF) [70, 71]. The other is to approximate the nonlinear distribution by sampling, that is, to approximate the probability density function of the nonlinear function; such methods include the particle filter algorithm (PF) [72], the unscented Kalman filter algorithm (UKF) [73], and the cubature Kalman filter algorithm (CKF) [74], which can be regarded as generalized Kalman filters for special cases.
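As a toy illustration of the minimum-variance fusion idea underlying the Kalman filter family discussed above, here is the scalar linear measurement update (not the EKF/UKF themselves; the numbers are invented):

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update: minimum-variance fusion
    of a prior estimate and a new measurement.

    x, p: prior state estimate and its variance
    z, r: measurement and its noise variance
    Returns the fused estimate and its (reduced) variance.
    """
    k = p / (p + r)          # Kalman gain: how much to trust z over x
    x_new = x + k * (z - x)  # fused estimate
    p_new = (1.0 - k) * p    # fused variance, always <= p
    return x_new, p_new

# Fuse a prior of 20.0 (variance 4.0) with a measurement of 22.0
# (variance 4.0): equal trust, so the result is the midpoint.
x, p = kalman_update(20.0, 4.0, 22.0, 4.0)
print(x, p)  # 21.0 2.0
```

The nonlinear methods cited above (EKF, UKF, CKF, PF) reduce to repeated applications of this kind of update once the nonlinear function or its density has been approximated.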
deep learning method, clustering algorithm, fuzzy set theory, machines (RBM) “series” stacks. In the network and
decision tree, and other methods. stack, the hidden layer of the previous RBM is the
explicit layer of the next RBM, and the output of the
previous RBM is the input of the next RBM, see
6.3.1. Artificial Neural Network (ANN). ANN is a non- Figure 9(a). In the course of their training, the
programmed, nonlinear adaptive, and brain-style parallel previous RBM must be fully trained before the
distributed information processing system proposed on the current RBM of the layer up to the last layer [123].
basis of modern neuroscience. The essence is through the After layer-by-layer stacking of RBM, the DBN
transformation of the network structure and dynamic be- model can extract features layer by layer from the
havior, with varying degrees at different levels, to imitate the original data and obtain some high-level expressions
human brain nervous system to process information [124, 125], see Figure 9(b).
[106, 107]. A neural network is a computational model,
which is composed of a large number of neurons (nodes) ③ Stack automatically encodes network model: the
connected to each other. The connection mode of the structure of SAEN is similar to that of DBN, consisting
neurons is different, and the composed network is also of a stack of several structural units. The difference
different. The neuronal structure is shown in Figure 8(a) between the two is that the structural unit of SAEN is
[108, 109]. The calculation model of the artificial neural autoencoder, while the structural unit of DBN is RBM.
network is shown in Figure 8(b) [110, 111]. The self-encoder is composed of a three-layer network.
In the figure, a1 − an indicates the components of the The input layer and the hidden layer form an encoder,
input vector, w1–wn indicates the weight of each neuron's synapse, b indicates the bias value, f is the transfer function (usually a nonlinear function), and t is the neuron's output. The output of the element is t = f(WA′ + b), where W is the weight vector, A is the input vector, and A′ is the transpose of A. It can be seen that the function of the neuron is to take the inner product of the weight vector and the transposed input vector, add the bias, and output the scalar result of the nonlinear transfer function.

6.3.2. Deep Learning. Deep learning is derived from artificial neural networks and is a general term for a class of pattern analysis methods. It has made rich achievements in data mining, machine learning, natural language processing, and other related fields [112]. The purpose of studying deep learning is to build neural networks that imitate the mechanisms of the human brain to interpret data such as images, sound, and text [113]. Deep learning includes supervised learning and unsupervised learning. Classical learning models include the convolutional neural network (CNN), the deep belief network (DBN), and the stacked autoencoder network (SAEN) [114]. At present, deep networks have been successfully applied to the fusion of single-mode data (such as text and images) and are developing rapidly in the fusion of multimode data (such as video) [115, 116].

① Convolutional neural network model: the CNN is a kind of feedforward neural network with convolution operations and a deep structure. It has representation-learning ability and is one of the representative deep learning algorithms [117, 118]. It is composed of an input layer, hidden layers, and an output layer; it can be used for both supervised and unsupervised learning, and its hidden layers require relatively little computation [119–121].

② Deep belief network model: the DBN can also be interpreted as a Bayesian probabilistic generative model [122]. It is a multi-hidden-layer neural network composed of multiple restricted Boltzmann machines (RBMs).

③ Stacked autoencoder network model: in a sparse autoencoder, the input layer and the hidden layer constitute an encoder, which converts the input signal x into a code a. The hidden layer and the output layer constitute a decoder, which transforms the code into an output signal y. Multiple sparse autoencoders can form a stacked autoencoder; that is, the output of the sparse autoencoder of the previous layer is used as the input of the autoencoder of the subsequent layer [118].

6.3.3. Fuzzy Set Theory. Fuzzy set theory refers to the use of mathematics to describe fuzzy concepts and to extend exact sets to fuzzy sets; it is also called fuzzy mathematics. Mathematically, it removes the limitation that computers cannot handle fuzzy concepts [126]. The proposal of the "membership function" breaks through the absolute belong/not-belong relationship of classical set theory and describes the ambiguity of things [127, 128].

Definition of ambiguity [128, 129]: a measure of the ambiguity of a fuzzy set A, reflecting the degree to which A is fuzzy, is defined intuitively as follows. Let D: F(U) → [0, 1] be a map, where D is the ambiguity function defined on F(U); then D(A) is the ambiguity of the fuzzy set A, and it should have the following five properties:

① Clarity: D(A) = 0 if and only if A ∈ P(U) (the ambiguity of a classical set is always 0)
② Fuzziness: D(A) = 1 if and only if A(u) = 0.5 for all u ∈ U (the fuzzy set with membership degree 0.5 everywhere is the fuzziest)
③ Monotonicity: for all u ∈ U, if A(u) ≤ B(u) ≤ 0.5, or A(u) ≥ B(u) ≥ 0.5, then D(A) ≤ D(B)
④ Symmetry: for all A ∈ F(U), D(A) = D(Aᶜ) (a fuzzy set and its complement have the same degree of ambiguity)
⑤ Additivity: D(A ∪ B) + D(A ∩ B) = D(A) + D(B)

6.3.4. Decision Tree. The machine learning technique of generating a decision tree from data is called decision tree learning for short. It is a basic classification and regression method [130].
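The single-neuron computation t = f(WA′ + b) described at the start of this section can be sketched in a few lines of NumPy. The sigmoid transfer function and the sample weights, inputs, and bias below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def neuron_output(w, a, b, f):
    """Inner product of weight and input vectors, plus bias, through transfer f."""
    return f(np.dot(w, a) + b)  # scalar result t = f(W . A' + b)

def sigmoid(z):
    """A common nonlinear transfer function mapping R -> (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.5, -0.2, 0.1])  # synapse weights w1..wn (illustrative)
a = np.array([1.0, 2.0, 3.0])   # input vector A (illustrative)
t = neuron_output(w, a, b=0.3, f=sigmoid)
print(t)  # a single scalar in (0, 1)
```

An artificial neural network, as in Figure 8(b), is then just many such neurons wired so that each layer's outputs become the next layer's input vector.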
Mathematical Problems in Engineering 13
(Panel (a): inputs a1…an are weighted by w1…wn, summed with bias b, and passed through the transfer function f to give the output t. Panel (b): a network of such neurons with inputs x1…xn.)
Figure 8: Artificial neural network. (a) Neuronal structure. (b) Artificial neural network computing model.
(Panel (a): a stack of Input Layer, Hidden Layers 1–4, and Output Layer. Panel (b): layer-wise training with RBM 1–RBM 4.)
Figure 9: (a) Deep belief network. (b) Deep belief network training process.
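One concrete ambiguity measure satisfying the five properties listed in Section 6.3.3 is the linear fuzziness index D(A) = (2/n) Σ_u min(A(u), 1 − A(u)). The sketch below, with illustrative membership values, is an assumption for illustration rather than a definition taken from the paper:

```python
import numpy as np

def ambiguity(memberships):
    """Linear fuzziness index: D(A) = (2/n) * sum(min(A(u), 1 - A(u)))."""
    a = np.asarray(memberships, dtype=float)
    return 2.0 * np.minimum(a, 1.0 - a).mean()

crisp = ambiguity([0.0, 1.0, 1.0, 0.0])     # classical set: ambiguity 0 (clarity)
fuzziest = ambiguity([0.5, 0.5, 0.5])        # membership 0.5 everywhere: ambiguity 1
complement = ambiguity([0.8, 0.3])           # complement of [0.2, 0.7]: same ambiguity (symmetry)
print(crisp, fuzziest, complement)
```

Checking the remaining properties is mechanical: min(x, 1 − x) is unchanged under x ↦ 1 − x (symmetry), increases toward 0.5 (monotonicity), and for each element {max(a, b), min(a, b)} = {a, b} gives additivity.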
It is a graph-theoretic method that intuitively applies probability analysis; that is, on the basis of the known occurrence probabilities of various situations, it classifies objects or mines data by constructing a decision tree [131]. The output of a decision tree is single valued; when facing complex outputs, multiple independent decision trees can be established to handle the different outputs.

In recent years, multisensor data fusion has developed rapidly; when dealing with situation fusion, many scholars also use nonprobabilistic fusion methods such as random sets [132–134], rough sets [135–138], fuzzy logic [139–142], and Dempster–Shafer theory [77, 143, 144], with good results.

6.4. Knowledge Fusion and Process Optimization. Knowledge fusion is the fusion of sensor data with supporting database data. Process optimization refers to the global optimization process based on knowledge fusion. Knowledge fusion includes selection and automatic reasoning. Selection is mainly reflected in the choice of fusion mode and method, with emphasis on location information fusion and parameter data fusion. Automatic reasoning technology interprets the observed data environment, the relationships between observed entities, and the hierarchical grouping of targets or objects according to the actual rules, frames, and scripts of the knowledge fusion process, in order to predict the future behavior of a target or entity.

Process optimization is usually realized through "effect theory" [145, 146]; that is, a variety of system evaluation indexes and methods are used to monitor and evaluate the performance of each link (subsystem) and to form an effective resource allocation scheme, which acts as the feedback part of the whole system.

The effectiveness of a data fusion system is generally evaluated quantitatively by Monte Carlo simulation [147, 148] or covariance error analysis [18, 149]. To optimize a data fusion system, the following basic issues must be considered and solved [150, 151]: (1) which algorithm or technology is the most suitable and optimal; (2) which fusion framework (that is, where in the fusion process the data flow is processed) is most appropriate; (3) which sensor integration method can extract the maximum amount of information; (4) how to guarantee the accuracy that each stage of data fusion can actually achieve; (5) how to optimize the fusion process in a dynamic sense; (6) how to deal with the impact of the data collection environment; (7) how to improve the conditions of system operation.

6.5. Multisensor Data Generalized Fusion: Proposed Method. The data fusion theories and algorithms summarized in Sections 6.2 and 6.3 are strongly tied to specific application objects and are relatively
(Figure: examples of heterogeneous source data mapped into tensor subspaces: GPS trajectory records with fields StudentID, Longitude, Latitude, and Time, e.g., D20148803, 114.41225837, 30.51989529, 07-28 10:36:15; and an XML document (<University><Category='doctoral'>…</Student></University>); tensor dimensions labeled Iy, Ix, Ih, If, Iw, Ic, Ien, Iec, Ier, It, Iid.)

f: (d_u ∪ d_seml ∪ d_s) → T_u ∪ T_seml ∪ T_s
(MC) and tensors, transform the tensor in multiple steps, fuse the space, time, and supporting data of the system migration process, realize system migration fusion, and obtain the constrained tensor fusion (CTF) calculation model (equations (12) and (15)). The position sequence in GPS data constitutes a spatial transformation model. The corresponding transformation matrix is a row-stochastic matrix; the sum of the elements in each row of the matrix equals 1.

Spatial data fusion: spatial data fusion is realized by the Markov model. The MC has the following properties: (i) according to the theory of discrete stochastic processes, the transformation matrix corresponds to a stationary distribution; that is, any initial distribution vector converges to the steady distribution vector after being multiplied by the transformation matrix infinitely many times; (ii) matrix theory shows that the steady distribution vector is the eigenvector corresponding to the largest eigenvalue of the transformation matrix. Referring to [154], the transformation of the spatial tensor is shown in Figure 12. Figure 12 shows the mobility model of the platform; the elements of the matrix represent the probability of the platform moving from one point to another.

Spatiotemporal data fusion: fusing time data with spatial data requires a spatiotemporal tensor transformation. Mobile behavior is always related to time, and time is an important element of the mobile behavior model. The motion time is discretely represented as i, and the new state space is composed of pose information and time information, represented as S = {⟨T_1, P_1⟩, …, ⟨T_i, P_j⟩, …, ⟨T_I, P_J⟩}. Following [154], the space-time tensor transformation is shown in Figure 13. Figure 13 shows the migration model of the platform; the elements of the tensor represent the probability of the platform moving from one point to another at a certain time. Based on the transformation of the space tensor, the fourth-order space-time transition tensor T_ST can be obtained by combining time information through equation (12).

Temporal and spatial information fusion:

T_{t1,l1,t2,l2} = count(t1, l1 → t2, l2) / count(t1, l1).  (12)

Support database fusion: the influence degree of the support data is quantified as follows: the greater the correlation coefficient between the fusion data tensor and the support data tensor, the greater the influence. The knowledge influence coefficient is described as

ρ_KI = Σ_{i=1}^{n} (K_i − K̄)(I_i − Ī) / √(Σ_{i=1}^{n} (K_i − K̄)² · Σ_{i=1}^{n} (I_i − Ī)²).  (13)

The correlation coefficient matrix Γ ∈ R^{K_N × I_N} can be obtained by calculating the correlation coefficients between global tensors. By normalizing the rows of the correlation coefficient matrix Γ, the influence matrix (IM) Λ can be obtained:

Γ = [ ρ_{K1I1}  ρ_{K1I2}  …  ρ_{K1In} ]
    [ ρ_{K2I1}  ρ_{K2I2}  …  ρ_{K2In} ]
    [    ⋮         ⋮              ⋮   ]
    [ ρ_{KnI1}  ρ_{KnI2}  …  ρ_{KnIn} ],

Λ = [ λ_{K1I1}  λ_{K1I2}  …  λ_{K1In} ]
    [ λ_{K2I1}  λ_{K2I2}  …  λ_{K2In} ]
    [    ⋮         ⋮              ⋮   ]
    [ λ_{KnI1}  λ_{KnI2}  …  λ_{KnIn} ].  (14)

By fusing the influence matrix of the supporting data into the spatiotemporal transformation tensor, the system migration behavior fusion can be obtained:

A = Λ ⊙ T_ST,  (15)

where ⊙ is the Kronecker product.

7. Development Trend and Urgent Difficulties to Be Solved

At present, most of the work on multisensor data fusion in academic circles is carried out for specific applications, without forming the basic theoretical framework and
(Figure: construction of the row-stochastic transformation matrix: counts of movements from starting points to destinations, e.g. [[0, 1, 1], [1, 0, 2]], are normalized row by row into [[0, 1/2, 1/2], [1/3, 0, 2/3]], so that each row sums to 1; panel labels "Counting," "Normalization in row," and "POI2 − A.M.")
algorithm system. Therefore, establishing the basic theoretical framework and the generalized algorithm system of multisensor data fusion is the main trend in the future development of this field. Based on this trend, the following problems urgently need to be solved:

(1) Establish an optimal management scheme for sensor resources. In a multisensor data fusion system, sensing is the source of the fusion data; the number, attributes, and integration methods of the sensors directly determine the quality of the fusion data, which is one of the key factors affecting the fusion result. A sensor resource optimization program should optimize the scheduling of sensor resources in three aspects: space management, time management, and mode management, so that the sensors are used as fully and rationally as possible and the sensor system achieves optimal performance.

(2) Establish evaluation criteria for multisensor systems to avoid blind design of the fusion system. The fault tolerance or robustness of the multisensor integrated system directly affects the quality of data acquisition, and one must overcome the difficulties of sensor measurement error modeling, real-time response of the system to complex dynamic environments, establishment of the knowledge base, and reasonable arrangement of the sensors. Designing a generalized sensor integration scheme and perfecting the generalized fusion architecture of multisensor data are the keys to avoiding blind design of the fusion system.

(3) Establish theories and methods that can fully and effectively utilize the redundant information provided by multiple sensors. The more sufficient the amount of information, the closer the fusion result is to the essence of things. Developing, with the help of new technologies from other fields, theories and algorithms that can fully and effectively exploit the redundant information of multiple sensors, reduce the impact of data defects (imprecision and uncertainty), and alleviate outliers and false data [155] is one of the key factors for improving the accuracy of data fusion.
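The constrained tensor fusion steps of Section 6.5 (equations (12)–(15)) can be sketched numerically: movement counts are row-normalized into a stochastic transition matrix, support-data influence is obtained from Pearson correlation coefficients row-normalized into an influence matrix, and the two are combined with a Kronecker product. All counts and data series below are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Equation (12): row-normalize movement counts into a transition matrix,
# so that each row sums to 1 (row-stochastic, as in Figure 12).
counts = np.array([[0.0, 1.0, 1.0],
                   [1.0, 0.0, 2.0],
                   [2.0, 1.0, 1.0]])  # illustrative counts of moves between points
T = counts / counts.sum(axis=1, keepdims=True)

# Equation (13): Pearson correlation between a fusion-data series K and a
# support-data series I quantifies the support data's influence.
def pearson(k, i):
    k, i = np.asarray(k, float), np.asarray(i, float)
    kc, ic = k - k.mean(), i - i.mean()
    return (kc * ic).sum() / np.sqrt((kc ** 2).sum() * (ic ** 2).sum())

K = [[1.0, 2.0, 3.0], [1.0, 1.0, 2.0]]  # illustrative fusion-data series
I = [[2.0, 4.0, 6.0], [1.0, 1.5, 2.0]]  # illustrative support-data series

# Equation (14): correlation matrix Gamma, row-normalized into influence matrix Lambda.
Gamma = np.array([[pearson(k, i) for i in I] for k in K])
Lam = Gamma / Gamma.sum(axis=1, keepdims=True)

# Equation (15): fuse influence into the spatiotemporal transformation, A = Lambda (x) T_ST.
A = np.kron(Lam, T)
print(T.sum(axis=1), A.shape)
```

With a 2×2 influence matrix and a 3×3 transition matrix, the Kronecker product yields a 6×6 fused operator; in the paper's model, T would be the fourth-order space-time tensor T_ST rather than this purely spatial sketch.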
(4) Establish criteria for judging data fusion to reduce the ambiguity of data association. Inconsistent fusion data, also known as data association ambiguity, is one of the main obstacles to overcome in data fusion. In the process of multisensor data fusion, data consistency is the key factor affecting the fusion result. Data association is the key to ensuring the consistency of the fused data, that is, to ensuring that the fused information is about the same target or phenomenon.

(5) Develop and improve the basic theory of data fusion. Academia has conducted extensive research on data fusion technology and has accumulated much successful experience, but even today the theoretical foundation is still incomplete and effective basic algorithms are still missing. The development and improvement of the basic theory of data fusion are key factors for the rapid development of this field.

(6) Improve the fusion algorithms to improve fusion performance. The fusion algorithm is the core of data fusion. Introducing new mathematical methods to improve fusion algorithms is the long-cherished wish of countless scholars. The introduction of modern statistical theory, random set theory, fuzzy set theory, rough set theory, Bayesian theory, evidence theory, support vector machines, and other intelligent computing technologies will bring new development opportunities to the state estimation of nonlinear non-Gaussian systems and to heterogeneous data fusion.

(7) Establish a knowledge base for data fusion applications. In the field of data fusion, it is necessary to establish databases and knowledge bases, to form optimized storage mechanisms and high-speed parallel retrieval and reasoning mechanisms, and thereby to improve the operating efficiency of the cluster fusion system and the reliability of the fusion results.

(8) Establish a generalized fusion algorithm system for multisensor data. The generalized algorithm based on the basic integrated structure model of the multisensor should have the advantages of reducing data defects, alleviating abnormal values and false data, and handling highly conflicting data, data multimodality, data correlation, data alignment/registration, and data association. It should also be able to select a fusion framework for complex system data fusion, implement timing operations, process static and dynamic data states [156], and compress data dimensions [30].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 51905302.

References

[1] M. Liggins II, D. Hall, and J. Llinas, Handbook of Multisensor Data Fusion: Theory and Practice, Vol. 39, CRC Press, Boca Raton, FL, USA, 2008.
[2] E. Waltz, "Data fusion for C3I: a tutorial," Command, Control, Communications Intelligence (C3I) Handbook, pp. 217–226, EW Communications, Palo Alto, CA, USA, 1986.
[3] P. S. Rossi, P. K. Varshney, and D. Ciuonzo, "Distributed detection in wireless sensor networks under multiplicative fading via generalized score tests," IEEE Internet of Things Journal, vol. 8, no. 11, 2021.
[4] R. Rucco, A. Sorriso, M. Liparoti et al., "Type and location of wearable sensors for monitoring falls during static and dynamic tasks in healthy elderly: a review," Sensors, vol. 18, no. 5, p. 1613, 2018.
[5] P. I. Corke, "Machine vision," Moldes, vol. 19, 2000.
[6] D. Lahat, T. Adali, and C. Jutten, "Multimodal data fusion: an overview of methods, challenges, and prospects," Proceedings of the IEEE, vol. 103, no. 9, pp. 1449–1477, 2015.
[7] B. Khaleghi, A. Khamis, F. O. Karray, and S. N. Razavi, "Multisensor data fusion: a review of the state-of-the-art," Information Fusion, vol. 14, no. 1, pp. 28–44, 2013.
[8] F. E. White, "Data fusion lexicon," The Data Fusion Lexicon Subpanel of the Joint Directors of Laboratories, San Diego, CA, USA, 1991.
[9] V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 711–720, 2008.
[10] H. Boström, S. F. Andler, M. Brohede et al., "On the definition of information fusion as a field of research," Neoplasia, vol. 13, no. 2, pp. 98–107, 2007.
[11] L. A. Klein, Sensor and Data Fusion Concepts and Applications, SPIE Optical Engineering Press, Bellingham, WA, USA, 1999.
[12] F. E. White, Data Fusion Lexicon, Joint Directors of Labs, Washington, DC, USA, 1991.
[13] H. Durrant-Whyte, Integration, Coordination, and Control of Multi-Sensor Robot Systems, Kluwer Academic Publishers Group, Alphen aan den Rijn, Netherlands, 1988.
[14] F. Mastrogiovanni, A. Sgorbissa, and R. Zaccaria, "A distributed architecture for symbolic data fusion," in Proceedings of IJCAI 2007, Hyderabad, India, 2007.
[15] J. Llinas and D. L. Hall, "An introduction to multi-sensor data fusion," in Proceedings of the 1998 IEEE International Symposium on Circuits and Systems, Monterey, CA, USA, 1998.
[16] E. L. J. Waltz, Multi Sensor Data Fusion, Artech House Inc, Norwood, MA, USA, 1990.
[17] M. A. Abidi and R. C. Gonzalez, Data Fusion in Robotics and Machine Intelligence, Academic Press, San Diego, CA, USA, 1992.
[18] D. L. Hall and S. A. H. McMullen, Mathematical Techniques in Multisensor Data Fusion, Artech House, Boston, MA, USA, 2004.
[19] R. Malhotra and L. Wright, "Temporal considerations in sensor management," in Proceedings of the IEEE 1995 National Aerospace and Electronics Conference, Dayton, OH, USA, 1995.
[20] S. Paradis, B. A. Chalmers, R. Carling, and P. Bergeron, "Toward a generic model for situation and threat assessment," Proceedings of SPIE, vol. 3080, pp. 171–182, 1997.
[21] A. N. Steinberg, C. L. Bowman, and F. E. White, Revisions to the JDL Data Fusion Model, SPIE, Bellingham, WA, USA, 1999.
[22] F. E. White, Data Fusion Lexicon, Joint Directors of Laboratories, Technical Panel for C3, Data Fusion Sub-Panel, Naval Ocean Systems Center, San Diego, CA, USA, 1987.
[23] I. R. Goodman, R. P. Mahler, and H. T. Nguyen, Mathematics of Data Fusion, Springer, Berlin, Germany, 1997.
[24] D. Hall and J. Llinas, Handbook of Multisensor Data Fusion, CRC Press, Boca Raton, FL, USA, 2001.
[25] B. V. Dasarathy, "Information fusion—what, where, why, when, and how?" Information Fusion, vol. 2, 2001.
[26] J. M. Richardson and K. A. Marsh, "Fusion of multisensor data," The International Journal of Robotics Research, vol. 7, no. 6, pp. 78–96, 1988.
[27] R. McKendall and M. Mintz, Robust Fusion of Location Information, IEEE Computer Society Press, Washington, DC, USA, 1988.
[28] S. A. M. Desforges, "Strategies in data fusion sorting through the tool box," in Proceedings of 1998 European Conference on Data Fusion, Malvern, PA, USA, 1998.
[29] P. Wang, L. T. Yang, J. Li, J. Chen, and S. Hu, "Data fusion in cyber-physical-social systems: state-of-the-art and perspectives," Information Fusion, vol. 51, pp. 42–57, 2019.
[30] S. Alonso, D. Pérez, A. Morán, J. J. Fuertes, I. Díaz, and M. Domínguez, "A deep learning approach for fusing sensor data from screw compressors," Sensors, vol. 19, no. 13, p. 2868, 2019.
[31] I. Bloch, A. Hunter, A. Appriou et al., "Fusion: general concepts and characteristics," International Journal of Intelligent Systems, vol. 16, no. 10, pp. 1107–1134, 2010.
[32] Z. Ning and Z. Jinfu, "Study on image compression and fusion based on the wavelet transform technology," International Journal on Smart Sensing and Intelligent Systems, vol. 8, no. 1, pp. 480–496, 2015.
[33] A. Mohebi and P. Fieguth, "Statistical fusion and sampling of scientific images," in Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 2008.
[34] Q.-S. Sun, S.-G. Zeng, Y. Liu, P.-A. Heng, and D.-S. Xia, "A new method of feature fusion and its application in image recognition," Pattern Recognition, vol. 38, no. 12, pp. 2437–2448, 2005.
[35] B. Garner and D. Lukose, "Knowledge fusion," in Proceedings of the 1992 Workshop on Conceptual Structures: Theory & Implementation, Las Cruces, NM, USA, 1992.
[36] A. Goel, A. Patel, K. G. Nagananda, and P. K. Varshney, "Robustness of the counting rule for distributed detection in wireless sensor networks," IEEE Signal Processing Letters, vol. 25, no. 8, pp. 1191–1195, 2018.
[37] D. Ciuonzo, S. H. Javadi, A. Mohammadi, and P. S. Rossi, "Bandwidth-constrained decentralized detection of an unknown vector signal via multisensor fusion," IEEE Transactions on Signal and Information Processing over Networks, vol. 6, pp. 744–758, 2020.
[38] E. Waltz and J. Llinas, Multi Sensor Data Fusion, IET, London, UK, 2002.
[39] J. Z. Sasiadek, "Sensor fusion," Annual Reviews in Control, vol. 26, no. 2, pp. 203–228, 2002.
[40] E. Blasch, J. Llinas, D. Lambert et al., "High level information fusion developments, issues, and grand challenges: fusion 2010 panel discussion," in Proceedings of the 2010 13th International Conference on Information Fusion, Edinburgh, UK, 2010.
[41] E. P. Blasch, R. Breton, P. Valin, and E. Bosse, "User information fusion decision making analysis with the C-OODA model," in Proceedings of the International Conference on Information Fusion, Chicago, IL, USA, 2011.
[42] R. C. Luo and M. G. Kay, Multisensor Integration and Fusion: Issues and Approaches, SPIE, Bellingham, WA, USA, 1988.
[43] S. C. A. Thomopoulos, "Sensor integration and data fusion," Journal of Robotic Systems, vol. 7, no. 3, pp. 337–372, 1990.
[44] C. J. Harris, A. Bailey, and T. J. Dodd, "Multi-sensor data fusion in defence and aerospace," Aeronautical Journal New Series, vol. 102, no. 1015, pp. 229–244, 1998.
[45] M. Bedworth and J. O'Brien, "The Omnibus model: a new model of data fusion?" Aerospace & Electronic Systems Magazine, vol. 15, no. 4, pp. 30–36, 2009.
[46] A. G. Ciancio, S. Pattem, A. Ortega, and B. Krishnamachari, "Energy-efficient data representation and routing for wireless sensor networks based on a distributed wavelet compression algorithm," in Proceedings of the 2006 5th International Conference on Information Processing in Sensor Networks, Nashville, TN, USA, 2006.
[47] R. Verbeek and K. Weihrauch, "Data representation and computational complexity," Theoretical Computer Science, vol. 7, no. 1, pp. 99–116, 1978.
[48] S. J. Wilson, "Data representation for time series data mining: time domain approaches," Wiley Interdisciplinary Reviews: Computational Statistics, vol. 9, no. 1, Article ID e1392, 2017.
[49] S. Vigneshwari and M. Aramudhan, "Social information retrieval based on semantic annotation and hashing upon the multiple ontologies," Indian Journal of Science and Technology, vol. 8, no. 2, pp. 103–107, 2015.
[50] T. D. Cao, T. H. Phan, and A. D. Nguyen, "An ontology based approach to data representation and information search in smart tourist guide system," in Proceedings of the 3rd International Conference on Knowledge & Systems Engineering, Hanoi, Vietnam, 2011.
[51] S. Hachem, T. Teixeira, and V. Issarny, "Ontologies for the internet of things," in Proceedings of the 8th Middleware Doctoral Symposium, Lisbon, Portugal, 2011.
[52] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
[53] L. Sorber, Data Fusion: Tensor Factorizations by Complex Optimization, Faculty of Engineering, KU Leuven, Leuven, Belgium, 2014.
[54] I. Kotsia and I. Patras, "Support tucker machines," in Proceedings of the 2011 Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011.
[55] T. G. Kolda, B. W. Bader, and J. P. Kenny, "Higher-order web link analysis using multilinear algebra," in Proceedings of the 5th IEEE International Conference on Data Mining, Houston, TX, USA, 2005.
[56] M. Signoretto, L. De Lathauwer, and J. A. K. Suykens, "A kernel-based framework to tensorial data analysis," Neural Networks, vol. 24, no. 8, pp. 861–874, 2011.
[57] K. Zheng, G. Si, Z. Zhou, J. Chen, and W. Yue, "Consistency test based on self-support degree and hypothesis testing for multi-sensor data fusion," in Proceedings of the 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference, Chongqing, China, 2017.
[58] F. Garcia, B. Mirbach, B. Ottersten, F. Grandidier, and Á. Cuesta, "Pixel weighted average strategy for depth sensor data fusion," in Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 2010.
[59] R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[60] R. E. Kalman and R. S. Bucy, "New results in linear filtering and prediction theory," Journal of Basic Engineering, vol. 83, no. 5, pp. 95–108, 1961.
[61] I. Guyon, M. Nikravesh, S. Gunn, and L. A. Zadeh, "Feature extraction," Studies in Fuzziness and Soft Computing, vol. 31, pp. 1737–1744, Springer, Berlin, Germany, 2006.
[62] H. A. Fayed and A. F. Atiya, "A novel template reduction approach for the K-nearest neighbor method," IEEE Transactions on Neural Networks, vol. 20, no. 5, pp. 890–896, 2009.
[63] P. L. Ainsleigh, T. E. Luginbuhl, and P. K. Willett, "A sequential target existence statistic for joint probabilistic data association," IEEE Transactions on Aerospace and Electronic Systems, vol. 57, pp. 371–381, 2020.
[64] S. He, H. S. Shin, and A. Tsourdos, "Information-theoretic joint probabilistic data association filter," IEEE Transactions on Automatic Control, vol. 66, no. 3, pp. 1262–1269, 2020.
[65] S. Liu, H. Li, Y. Zhang, and B. Zou, "Multiple hypothesis method for tracking move-stop-move target," Journal of Engineering, vol. 2019, no. 19, pp. 6155–6159, 2019.
[66] A. O. T. Hogg, C. Evers, and P. A. Naylor, "Multiple hypothesis tracking for overlapping speaker segmentation," in Proceedings of the 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, NY, USA, 2019.
[67] R. L. Streit and T. E. Luginbuhl, "Maximum likelihood method for probabilistic multihypothesis tracking," in Proceedings of SPIE—The International Society for Optical Engineering, vol. 2235, Rome, Italy, September 1994.
[68] R. L. Streit, S. G. Greineder, and T. E. Luginbuhl, "Maximum likelihood training of probabilistic neural networks with rotationally related covariance matrices," in Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Australia, 1995.
[69] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Theory, Algorithms, and Software, John Wiley & Sons, New York, NY, USA, 2001.
[70] M. Nørgaard, N. K. Poulsen, and O. Ravn, "New developments in state estimation for nonlinear systems," Automatica, vol. 36, pp. 1627–1638, 2000.
[71] J. C. Spall, "Estimation via Markov chain Monte Carlo," IEEE Control Systems, vol. 23, no. 2, pp. 34–45, 2003.
[72] A. Doucet, S. Godsill, and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, vol. 10, no. 3, pp. 197–208, 2000.
[73] S. J. Julier and J. K. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, 2004.
[74] I. Arasaratnam and S. Haykin, "Cubature Kalman filters," IEEE Transactions on Automatic Control, vol. 54, no. 6, pp. 1254–1269, 2009.
[75] B. P. Carlin and T. A. Louis, "Bayes and empirical Bayes methods for data analysis," Statistics and Computing, vol. 7, no. 2, pp. 153-154, 1998.
[76] L. Xu, Y. Chen, and P. Cui, "Improvement of D-S evidential theory in multisensor data fusion system," in Proceedings of the 6th World Congress on Intelligent Control & Automation, Dalian, China, 2004.
[77] R. R. Yager, "On the Dempster-Shafer framework and new combination rules," Information Sciences, vol. 41, no. 2, pp. 93–137, 1987.
[78] L. Siklóssy, Representation and Meaning, Prentice-Hall, Hoboken, NJ, USA, 1972.
[79] A. Skowron and J. Grzymala-Busse, From Rough Set Theory to Evidence Theory, John Wiley & Sons, Hoboken, NJ, USA, 1994.
[80] D. Bell, Evidence Theory and Its Applications, Vol. 2, Elsevier Science Inc, Amsterdam, Netherlands, 1991.
[81] E. L. Post, The Two-Valued Iterative Systems of Mathematical Logic, Princeton University Press, Princeton, NJ, USA, 1941.
[82] H. A. Simon, "Complexity and the representation of patterned sequences of symbols," Psychological Review, vol. 79, no. 5, pp. 369–382, 1972.
[83] X. Sun, W. Gao, and Y. Duan, "MR brain image segmentation using a fuzzy weighted multiview possibility clustering algorithm with low-rank constraints," Journal of Medical Imaging & Health Informatics, vol. 11, 2021.
[84] X. Li, B. Kao, C. Shan, D. Yin, and M. Ester, "CAST: a correlation-based adaptive spectral clustering algorithm on multi-scale data," 2020, https://arxiv.org/abs/2006.04435.
[85] A. Treshansky and R. M. McGraw, "Overview of clustering algorithms," Proceedings of SPIE—The International Society for Optical Engineering, vol. 4367, pp. 41–51, 2001.
[86] M. Hassani, "Overview of efficient clustering methods for high-dimensional big data streams: techniques, toolboxes and applications," Clustering Methods for Big Data Analytics, Springer, Berlin, Germany, 2019.
[87] C. L. Liu, Introduction to Combinatorial Mathematics, McGraw Hill, New York, NY, USA, 1968.
[88] K. Krishna and M. Narasimha Murty, "Genetic K-means algorithm," IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 29, no. 3, pp. 433–439, 1999.
[89] Y. Lu, S. Lu, F. Fotouhi, Y. Deng, and S. J. Brown, "FGKA: a fast genetic K-means clustering algorithm," in Proceedings of the 2004 ACM Symposium on Applied Computing, Nicosia, Cyprus, 2004.
[90] E. Schubert and P. Rousseeuw, "Faster K-medoids clustering: improving the PAM, CLARA, and CLARANS algorithms," in Proceedings of the 2019 International Conference on Similarity Search and Applications, Newark, NJ, USA, 2019.
[91] H. H. Nguyen, "Privacy-preserving mechanisms for k-modes clustering," Computers & Security, vol. 78, pp. 60–75, 2018.
[92] R. Gelbard, O. Goldman, and I. Spiegler, "Investigating diversity of clustering methods: an empirical comparison," Data & Knowledge Engineering, vol. 63, no. 1, pp. 155–166, 2007.
[93] H. Yin, "ViSOM—a novel method for multivariate data projection and structure visualization," IEEE Transactions on Neural Networks, vol. 13, no. 1, pp. 237–243, 2002.
[94] R. Amami, "An incremental method combining density clustering and support vector machines for voice pathology detection," Computers & Electrical Engineering, vol. 57, pp. 257–265, 2016.
[95] R. T. Ng and J. Han, "CLARANS: a method for clustering objects for spatial data mining," IEEE Transactions on Knowledge & Data Engineering, vol. 14, no. 5, pp. 1003–1016, 2002.
[96] Y. Zhang, J. Sun, Y. Zhang, and X. Zhang, "Parallel implementation of CLARANS using PVM," in Proceedings of
2004 International Conference on Machine Learning and Cybernetics, Shanghai, China, 2004.
[97] S. Gaffney and P. Smyth, "Trajectory clustering with mixtures of regression models," in Proceedings of the 5th International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, 1999.
[98] C. C. Aggarwal and C. K. Reddy, Data Clustering: Algorithms and Applications, Taylor and Francis Group, London, UK, 2013.
[99] R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan, "Automatic subspace clustering of high dimensional data for data mining applications," ACM SIGMOD Record, vol. 27, no. 2, pp. 94–105, 1998.
[100] L. Parsons, E. Haque, and H. Liu, "Subspace clustering for high dimensional data: a review," ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 90–105, 2004.
[101] M. Yin, S. Xie, Z. Wu, Y. Zhang, and J. Gao, "Subspace clustering via learning an adaptive low-rank graph," IEEE Transactions on Image Processing, vol. 27, no. 8, pp. 3716–3728, 2018.
[102] B. Sandipan, "An efficient approach of election algorithm in distributed systems," Indian Journal of Computer Science & Engineering, vol. 2, no. 1, 2011.
[103] B. Awerbuch, "A new distributed depth-first-search algorithm," Information Processing Letters, vol. 20, no. 3, pp. 147–150, 1985.
[104] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, 1948.
[105] E. T. Jaynes, "Information theory and statistical mechanics," Physical Review, vol. 106, 1957.
[106] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115–133, 1943.
[107] W. Pitts, "The linear theory of neuron networks: the dynamic
…
[116] … multimodal deep learning," Computer Networks, vol. 165, Article ID 106944, 2019.
[117] D. H. Hubel and T. N. Wiesel, "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex," Journal of Physiology, vol. 160, pp. 106–154, 1962.
[118] K. Fukushima, "Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, vol. 36, pp. 193–202, 1980.
[119] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[120] Y. LeCun, B. Boser, J. Denker et al., "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4, pp. 541–551, 1989.
[121] Y. LeCun, K. Kavukcuoglu, and C. M. Farabet, "Convolutional networks and applications in vision," in Proceedings of 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 2010.
[122] G. E. Hinton, "Distributed representations," Technical report, University of Toronto, Toronto, Canada, 1984.
[123] H. Wang and P. Liu, "Image recognition based on improved convolutional deep belief network model," Multimedia Tools & Applications, vol. 80, pp. 2031–2045, 2020.
[124] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, pp. 1527–1554, 2006.
[125] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.
[126] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, no. 3, pp. 338–353, 1965.
[127] C. H. Wang, W. Y. Wang, T. T. Lee, and P. S. Tseng, "Fuzzy B-spline membership function (BMF) and its applications in
fuzzy-neural control,” IEEE Transactions on Systems, Man,
problem,” Bulletin of Mathematical Biophysics, vol. 5, no. 1,
and Cybernetics, vol. 25, no. 5, pp. 841–851, 1995.
pp. 23–31, 1943.
[128] J. Chleboun, “A new membership function approach to
[108] W. T. Katz, J. W. Snell, and M. B. Merickel, “Artificial neural
networks,” Methods in Enzymology, vol. 210, no. 210, uncertain functions,” Fuzzy Sets and Systems, vol. 387,
pp. 610–636, 1992. pp. 68–80, 2020.
[109] E. Judith and J. M. Deleo, “Artificial neural networks,” [129] S.-U.-D. Khokhar, Q. Peng, A. Asif, M. Y. Noor, and
Cancer, vol. 91, no. S8, pp. 1615–1635, 2001. A. Inam, “A simple tuning algorithm of augmented fuzzy
[110] Y. Xin, “Evolving artificial neural networks,” Proceedings of membership functions,” IEEE Access, vol. 8, pp. 35805–
the IEEE, vol. 87, no. 9, pp. 1423–1447, 1999. 35814, 2020.
[111] E. Judith and J. M. Deleo, “Artificial neural networks,” [130] J. R. Quinlan, “Induction on decision tree,” Machine
Cancer, vol. 91, no. S8, pp. 1615–1635, 2001. Learning, vol. 1, 1986.
[112] A. Hazra and S. M. S. Prakashchoudhary, “Recent advances [131] R. C. Barros, M. P. Basgalupp, A. C. P. L. F. de Carvalho, and
in deep learning techniques and its applications: an over- A. A. Freitas, “A survey of evolutionary algorithms for de-
view,” Advances in Biomedical Engineering and Technology, cision-tree induction,” IEEE Transactions on Systems, Man,
Springer, Berlin, Germany, pp. 103–122, 2020. and Cybernetics, Part C (Applications and Reviews), vol. 42,
[113] A. Mathew, P. Amudha, and S. Sivakumari, “Deep learning no. 3, pp. 291–312, 2012.
techniques: an overview,” in Proceedings of the 2021 Inter- [132] J. Mullane, B.-N. Vo, M. D. Adams, and B.-T. Vo, “A
national Conference on Advanced Machine Learning Tech- random-finite-set approach to bayesian SLAM,” IEEE
nologies and Applications, Cairo, Egypt, 2021. Transactions on Robotics, vol. 27, no. 2, pp. 268–282, 2011.
[114] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and [133] H. E. Robbins, On the Measure of a Random Set II, Springer,
P.-A. Manzagol, “Stacked denoising autoencoders: learning New York, NY, USA, 1985.
useful representations in a deep network with a local [134] B. Ristic, Particle Filters for Random Set Models, Springer
denoising criterion,” Journal of Machine Learning Research, Publishing Company, Berlin, Germany, 2013.
vol. 11, no. 12, pp. 3371–3408, 2010. [135] Z. Pawlak, “Rough set,” International Journal of Computer &
[115] J. Ngiam, A. Khosla, and M. Kim, “Multimodal deep Information Sciences, vol. 11, no. 5, 1982.
learning,” in Proceedings of the 28th International Conference [136] J. W. Grzymała-Busse, Z. Pawlak, R. Słowiński, and
on Machine Learning, pp. 689–696, ICML, Bellevue, WA, W. Ziarko, “Rough set,” Communications of the ACM,
USA, 2011. vol. 38, no. 11, 1995.
[116] G. Aceto, D. Ciuonzo, A. Montieri, and A. Pescapè, “MI- [137] W. Ji, Y. Pang, X. Jia et al., “Fuzzy rough sets and fuzzy rough
METIC: mobile encrypted traffic classification using neural networks for feature selection: a review,” Wiley
Mathematical Problems in Engineering 21
Interdisciplinary Reviews Data Mining and Knowledge Dis- [156] X. L. Dong, L. Berti-Equille, and D. Srivastava, “Truth dis-
covery, vol. 11, no. 3, 2021. covery and copying detection in a dynamic world,” Pro-
[138] Y. Zhang and Y. Wang, “Research on classification model ceedings of the VLDB Endowment, vol. 2, no. 1, pp. 562–573,
based on neighborhood rough set and evidence theory,” 2009.
Journal of Physics: Conference Series, vol. 1746, no. 1, Article
ID 12018, 2021.
[139] L. A. Zadeh, “Fuzzy logic � computing with words,” IEEE
Transactions on Fuzzy Systems, vol. 4, pp. 3–23, 1999.
[140] L. Běhounek and P. Cintula, “From fuzzy logic to fuzzy
mathematics: a methodological manifesto,” Fuzzy Sets and
Systems, vol. 157, no. 5, pp. 642–646, 2006.
[141] L. Z. Zadeh, “Fuzzy logic, neural networks and soft com-
puting,” Microprocessing and Microprogramming, vol. 38,
no. 1, p. 13, 1993.
[142] X. Xiang, C. Yu, L. Lapierre, J. Zhang, and Q. Zhang, “Survey
on fuzzy-logic-based guidance and control of marine surface
vehicles and underwater vehicles,” International Journal of
Fuzzy Systems, vol. 20, pp. 572–586, 2018.
[143] Z. Luo and Y. Deng, “A matrix method of basic belief as-
signment’s negation in Dempster-Shafer theory,” IEEE
Transactions on Fuzzy Systems, vol. 28, no. 9, pp. 2270–2276,
2020.
[144] P. Liu and X. Zhang, “A new hesitant fuzzy linguistic ap-
proach for multiple attribute decision making based on
Dempster-Shafer evidence theory,” Applied Soft Computing,
vol. 86, Article ID 105897, 2019.
[145] D. L. Hall and J. Llinas, “An introduction to multisensor data
fusion,” Proceedings of the IEEE, vol. 85, pp. 6–23, 1997.
[146] K. Cho, B. Jacobs, B. Westerbaan, and A. Westerbaan, “An
introduction to effectus theory,” Arctic & Alpine Research,
vol. 29, no. 1, pp. 122–125, 2015.
[147] W. K. Hastings, “Monte Carlo sampling methods using
Markov chains and their applications,” Biometrika, vol. 57,
no. 1, pp. 97–109, 1970.
[148] H. Peng and Z. Peng, “An iterative method of statistical
tolerancing based on the unified Jacobian-Torsor model and
Monte Carlo simulation,” Journal of Computational Design
& Engineering, vol. 7, no. 2, p. 165, 2020.
[149] D. Hall and S. Waligora, “Orbit/attitude estimation with
LANDSAT landmark data,” in Proceedings of the 1979 GSFC
Flight Mechanics/Estimation Theory Symposium, pp. 67–110,
NASA, Goddard Space Flight Center Flight Mechanics, 1979.
[150] C. L. Miao, J. S. Nan, and N. Guo, “Effectiveness evaluation
architecture for intelligence reconnaissance system based on
multi-source data fusion technique,” Telecommunication
Engineering, vol. 4, pp. 429–434, 2012.
[151] Z. Rong, G. Jing-Wei, and Y. Hang, “Study of operational
effectiveness evaluation of multisensor data fusion system,”
Radio Engineering of China, vol. 38, no. 3, pp. 31–33, 2008.
[152] L. Kuang, F. Hao, L. T. Yang, M. Lin, C. Luo, and G. Min, “A
tensor-based approach for big data representation and di-
mensionality reduction,” IEEE Transactions on Emerging
Topics in Computing, vol. 2, no. 3, pp. 280–291, 2014.
[153] A. Singh and G. Gordon, Relational Learning via Collective
Matrix Factorization, ACM, New York, NY, USA, 2008.
[154] P. Wang, L. T. Yang, Y. Peng, J. Li, and X. Xie, “M2T2: the
multivariate multistep transition tensor for user mobility
pattern prediction,” IEEE Transactions on Network Science
and Engineering, vol. 7, no. 2, pp. 907–917, 2020.
[155] M. Kumar, D. P. Garg, and R. A. Zachery, “A generalized
approach for inconsistency detection in data fusion from
multiple sensors,” in Proceedings of the American Control
Conference 2006, Minneapolis, MN, USA, 2006.