
Hindawi

Mathematical Problems in Engineering


Volume 2021, Article ID 5471242, 21 pages
https://doi.org/10.1155/2021/5471242

Review Article
A New View of Multisensor Data Fusion: Research on
Generalized Fusion

Guo Chen,1 Zhigui Liu,1 Guang Yu,2 and Jianhong Liang1,2

1College of Information Engineering, Southwest University of Science and Technology, Mianyang 621000, China
2Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China

Correspondence should be addressed to Guang Yu; gyu@tsinghua.edu.cn

Received 16 June 2021; Revised 26 August 2021; Accepted 27 August 2021; Published 15 October 2021

Academic Editor: Jude Hemanth

Copyright © 2021 Guo Chen et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The multisensor data generalized fusion algorithm is a kind of symbolic computing model with multiple application objects based on sensor generalized integration; it is the theoretical basis of numerical fusion. This paper aims to comprehensively review generalized fusion algorithms for multisensor data. Firstly, the development and definition of multisensor data fusion are analyzed, and the definition of multisensor data generalized fusion is given. Secondly, the classification of multisensor data fusion is discussed, and the generalized integration structure of multisensors and its data acquisition and representation are given, abandoning the object-oriented character of past research. Then, the principle and architecture of multisensor data fusion are analyzed, and a generalized multisensor data fusion model is presented based on the JDL model. Finally, according to the multisensor data generalized fusion architecture, related theories and methods are reviewed, a tensor-based generalized fusion algorithm for multisensor heterogeneous data is proposed, and future work is outlined.

1. Introduction

Multisensor data fusion, also known as multisource data fusion [1], is essentially the fusion of heterogeneous data, where the fused data include both accurate and inaccurate (uncertain) data. It is similar to the way natural creatures acquire information through various senses and compare, discriminate, and comprehensively analyze the acquired information against memory or experience to understand the objective world. It played a huge role in several recent local wars; the C3I fusion system [2] of the multinational forces attracted the attention of the whole world. As a result, multisensor data fusion technology has become a very active research field in academia.

The goal of multisensor data fusion is to process and synthesize multisource data (or information) related to the measured object to obtain a more accurate, more complete, more reliable, and consistent interpretation and description of the measured object than using a single sensor. Most of the current related work is carried out for specific application fields, such as monitoring and detection [3, 4] and environmental perception [5]; according to the actual application problem, each establishes intuitive fusion criteria and forms a so-called optimal fusion scheme on this basis. As a whole, this work is characterized by object orientation, which fails to form the basic theoretical framework and generalized algorithm system necessary for an independent discipline. The lack of a basic theoretical framework and generalized algorithm system not only hinders scholars' deep understanding of multisensor data fusion but also prevents scholars from synthesizing and evaluating fusion systems. A generalized fusion algorithm system, as a basic paradigm with universal applicability, must carry out universal research on sensor integration, data collection and representation, the fusion framework, data fusion, etc.

1.1. Sensor Integration. In a multisensor integrated system platform, the integration method, attributes, and quantity of sensors are the three most basic factors. Any complex system is composed of generalized integrated basic subsystems.

1.2. Data Collection and Presentation. Data are generated by the measurement of observation objects by different sensors, and different sensors differ greatly in the mechanism of data generation. How to collect relatively accurate and complete data and represent them correctly is very challenging.

1.3. Data Fusion Framework Design. Due to the heterogeneity of sensors, to design a generalized data fusion model, it is necessary to abandon its application objects and take the sensor integration method and the data fusion principle as the theoretical basis.

The multisensor data generalized fusion algorithm is a kind of symbolic computing model with multiple application objects, which is based on the sensor generalized integration structure. It is the theoretical basis of numerical fusion and a basic algorithm that provides modularization support for complex system data fusion.

At present, abundant data fusion research results provide important references for research on multisensor data generalized fusion algorithms; however, there is a lack of a comprehensive review of research on the generalized fusion algorithm of multisensor data. In order to fill this gap, basic research on multisensor generalized integration, data acquisition and presentation, the multisensor data generalized fusion model, and generalized fusion methods is carried out in this paper:

(1) The definition of generalized fusion of multisensor data is proposed, which defines the research methods and objectives of this research direction.

(2) The generalized integrated model of multisensors is proposed, which is the basis for designing a multisensor system, thus avoiding blindness in multisensor system design.

(3) The architecture model of multisensor data generalized fusion is proposed, and the idea of studying the generalized fusion algorithm of multisensor data is determined.

(4) The tensor-based generalized fusion algorithm for heterogeneous data of multiple sensors is proposed, which demonstrates the reachability of this research direction and provides a theoretical basis for subsequent research studies.

The organization of this paper is as follows. Section 2 discusses the development and concept of multisensor data fusion, abandons the object-oriented characteristics of past research, and proposes the definition of generalized fusion of multisensor data. Section 3 introduces the classification of multisensor data fusion. In Section 4, the model of the multisensor generalized integration platform and data acquisition and representation are studied in detail. In Section 5, the principle and architecture of multisensor data fusion are studied, and the generalized architecture model of multisensor data fusion is proposed and analyzed in detail. In Section 6, four-level fusion based on the generalized fusion model of multisensor data and the theories and algorithms related to it are reviewed, and the generalized fusion algorithm of multisensor heterogeneous data based on tensors is given. In Section 7, the future development trend and difficulties of research on the generalized fusion algorithm system of multisensor data are summarized.

2. Development and Definition of Multisensor Data Fusion

Multisensor data fusion is an interdisciplinary research field which involves a wide range of content. With the expansion of its application fields, its function and the connotation of its definition have constantly been enriched. Research on multisensor data fusion has inherently object-oriented characteristics: researchers in different fields put forward different views based on the expected functions and use functional definitions suitable for specific fields to describe or explain the functions (or purposes) and limitations of multisensor data fusion [6–9]. Therefore, the development of the definition of multisensor data fusion mirrors the development of multisensor data fusion itself.

The functional definitions describing multisensor data fusion (function and limitation) in past studies are detailed in [10]. Among them, Klein and White [11, 12], Durrant-Whyte [13], and Mastrogiovanni et al. [14] gave representative functional definitions. In short, scholars in various application fields describe the functions and limitations of multisensor data fusion in their own applications. The function descriptions can be summarized into eight aspects; the limiting descriptions can be roughly summarized into three aspects. The statistics are shown in Table 1.

In recent studies, scholars hold that multisensor data refer to multisource data [29], including direct data (historical sensor data values) and indirect data (prior knowledge of the environment and human input). The data source is not required to be the same sensor and may include heterogeneous sensors, databases, and humans. The content involved is extensive, covering all possible combination or aggregation methods, and transforms information from different sources and at different times.

Scholars also hold that multisensor data fusion is an interdisciplinary research field using technologies from different fields [30], such as artificial intelligence and information theory. Henrik Boström holds that traditional multisensor data fusion focuses on online sensor data, while modern data fusion should consider other sources. Boström et al. [10], Bloch et al. [31], and Wang et al. [29] gave representative definitions.

Based on the current development status of multisensor data fusion and abandoning its application objects, multisensor data generalized fusion can be defined as follows (Figure 1). Using intelligent computing methods, the multimodal time-series data collected by orderly integrated multiple sensors (or classes of sensors) are sorted, analyzed, and comprehensively processed according to certain criteria; inferences that are more accurate and comprehensive than those obtained from any single information source are gained; and a consistent interpretation and description of the measured object is ultimately obtained.

Table 1: Multisensor data fusion functional definition description analysis.

Function descriptions:
① Obtain (more comprehensive/complete and higher-quality) information that is greater than the sum of each contributing part [15, 16]
② Accurately understand and describe the given scene [17]
③ Realize inferences that cannot be achieved with a single sensor [18]
④ Infer events related to the observed object [15, 19]
⑤ Improve state estimation, prediction, and risk assessment [13, 16, 20, 21]
⑥ Realize precise positioning, tracking, and identification [20, 22–24]
⑦ Realize accurate (accuracy, robustness, qualitative, and quantitative) decision-making and action [23, 25]
⑧ Maximize useful information, improve reliability or recognition ability, and minimize the amount of retained data [14, 26–28]

Limiting descriptions:
① Source limitation: the data source is limited, e.g., to data or information from sensors
② Scenario limitation: the application type or decision-making situation is limited, e.g., decision-making with strict timing requirements
③ Characteristic limitation: the fusion characteristics are limited, e.g., to continuous refinement

Figure 1: Generalized fusion of multisensor data. (The figure shows heterogeneous inputs — a tabular record of StudentID, longitude, latitude, and time, and an XML document — being fused into feature information F1, F2, . . ., Fn.)
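As a concrete illustration of the kind of heterogeneous inputs sketched in Figure 1 — a hypothetical example, not part of the paper's method — the snippet below maps a tabular GPS record and an XML fragment into one flat feature representation; the record values and XML layout are invented to mirror the figure:

```python
import xml.etree.ElementTree as ET

# Hypothetical tabular record, mirroring the StudentID/longitude/latitude/time
# table shown in Figure 1.
row = {"StudentID": "D20148803", "Longitude": 114.41225837,
       "Latitude": 30.51989529, "Time": "07-28 10:36:15"}

# Hypothetical XML fragment of the kind shown in Figure 1.
xml_doc = "<University><Student category='doctoral' id='D20148803'/></University>"

def to_features(tabular, xml_text):
    """Map two heterogeneous sources into one flat feature dictionary."""
    features = {("gps", key): value for key, value in tabular.items()}
    student = ET.fromstring(xml_text).find("Student")
    features[("xml", "category")] = student.get("category")
    return features

fused = to_features(row, xml_doc)
print(fused[("xml", "category")])  # doctoral
```

Keying each feature by its source makes later, source-aware fusion steps straightforward.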

3. Classification of Multisensor Data Fusion

Multisensor data refer to the effective use of multisensor resources to obtain the most accurate data, as well as inaccurate (uncertain and unknown) data, about the detected target and the environment. The uncertainty of multisensor data determines the complexity of data fusion and the diversity of its classification; see Table 2.

(1) According to the multisensor data fusion method, it can be roughly divided into compression fusion, statistical fusion, feature fusion, knowledge fusion, and so on.

① Compression fusion: the data compression process is realized by using a specific compression model, and the input data are usually converted into representations similar to bases and coefficients. The compressed and transformed data remove redundant information, and their scale is effectively reduced. The data can be inversely transformed according to the required accuracy and reconstructed to restore an approximation of the original data. Compression fusion is an important means of realizing data visualization, and data visualization is a main method of data research and analysis [32].

② Statistical fusion: in a multisensor system, the reliability of each sensor directly affects the fusion result. Statistical fusion uses statistical methods to model the reliability of each sensor, calculate a comprehensive reliability, and then perform data fusion. The method is simple to calculate, and its conclusions are relatively stable [33].

③ Feature fusion: that is, fusing the features of the data; the premise of feature fusion is feature extraction. Features, also called target characteristics, refer to the various characteristics of the target carried in the data obtained by different sensors observing the same target. Feature extraction refers to the process of performing various mathematical transformations on data to obtain the indirect target characteristics contained in the data [34].

④ Knowledge fusion: knowledge fusion is the process of forming new knowledge by interacting and supporting knowledge from different knowledge sources. Knowledge fusion can fuse not only data and information but also methods, experience, and even human thoughts [35].
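To make the statistical-fusion idea concrete — using a standard textbook scheme, not necessarily the specific method of [33] — the sketch below treats each sensor's reliability as the normalized inverse of its noise variance and computes a reliability-weighted fused value:

```python
def statistical_fusion(readings, variances):
    """Fuse redundant readings of one quantity by inverse-variance weighting.

    Each sensor's weight (its "comprehensive reliability" here) is the
    normalized inverse of its noise variance, so noisier sensors count less.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, readings)) / total

# Three sensors measure the same temperature; the third is the noisiest.
fused = statistical_fusion([20.1, 19.8, 21.5], [0.04, 0.04, 1.0])
print(round(fused, 2))  # 19.98
```

The noisy outlier (21.5) barely shifts the fused estimate, which is the point of weighting by reliability.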

Table 2: Multisensor data fusion classification.

By data fusion method: compression fusion, statistical fusion, feature fusion, knowledge fusion.
By abstract level of the fusion data: pixel-level (signal-level) data fusion, feature-level data fusion, decision-level data fusion.
By spatiotemporal vector of the fusion data: time fusion, spatial fusion, temporal-spatial fusion.
By attributes of the fusion data: homogeneous data fusion, heterogeneous data fusion.
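To illustrate the "bases and coefficients" idea behind compression fusion — a generic truncated-SVD sketch on invented toy data, not the specific compression model of [32] — the data matrix is factored into a small basis plus coefficients, and an inverse transform reconstructs it:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy multisensor data matrix: 6 sensors x 50 time samples, built from
# 2 latent signals, so a rank-2 basis captures it exactly.
latent = rng.standard_normal((2, 50))
mixing = rng.standard_normal((6, 2))
data = mixing @ latent

U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 2                                  # retained basis size
basis = U[:, :k]                       # the "bases"
coeffs = np.diag(s[:k]) @ Vt[:k, :]    # the "coefficients" (compressed form)

reconstructed = basis @ coeffs         # inverse transform
print(np.allclose(data, reconstructed))  # True (the data are exactly rank 2)
```

On real data the retained rank k trades reconstruction accuracy against scale, exactly the accuracy/size trade-off the text describes.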

(2) According to the attributes of the fusion data, multisensor data fusion can be divided into homogeneous data fusion and heterogeneous data fusion.

① Homogeneous data fusion: the process of consistent representation (interpretation and description) of homogeneous data collected by multiple identical sensors, also known as multisensor homogeneous data fusion.

② Heterogeneous data fusion: the process of consistent representation (interpretation and description) of heterogeneous data collected by multiple different sensors, also known as multisensor heterogeneous data fusion.

(3) According to the abstract level of the fusion data, it is divided into signal-level data fusion, feature-level data fusion, and decision-level data fusion. There are essential differences between multisensor data fusion and classical (single-sensor) signal processing. Multisensor data have complex forms and different abstract levels (signal level, feature level, and decision level).

① Signal-level data fusion: refers to fusion at the original data layer; that is, the original measurement and report data of the various sensors are directly integrated and analyzed without preprocessing. The advantage is that it retains as much field data as possible, which is richer, more complete, and more reliable than other fusion levels. The disadvantages are that accurate registration must be performed before pixel-level fusion, the amount of processed data is very large, the processing time is long, and the real-time performance is poor. Pixel-level data fusion is the lowest level of fusion, but it can provide optimal decision-making or optimal recognition. It is often used for multisource image composition, image analysis, and understanding.

② Feature-level data fusion: firstly, features are extracted from the original data of each sensor (e.g., direction, speed, and edges of the target), and then the feature information is analyzed and processed in an integrated manner; this belongs to middle-level fusion. Feature-level data fusion achieves good information compression and is conducive to real-time processing; because the extracted features are related to decision analysis, the fusion result can provide maximal feature information for decision analysis. Feature-level data fusion is divided into target state data fusion and target characteristic fusion. Target state data fusion mainly realizes parameter correlation and state vector estimation and is mainly used in the field of multisensor target tracking. Target characteristic fusion uses the corresponding technologies of pattern recognition; joint recognition at the feature layer requires that the features be correlated before fusion, and the feature vectors are classified into meaningful combinations.

③ Decision-level data fusion: this is a high-level fusion, and the fusion result is the basis for command and control decision-making. In this level of fusion, each sensor first establishes a preliminary judgment and conclusion on the same target; correlation processing is then performed on the decisions from the sensors; and finally, decision-level fusion processing is performed to obtain the final joint judgment. Decision-level fusion has good real-time performance and fault tolerance, but its preprocessing cost is high. At present, network-based signal or information processing often adopts this level of data fusion [36, 37].

(4) According to the time vector and space vector of the fusion data, it can be divided into time fusion, spatial fusion, and spatiotemporal fusion.

① Time fusion refers to the fusion processing of the time-domain data of a certain sensor in the system.

② Spatial fusion refers to the fusion processing of the measurement values of the related targets at the same sampling time for each sensor in the system.

③ Spatiotemporal fusion refers to the fusion processing of the measurement values of the relevant targets of the sensors in the system over a period of time.

4. Multisensor Generalized Integration and Data Acquisition and Output

With the continuous development of intelligent industry, a single sensor has become unable to meet the needs of society, and matching multisensor integrated systems are needed with increasing urgency. A multisensor integration system is generally a nonlinear system. Its sensor attributes, integration mode, data acquisition, and output directly affect the manner and quality of multisensor data fusion.
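A minimal sketch of decision-level fusion, using a simple majority vote (only one of many possible combination rules, and not a method the paper prescribes): each sensor first declares its own label for the target, and the labels are then combined into a joint judgment:

```python
from collections import Counter

def decision_level_fusion(decisions):
    """Combine per-sensor target declarations by majority vote.

    `decisions` holds each sensor's preliminary judgment about the same
    target; the joint judgment is the most common label.
    """
    label, _count = Counter(decisions).most_common(1)[0]
    return label

# Three sensors classify the same track independently.
print(decision_level_fusion(["aircraft", "aircraft", "bird"]))  # aircraft
```

Because only labels cross the fusion boundary, a failed sensor corrupts one vote rather than the whole result, which is the fault-tolerance property noted above.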

4.1. Multisensor Generalized Integrated System (MGIS)

4.1.1. Multisensor Generalized Integration. According to the attributes and quantity of sensors in the multisensor system, and abandoning specific application requirements, Figure 2 shows the generalized integration modes of multiple sensors. Among the multisensor generalized integration methods, homogeneous multisensor integration and heterogeneous single-sensor integration are the most basic integration methods, and heterogeneous multisensor integration is an organic combination of different subsystems.

4.1.2. Multisensor Generalized Integrated Structure Model. According to the multisensor generalized integration method, the multisensor generalized integration structure can be divided as follows:

① The homogeneous multisensor integration structure (HMI) refers to a system integrated from multiple sensors with the same attribute, where the data collected by all the sensors represent the same attribute of the monitoring target. Its structural model is

$IS_{HMI} = (S_1^{A_1}, S_2^{A_1}, S_3^{A_1}, \ldots, S_n^{A_1})$,   (1)

where $IS_{HMI}$ represents the homogeneous multisensor integrated system, $S$ represents a sensor, $A_1$ represents the sensor attribute, and $1, 2, 3, \ldots, n$ index the sensors.

② The heterogeneous single-sensor integration structure (HSI) refers to a system integrated from single sensors with different attributes, where the data collected by each sensor represent a different attribute of the monitoring target. Its structural model is

$IS_{HSI} = (S_1^{A_1}, S_1^{A_2}, S_1^{A_3}, \ldots, S_1^{A_N})$,   (2)

where $IS_{HSI}$ represents the heterogeneous single-sensor integrated system, $S$ represents a sensor, $A_1, A_2, A_3, \ldots, A_N$ represent the sensor attributes, and the subscript 1 indicates a single sensor per attribute.

③ The heterogeneous multisensor integration structure refers to a complex cluster system composed of multiple subsystems. The clustering methods include the multiattribute homogeneous multisensor integrated subsystem cluster, the single-attribute heterogeneous single-sensor integrated subsystem cluster, and the hybrid subsystem cluster.

(i) The structural model of the multiattribute (homogeneous multisensor integrated) subsystem cluster (M.HMI.SC) is

$CS_{M.HMI.SC} = \{ (s_1^{A_1}, s_2^{A_1}, s_3^{A_1}, \ldots, s_n^{A_1})^{IS_{A_1}}, (s_1^{A_2}, s_2^{A_2}, s_3^{A_2}, \ldots, s_n^{A_2})^{IS_{A_2}}, (s_1^{A_3}, s_2^{A_3}, s_3^{A_3}, \ldots, s_n^{A_3})^{IS_{A_3}}, \ldots, (s_1^{A_N}, s_2^{A_N}, s_3^{A_N}, \ldots, s_n^{A_N})^{IS_{A_N}} \}$,   (3)

where $CS_{M.HMI.SC}$ represents the multiattribute homogeneous multisensor integrated subsystem cluster, $s$ represents a sensor, $A_1, A_2, A_3, \ldots, A_N$ represent the sensor and integration subsystem attributes, $1, 2, 3, \ldots, n$ index the sensors, and $IS$ represents an integration subsystem.

(ii) The structural model of the single-attribute (heterogeneous single-sensor integrated) subsystem cluster (S.HSI.SC) is

$CS_{S.HSI.SC} = \{ (s_1^{A_1}, s_1^{A_2}, s_1^{A_3}, \ldots, s_1^{A_N})_1^{IS}, (s_1^{A_1}, s_1^{A_2}, s_1^{A_3}, \ldots, s_1^{A_N})_2^{IS}, (s_1^{A_1}, s_1^{A_2}, s_1^{A_3}, \ldots, s_1^{A_N})_3^{IS}, \ldots, (s_1^{A_1}, s_1^{A_2}, s_1^{A_3}, \ldots, s_1^{A_N})_n^{IS} \}$,   (4)

where $CS_{S.HSI.SC}$ represents the single-attribute heterogeneous single-sensor integrated subsystem cluster, $s$ represents a sensor, $A_1, A_2, A_3, \ldots, A_N$ represent the sensor and integration subsystem attributes, $1, 2, 3, \ldots, n$ index the sensors and integrated subsystems, and $IS$ represents an integration subsystem.

(iii) Hybrid subsystem clusters (HSC) are mainly composed of single-sensor subsystems and homogeneous multisensor subsystems according to application needs. They are generally used in complex systems.

4.1.3. Performance Analysis of the Multisensor Generalized Integrated Structure. At present, the complex data fusion technology of multisensor integrated systems is in a period of rapid development. Studying data fusion algorithms according to the structure of the fusion data sources is a new research idea in the field of data fusion. Different structures of multisensor data acquisition systems differ in the properties and complexity of their output data, resulting in different data fusion modes, fusion process models, algorithm structures, calculation amounts, and fusion accuracies. Based on the multisensor generalized integration structure models, let "1, 2, and 3" correspond to the three levels "low, medium, and high" for each performance dimension. Comparing and analyzing the comprehensive performance of the five structure models gives the results shown in Table 3.
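The structural models (1)–(4) can be mirrored directly in code. The sketch below is purely illustrative (the class and function names are invented, not from the paper): a sensor is a (number, attribute) pair, so each integration structure is just a particular collection of such pairs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sensor:
    index: int      # the subscript 1, 2, ..., n
    attribute: str  # the superscript A1, A2, ..., AN

def homogeneous_multi(n, attribute):
    """IS_HMI, equation (1): n sensors sharing one attribute."""
    return [Sensor(i, attribute) for i in range(1, n + 1)]

def heterogeneous_single(attributes):
    """IS_HSI, equation (2): one sensor per distinct attribute."""
    return [Sensor(1, a) for a in attributes]

def multiattribute_cluster(n, attributes):
    """CS_M.HMI.SC, equation (3): one homogeneous subsystem per attribute."""
    return {a: homogeneous_multi(n, a) for a in attributes}

cluster = multiattribute_cluster(3, ["A1", "A2"])
print(len(cluster["A1"]))  # 3
```

Encoding the structures this way makes the later performance comparison mechanical: each structure is a concrete object whose size and attribute diversity can be inspected.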

Figure 2: Multisensor generalized integration. (The figure shows a tree of sensor integration modes by attribute and quantity: homogeneous multisensor integration; heterogeneous single-sensor integration; and heterogeneous multisensor clusters, comprising multiattribute homogeneous multisensor integrated subsystem clusters, single-attribute heterogeneous single-sensor integrated subsystem clusters, and hybrid subsystem clusters.)

Table 3: Performance analysis of the multisensor generalized integrated system.

MGIS                        HMI.S   HSI.S   M.HMI.SC   S.HSI.SC   HSC
Data structure                3       2        3          1        1
Amount of calculation         3       2        2          1        1
Fusion results                3       1        2          2        3
Stability                     3       1        3          2        2
Practicability                3       3        3          1        3
Comprehensive performance     3      1.8      2.6        1.4       2
($\bar{x} = \sum_{i=1}^{k} w_i x_i$, with $\sum_{i=1}^{k} w_i = 1$)

According to the performance comparison in Table 3, it is not difficult to see that the homogeneous multisensor integrated structure has the best performance and the heterogeneous single-sensor integrated structure the second best; among the cluster systems, the multiattribute homogeneous multisensor subsystem cluster and the hybrid subsystem cluster perform better than the single-attribute heterogeneous single-sensor integrated subsystem cluster.
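The comprehensive-performance row of Table 3 can be reproduced with equal weights $w_i = 1/5$, which appears to be what the table uses (e.g., the HSI.S column averages to 1.8); a quick check:

```python
def comprehensive(scores, weights=None):
    """Weighted mean  x̄ = Σ w_i x_i  with Σ w_i = 1 (equal weights by default)."""
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    return sum(w * x for w, x in zip(weights, scores))

# Columns of Table 3: data structure, calculation amount, fusion results,
# stability, practicability.
table3 = {
    "HMI.S":    [3, 3, 3, 3, 3],
    "HSI.S":    [2, 2, 1, 1, 3],
    "M.HMI.SC": [3, 2, 2, 3, 3],
    "S.HSI.SC": [1, 1, 2, 2, 1],
    "HSC":      [1, 1, 3, 2, 3],
}
for name, scores in table3.items():
    print(name, round(comprehensive(scores), 2))  # matches 3, 1.8, 2.6, 1.4, 2
```

Non-equal weights would let an application emphasize, say, fusion accuracy over calculation amount without changing the framework.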

4.2. Multisensor Data Acquisition and Output. "Multisensor" here refers to a sensor integrated system platform and a distributed fusion sensor subsystem, whose main function is to serve as a signal source that collects and outputs data. (1) Data collection: in data science, the measurement of objects in physical space by physical sensors acting as sensing equipment is called data collection; that is, the measurement behavior and process applied to the measured object by the physical sensors in time sequence. (2) Data output: this refers to the recording method and recording result of the measurement results of the multisensor system. The recording method is divided into real-time recording and abnormal recording. Real-time recording refers to the complete recording of sampling results in time sequence; abnormal recording refers to recording only the abnormal data outside a given normal threshold range.

The data collected by multisensors generally have the characteristic of heterogeneity, which arises from differences in representation, differences in source, and human factors. Representation difference refers to the diversity of order and dimension (some data are high-order tensors and some are matrices); source difference refers to different types of sensors or different detection purposes; human factors refer to the construction of the data space and the implementation and technology of the data management system [29].

5. Multisensor Data Fusion Principle and Architecture

Humans and animals are born with, and use, extremely natural and reasonable multisensor data fusion capabilities, such as the observation, smelling, and inquiry of traditional Chinese medicine. A bat's judgment of its prey is the most primitive multisensor data fusion.

5.1. Principles of Multisensor Data Fusion. In the field of automation research, multisensor data fusion technology is derived from imitating human and animal cognition of the world; it is essentially similar to how humans or animals obtain information through various senses and compare and distinguish the acquired information against memory or experience. The basic principle is as follows: just like the comprehensive processing of information by the human brain, make full use of multiple sensors (multiple information sources) integrated in an orderly manner. First, the data collected by each sensor are the observation data, for which we carry out consistent expression; then, the redundant or complementary data of the multiple sensors in space or time are combined according to a certain criterion; and finally, a consistent interpretation or description of the measured object is obtained. The specific expression is as follows [38, 39]:
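The two recording modes can be sketched as follows (an illustrative toy, not from the paper): real-time recording keeps every sample in time sequence, while abnormal recording keeps only samples whose values fall outside the given normal threshold range:

```python
def record(samples, mode="real-time", normal_range=(0.0, 1.0)):
    """Record a time-ordered stream of (timestamp, value) samples.

    mode="real-time": complete record of all samples, in time sequence.
    Any other mode is treated as abnormal recording: keep only samples
    whose value lies outside the normal threshold range.
    """
    if mode == "real-time":
        return list(samples)
    lo, hi = normal_range
    return [s for s in samples if not (lo <= s[1] <= hi)]

stream = [(0, 0.4), (1, 0.9), (2, 1.7), (3, 0.2), (4, -0.5)]
print(record(stream, "abnormal", (0.0, 1.0)))  # [(2, 1.7), (4, -0.5)]
```

Abnormal recording trades completeness for storage: only the two out-of-range samples are kept here.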

(1) Orderly integrate n sensors of N different types to collect and observe data related to the target (N, n = 2, 3, 4, . . .)

(2) Consistently represent the homogeneous data collected by each sensor

(3) Perform feature extraction on the various heterogeneous data (such as output vectors, imaging data, discrete or continuous time-function data, or a direct attribute description) and extract the feature vector Yi representing the observation data

(4) Perform pattern recognition processing on the feature vector Yi (such as a clustering algorithm, adaptive neural network, or tensor expansion operator) to complete each sensor's description of the target

(5) Knowledge fusion: group and correlate the description data of the various sensors about the target and then use the fusion algorithm to synthesize them into a consistent interpretation and description of the target

5.2. Multisensor Data Fusion Architecture. The multisensor data fusion architecture refers to the whole process of multisensor data fusion: the components of the fusion system, the main functions of each part, the relationships between the parts, the relationship between the subsystems and the system, the fusion location, etc. [40, 41].

5.2.1. Typical Multisensor Data Fusion Architecture Models

① Multisensor integrated fusion structure model: in 1988, Luo and Kay proposed this model, as shown in Figure 3 [42]. The model is composed of four parts: sensor, data fusion, database auxiliary system, and fusion level. The sensor part is composed of n (n ≥ 2) sensors; the data fusion part is described as a progressive fusion method; the database auxiliary system part is described as the intervention in, or impact on, each fusion; and the fusion level part describes the levels at which the model can be used for fusion.

② Thomopoulos structural model: in 1990, Thomopoulos proposed this model, as shown in Figure 4(a) [43]. The model is composed of three parts: sensor, data fusion, and database. The data fusion part is described as advancing level by level, and high-level fusion is based on the results of low-level fusion.

④ Mixed model: in 2009, Bedworth and O'Brien proposed this model, as shown in Figure 5 [45]. The model consists of four parts: observation, orientation, decision-making, and action. Observation includes data collection and processing; the orientation part includes feature extraction and pattern recognition; the decision-making part includes state estimation and decision fusion; and the action part includes control and resource allocation.

5.2.2. Multisensor Data Generalized Fusion Model. According to the principle of multisensor data fusion, abandoning application objects, and improving the JDL information fusion model (Steinberg version, 1999) [21], a generalized model of multisensor data fusion is obtained: it consists of the data source, four levels of data fusion, human-computer interaction, and data management. Its functions and relationships are shown in Figure 6.

5.2.3. Multisensor Data Fusion Generalized Architecture Model Analysis

① The data source includes (1) the physical sensor integrated system platform (organic physical system or sensor and integrated system platform), (2) the distributed fusion sensor subsystem, and (3) reference data, geographic information, supporting databases, etc.

② Human-computer interaction includes (1) manual input of commands, information requests, manual inference and evaluation, manual operator reports, etc., and (2) a mechanism for integrating system alarms and displaying location and identity information to dynamically cover and deliver results geographically. It includes both multimedia methods of human interaction (graphics, sound, tactile interfaces, etc.) and methods to attract human attention and help overcome cognitive limitations.

③ Level 1 (source data fusion): based on pixel-level or signal-level data association and representation, it prepares for the estimation or prediction of signal/target observable states. This means the data source signal is compressed while losing as little of the sensor-acquired data as possible, so as to retain the effective information to the maximum extent for higher-level data fusion.
scribed as three levels of fusion, and each level of ④ Level 2 (feature and state estimation): based on the
fusion supports (or influences) each other; the data fusion results of data sources, it estimates and pre-
fusion part supports (or influences) each other with dicts the state, attribute, feature, event, or action
the sensor part and supports (or influences) each feature vectors of the target related to heterogeneous
other with the database part. data, and according to the feature vector, it estimates
③ Waterfall model: in 1998, Harris et al. proposed the and predicts the relationship between entities (data),
model, as shown in Figure 4(b) [44]. impact of association and perception, and physical
The model consists of three parts: sensors, data fu- environment and constructs the state trend.
sion, and control. The data fusion part is described as ⑤ Level 3 (situation fusion): based on the results of
a five-level fusion, a process of incremental feature fusion, it analyzes the advantages and
8 Mathematical Problems in Engineering

Figure 3: Multisensor integration and fusion model. (Diagram: sensors 1 to n feed successive merge stages, from low level to high level: signal-level fusion, pixel-level fusion, feature-level fusion, and symbolic fusion, assisted by a database auxiliary system and producing the fusion output.)
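Each "Merge" stage in Figure 3 can, in the simplest case, be the weighted average fusion of equation (6). A minimal sketch (the function name is illustrative, not from the paper):

```python
# Weighted average fusion, equation (6): x_bar = sum(w_i * x_i), with sum(w_i) = 1.
# Illustrative sketch; the function name is an assumption, not from the paper.

def weighted_average_fusion(values, weights):
    """Fuse k homogeneous sensor readings using normalized weights."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * x for w, x in zip(weights, values))
```

For example, `weighted_average_fusion([10.0, 12.0], [0.5, 0.5])` fuses two equally trusted sensors into their mean, 11.0.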

Figure 4: (a) Thomopoulos fusion model (a sensor and a database linked to three fusion levels: signal-level fusion, evidence-level fusion, and dynamics-level fusion). (b) Waterfall model (sensor, pretreatment, feature extraction, pattern recognition, situation assessment, and decision inference, with a control-action feedback loop).

Figure 5: Mixed model. (Diagram: an observation-orientation-decision-action loop linking sensor management, signal processing, sensor data fusion, detection, feature extraction, pattern processing, hard and soft decision fusion, control, and resource allocation.)
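As a toy illustration of the four parts of this model (observation, orientation, decision-making, action), the cycle can be sketched as follows; all function names and the trivial processing steps are our own assumptions, not the paper's:

```python
# Minimal sketch of the mixed model's cycle:
# observation -> orientation -> decision-making -> action.
# Every function name and processing step here is illustrative.

def observe(raw):
    """Observation: data collection and (trivial) processing."""
    return [x for x in raw if x is not None]

def orient(samples):
    """Orientation: feature extraction and pattern recognition (here, a mean feature)."""
    return sum(samples) / len(samples) if samples else 0.0

def decide(feature, threshold=0.5):
    """Decision-making: state estimation and decision fusion (here, a threshold test)."""
    return "alert" if feature > threshold else "normal"

def act(decision):
    """Action: control and resource allocation (here, a control string)."""
    return {"alert": "allocate_more_sensors", "normal": "keep_schedule"}[decision]

def mixed_model_cycle(raw):
    return act(decide(orient(observe(raw))))
```

For example, `mixed_model_cycle([0.2, 0.9, None, 0.8])` walks one pass of the loop and returns a control action.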

Figure 6: Multisensor data fusion generalized model. (Diagram: a multisensor integration platform feeds source data fusion, which produces a consistent representation of homogeneous data; feature and state fusion, which extracts characteristic vectors from heterogeneous data and estimates unknown states; situation fusion, in which pattern recognition realizes situation and impact estimation based on the state results; and knowledge fusion and process optimization; a database management system (merge database and support database) and human-computer interaction support the whole process.)
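The four-level data flow of this generalized model can be sketched as a pipeline; every name, the toy processing at each level, and the support-database threshold below are illustrative assumptions, not the paper's algorithms:

```python
# Illustrative sketch (not from the paper) of the generalized model's data flow:
# source data fusion -> feature/state estimation -> situation fusion ->
# knowledge fusion and process optimization, with a supporting database.

def level1_source_fusion(readings):
    # Consistent representation of raw homogeneous outputs (here: per-sensor means).
    return {s: sum(v) / len(v) for s, v in readings.items()}

def level2_feature_estimation(consistent):
    # A crude "feature vector": the sorted per-sensor estimates.
    return sorted(consistent.values())

def level3_situation_fusion(features):
    # Situation score: here simply the overall mean of the features.
    return sum(features) / len(features)

def level4_knowledge_fusion(situation, support_db):
    # Combine the situation estimate with supporting database knowledge.
    decision = "act" if situation > support_db["threshold"] else "wait"
    return {"situation": situation, "decision": decision}

def generalized_fusion(readings, support_db):
    c = level1_source_fusion(readings)
    f = level2_feature_estimation(c)
    s = level3_situation_fusion(f)
    return level4_knowledge_fusion(s, support_db)
```

Calling `generalized_fusion({"radar": [1.0, 3.0], "camera": [2.0, 2.0]}, {"threshold": 1.5})` walks all four levels and returns a decision record.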



disadvantages of various plans, actions, and state trends and estimates and predicts the interactions between the plans and actions to be taken, their impact on the overall situation, and the possible results. Finally, combined with the support data, the decision data are obtained.

⑥ Level 4 (knowledge fusion and process optimization): knowledge fusion is the fusion of sensor data and supporting database data. Process optimization refers to adaptive data collection and processing; it is responsible for monitoring all links in the entire fusion process and forming a more effective resource allocation plan to support mission goals. It is the feedback part of the whole system, thought of as a process that manages other processes, and it sits outside the fusion process. Its main functions are to (i) monitor the performance of each link in the data fusion process and provide it with real-time and long-term control information, (ii) identify what information is needed to improve the multilevel fusion results (inference, location, identity, etc.), (iii) determine the collection of relevant information from the specific source (which type of sensor, which specific sensor, which database, etc.), and (iv) allocate data, realize knowledge fusion, and complete the task goals.

⑦ Data management: this is the most extensive support function required for data fusion processing. It provides access to and management of the fusion database, including data retrieval, storage, archiving, compression, relational queries, and data protection. Database management in data fusion systems is particularly difficult because the amount of data managed is large and diverse (images, signals, vectors, and textures).

Among them, the two parts of human-computer interaction and process optimization run through the whole process of data fusion; source data fusion belongs to pixel-level fusion; situation fusion belongs to decision-level fusion; the support database refers to soft sensor data; and the fusion database contains the fusion rules and fusion results.

6. Multisensor Data Fusion Theory and Algorithm

Multisensor fusion is used to obtain a consistent interpretation or description of the measured object, which is mainly realized by the data fusion algorithm. At present, the research results on multisensor data fusion are very rich, and they provide an important reference for research on multisensor data-generalized fusion algorithms. Next, following the four-level fusion in the multisensor data-generalized fusion model, the related theories and algorithms are sorted out step by step in the following sections.

6.1. Source Data Fusion. Source data fusion refers to data collection and output data sorting with multiple sensors as the signal source; that is, the output data (the homogeneous raw signals output by the same type of sensor) are processed by classification, statistics, compression, and estimation, and a consistent representation of the homogeneous data is obtained.

6.1.1. Data Representation. The knowledge and rules discovered from the original data depend on the data representation. In recent years, many researchers have discussed and described work on data representation [46, 47]. The most basic data representation methods include ontology representation, graph representation, tensor representation, and matrix representation [48].

① Ontology representation: an ontology is a description of the concepts of a specific domain, also known as a set of concepts [49]. Ontology generation includes two steps: first, mapping the real world (such as entities, attributes, and processes) to a set of concepts and then extracting the relationships between the concepts. It can represent objects as conceptual models at the semantic level. It simplifies the transformation of knowledge and is the mainstream method of data representation [50, 51].

② Graph representation: representing natural data with a matrix has some limitations; a graph is composed of many points, called nodes, which are connected by edges [52]. The most commonly used graph representation matrix is the adjacency matrix [53].

③ Matrix representation: the matrix, also called a bidirectional array, is a parallel description of the time domain and the space domain. Multichannel signals are generally represented by a matrix [29]. The rows of the matrix contain all sensors or channels, the columns contain all measurement times, and the elements represent signal values. In data mining and machine learning, rectangular arrays describe the attributes or observations of samples: each row corresponds to one sample or observation, and each column corresponds to an attribute or observation related to the samples.

④ Tensor representation: a tensor is the sequential expansion of a vector. It is a multidimensional array; each element has multiple indices, and each index represents a mode or an order. It is a general tool for representing various heterogeneous data [29]. For example, gait video data can be expressed as a fourth-order tensor composed of pixels, angles, motion, and objects [54]; network link data can be expressed as a third-order tensor [55]; and electronic nose data can be expressed as a third-order tensor [56].

6.1.2. Consistency Test of Homogeneous Data. Homogeneous sensors are arranged in different spatial positions, and their monitoring data therefore show some differences. According to the principle of the consistency test, if the difference is greater than a set threshold, the monitoring data are considered abnormal, and the accuracy will be seriously affected if

it is fused directly [57]. In order to ensure the consistency, continuity, and accuracy of the monitoring data, it is most reasonable to replace the abnormal data with the average of the normal values in the same period. Therefore, only after the homogeneous monitoring data pass the consistency test can data fusion be carried out and the correct consistent representation be obtained.

The principle of the homogeneous multisensor data consistency test is as follows: suppose n homogeneous sensors measure the same attribute of the monitored object, and the measurement results are X1, X2, . . ., Xn, written as Xi (i = 1, 2, . . ., n). We perform a consistency test on Xi (i = 1, 2, . . ., n); the test principle is that the difference between two adjacent values must be less than or equal to a threshold ε; the specific calculation formula is as follows:

$$\left|X_{2}-X_{1}\right|\leq\varepsilon,\quad\left|X_{3}-X_{2}\right|\leq\varepsilon,\quad\ldots,\quad\left|X_{n}-X_{n-1}\right|\leq\varepsilon. \qquad (5)$$

6.1.3. Weighted Average Fusion Algorithm [58]. The weighted average method is often used to fuse the homogeneous data of homogeneous sensor systems that monitor dynamic objects. It is a direct fusion method for data sources and the simplest signal-level fusion method. The homogeneous data of a homogeneous sensor system are descriptions of the same attribute. If k sensors are used to measure the target, the weighted average is defined as

$$\bar{x}=\sum_{i=1}^{k}w_{i}x_{i},\qquad \sum_{i=1}^{k}w_{i}=1, \qquad (6)$$

where $w_{i}$ represents the weight of the ith sensor. This method is simple and intuitive, but its fusion accuracy is not high; it is suitable for data fusion in homogeneous multisensor systems.

6.1.4. Kalman Filter Fusion Algorithm [59]. This is a data fusion method based on minimum-variance estimation, used to estimate homogeneous data that contain monitoring errors; the goal is to represent the true values as faithfully as possible. Proposed in the 1960s, it is the most commonly used technique in target tracking and navigation systems [60]. Its disadvantage is that each local sensor requires the global estimate and two-way communication, negating some of the advantages of parallelization.

6.2. Feature Fusion. Feature fusion is the fusion of data features. Features, also called target characteristics (including the target state), refer to the various characteristics or states of the target carried in the data obtained by the different sensors observing the same target. There are various interferences in the data acquisition process, and the acquired data may be distorted or unrecoverable; therefore, if a single feature is used as the fusion object, the fusion result is unreliable.

6.2.1. Feature Extraction. The premise of feature fusion is feature extraction. Feature extraction refers to the process of applying various mathematical transformations to the data to obtain the indirect target characteristics contained in them. Indirect target characteristics are the hidden features that reflect the target's characteristics (geometry, movement, statistics, etc.) indirectly [61]. A large body of theory and practice has shown that, when the direct features are not obvious, extracting the indirect features and finding the comprehensive characteristics of the target is the key to multisensor data feature fusion; it is also an important idea of data fusion in the contemporary information technology field.

6.2.2. Data Association. In a distributed multisensor system, judging whether the information from different subsystems represents the same target is data association (interconnection). The purpose of data association is to distinguish different targets and to solve the problem of overlapping sensor spatial coverage areas. The classic data association algorithms are the nearest neighbor method [62], the probabilistic data association algorithm (PDA) [63, 64], the multiple hypothesis method (MHT) [65, 66], and the probabilistic multiple hypothesis algorithm (PMHT) [67, 68].

6.2.3. State Estimation. Multisensor systems are generally nonlinear systems. The optimal solution of nonlinear function filtering can be obtained through Bayesian optimal estimation. Therefore, starting from Bayesian theory, the state estimate of the system can be obtained by approximating the nonlinear function of the system or the probability density function of the nonlinear function.

There are two types of approximation methods for the state estimation of nonlinear systems. One is the approximate linearization of the nonlinear links of the system, retaining the low-order terms and ignoring the high-order terms, that is, a direct linear approximation of the nonlinear function; the most widely used examples are the extended Kalman filter (EKF) [69] and the divided difference filter (DDF) [70, 71]. The other is to approximate the nonlinear distribution by sampling, that is, to approximate the probability density function of the nonlinear function; such methods include the particle filter (PF) [72], the unscented Kalman filter (UKF) [73], and the cubature Kalman filter (CKF) [74], which can be regarded as generalized Kalman filters for special cases.

6.2.4. Pattern Recognition. Pattern recognition is generally used for target feature fusion. The common methods include Bayesian inference, D-S evidence theory, the generation rule method, clustering algorithms, election methods, the maximum

entropy method, fuzzy set theory, and artificial neural networks.

① Bayesian reasoning: this is a conditional probability (or marginal probability) theorem about random events A and B [75]. The method is based on the prior probabilities of the hypotheses, the probability of observing different data given each hypothesis, and the observed data themselves. It was the main method used in the early days to fuse uncertain and incomplete multisensor data [75]:

$$P\left(B_{i}\mid A\right)=\frac{P\left(A\mid B_{i}\right)P\left(B_{i}\right)}{\sum_{j=1}^{n}P\left(B_{j}\right)P\left(A\mid B_{j}\right)},\quad i=1,2,\ldots,n. \qquad (7)$$

② D-S evidence theory: this is a mathematical method for fusing uncertain data and an extension of classical probability theory [76, 77]. It has the ability to express uncertainty [78], and its measurement of uncertainty is very close to human habits of thought. It can gradually narrow the hypothesis set through evidence accumulation and synthesis rules, which makes it applicable to multisource data fusion [79, 80]. It does not require prior information and uses an "interval" method to describe uncertain information, which makes it flexible in distinguishing "unknown" from "uncertain" and accurate in reflecting the aggregation of evidence. The relationship between the trust (belief) function and the likelihood (plausibility) function is Pls(X) ≥ Bel(X) and Pls(X) = 1 − Bel(Xᶜ), where Xᶜ is the complement of X, as shown in Figure 7.

Figure 7: Relationship between the trust function and the likelihood function. (Diagram: on the [0, 1] axis, Bel(X) marks the minimum level of trust in hypothesis X, Pls(X) marks the highest level of trust, and the interval between them is the uncertain region of hypothesis X.)

③ Generation rule method: this is the knowledge representation mode proposed by Post based on the string substitution rule [81]. At present, many successful expert systems adopt the production knowledge representation method [78, 82]. Its basic form is P ⟶ Q, or IF P THEN Q; the meaning of the production is that, if premise P is satisfied, conclusion Q can be deduced or the operation specified by Q can be performed. P is the premise of the production, also called the antecedent, which is composed of a logical combination of facts, the conjunction of some facts Ai. Q is a set of conclusions or operations, also called the consequent of the production: when P is satisfied, the conclusion that should be derived or the action that should be performed is a certain fact B.

④ Clustering algorithms: a clustering algorithm is a statistical analysis method for studying classification problems (of samples or indicators) [83, 84]. It can be divided into partition clustering, hierarchical clustering, artificial neural network clustering, kernel clustering, sequence data clustering, complex network clustering, intelligent search clustering, distributed clustering, parallel clustering, high-dimensional clustering, etc. [85, 86]. Common algorithms are K-MEANS [87–90], K-MEDOIDS [91], Binary-positive [92], VISOM [93], incremental support vector clustering [94], CLARANS [95, 96], trajectory clustering algorithms [97], DBDC [98], CLIQUE [99], subspace clustering [100, 101], etc.

⑤ Election algorithm: this is a common type of computation in distributed systems; it selects one process from among the processes to perform special tasks. Based on the type of network used, election algorithms can be divided into ① election algorithms based on a ring topology, ② election algorithms based on a fully connected topology [102], and ③ election algorithms based on comparison. Commonly used election algorithms include the ring-topology election algorithm [103], the bully election algorithm [102], etc.

⑥ Entropy method: entropy refers to the degree of disorder of a system, and information entropy refers to the degree of uncertainty of a random variable [104]. Claude Elwood Shannon used i to index all possible samples in the probability space, pi to represent the probability of occurrence of sample i, and K to represent an arbitrary constant related to the choice of units. Based on cybernetics and information theory, he held that, when the entropy is largest, the random variable is the most uncertain, that is, the most random, and its behavior is the hardest to predict accurately. The calculation formula of the information entropy S is as follows:

$$S\left(p_{1},p_{2},\ldots,p_{n}\right)=-K\sum_{i=1}^{n}p_{i}\log_{2}p_{i}. \qquad (8)$$

Principle of maximum entropy: the main idea is that, when only partial knowledge about an unknown distribution is available, the probability distribution that conforms to this knowledge and has the maximum entropy should be selected to infer the unknown distribution [105]. In essence, given partial knowledge, the most reasonable inference about the unknown distribution is the most uncertain or random inference consistent with the known knowledge. Its characteristic is that all uncertainty is retained, so as to minimize risk. It is the criterion for selecting the statistical characteristics of random variables that best conform to the objective situation.

6.3. Situation Fusion. Situation fusion performs situation estimation and impact estimation based on the results of feature fusion. The main methods used in situation estimation and impact estimation are artificial neural networks,
12 Mathematical Problems in Engineering

deep learning methods, clustering algorithms, fuzzy set theory, decision trees, and other methods.

6.3.1. Artificial Neural Network (ANN). An ANN is a nonprogrammed, nonlinearly adaptive, brain-style parallel distributed information processing system proposed on the basis of modern neuroscience. Its essence is to imitate, to varying degrees and at different levels, the way the human brain's nervous system processes information, through transformations of the network structure and dynamic behavior [106, 107]. A neural network is a computational model composed of a large number of interconnected neurons (nodes); different connection modes of the neurons yield different networks. The neuron structure is shown in Figure 8(a) [108, 109], and the calculation model of the artificial neural network is shown in Figure 8(b) [110, 111].

In the figure, a1 to an are the components of the input vector, w1 to wn are the weights of the neuron's synapses, b is the bias value, f is the transfer function (usually a nonlinear function), and t is the neuron's output: t = f(WA′ + b), where W is the weight vector, A is the input vector, and A′ is the transpose of A. It can be seen that the function of the neuron is to take the inner product of the weight vector and the transposed input vector and pass it through the nonlinear transfer function to obtain a scalar result.

6.3.2. Deep Learning. Deep learning is derived from the artificial neural network and is a general term for a class of pattern analysis methods. It has produced rich achievements in data mining, machine learning, natural language processing, and other related fields [112]. The purpose of studying deep learning is to establish neural networks that imitate the mechanisms of the human brain to interpret data (such as images, sound, and text) [113]. Deep learning includes supervised learning and unsupervised learning. Classical learning models include the convolutional neural network (CNN), the deep belief network (DBN), and the stacked autoencoder network (SAEN) [114]. At present, deep networks have been successfully applied to the fusion of single-mode data (such as text and images) and have also developed rapidly in the fusion of multimode data (such as video) [115, 116].

① Convolutional neural network model: a CNN is a kind of feedforward neural network with convolution computation and a deep structure. It has representation-learning ability and is one of the representative algorithms of deep learning [117, 118]. It is composed of an input layer, hidden layers, and an output layer; it can be used for both supervised and unsupervised learning, and its hidden layers involve relatively little computation [119–121].

② Deep belief network model: the DBN model can also be interpreted as a Bayesian probabilistic generative model [122]. It is a multi-hidden-layer neural network composed of multiple restricted Boltzmann machines (RBMs) stacked in series. In the stack, the hidden layer of the previous RBM is the visible layer of the next RBM, and the output of the previous RBM is the input of the next RBM; see Figure 9(a). During training, the previous RBM must be fully trained before the RBM of the current layer, up to the last layer [123]. After stacking RBMs layer by layer, the DBN model can extract features layer by layer from the original data and obtain high-level representations [124, 125]; see Figure 9(b).

③ Stacked autoencoder network model: the structure of the SAEN is similar to that of the DBN, consisting of a stack of several structural units; the difference between the two is that the structural unit of the SAEN is the autoencoder, while that of the DBN is the RBM. The autoencoder is a three-layer network: the input layer and the hidden layer form an encoder, which converts the input signal x into a code a; the hidden layer and the output layer constitute a decoder, which transforms the code into an output signal y. Multiple sparse autoencoders can form a stacked autoencoder; that is, the output of the sparse autoencoder of the previous layer is used as the input of the autoencoder of the subsequent layer [118].

6.3.3. Fuzzy Set Theory. Fuzzy set theory refers to the use of mathematics to describe fuzzy concepts, extending exact sets to fuzzy sets; it is also called fuzzy mathematics. Mathematically, it removes the restriction that computers cannot handle fuzzy concepts [126]. The proposal of the "membership function" breaks through the absolute belonging-or-not-belonging relationship of classical set theory and describes the fuzziness of things [127, 128].

Definition of fuzziness [128, 129]: a measure of the fuzziness of a fuzzy set A, reflecting the degree of fuzziness of A, is defined intuitively as follows. Let D: F(U) ⟶ [0, 1] be a mapping, where D is the fuzziness function defined on F(U); then D(A) is the fuzziness of the fuzzy set A, and it should have the following five properties:

① Clarity: D(A) = 0 if and only if A ∈ P(U) (the fuzziness of a classical set is always 0)

② Fuzziness: D(A) = 1 if and only if A(u) = 0.5 for all u ∈ U (the fuzzy set with membership degree 0.5 everywhere is the fuzziest)

③ Monotonicity: for all u ∈ U, if A(u) ≤ B(u) ≤ 0.5 or A(u) ≥ B(u) ≥ 0.5, then D(A) ≤ D(B)

④ Symmetry: for all A ∈ F(U), D(A) = D(Aᶜ), where Aᶜ is the complement of A (a fuzzy set and its complement have the same degree of fuzziness)

⑤ Additivity: D(A ∪ B) + D(A ∩ B) = D(A) + D(B)

6.3.4. Decision Tree. The machine learning technique of generating a decision tree from data is called decision tree learning, or decision tree for short. It is a basic classification and regression method [130].

Figure 8: Artificial neural network. (a) Neuron structure (inputs a1 to an with synapse weights w1 to wn, bias b, summation, and transfer function f producing the output t). (b) Artificial neural network computing model.
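The neuron computation t = f(WA′ + b) of Figure 8(b) can be sketched as follows, assuming a logistic (sigmoid) transfer function for f (the paper only requires f to be nonlinear; the sigmoid is one common choice):

```python
import math

def neuron(a, w, b):
    """Neuron of Figure 8(b): inner product of the weight vector and the input
    vector, plus the bias, passed through a nonlinear transfer function f
    (here, a sigmoid, assumed for illustration)."""
    s = sum(ai * wi for ai, wi in zip(a, w)) + b  # W . A' + b
    return 1.0 / (1.0 + math.exp(-s))             # t = f(s)
```

With zero net activation the sigmoid returns its midpoint, 0.5; large negative activations push the output toward 0.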

Figure 9: (a) Deep belief network (input layer, hidden layers 1 to 4, and output layer). (b) Deep belief network training process (RBM 1 to RBM 4 trained layer by layer).
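The layer-by-layer training order of Figure 9(b), in which each RBM is fully trained before the next and its hidden output becomes the next layer's input, can be sketched as follows; the per-layer training step here is a stand-in, not a real RBM update rule:

```python
# Greedy layer-wise pretraining order of a DBN-style stack: train one layer at
# a time, in order, with the previous layer's hidden output feeding the next.
# "train_layer" is a stand-in for RBM training (no contrastive divergence here).

def train_layer(data, n_hidden):
    def encode(x):
        return x[:n_hidden]  # stand-in "hidden representation": truncate the input
    return encode

def pretrain_stack(data, layer_sizes):
    encoders, current = [], data
    for n_hidden in layer_sizes:              # each layer is trained fully before the next
        enc = train_layer(current, n_hidden)
        encoders.append(enc)
        current = [enc(x) for x in current]   # hidden layer becomes the next visible layer
    return encoders, current
```

For example, stacking layer sizes [3, 2] over 4-dimensional inputs yields 2-dimensional top-level codes, mirroring how each RBM's hidden layer shrinks the representation.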

It is a graph theory method that intuitively uses probability analysis; that is, on the basis of the known occurrence probabilities of various situations, it classifies objects or mines data by constructing a decision tree [131]. The output of a decision tree is single-valued; when facing complex outputs, multiple independent decision trees can be established to deal with the different outputs.

In recent years, multisensor data fusion has developed rapidly; when dealing with situation fusion, many scholars also use nonprobabilistic fusion methods such as random sets [132–134], rough sets [135–138], fuzzy logic [139–142], and Dempster–Shafer theory [77, 143, 144] to achieve ideal results.

6.4. Knowledge Fusion and Process Optimization. Knowledge fusion is the fusion of sensor data and supporting database data. Process optimization refers to the global optimization process based on knowledge fusion.

Knowledge fusion includes selection and automatic reasoning. Selection is mainly reflected in the choice of fusion mode and method, with emphasis on location information fusion and parameter data fusion. Automatic reasoning technology interprets the observed data environment, the relationships between observed entities, and the hierarchical grouping of targets or objects according to the actual rules, frames, and scripts of the knowledge fusion process in order to predict the future behavior of a target or entity.

Process optimization is usually realized through "effect theory" [145, 146]; that is, a variety of system evaluation indexes and methods are used to monitor and evaluate the performance of each link (subsystem) and to form an effective resource allocation scheme, which is equivalent to the feedback part of the whole system.

The effectiveness of the data fusion system is generally evaluated quantitatively by Monte Carlo simulation [147, 148] or covariance error analysis techniques [18, 149]. To optimize the data fusion system, the following basic issues must be considered and solved [150, 151]: (1) which algorithm or technology is the most suitable and optimal; (2) which fusion framework (that is, where in the fusion process the data flow is processed) is most appropriate; (3) which sensor integration method can extract the maximum amount of information; (4) how to ensure the actual accuracy that each process of data fusion can achieve; (5) how to optimize the fusion process in a dynamic sense; (6) how to deal with the impact of the data collection environment; (7) how to improve the conditions of system operation.

6.5. Multisensor Data-Generalized Fusion-Proposed Method. The data fusion theories and algorithms summarized in Sections 6.2 and 6.3 have strong application-object specificity, relatively

isolated application fields, and weak interoperability; they lack a unified, reliable, efficient, and flexible idea of generalized fusion. In order to solve this very challenging problem, we tentatively carried out some research:

(1) The generalized fusion method for multisensor homogeneous data is as follows:

Homogeneous data fusion in homogeneous multisensor systems has become very mature in past research. It belongs to the source-data-level fusion in the generalized fusion model, and its universal algorithms have been described in detail in Section 6.1.

(2) Tensor-based generalized fusion method for multisensor heterogeneous data:

① Consistent tensor fusion (CTF):

In 2016, Kuang et al. proposed the unified tensor fusion (UTF) model (Figure 10) [152]; that is, the stretching method is used to model heterogeneous data as subtensors, and a two-layer (global-layer and inner-layer) model is used to generate a unified tensor, which better realizes the consistent representation of big (multimodal/heterogeneous) data. It provides a foundation for research on multisensor heterogeneous data fusion.

In Figure 10, the global layer is a third-order tensor spanning time (It), space (Is), and state (Iu), expressed as It × Is × Iu. The inner layer represents the subtensors of three types of spatial data, and the subtensors are embedded into the global layer to obtain a unified tensor.

Figure 10: Unified tensor representation model. (Diagram: a global It × Is × Iu tensor with GPS, video, and XML document subtensors embedded.)

Most past studies on tensors are based on single-mode and low-order data. A tensor is a high-order generalization of a matrix and represents the variability of various types of data in high dimensions. Tensor factorization is the joint matrix factorization that couples different factors by sharing factors [53]. When this idea is applied to multisensor heterogeneous data fusion, multisensor heterogeneous data can be fused by tensor decomposition of the factors in each subsystem dataset.

Suppose a given heterogeneous sensor integration platform S consists of four parts: a lidar, a depth camera, GPS, and a support database; the system model is YHSIS ∈ (sLR, sDC, sGPS, sSD). The frequency domain data fusion process is as follows:

Based on the UTF model, the heterogeneous data collected by the sensors are first given a consistent tensor representation; it can be seen that, after the consistent tensor representation of the heterogeneous data, the diversity of order and dimension still exists. Based on the collective matrix factorization (CMF) equation proposed by Singh and Gordon [153], the feature tensors are fused to obtain the feature subtensor. Then, the tensor expansion operator (TEO) is used to obtain the higher-order unified data tensor (UDT) to achieve heterogeneous fusion, and the consistent tensor fusion calculation model (formulas (9)–(11)) is obtained. Figure 11 shows the consistent tensor fusion process.

Consistent tensor representation: the lidar point cloud data are represented as a second-order tensor $T_{\text{point cloud}}\in \mathbb{R}^{I_{x}\times I_{y}}$; the depth camera video data are represented as a fourth-order tensor $T_{\text{video}}\in \mathbb{R}^{I_{h}\times I_{c}\times I_{f}\times I_{w}}$; the GPS data are expressed as a third-order tensor $T_{\text{GPS}}\in \mathbb{R}^{I_{ec}\times I_{en}\times I_{er}}$; and the database data can be expressed as a third-order tensor $T_{\text{XML}}\in \mathbb{R}^{I_{t}\times I_{id}\times I_{y}}$.

CMF equation:

$$f(A,B,W)=\left\|X-AB^{T}\right\|^{2}+\left\|Y-AW^{T}\right\|^{2}. \qquad (9)$$

Tensor extension operator:

$$f: A\times B\longrightarrow C,\quad C\in\mathbb{R}^{I_{t}\times I_{s}\times I_{u}\times I_{1}\times I_{2}}. \qquad (10)$$

Unified data tensorization:

$$f: \left(d_{u}\cup d_{semi}\cup d_{s}\right)\longrightarrow \underbrace{T_{u}\cup T_{semi}\cup T_{s}}_{T}, \qquad (11)$$

where $X\in\mathbb{R}^{I\times J}$ and $Y\in\mathbb{R}^{I\times K}$ are given matrices with common mode I, $A\in\mathbb{R}^{I_{t}\times I_{s}\times I_{u}\times I_{1}}$, $B\in\mathbb{R}^{I_{t}\times I_{s}\times I_{u}\times I_{2}}$, $d_{u}$ represents unstructured data, $d_{semi}$ stands for semistructured data, and $d_{s}$ represents structured data.

② Constrained tensor fusion (CTF):

Suppose a given heterogeneous sensor integration platform S consists of four parts: a lidar, a depth camera, GPS, and a support database; the system model is YHSIS ∈ (sLR, sDC, sGPS, sSD), and its frequency domain data fusion process is as follows:

Migration data fusion is the fusion of time domain and spatial domain data, which can be achieved by adding probability constraints on the basis of consistent tensor fusion (CTF). The so-called increase of probability constraint is to increase the
unified tensor: T ∈ RIu ×Ic ×It ×Ix ×···×Iy ×Iz . Finally, the cooperation between multivariable Markov chain
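The consistency tensor representation described under ① assigns each heterogeneous source a tensor of its own order. A minimal NumPy sketch of that idea follows; all dimension sizes are invented for illustration (the paper fixes only the tensor orders), and the unfolding shown is one common reading of the "stretching" step, not a construction the paper spells out:

```python
import numpy as np

# Hypothetical dimension sizes; the paper fixes only the tensor orders.
point_cloud = np.random.rand(64, 64)         # 2nd order: I_x x I_y (lidar)
video       = np.random.rand(32, 3, 10, 24)  # 4th order: I_h x I_c x I_f x I_w (depth camera)
gps         = np.random.rand(8, 8, 4)        # 3rd order: I_ec x I_en x I_er (GPS)
xml_db      = np.random.rand(10, 5, 6)       # 3rd order: I_t x I_id x I_y (support database)

# Each source keeps its own order and dimensions until the tensor
# extension operator (10) embeds them in one container.
subtensors = {"lidar": point_cloud, "video": video, "gps": gps, "db": xml_db}
orders = {name: t.ndim for name, t in subtensors.items()}

# "Stretching" can be seen as a mode-1 unfolding of a subtensor before
# embedding it into the global (I_t x I_s x I_u) layer.
video_unfold = video.reshape(video.shape[0], -1)  # 32 x 720 matrix
```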

Figure 11: Consistency tensor fusion process.
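The CMF objective in equation (9) can be minimized by alternating least squares (ALS), in which each factor update is the exact minimizer of the quadratic objective with the other factors held fixed. The sizes I, J, K and the rank r below are arbitrary demo values, and ALS is one standard solver for this objective rather than one prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, r = 30, 20, 25, 5            # hypothetical sizes; I is the shared mode
X = rng.standard_normal((I, J))
Y = rng.standard_normal((I, K))

A = rng.standard_normal((I, r))
B = rng.standard_normal((J, r))
W = rng.standard_normal((K, r))

def loss(A, B, W):
    # f(A, B, W) = ||X - A B^T||^2 + ||Y - A W^T||^2, as in equation (9).
    return np.linalg.norm(X - A @ B.T) ** 2 + np.linalg.norm(Y - A @ W.T) ** 2

history = [loss(A, B, W)]
for _ in range(50):
    # A <- (XB + YW)(B^T B + W^T W)^{-1}: exact minimizer over A.
    M = B.T @ B + W.T @ W
    A = np.linalg.solve(M.T, (X @ B + Y @ W).T).T
    # B <- X^T A (A^T A)^{-1} and W <- Y^T A (A^T A)^{-1}.
    G = A.T @ A
    B = np.linalg.solve(G, A.T @ X).T
    W = np.linalg.solve(G, A.T @ Y).T
    history.append(loss(A, B, W))
```

Because each update solves its subproblem exactly, the objective is non-increasing across iterations.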

The tensor is transformed in multiple steps; the space, time, and supporting data of the system migration process are fused; system migration fusion is realized; and the constrained tensor fusion (CTF) calculation model (equations (12)–(15)) is obtained. The position sequence in the GPS data constitutes a spatial transformation model. The corresponding transformation matrix is a row-stochastic matrix: the sum of the elements in each row equals 1.

Spatial data fusion: spatial data fusion is realized by the Markov model. The MC approach relies on the following: (i) according to the theory of discrete stochastic processes, the transformation matrix corresponds to a stationary distribution; that is, any initial distribution vector converges to the stationary distribution vector after being multiplied by the transformation matrix infinitely many times; (ii) matrix theory shows that the stationary distribution vector is the eigenvector corresponding to the largest eigenvalue of the transformation matrix. Referring to [154], the transformation of the spatial tensor is shown in Figure 12.

Figure 12 shows the mobility model of the platform. The elements in the matrix represent the probability of the platform moving from one point to another.

Spatiotemporal data fusion: the fusion of time data and spatial data requires a spatiotemporal tensor transformation. Mobile behavior is always related to time, and time is an important element of the mobile behavior model. The motion time is discretely represented as i, and the new state space is composed of pose information and time information, represented as S = {{T_1, P_1}, ..., {T_i, P_j}, ..., {T_I, P_J}}. Following [154], the space-time tensor transformation is shown in Figure 13.

Figure 13 shows the migration model of the platform. The elements of the tensor represent the probability of the platform moving from one point to another at a certain time. Based on the transformation of the space tensor, the fourth-order space-time transition tensor T_ST can be obtained by combining time information through equation (12).

Temporal and spatial information fusion:

    T_(t1, l1, t2, l2) = count((t1, l1) ⟶ (t2, l2)) / count((t1, l1)).  (12)

Support database fusion: the influence of the support data is quantified as follows: the greater the correlation coefficient between the fusion data tensor and the support data tensor, the greater the influence. The knowledge influence coefficient is described as

    ρ_KI = ( Σ_{i=1}^{n} (K_i − K̄)(I_i − Ī) ) / sqrt( Σ_{i=1}^{n} (K_i − K̄)² · Σ_{i=1}^{n} (I_i − Ī)² ).  (13)

The correlation coefficient matrix Γ ∈ R^(K_N × I_N) can be obtained by calculating the correlation coefficients between the global tensors. By normalizing the rows of Γ, the influence matrix (IM) Λ can be obtained:

    Γ = [ ρ_{K1 I1}  ρ_{K1 I2}  …  ρ_{K1 In}
          ρ_{K2 I1}  ρ_{K2 I2}  …  ρ_{K2 In}
          ⋮          ⋮             ⋮
          ρ_{Kn I1}  ρ_{Kn I2}  …  ρ_{Kn In} ],

    Λ = [ λ_{K1 I1}  λ_{K1 I2}  …  λ_{K1 In}
          λ_{K2 I1}  λ_{K2 I2}  …  λ_{K2 In}
          ⋮          ⋮             ⋮
          λ_{Kn I1}  λ_{Kn I2}  …  λ_{Kn In} ].  (14)

By fusing the influence matrix of the supporting data into the spatiotemporal transformation tensor, the system migration behavior fusion can be obtained:

    A = Λ ⊙ T_ST,  (15)

where ⊙ denotes the Kronecker product.

7. Development Trend and Urgent Difficulties to Be Solved

At present, most work on multisensor data fusion in academic circles is carried out for specific applications, without forming a basic theoretical framework and algorithm system.

Figure 12: Space tensor transformation.
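The row normalization illustrated in Figure 12, together with the Markov chain properties (i) and (ii) quoted above, can be checked numerically. The count matrix below reproduces the worked example in Figure 12 (rows/columns: destination, starting point, via point):

```python
import numpy as np

# Frequency (count) matrix from the Figure 12 example.
F = np.array([[0., 1., 1.],
              [1., 0., 2.],
              [1., 2., 0.]])

# Row normalization yields a row-stochastic transition matrix:
# every row sums to 1, as stated in the text.
P = F / F.sum(axis=1, keepdims=True)

# (i) Any initial distribution converges to the stationary distribution
# under repeated multiplication by the transition matrix.
pi = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    pi = pi @ P

# (ii) The stationary distribution is a fixed point of P (the left
# eigenvector associated with the largest eigenvalue, 1).
residual = np.abs(pi @ P - pi).max()
```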

Figure 13: Temporal and spatial tensor transformation.
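Equations (13)–(15) can be sketched end to end: the knowledge influence coefficient (13) is a Pearson correlation, the influence matrix Λ (14) is a row normalization of Γ, and (15) fuses Λ into the space-time transition tensor. Everything below is a toy: the vectors stand in for flattened tensors, the row normalization divides by the row sum of absolute values (the paper does not specify how signed correlations are normalized), and T_ST is a random placeholder:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical flattened features of n support-data tensors (K_data)
# and n fusion-data tensors (I_data).
n, d = 4, 50
K_data = rng.standard_normal((n, d))
I_data = rng.standard_normal((n, d))

def rho(k, i):
    # Equation (13): Pearson correlation between two flattened tensors.
    k_c, i_c = k - k.mean(), i - i.mean()
    return (k_c @ i_c) / np.sqrt((k_c @ k_c) * (i_c @ i_c))

# Correlation coefficient matrix Gamma, as in equation (14).
Gamma = np.array([[rho(K_data[a], I_data[b]) for b in range(n)] for a in range(n)])

# Row-normalized influence matrix Lambda (normalization scheme assumed here).
Lam = Gamma / np.abs(Gamma).sum(axis=1, keepdims=True)

# Equation (15): fuse Lambda into a placeholder space-time transition
# tensor via the Kronecker product.
T_ST = rng.random((n, n))
A_fused = np.kron(Lam, T_ST)
```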

Therefore, the establishment of a basic theoretical framework and a generalized algorithm system for multisensor data fusion is the main trend of the future development of this field. Based on this trend, the following problems urgently need to be solved:

(1) Establish an optimal management scheme for sensor resources. In a multisensor data fusion system, sensing is the source of the fusion data; the number, attributes, and integration methods of the sensors directly determine the quality of the fusion data, which is one of the key factors affecting the fusion result. A sensor resource optimization scheme will optimize the scheduling of sensor resources from three aspects (space management, time management, and mode management) so that the resources are used as fully and rationally as possible and the sensor system achieves optimal performance.

(2) Establish evaluation criteria for multisensor systems to avoid blind design of the fusion system. The fault tolerance or robustness of the multisensor integrated system directly affects the quality of data acquisition, and the system must overcome the difficulties of sensor measurement error modeling, real-time response to complex dynamic environments, establishment of a knowledge base, and reasonable arrangement of sensors. Designing a generalized sensor integration scheme and perfecting the generalized fusion architecture of multisensor data are the keys to avoiding blind design of the fusion system.

(3) Establish theories and methods that can fully and effectively utilize the redundant information provided by multiple sensors. The more sufficient the amount of information, the closer the fusion result is to the essence of things. Developing, with the help of new technologies from other fields, theories and algorithms that fully and effectively utilize the redundant information of multiple sensors, reduce the impact of data defects (imprecision and uncertainty), and alleviate outliers and false data [155] is one of the key factors for improving the accuracy of data fusion.

(4) Establish criteria for judging data fusion to reduce the ambiguity of data association. Inconsistent fusion data, also known as data association ambiguity, is one of the main obstacles to overcome in data fusion. In the process of multisensor data fusion, data consistency is the key factor that affects the fusion result. Data association is the key to ensuring the consistency of fused data, that is, to ensuring that the fused information concerns the same goal or phenomenon.

(5) Develop and improve the basic theory of data fusion. Academia has conducted extensive research on data fusion technology and accumulated a great deal of successful experience, but to this day, the theoretical foundation is still incomplete, and effective basic algorithms are still missing. The development and improvement of the basic theory of data fusion are key factors for the rapid development of this field.

(6) Improve the fusion algorithms to improve fusion performance. The fusion algorithm is the core of data fusion. Introducing new mathematical methods to improve fusion algorithms is the long-cherished wish of countless scholars. The introduction of modern statistical theory, random set theory, fuzzy set theory, rough set theory, Bayes theory, evidence theory, support vector machines, and other intelligent computing technologies will bring new development opportunities to the state estimation of nonlinear non-Gaussian systems and to heterogeneous data fusion.

(7) Establish a knowledge base for data fusion applications. In the field of data fusion, it is necessary to establish databases and knowledge bases, optimized storage mechanisms, and high-speed parallel retrieval and reasoning mechanisms, so as to improve the operating efficiency of the cluster fusion system and the reliability of the fusion results.

(8) Establish a generalized fusion algorithm system for multisensor data. A generalized algorithm based on the basic integrated structure model of the multisensor system should have the advantages of reducing data defects, alleviating abnormal values and false data, processing highly conflicting data, and handling data multimodality, data correlation, data alignment/registration, and data association. It should also have other capabilities, such as selecting a fusion framework for complex system data fusion, implementing timing operations, processing static and dynamic data states [156], and compressing data dimensions [30].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China, under Grant no. 51905302.

References

[1] M. Liggins II, D. Hall, and J. Llinas, Handbook of Multisensor Data Fusion: Theory and Practice, Vol. 39, CRC Press, Boca Raton, FL, USA, 2008.
[2] E. Waltz, "Data fusion for C3I: a tutorial," Command, Control, Communications Intelligence (C3I) Handbook, pp. 217–226, EW Communications, Palo Alto, CA, USA, 1986.
[3] P. S. Rossi, P. K. Varshney, and D. Ciuonzo, "Distributed detection in wireless sensor networks under multiplicative fading via generalized score tests," IEEE Internet of Things Journal, vol. 8, no. 11, 2021.
[4] R. Rucco, A. Sorriso, M. Liparoti et al., "Type and location of wearable sensors for monitoring falls during static and dynamic tasks in healthy elderly: a review," Sensors, vol. 18, no. 5, p. 1613, 2018.
[5] P. I. Corke, "Machine vision," Moldes, vol. 19, 2000.
[6] D. Lahat, T. Adali, and C. Jutten, "Multimodal data fusion: an overview of methods, challenges, and prospects," Proceedings of the IEEE, vol. 103, no. 9, pp. 1449–1477, 2015.
[7] B. Khaleghi, A. Khamis, F. O. Karray, and S. N. Razavi, "Multisensor data fusion: a review of the state-of-the-art," Information Fusion, vol. 14, no. 1, pp. 28–44, 2013.
[8] F. E. White, "Data fusion lexicon," The Data Fusion Lexicon Subpanel of the Joint Directors of Laboratories, San Diego, CA, USA, 1991.
[9] V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 711–720, 2008.
[10] H. Boström, S. F. Andler, M. Brohede et al., "On the definition of information fusion as a field of research," Neoplasia, vol. 13, no. 2, pp. 98–107, 2007.
[11] L. A. Klein, Sensor and Data Fusion Concepts and Applications, SPIE Optical Engineering Press, Bellingham, WA, USA, 1999.
[12] F. E. White, Data Fusion Lexicon, Joint Directors of Labs, Washington, DC, USA, 1991.
[13] H. Durrant-Whyte, Integration, Coordination, and Control of Multi-Sensor Robot Systems, Kluwer Academic Publishers Group, Alphen aan den Rijn, Netherlands, 1988.
[14] F. Mastrogiovanni, A. Sgorbissa, and R. Zaccaria, "A distributed architecture for symbolic data fusion," in Proceedings of the IJCAI 2007, Hyderabad, India, 2007.
[15] J. Llinas and D. L. Hall, "An introduction to multi-sensor data fusion," in Proceedings of the 1998 IEEE International Symposium on Circuits and Systems, Monterey, CA, USA, 1998.
[16] E. L. J. Waltz, Multi Sensor Data Fusion, Artech House Inc, Norwood, MA, USA, 1990.
[17] M. A. Abidi and R. C. Gonzalez, Data Fusion in Robotics and Machine Intelligence, Academic Press, San Diego, CA, USA, 1992.
[18] D. L. Hall and S. A. H. Mcmullen, Mathematical Techniques in Multisensor Data Fusion, Artech House, Boston, MA, USA, 2004.
[19] R. Malhotra and L. Wright, "Temporal considerations in sensor management," in Proceedings of the IEEE 1995 National Aerospace and Electronics Conference, Dayton, OH, USA, 1995.

[20] S. Paradis, B. A. Chalmers, R. Carling, and P. Bergeron, "Toward a generic model for situation and threat assessment," Proceedings of SPIE, vol. 3080, pp. 171–182, 1997.
[21] A. N. Steinberg, C. L. Bowman, and F. E. White, Revisions to the JDL Data Fusion Model, SPIE, Bellingham, WA, USA, 1999.
[22] F. E. White, Data Fusion Lexicon, Joint Directors of Laboratories, Technical Panel for C3, Data Fusion Sub-Panel, Naval Ocean Systems Center, San Diego, CA, USA, 1987.
[23] I. R. Goodman, R. P. Mahler, and H. T. Nguyen, Mathematics of Data Fusion, Springer, Berlin, Germany, 1997.
[24] D. Hall and J. Llinas, Handbook of Multisensor Data Fusion, CRC Press, Boca Raton, FL, USA, 2001.
[25] B. V. Dasarathy, "Information fusion - what, where, why, when, and how?" Information Fusion, vol. 2, 2001.
[26] J. M. Richardson and K. A. Marsh, "Fusion of multisensor data," The International Journal of Robotics Research, vol. 7, no. 6, pp. 78–96, 1988.
[27] R. Mckendall and M. Mintz, Robust Fusion of Location Information, IEEE Computer Society Press, Washington, DC, USA, 1988.
[28] S. A. M. Desforges, "Strategies in data fusion sorting through the tool box," in Proceedings of 1998 European Conference on Data Fusion, Malvern, PA, USA, 1998.
[29] P. Wang, L. T. Yang, J. Li, J. Chen, and S. Hu, "Data fusion in cyber-physical-social systems: state-of-the-art and perspectives," Information Fusion, vol. 51, pp. 42–57, 2019.
[30] S. Alonso, D. Pérez, A. Morán, J. J. Fuertes, I. Díaz, and M. Domínguez, "A deep learning approach for fusing sensor data from screw compressors," Sensors, vol. 19, no. 13, p. 2868, 2019.
[31] I. Bloch, A. Hunter, A. Appriou et al., "Fusion: general concepts and characteristics," International Journal of Intelligent Systems, vol. 16, no. 10, pp. 1107–1134, 2010.
[32] Z. Ning and Z. Jinfu, "Study on image compression and fusion based on the wavelet transform technology," International Journal on Smart Sensing and Intelligent Systems, vol. 8, no. 1, pp. 480–496, 2015.
[33] A. Mohebi and P. Fieguth, "Statistical fusion and sampling of scientific images," in Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 2008.
[34] Q.-S. Sun, S.-G. Zeng, Y. Liu, P.-A. Heng, and D.-S. Xia, "A new method of feature fusion and its application in image recognition," Pattern Recognition, vol. 38, no. 12, pp. 2437–2448, 2005.
[35] B. Garner and D. Lukose, "Knowledge fusion," in Proceedings of the 1992 Workshop on Conceptual Structures: Theory & Implementation, Las Cruces, NM, USA, 1992.
[36] A. Goel, A. Patel, K. G. Nagananda, and P. K. Varshney, "Robustness of the counting rule for distributed detection in wireless sensor networks," IEEE Signal Processing Letters, vol. 25, no. 8, pp. 1191–1195, 2018.
[37] D. Ciuonzo, S. H. Javadi, A. Mohammadi, and P. S. Rossi, "Bandwidth-constrained decentralized detection of an unknown vector signal via multisensor fusion," IEEE Transactions on Signal and Information Processing over Networks, vol. 6, pp. 744–758, 2020.
[38] E. Waltz and J. Llinas, Multi Sensor Data Fusion, IET, London, UK, 2002.
[39] J. Z. Sasiadek, "Sensor fusion," Annual Reviews in Control, vol. 26, no. 2, pp. 203–228, 2002.
[40] E. Blasch, J. Llinas, D. Lambert et al., "High level information fusion developments, issues, and grand challenges: fusion 2010 panel discussion," in Proceedings of the 2010 13th International Conference on Information Fusion, Edinburgh, UK, 2010.
[41] E. P. Blasch, R. Breton, P. Valin, and E. Bosse, "User information fusion decision making analysis with the C-OODA model," in Proceedings of the International Conference on Information Fusion, Chicago, IL, USA, 2011.
[42] R. C. Luo and M. G. Kay, Multisensor Integration and Fusion: Issues and Approaches, SPIE, Bellingham, WA, USA, 1988.
[43] S. C. A. Thomopoulos, "Sensor integration and data fusion," Journal of Robotic Systems, vol. 7, no. 3, pp. 337–372, 1990.
[44] C. J. Harris, A. Bailey, and T. J. Dodd, "Multi-sensor data fusion in defence and aerospace," Aeronautical Journal New Series, vol. 102, no. 1015, pp. 229–244, 1998.
[45] M. Bedworth and J. O'Brien, "The Omnibus model: a new model of data fusion?" Aerospace & Electronic Systems Magazine, vol. 15, no. 4, pp. 30–36, 2009.
[46] A. G. Ciancio, S. Pattem, A. Ortega, and B. Krishnamachari, "Energy-efficient data representation and routing for wireless sensor networks based on a distributed wavelet compression algorithm," in Proceedings of the 2006 5th International Conference on Information Processing in Sensor Networks, Nashville, TN, USA, 2006.
[47] R. Verbeek and K. Weihrauch, "Data representation and computational complexity," Theoretical Computer Science, vol. 7, no. 1, pp. 99–116, 1978.
[48] S. J. Wilson, "Data representation for time series data mining: time domain approaches," Wiley Interdisciplinary Reviews: Computational Statistics, vol. 9, no. 1, Article ID e1392, 2017.
[49] S. Vigneshwari and M. Aramudhan, "Social information retrieval based on semantic annotation and hashing upon the multiple ontologies," Indian Journal of Science and Technology, vol. 8, no. 2, pp. 103–107, 2015.
[50] T. D. Cao, T. H. Phan, and A. D. Nguyen, "An ontology based approach to data representation and information search in smart tourist guide system," in Proceedings of the 3rd International Conference on Knowledge & Systems Engineering, Hanoi, Vietnam, 2011.
[51] S. Hachem, T. Teixeira, and V. Issarny, "Ontologies for the internet of things," in Proceedings of the 8th Middleware Doctoral Symposium, Lisbon, Portugal, 2011.
[52] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
[53] L. Sorber, Data Fusion: Tensor Factorizations by Complex Optimization, Faculty of Engineering, KU Leuven, Leuven, Belgium, 2014.
[54] I. Kotsia and I. Patras, "Support tucker machines," in Proceedings of the 2011 Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011.
[55] T. G. Kolda, B. W. Bader, and J. P. Kenny, "Higher-order web link analysis using multilinear algebra," in Proceedings of the 5th IEEE International Conference on Data Mining, Houston, TX, USA, 2005.
[56] M. Signoretto, L. De Lathauwer, and J. A. K. Suykens, "A kernel-based framework to tensorial data analysis," Neural Networks, vol. 24, no. 8, pp. 861–874, 2011.
[57] K. Zheng, G. Si, Z. Zhou, J. Chen, and W. Yue, "Consistency test based on self-support degree and hypothesis testing for multi-sensor data fusion," in Proceedings of the 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference, Chongqing, China, 2017.

[58] F. Garcia, B. Mirbach, B. Ottersten, F. Grandidier, and Á. Cuesta, "Pixel weighted average strategy for depth sensor data fusion," in Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 2010.
[59] R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[60] R. E. Kalman and R. S. Bucy, "New results in linear filtering and prediction theory," Journal of Basic Engineering, vol. 83, no. 5, pp. 95–108, 1961.
[61] I. Guyon, M. Nikravesh, S. Gunn, and L. A. Zadeh, "Feature extraction," Studies in Fuzziness and Soft Computing, vol. 31, pp. 1737–1744, Springer, Berlin, Germany, 2006.
[62] H. A. Fayed and A. F. Atiya, "A novel template reduction approach for the K-nearest neighbor method," IEEE Transactions on Neural Networks, vol. 20, no. 5, pp. 890–896, 2009.
[63] P. L. Ainsleigh, T. E. Luginbuhl, and P. K. Willett, "A sequential target existence statistic for joint probabilistic data association," IEEE Transactions on Aerospace and Electronic Systems, vol. 57, pp. 371–381, 2020.
[64] S. He, H. S. Shin, and A. Tsourdos, "Information-theoretic joint probabilistic data association filter," IEEE Transactions on Automatic Control, vol. 66, no. 3, pp. 1262–1269, 2020.
[65] S. Liu, H. Li, Y. Zhang, and B. Zou, "Multiple hypothesis method for tracking move-stop-move target," Journal of Engineering, vol. 2019, no. 19, pp. 6155–6159, 2019.
[66] A. O. T. Hogg, C. Evers, and P. A. Naylor, "Multiple hypothesis tracking for overlapping speaker segmentation," in Proceedings of the 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, NY, USA, 2019.
[67] R. L. Streit and T. E. Luginbuhl, "Maximum likelihood method for probabilistic multihypothesis tracking," Proceedings of SPIE, vol. 2235, Rome, Italy, September 1994.
[68] R. L. Streit, S. G. Greineder, and T. E. Luginbuhl, "Maximum likelihood training of probabilistic neural networks with rotationally related covariance matrices," in Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Australia, 1995.
[69] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Theory, Algorithms, and Software, John Wiley & Sons, New York, NY, USA, 2001.
[70] M. Nørgaard, N. K. Poulsen, and O. Ravn, "New developments in state estimation for nonlinear systems," Automatica, vol. 36, pp. 1627–1638, 2000.
[71] J. C. Spall, "Estimation via Markov chain Monte Carlo," IEEE Control Systems, vol. 23, no. 2, pp. 34–45, 2003.
[72] A. Doucet, S. Godsill, and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, vol. 10, no. 3, pp. 197–208, 2000.
[73] S. J. Julier and J. K. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, 2004.
[74] I. Arasaratnam and S. Haykin, "Cubature Kalman filters," IEEE Transactions on Automatic Control, vol. 54, no. 6, pp. 1254–1269, 2009.
[75] B. P. Carlin and T. A. Louis, "Bayes and empirical Bayes methods for data analysis," Statistics and Computing, vol. 7, no. 2, pp. 153-154, 1998.
[76] L. Xu, Y. Chen, and P. Cui, "Improvement of D-S evidential theory in multisensor data fusion system," in Proceedings of the 6th World Congress on Intelligent Control & Automation, Dalian, China, 2004.
[77] R. R. Yager, "On the Dempster-Shafer framework and new combination rules," Information Sciences, vol. 41, no. 2, pp. 93–137, 1987.
[78] Siklóssy and S. Laurent, Representation and Meaning, Prentice-Hall, Hoboken, NJ, USA, 1972.
[79] A. Skowron and J. Grzymala-Busse, From Rough Set Theory to Evidence Theory, John Wiley & Sons, Hoboken, NJ, USA, 1994.
[80] D. Bell, Evidence Theory and Its Applications, Vol. 2, Elsevier Science Inc, Amsterdam, Netherlands, 1991.
[81] E. L. Post, The Two-Valued Iterative Systems of Mathematical Logic, Princeton University Press, Princeton, NJ, USA, 1941.
[82] H. A. Simon, "Complexity and the representation of patterned sequences of symbols," Psychological Review, vol. 79, no. 5, pp. 369–382, 1972.
[83] X. Sun, W. Gao, and Y. Duan, "MR brain image segmentation using a fuzzy weighted multiview possibility clustering algorithm with low-rank constraints," Journal of Medical Imaging & Health Informatics, vol. 11, 2021.
[84] X. Li, B. Kao, C. Shan, D. Yin, and M. Ester, "CAST: a correlation-based adaptive spectral clustering algorithm on multi-scale data," 2020, https://arxiv.org/abs/2006.04435.
[85] A. Treshansky and R. M. Mcgraw, "Overview of clustering algorithms," Proceedings of SPIE, vol. 4367, pp. 41–51, 2001.
[86] M. Hassani, "Overview of efficient clustering methods for high-dimensional big data streams: techniques, toolboxes and applications," Clustering Methods for Big Data Analytics, Springer, Berlin, Germany, 2019.
[87] C. L. Liu, Introduction to Combinatorial Mathematics, McGraw Hill, New York, NY, USA, 1968.
[88] K. Krishna and M. Narasimha Murty, "Genetic K-means algorithm," IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 29, no. 3, pp. 433–439, 1999.
[89] Y. Lu, S. Lu, F. Fotouhi, Y. Deng, and S. J. Brown, "FGKA: a fast genetic K-means clustering algorithm," in Proceedings of the 2004 ACM Symposium on Applied Computing, Nicosia, Cyprus, 2004.
[90] E. Schubert and P. Rousseeuw, "Faster K-medoids clustering: improving the PAM, CLARA, and CLARANS algorithms," in Proceedings of the 2019 International Conference on Similarity Search and Applications, Newark, NJ, USA, 2019.
[91] H. H. Nguyen, "Privacy-preserving mechanisms for k-modes clustering," Computers & Security, vol. 78, pp. 60–75, 2018.
[92] R. Gelbard, O. Goldman, and I. Spiegler, "Investigating diversity of clustering methods: an empirical comparison," Data & Knowledge Engineering, vol. 63, no. 1, pp. 155–166, 2007.
[93] H. Yin, "ViSOM - a novel method for multivariate data projection and structure visualization," IEEE Transactions on Neural Networks, vol. 13, no. 1, pp. 237–243, 2002.
[94] R. Amami, "An incremental method combining density clustering and support vector machines for voice pathology detection," Computers & Electrical Engineering, vol. 57, pp. 257–265, 2016.
[95] R. T. Ng and J. Han, "CLARANS: a method for clustering objects for spatial data mining," IEEE Transactions on Knowledge & Data Engineering, vol. 14, no. 5, pp. 1003–1016, 2002.
[96] Y. Zhang, J. Sun, Y. Zhang, and X. Zhang, "Parallel implementation of CLARANS using PVM," in Proceedings of

2004 International Conference on Machine Learning and Cybernetics, Shanghai, China, 2004.
[97] S. Gaffney and P. Smyth, "Trajectory clustering with mixtures of regression models," in Proceedings of the 5th International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, 1999.
[98] C. C. Aggarwal and C. K. Reddy, Data Clustering: Algorithms and Applications, Taylor and Francis Group, London, UK, 2013.
[99] R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan, "Automatic subspace clustering of high dimensional data for data mining applications," ACM SIGMOD Record, vol. 27, no. 2, pp. 94–105, 1998.
[100] L. Parsons, E. Haque, and H. Liu, "Subspace clustering for high dimensional data: a review," ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 90–105, 2004.
[101] M. Yin, S. Xie, Z. Wu, Y. Zhang, and J. Gao, "Subspace clustering via learning an adaptive low-rank graph," IEEE Transactions on Image Processing, vol. 27, no. 8, pp. 3716–3728, 2018.
[102] B. Sandipan, "An efficient approach of election algorithm in distributed systems," Indian Journal of Computer Science & Engineering, vol. 2, no. 1, 2011.
[103] B. Awerbuch, "A new distributed depth-first-search algorithm," Information Processing Letters, vol. 20, no. 3, pp. 147–150, 1985.
[104] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, 1948.
[105] E. T. Jaynes, "Information theory and statistical mechanics," Physical Review, vol. 106, 1957.
[106] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115–133, 1943.
[107] W. Pitts, "The linear theory of neuron networks: the dynamic problem," Bulletin of Mathematical Biophysics, vol. 5, no. 1, pp. 23–31, 1943.
[108] W. T. Katz, J. W. Snell, and M. B. Merickel, "Artificial neural networks," Methods in Enzymology, vol. 210, pp. 610–636, 1992.
[109] E. Judith and J. M. Deleo, "Artificial neural networks," Cancer, vol. 91, no. S8, pp. 1615–1635, 2001.
[110] Y. Xin, "Evolving artificial neural networks," Proceedings of the IEEE, vol. 87, no. 9, pp. 1423–1447, 1999.
[111] E. Judith and J. M. Deleo, "Artificial neural networks," Cancer, vol. 91, no. S8, pp. 1615–1635, 2001.
[112] A. Hazra and S. M. S. Prakashchoudhary, "Recent advances in deep learning techniques and its applications: an overview," Advances in Biomedical Engineering and Technology, Springer, Berlin, Germany, pp. 103–122, 2020.
[113] A. Mathew, P. Amudha, and S. Sivakumari, "Deep learning techniques: an overview," in Proceedings of the 2021 International Conference on Advanced Machine Learning Technologies and Applications, Cairo, Egypt, 2021.
[114] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, "Stacked denoising autoencoders: learning useful representations in a deep network with a local
multimodal deep learning," Computer Networks, vol. 165, Article ID 106944, 2019.
[117] D. H. Hubel and T. N. Wiesel, "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex," Journal of Physiology, vol. 160, pp. 106–154, 1962.
[118] K. Fukushima, "Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, vol. 36, pp. 193–202, 1980.
[119] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[120] Y. Lecun, B. Boser, J. Denker et al., "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4, pp. 541–551, 2014.
[121] Y. Lecun, K. Kavukcuoglu, and C. M. Farabet, "Convolutional networks and applications in vision," in Proceedings of 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 2010.
[122] E. Hinton, "Distributed representations," Technical report, University of Toronto, Toronto, Canada, 1984.
[123] H. Wang and P. Liu, "Image recognition based on improved convolutional deep belief network model," Multimedia Tools & Applications, vol. 80, pp. 2031–2045, 2020.
[124] E. G. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, pp. 1527–1554, 2006.
[125] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.
[126] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, no. 3, pp. 338–353, 1965.
[127] C. H. Wang, W. Y. Wang, T. T. Lee, and P. S. Tseng, "Fuzzy B-spline membership function (BMF) and its applications in fuzzy-neural control," IEEE Transactions on Systems, Man, and Cybernetics, vol. 25, no. 5, pp. 841–851, 1995.
[128] J. Chleboun, "A new membership function approach to uncertain functions," Fuzzy Sets and Systems, vol. 387, pp. 68–80, 2020.
[129] S.-U.-D. Khokhar, Q. Peng, A. Asif, M. Y. Noor, and A. Inam, "A simple tuning algorithm of augmented fuzzy membership functions," IEEE Access, vol. 8, pp. 35805–35814, 2020.
[130] J. R. Quinlan, "Induction of decision trees," Machine Learning, vol. 1, 1986.
[131] R. C. Barros, M. P. Basgalupp, A. C. P. L. F. de Carvalho, and A. A. Freitas, "A survey of evolutionary algorithms for decision-tree induction," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 3, pp. 291–312, 2012.
[132] J. Mullane, B.-N. Vo, M. D. Adams, and B.-T. Vo, "A random-finite-set approach to Bayesian SLAM," IEEE Transactions on Robotics, vol. 27, no. 2, pp. 268–282, 2011.
[133] H. E. Robbins, On the Measure of a Random Set II, Springer, New York, NY, USA, 1985.
[134] B. Ristic, Particle Filters for Random Set Models, Springer
denoising criterion,” Journal of Machine Learning Research, Publishing Company, Berlin, Germany, 2013.
vol. 11, no. 12, pp. 3371–3408, 2010. [135] Z. Pawlak, “Rough set,” International Journal of Computer &
[115] J. Ngiam, A. Khosla, and M. Kim, “Multimodal deep Information Sciences, vol. 11, no. 5, 1982.
learning,” in Proceedings of the 28th International Conference [136] J. W. Grzymała-Busse, Z. Pawlak, R. Słowiński, and
on Machine Learning, pp. 689–696, ICML, Bellevue, WA, W. Ziarko, “Rough set,” Communications of the ACM,
USA, 2011. vol. 38, no. 11, 1995.
[116] G. Aceto, D. Ciuonzo, A. Montieri, and A. Pescapè, “MI- [137] W. Ji, Y. Pang, X. Jia et al., “Fuzzy rough sets and fuzzy rough
METIC: mobile encrypted traffic classification using neural networks for feature selection: a review,” Wiley
Mathematical Problems in Engineering 21

Interdisciplinary Reviews Data Mining and Knowledge Dis- [156] X. L. Dong, L. Berti-Equille, and D. Srivastava, “Truth dis-
covery, vol. 11, no. 3, 2021. covery and copying detection in a dynamic world,” Pro-
[138] Y. Zhang and Y. Wang, “Research on classification model ceedings of the VLDB Endowment, vol. 2, no. 1, pp. 562–573,
based on neighborhood rough set and evidence theory,” 2009.
Journal of Physics: Conference Series, vol. 1746, no. 1, Article
ID 12018, 2021.
[139] L. A. Zadeh, “Fuzzy logic � computing with words,” IEEE
Transactions on Fuzzy Systems, vol. 4, pp. 3–23, 1999.
[140] L. Běhounek and P. Cintula, “From fuzzy logic to fuzzy
mathematics: a methodological manifesto,” Fuzzy Sets and
Systems, vol. 157, no. 5, pp. 642–646, 2006.
[141] L. Z. Zadeh, “Fuzzy logic, neural networks and soft com-
puting,” Microprocessing and Microprogramming, vol. 38,
no. 1, p. 13, 1993.
[142] X. Xiang, C. Yu, L. Lapierre, J. Zhang, and Q. Zhang, “Survey
on fuzzy-logic-based guidance and control of marine surface
vehicles and underwater vehicles,” International Journal of
Fuzzy Systems, vol. 20, pp. 572–586, 2018.
[143] Z. Luo and Y. Deng, “A matrix method of basic belief as-
signment’s negation in Dempster-Shafer theory,” IEEE
Transactions on Fuzzy Systems, vol. 28, no. 9, pp. 2270–2276,
2020.
[144] P. Liu and X. Zhang, “A new hesitant fuzzy linguistic ap-
proach for multiple attribute decision making based on
Dempster-Shafer evidence theory,” Applied Soft Computing,
vol. 86, Article ID 105897, 2019.
[145] D. L. Hall and J. Llinas, “An introduction to multisensor data
fusion,” Proceedings of the IEEE, vol. 85, pp. 6–23, 1997.
[146] K. Cho, B. Jacobs, B. Westerbaan, and A. Westerbaan, “An
introduction to effectus theory,” Arctic & Alpine Research,
vol. 29, no. 1, pp. 122–125, 2015.
[147] W. K. Hastings, “Monte Carlo sampling methods using
Markov chains and their applications,” Biometrika, vol. 57,
no. 1, pp. 97–109, 1970.
[148] H. Peng and Z. Peng, “An iterative method of statistical
tolerancing based on the unified Jacobian-Torsor model and
Monte Carlo simulation,” Journal of Computational Design
& Engineering, vol. 7, no. 2, p. 165, 2020.
[149] D. Hall and S. Waligora, “Orbit/attitude estimation with
LANDSAT landmark data,” in Proceedings of the 1979 GSFC
Flight Mechanics/Estimation Theory Symposium, pp. 67–110,
NASA, Goddard Space Flight Center Flight Mechanics, 1979.
[150] C. L. Miao, J. S. Nan, and N. Guo, “Effectiveness evaluation
architecture for intelligence reconnaissance system based on
multi-source data fusion technique,” Telecommunication
Engineering, vol. 4, pp. 429–434, 2012.
[151] Z. Rong, G. Jing-Wei, and Y. Hang, “Study of operational
effectiveness evaluation of multisensor data fusion system,”
Radio Engineering of China, vol. 38, no. 3, pp. 31–33, 2008.
[152] L. Kuang, F. Hao, L. T. Yang, M. Lin, C. Luo, and G. Min, “A
tensor-based approach for big data representation and di-
mensionality reduction,” IEEE Transactions on Emerging
Topics in Computing, vol. 2, no. 3, pp. 280–291, 2014.
[153] A. Singh and G. Gordon, Relational Learning via Collective
Matrix Factorization, ACM, New York, NY, USA, 2008.
[154] P. Wang, L. T. Yang, Y. Peng, J. Li, and X. Xie, “M2T2: the
multivariate multistep transition tensor for user mobility
pattern prediction,” IEEE Transactions on Network Science
and Engineering, vol. 7, no. 2, pp. 907–917, 2020.
[155] M. Kumar, D. P. Garg, and R. A. Zachery, “A generalized
approach for inconsistency detection in data fusion from
multiple sensors,” in Proceedings of the American Control
Conference 2006, Minneapolis, MN, USA, 2006.
