3D Point Cloud Object Detection For Millimeter Wave Radar
Graduation Committee:
Prof.dr. C. Brune (UT)
Dr. M. Guo (UT)
Dr. H. Hang (UT)
Dr. F.T. Markus (Nedap Healthcare)
3 Implementation 17
3.1 Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.1 Range and Velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.1.2 Azimuth and Elevation Angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Point Cloud Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.1 CFAR Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.2 Informatively Improved Point Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4 Experiment 29
4.1 Experiment Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.1.1 Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.1.2 Set-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2.1 Detected Signal Strengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2.2 Point Clouds for Variations of Interference Estimators . . . . . . . . . . . . . . . . 34
4.2.3 Proposed Informative Improvement . . . . . . . . . . . . . . . . . . . . . . . . . . 40
References 44
A Supplementary Figures 48
A.1 Signal Strength Polar Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
A.2 3D Point Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Notation List
Tx transmitter antenna
Rx receiver antenna
IF intermediate frequency
s(t) electromagnetic wave signal
t time
τ time delay
f frequency
S slope of a chirp
B bandwidth of a chirp
Tc time duration of a chirp
ϕ phase shift
R range
c speed of light
λ wavelength
v velocity
θ azimuth incident angle
ψ elevation incident angle
d distance between two aligned antennas
NC number of chirps
NS number of samples
nG number of guard cells
nT number of training cells
1 Introduction
In this chapter the motivation for this research is explained in section 1.1, including why there is a need for monitoring (elderly) patients at home and how mmWave radar is a suitable technology for this purpose. Section 1.2 gives an overview of the current use of mmWave radar and novel research in different fields. The principles of radar technology, including how information is retrieved, are explained in section 1.3, and section 1.4 formulates the research goals and objectives. This introductory chapter ends with section 1.5, which gives an outline of the entire thesis and an overview of the contributions.
1.1 Motivation
According to the report from December 2022 by the Inspectie Gezondheidszorg en Jeugd (Inspection
Healthcare and Youth) of the Ministry of Health, Welfare and Sports, there is a shortage on the labour
market in the healthcare sector in the Netherlands [31]. Healthcare organisations are encouraged by the
inspection to investigate the possibilities of applying innovative technologies to increase independence of
patients and simultaneously decrease the work pressure for the personnel. MmWave radar is a suitable
technology for this and can be used for monitoring purposes. That is because by processing echo signals
to detect distance, angle and velocity, both large and small movements are detected. This means a person
can be localised and tracked [28, 83], but also activities, gestures and even vital signs can be recognized
[54, 77].
Camera-based motion monitoring systems have been popular for similar purposes, but they require strict placement and lighting conditions, and they raise privacy concerns. MmWave radar, on the other hand, is privacy preserving because personal features cannot be directly distinguished from the signals. In addition, the radar can be mounted on the ceiling, similar to a fire alarm. Furthermore, it does not contain a camera lens that needs to be placed in direct sight, but antennas that cover at most a few centimetres of surface and can even be placed behind a cover. This unobtrusive and non-invasive way of monitoring is beneficial for patient comfort.
There are multiple applications for which mmWave radar could serve that are beneficial for both patients and caregivers. An overview of possible applications can be seen in Figure 1. First of all, a person's lifestyle, including daily routines but also irregular behaviour, can be monitored. Knowing whether the patient is having an afternoon nap, or whether visitors are present, is useful information for the caregiver to, for instance, change the route of their rounds. A patient who often forgets dinner can now be effectively reminded by their caregiver. Likewise, insights into the pattern, activity and quality of sleep can lead to actions that improve the patient's wellbeing, such as turning a patient in their bed to prevent pressure sores. A clearer and more complete picture of the patient's daily routine is thus provided at a distance.
Moreover, safety is increased, particularly for elderly patients. They have an increased risk of falling, and incidents often result in severe physical and psychological consequences [79]. Falls can be detected and caregivers alarmed directly, which decreases the risk of a patient being immobile for a long time until the next routine check-up. In addition, monitoring gait and balance can be used to assess which patients are at higher risk of falling [76], such that caregivers can keep an extra eye on these patients to prevent possible incidents.
Additionally, vital signs of patients, such as heart and respiratory rate, can be monitored remotely without the need for physical contact with a device [80, 2]. This can give insight into whether a patient has been deteriorating rapidly and whether adapted care is needed, while no additional action from a caregiver is required, thus saving time. This can also be helpful for patients who have difficulty articulating their stress or need for help.
These examples give a clear view of the advantages of mmWave radar and the improvements it can bring to patient monitoring. For patients this results in, among other things, increased comfort by adjusting the caregiver's visits to the patient's daily routines, and decreased waiting time for immediate care. For the caregiver, mmWave radar monitoring can contribute to better insights into the patient's wellbeing and a decrease in the number of routine checks, and hence decreased travel time. This may result in more time available for each patient, and thus increase the quality of the care provided, which in turn benefits the patient as well.
Radar is often favoured over optical sensors because its performance does not degrade as much in adverse weather conditions such as heavy rain and fog [23]. Even occlusions such as smoke do not form an obstacle for radar; research is therefore also done on indoor mapping to help firefighters find their way in buildings when heavy smoke is present [24, 57].
Another research field where mmWave radar is active is movement and activity recognition [73, 82]. This includes large movements such as tracking people [83], but also smaller ones such as gesture recognition for human-computer interaction [52, 54]. MmWave measurements focused on the gait patterns of people can even extract indicators for early-stage cognitive disorders [79]. Even smaller movements, like the heartbeat and breathing rate of people, can be detected as well, which remains difficult for optical sensors [2, 56].
The transmitted frequency modulated signal is an electromagnetic wave and represented as [32, 62]
$$s_{Tx}(t) = e^{i\pi(f_{start}t + St^2)}, \tag{1}$$
with $S = B/T_c$ the slope of the chirp, $B = f_{end} - f_{start}$ the bandwidth, and $T_c$ the time duration of the chirp. When a reflected signal from a stationary object is received at an antenna, the incoming signal also has the form of a chirp but arrives with a time delay $\tau$,
$$s_{Rx}(t) = e^{i\pi(f_{start}(t-\tau) + S(t-\tau)^2)}. \tag{2}$$
On the hardware of an mmWave radar is a mixer, which subtracts the received signal from the transmitted signal. This results in an intermediate frequency (IF) signal, which is then converted to a discrete signal by an analog-to-digital converter (ADC). The formula for the IF signal becomes
$$s_{IF}(t) = e^{i\pi(f_{start}\tau + S(2\tau t - \tau^2))} = e^{2i\pi S\tau t}\,e^{i\pi\tau(f_{start} - S\tau)} = e^{2i\pi S\tau t}\,e^{\phi}, \tag{3}$$
where the time-independent term is isolated as a phase shift $\phi = i\pi\tau(f_{start} - S\tau)$. Some mmWave radars only provide the real part of the IF signal as output, leading to the form
$$s_{IF}(t) = \cos(2\pi f_{IF}t + \varphi_{IF}), \tag{4}$$
where $f_{IF} = S\tau$ denotes the frequency of the IF signal, which is also called the beat frequency [47]. Figure 5 shows how the IF signal at frequency $S\tau$ results from the subtraction of the transmitted and received chirps. The IF signal only exists during the time period in which the two signals overlap.
The output of the radar is raw data that consists of the discretised IF signal over $N$ samples. For all $k$ receiver antennas (Rx) this information is stored over $M$ chirps and $L$ frames. For each frame the information can be represented as a three-dimensional radar data cube with dimensions $(k, M, N)$, as can be seen in the example in Figure 6.
Figure 6: Radar data cube representation for a single time frame with k = 5, M = 13, N = 12
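As an illustrative sketch (plain Python, with placeholder values rather than real ADC samples), the data cube of Figure 6 can be represented as nested lists indexed as `cube[rx][chirp][sample]`:

```python
# Sketch of the radar data cube described above. The dimensions follow
# the example of Figure 6: k = 5 receiver antennas, M = 13 chirps,
# N = 12 samples per chirp. Values are placeholders for IF samples.
k, M, N = 5, 13, 12

# cube[rx][chirp][sample] holds one ADC sample of the IF signal
cube = [[[0.0 for _ in range(N)] for _ in range(M)] for _ in range(k)]

# Each axis is used by a later processing step:
#  - samples  (length N): range FFT
#  - chirps   (length M): Doppler / velocity FFT
#  - antennas (length k): angle of arrival estimation
assert (len(cube), len(cube[0]), len(cube[0][0])) == (k, M, N)
```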
1.3.2 Range, Velocity, and Angle of Arrival
The reflected signal of an object at distance $R$ arrives after a time delay $\tau = 2R/c$, where $c$ is the speed of light. As can also be seen from Figure 5, the IF frequency is equal to the frequency slope $S = B/T_c$ times the time delay $\tau$,
$$f_{IF} = S\tau = \frac{2SR}{c} = \frac{2BR}{cT_c}, \tag{5}$$
with bandwidth $B$ and chirp duration $T_c$. Rewriting leads to the following formula, which shows that the range $R$ can be retrieved from the IF frequency by
$$R = \frac{cT_c}{2B}f_{IF}. \tag{6}$$
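The range retrieval of Equations (5) and (6) can be sketched as a round trip in Python; the radar parameters below are illustrative, not the configuration used in this thesis:

```python
# A round-trip sketch of Equations (5) and (6): converting a detected
# beat frequency back to range. Parameter values are illustrative.
c = 3.0e8          # speed of light [m/s]
B = 4.0e9          # chirp bandwidth [Hz]
Tc = 40e-6         # chirp duration [s]
S = B / Tc         # chirp slope [Hz/s]

def range_from_beat(f_if):
    """Equation (6): R = c * Tc / (2 * B) * f_IF."""
    return c * Tc / (2 * B) * f_if

# An object at R = 5 m produces f_IF = 2 * S * R / c (Equation (5))
f_if = 2 * S * 5.0 / c
assert abs(range_from_beat(f_if) - 5.0) < 1e-9
```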
The phase of the IF signal $\varphi_{IF}$ can be determined the moment the reflected signal arrives at the receiver antenna. Given Equation (1), the wavelength representation $\lambda = c/f_0$, and using $f_0 \gg S\tau/2$ for mmWave radar systems, the phase can be approximated as [41]
$$\varphi_{IF} \approx 2\pi f_0\tau = \frac{4\pi R}{\lambda}. \tag{7}$$
In Figure 5 the received signal and computed IF frequency are shown for one object. When reflected signals from multiple objects are received, their individual frequencies can be separated by performing a Fourier transform. In a similar way, a second Fourier transform needs to be performed over the chirps to separate the individual phases.
Multiple chirps of duration $T_c$ are transmitted after each other. When the object of detection is moving with velocity $v$ with respect to the sensor, the object will have travelled $\Delta R = vT_c$ during one chirp. Based on Equations (5) and (7), the differences in frequency and phase between two chirps are
$$\Delta f = \frac{2S\Delta R}{c} = \frac{2SvT_c}{c}, \tag{8}$$
$$\Delta\varphi = \frac{4\pi\Delta R}{\lambda} = \frac{4\pi vT_c}{\lambda}. \tag{9}$$
In practice $\Delta f$ is negligible compared to $f_{IF}$, because $T_c$ is small. Nevertheless, $\Delta\varphi$ can still be detected, even when $\Delta R$ is only a millimetre. This means that from the difference in phase of the IF signal between multiple chirps, the velocity of an object moving away from or towards a receiver antenna can be computed by
$$v = \frac{\lambda\Delta\varphi}{4\pi T_c}. \tag{10}$$
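The velocity retrieval of Equations (9) and (10) can likewise be sketched as a round trip; the carrier frequency and chirp duration are illustrative assumptions:

```python
import math

# A round-trip sketch of Equations (9) and (10): radial velocity from
# the chirp-to-chirp phase difference of the IF signal.
c = 3.0e8
f0 = 60e9                  # carrier frequency [Hz] (assumed)
lam = c / f0               # wavelength [m]
Tc = 40e-6                 # chirp duration [s]

def velocity_from_phase(delta_phi):
    """Equation (10): v = lambda * delta_phi / (4 * pi * Tc)."""
    return lam * delta_phi / (4 * math.pi * Tc)

# An object at v = 2 m/s gives delta_phi = 4*pi*v*Tc/lambda, Eq. (9)
delta_phi = 4 * math.pi * 2.0 * Tc / lam
assert abs(velocity_from_phase(delta_phi) - 2.0) < 1e-9
```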
When multiple receiver antennas are aligned, the angle of arrival can be estimated. The orientation in which the antennas are positioned determines which angle can be detected. When antennas are placed in an L shape or in a grid with arrays along both the horizontal and vertical direction, both azimuth and elevation angles can be computed.
Focusing on a single angular dimension, Figure 7 shows a reflected signal of an object, coming from an incident angle $\theta$. By assumption the range $R$ between the object and the antennas is much larger than the distance $d$ between the two antennas. Therefore the difference in incident angle between the two antennas is negligible and the incoming signal at both antennas is assumed to be parallel. However, the signal needs to travel further to the second antenna, with the relative delay [41]
$$\Delta\tau = \frac{d\sin(\theta)}{c}. \tag{11}$$
Based on Equations (5) and (7), the differences in frequency and phase of the IF signal for two adjacent receiver antennas and incident angle $\theta$ are
$$\Delta f = S\Delta\tau = \frac{Sd\sin(\theta)}{c}, \tag{12}$$
$$\Delta\varphi = 2\pi f_0\Delta\tau = \frac{2\pi d\sin(\theta)}{\lambda}. \tag{13}$$
Similarly to the velocity computation, because $d$ is small relative to $R$, the difference in frequency of the IF signal is negligible with respect to $f_{IF}$. But $\Delta\varphi$ is detectable, and thus the angle of arrival can be computed as
$$\theta = \sin^{-1}\left(\frac{\lambda\Delta\varphi}{2\pi d}\right). \tag{14}$$
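The angle retrieval of Equations (13) and (14) can be sketched in the same round-trip style; the spacing $d = \lambda/2$ is a common design choice, assumed here rather than taken from the thesis:

```python
import math

# A round-trip sketch of Equations (13) and (14): angle of arrival
# from the phase difference between two receiver antennas.
lam = 5e-3                 # wavelength [m], roughly 60 GHz
d = lam / 2                # antenna spacing [m] (assumed lambda/2)

def angle_from_phase(delta_phi):
    """Equation (14): theta = asin(lambda * delta_phi / (2*pi*d))."""
    return math.asin(lam * delta_phi / (2 * math.pi * d))

# A target at 30 degrees gives delta_phi = 2*pi*d*sin(theta)/lambda
theta = math.radians(30.0)
delta_phi = 2 * math.pi * d * math.sin(theta) / lam
assert abs(math.degrees(angle_from_phase(delta_phi)) - 30.0) < 1e-9
```

With $d = \lambda/2$ the detectable phase difference $\Delta\varphi \in [-\pi, \pi]$ maps exactly onto the full $[-90°, 90°]$ field of view, which is why this spacing is so common.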
The simplest method to filter signals of interest is to set a fixed threshold: only when this threshold is exceeded is the received signal at that location identified as an object. Using a fixed threshold does not result in good performance, because object reflections are affected by many factors such as shape, material, range and size [17]. There may also be large variations in interference, which would require multiple thresholds to be able to detect all objects. For this reason, adaptive thresholding methods have been developed and are used in most modern radars [66]. Adaptive threshold detection methods determine the threshold by looking into the signals of the local region, based on the assumption that background noise and clutter locally follow the same distribution [17]. The Constant False Alarm Rate (CFAR) method is such an adaptive thresholding method [71], and results in point clouds as shown in Figure 8. Chapter 2 discusses the CFAR algorithm and its many variants in detail. Because interference makes detection a complicated problem, detection performance is an important evaluation factor for radar sensors. The CFAR method keeps the probability of incorrect detections at a constant level, and is therefore widely used.
options to choose from, which lead to many possible variations of the method. Choices regarding certain steps influence later steps within the method; however, this is not always made clear in the literature. Therefore specific attention is given to the interdependent relation between these steps, showing the practical implications that may result when this relation is not correctly taken into account.
Chapter 3 covers the implementation of the point cloud generation, with simulated radar signals as input. The spatial information is retrieved during the signal processing step, where a non-standard angle detection method is used: the algorithm is adjusted to calculate the angle of arrival for both the azimuth and elevation angle at the same time. Variations of the object detection method from Chapter 2 are included in the implementation, and the effect of incorrect combinations within the method is shown using the simulations. Additionally, the fourth objective is covered by implementing a novel post-processing step, which improves how informative the point clouds are. Using simulated signals, the results from adding this step are shown.
Chapter 4 addresses the last objective, namely the experiment. An entire experiment has been designed to demonstrate and test the capabilities of the different CFAR variants in realistic indoor environments. Experiments for these specific situations have not been conducted before; therefore the decisions for the experiment setup have been carefully thought out, from setting the parameters and determining which properties and characteristics of the rooms will be analysed, to finding a location and performing the actual measurements. The acquired mmWave data is processed using the previously mentioned implementation framework, and thus also includes the adjusted angle detection and the informative improvement step.
This thesis ends with a discussion, suggestions, and an outlook on possible future research in Chapter 5.
Figure 9: Overview of the general process from radar to application, and the contributions in relation to
the chapters. Chapter 2 in orange, Chapter 3 in blue, and Chapter 4 in green.
2 Constant False Alarm Rate Adaptive Threshold Detection
Using a fixed threshold is undesirable for object detection, as it results in decreased detection performance and an increased false alarm rate because the background clutter is unlikely to be the same everywhere [16]. Because interference is almost always present in the received radar signals and detection algorithms are sensitive to interference, it is desirable to keep false detections at a constant low rate to increase reliability [17]. The CFAR method uses an adaptive thresholding technique that resolves this problem by looking into the regional background and thus taking local variations into account. When interference is present, these signals will be contained in the surroundings and hence should in some way be included in the threshold.
This chapter focuses on the object detection part and addresses the first and second research objectives, as shown in Figure 10. It includes a detailed explanation of the method, covering not only the steps of the method but also the reasoning and theory behind it. The information on the specific assumptions and relations between the steps has not yet been clearly described in one overview, but was distributed over many works and had to be gathered and structured. This chapter provides the complete overview in section 2.1, and explains the most important variations on determining the background interference level in section 2.2. The interdependent relation of the steps is often scarcely reported, but is explicitly clarified in section 2.3. Specifically, the determination of the scaling factor and why incorrect combinations lead to decreased detection performance is elaborated on.
Figure 10: Relation between this chapter, the first two research objectives, and the general process from
radar to application.
2.1.1 Interference Level Estimation
A distinctive feature of the CFAR method is that it uses a sliding window technique, where along one
dimension each cell is individually evaluated for absence or presence of objects with respect to its local
background. The cells directly next to the cell under test (CUT) are called the guard cells and are
excluded from the interference estimation because it may occur that a reflected signal covers multiple
cells. Excluding them prevents signals of the object of detection being treated as background interference.
The local background on which the estimation is based are the reference cells Xi , i = 1, ..., N , which form
the reference window and are assumed to be a proper representation of the background interference level.
Before the background interference level is estimated, a detector is applied to the input data. The choice of detector influences both the detection performance and the computational costs, and thus the general performance of the CFAR algorithm [49]. Common detectors are the linear law detector, which uses the original data, and the square law detector, which uses the squared data [60]. Thus for original input data $\bar{X}_i$, the linear law detector gives $X_i = \bar{X}_i$, and the square law detector $X_i = \bar{X}_i^2$, $i = 1, \ldots, N$.
After a detector has been applied to the input data, the interference level is estimated. There are multiple ways to do this; the simplest is to take the average over the reference window. This is called Cell Averaging (CA) CFAR, and the estimated interference level is
$$Z_{CA} = \frac{1}{N}\sum_{i=1}^{N} X_i. \tag{15}$$
Figure 12: Interdependence of detector, background interference distribution and estimator, and the
scaling factor.
For an assumed Gaussian distribution, the probabilities of false alarm can be computed analytically; their complete derivations can be found in [64]. The probabilities of false alarm for a linear law detector and a square law detector with Gaussian distributed interference are [60]
$$P_{FA,CA}^{(lin)} = \left(1 + \frac{\alpha^2}{N\left(\frac{4}{\pi} - \left(\frac{4}{\pi} - 1\right)e^{1-N}\right)}\right)^{-N}, \tag{16}$$
$$P_{FA,CA}^{(sq)} = \left(\frac{\alpha}{N} + 1\right)^{-N}. \tag{17}$$
In these equations $\alpha$ is the scaling factor, which can be obtained by rewriting the formulas when $P_{FA}$ is known. Hence the scaling factors for CA-CFAR with a Rayleigh distribution for the linear law and square law detectors are the following [64],
$$\alpha_{CA}^{(lin)} = \sqrt{N\left(P_{FA}^{-1/N} - 1\right)\left(\frac{4}{\pi} - \left(\frac{4}{\pi} - 1\right)e^{1-N}\right)}, \tag{18}$$
$$\alpha_{CA}^{(sq)} = N\left(P_{FA}^{-1/N} - 1\right). \tag{19}$$
For the Weibull distribution no closed form expression for the scaling factor exists, but it can be computed with, for instance, Monte Carlo simulation [49]. The interference estimate is multiplied with the scaling factor to arrive at the adaptive detection threshold [29]
$$\gamma = \alpha Z. \tag{20}$$
Detection is formulated as a hypothesis test with
$$H_0: \text{Object is absent}, \qquad H_1: \text{Object is present}.$$
In the hypothesis test, the CUT is compared to the threshold according to the likelihood ratio test (LRT) [51]. If the CUT has a higher value than the threshold $\gamma$, the null hypothesis $H_0$ is rejected and presence $H_1$ is detected. If the threshold is not passed, the null hypothesis $H_0$ is accepted,
$$CUT \underset{H_0}{\overset{H_1}{\gtrless}} \gamma. \tag{21}$$
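The Monte Carlo approach to the scaling factor can be sketched as follows. Rayleigh interference with a square law detector is used here purely so the result can be cross-checked against the closed form of Equation (19); all parameter values are illustrative:

```python
import random

# Monte Carlo estimate of the false alarm probability for a candidate
# scaling factor alpha, under a CA estimator with N reference cells.
def estimate_pfa(alpha, N=16, trials=20000, seed=1):
    rng = random.Random(seed)
    false_alarms = 0
    for _ in range(trials):
        # square-law samples of Rayleigh interference are exponential
        cells = [rng.expovariate(1.0) for _ in range(N)]
        cut = rng.expovariate(1.0)
        z = sum(cells) / N                 # CA interference estimate
        if cut > alpha * z:                # threshold gamma = alpha * Z
            false_alarms += 1
    return false_alarms / trials

# Cross-check: alpha from Equation (19) for a target P_FA of 0.01
N, pfa_target = 16, 1e-2
alpha = N * (pfa_target ** (-1.0 / N) - 1.0)
assert abs(estimate_pfa(alpha, N) - pfa_target) < 5e-3
```

For a distribution without a closed form, the same routine would be wrapped in a root search over `alpha` until the estimated $P_{FA}$ matches the target.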
The probability of detection $P_D$ (true positive) and the probability of false alarm $P_{FA}$ (false positive, type I error) can be written as the probabilities
$$P_D = P(CUT > \gamma \mid H_1), \tag{22}$$
$$P_{FA} = P(CUT > \gamma \mid H_0). \tag{23}$$
The goal of the CFAR method is to maximise the probability of detection $P_D$ while maintaining a low, constant probability of false alarm $P_{FA}$. This increases the chance of correct identification of true positives, and is based on the Neyman-Pearson lemma. The lemma states that to optimise the detection performance while maintaining a predetermined significance level, the LRT is the most powerful test to use [51]. To get a better understanding of the lemma this method is based on, the lemma and a proof adapted from [59] are presented below.
Theorem 2.1 (Neyman-Pearson Lemma). To solve the constrained optimization problem
$$\max P_D \quad \text{s.t.} \quad P_{FA} = c,$$
the most powerful test is the likelihood ratio test
$$\Gamma(x) = \frac{p_1(x)}{p_0(x)} \underset{H_0}{\overset{H_1}{\gtrless}} \gamma. \tag{24}$$
By assumption both tests have $P_{FA} = c$. Hence, rewrite Equations (28) and (29) into (27) to see that
To show that the detection probability of the Neyman-Pearson likelihood ratio test is indeed larger than that of any other test, it needs to be shown that
To resolve the difficulties regarding non-homogeneous interference environments, such as clutter edges and multiple objects, Hansen and Sawyers [25] proposed two variations of the CA-CFAR interference estimator. Smallest of (SO) CA-CFAR deals with the multiple target situation, whereas greatest of (GO) CA-CFAR handles clutter edges. Both variants split the reference window into a leading and a lagging part and average over them separately. SOCA-CFAR takes the minimum of the two averages as estimator, whereas GOCA-CFAR takes the maximum of the two,
$$Z_{SOCA} = \frac{2}{N}\min\left(\sum_{i=1}^{N/2} X_i,\; \sum_{i=N/2+1}^{N} X_i\right), \tag{39}$$
$$Z_{GOCA} = \frac{2}{N}\max\left(\sum_{i=1}^{N/2} X_i,\; \sum_{i=N/2+1}^{N} X_i\right). \tag{40}$$
The scaling factors $\alpha_{SOCA}$ and $\alpha_{GOCA}$ for Rayleigh distributed interference and the square law detector can be computed from the false alarm probabilities [68]
$$P_{FA,SOCA} = 2\sum_{i=0}^{N/2-1}\binom{N/2+i-1}{i}\left(2 + \frac{2\alpha_{SOCA}}{N}\right)^{-(N/2+i)}, \tag{41}$$
$$P_{FA,GOCA} = 2\left(1 + \frac{2\alpha_{GOCA}}{N}\right)^{-N/2} - 2\sum_{i=0}^{N/2-1}\binom{N/2+i-1}{i}\left(2 + \frac{2\alpha_{GOCA}}{N}\right)^{-(N/2+i)}. \tag{42}$$
Both variants still have difficulties. The SOCA-CFAR estimator has improved performance when a second object is present in the reference window, but does not perform well when a clutter edge is present. The GOCA-CFAR estimator is exactly the other way around: it performs well when a clutter edge is present in the reference window, but degrades in performance in the presence of another object. So when both a clutter edge and another object are present, neither of the two variants performs well.
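The two estimators of Equations (39) and (40) can be sketched together, since they differ only in the final min/max:

```python
# Sketch of the SOCA and GOCA estimators, Equations (39)-(40): split
# the reference window into a leading and a lagging half and take the
# smaller (SOCA) or larger (GOCA) of the two averages.
def soca_goca(reference):
    n = len(reference)
    lead = 2 * sum(reference[: n // 2]) / n
    lag = 2 * sum(reference[n // 2:]) / n
    return min(lead, lag), max(lead, lag)

# An interfering target in the lagging half inflates only that average:
# SOCA keeps the clean estimate, GOCA deliberately takes the larger one
window = [1.0] * 7 + [9.0]
z_soca, z_goca = soca_goca(window)
assert z_soca == 1.0 and z_goca == 3.0
```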
For Ordered Statistics (OS) CFAR, the reference window is first sorted in ascending order,
$$X_{(1)} \leq X_{(2)} \leq \cdots \leq X_{(N)}, \tag{43}$$
where $X_{(1)}$ denotes the smallest value and $X_{(N)}$ the largest [66]. The OS-CFAR method does not use the entire reference window for its estimation, but takes one value $X_{(k)}$, $k \in \{1, 2, \ldots, N\}$, as an estimate for the interference level,
$$Z_{OS} = X_{(k)}. \tag{44}$$
Where for other variants guard cells need to be excluded from the estimation in case they might contain the object of detection, OS-CFAR already excludes the highest $N - k$ signals and thus does not require the exclusion of guard cells from the reference window.
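The estimator of Equation (44) amounts to a sort and an index:

```python
# Sketch of the OS-CFAR estimator, Equation (44): sort the reference
# window and take the k-th smallest value as the interference estimate.
def os_estimate(reference, k):
    ordered = sorted(reference)        # X_(1) <= X_(2) <= ... <= X_(N)
    return ordered[k - 1]              # k-th order statistic, 1-indexed

# Two strong interferers are sorted to the top of the window and are
# ignored as long as k <= N minus the number of interfering cells
window = [2.0, 1.0, 9.0, 3.0, 8.0, 2.0, 1.0, 3.0]
assert os_estimate(window, k=6) == 3.0
```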
The threshold $\gamma$ is computed similarly to the CA-CFAR threshold, but with a different estimator and scaling factor. The scaling factor for the square law detector and both Rayleigh and Weibull distributed interference can be derived from the probability of false alarm [37]
$$P_{FA,OS}^{(sq)} = \frac{N!}{(N-k)!}\cdot\frac{(\alpha_{OS}^{(sq)} + N - k)!}{(\alpha_{OS}^{(sq)} + N)!}. \tag{45}$$
The scaling factor for the linear law detector for both Rayleigh and Weibull distributed interference is computed according to [64, 66]
$$\alpha_{OS}^{(lin)} = \sqrt{\alpha_{OS}^{(sq)}}. \tag{46}$$
OS-CFAR has increased performance over the CA-CFAR variants when multiple objects and clutter edges are present [29]. A limitation of the variant is that it requires a longer processing time, because the reference window needs to be ordered [6]. Depending on which ordering algorithm is chosen, this can be reduced, but it will always have higher computational costs than the CA-CFAR estimator variants. Another advantage is that it is able to detect closely spaced targets; however, when a lot of clutter is present and $k$ is chosen either too large or too small, many difficulties are encountered [16].
2.2.3 Others
Where the OS-CFAR interference estimator takes the $k$-th cell as estimator, the Censored Mean Level Detector (CMLD) CFAR variant uses the $k$ smallest cells of the reference window [65]. This means the estimated interference level is computed as
$$Z_{CMLD} = \frac{1}{k}\sum_{i=1}^{k} X_{(i)}. \tag{47}$$
If multiple objects are present in the reference window, this method has increased performance, as long as no more than $N - k$ cells of the reference window contain signals from those objects. At the same time, the performance may decrease in other situations, as the reference window, and thus the amount of information used, decreases as well.
The Trimmed Mean (TM) CFAR variant not only excludes the largest $N - k$ cells from the reference window, but also neglects the smallest $l$ [19],
$$Z_{TM} = \frac{1}{k-l}\sum_{i=l+1}^{k} X_{(i)}. \tag{48}$$
Both CMLD- and TM-CFAR are outlier rejection variants and have increased performance in non-homogeneous situations. A disadvantage of these estimators is that the computational costs are also increased: the reference window needs to be ordered and then the mean needs to be calculated. Furthermore, in the ideal situation it would be known a priori how many objects there are to be detected, such that the right number of largest cells can be excluded. However, in realistic situations this is never known beforehand.
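The censoring and trimming of Equations (47) and (48) can be sketched side by side:

```python
# Sketch of the CMLD and TM estimators, Equations (47)-(48): CMLD
# averages the k smallest reference cells, TM additionally trims the
# l smallest before averaging.
def cmld_estimate(reference, k):
    ordered = sorted(reference)
    return sum(ordered[:k]) / k              # Equation (47)

def tm_estimate(reference, k, l):
    ordered = sorted(reference)
    return sum(ordered[l:k]) / (k - l)       # Equation (48)

window = [4.0, 1.0, 9.0, 2.0, 8.0, 3.0]
assert cmld_estimate(window, k=4) == 2.5     # mean of 1, 2, 3, 4
assert tm_estimate(window, k=4, l=1) == 3.0  # mean of 2, 3, 4
```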
There are more CFAR interference estimation variants. Most often they are improved versions of the ones described here, in which several variants are combined or the computational effort is decreased compared to the original variants [85, 33]. These improved versions are often developed for naval applications, where every second matters when detecting rockets and missiles in the air. The same holds for the automotive industry, where time is also of importance for avoiding collisions.
2.3.1 Literature
Because the scaling factor depends on the detector, the assumed distribution, and the interference estimator, it is important to know which detector is used. Often the detector is not mentioned at all, but when the method is used correctly it may also be derived from how the scaling factor is computed.
There is literature that mentions which detector is used [43]. In most cases this is the square law detector, although an explanation of why this detector has been chosen is lacking [25, 29, 15, 61, 39], [50, 5]. However, [49] compares multiple detectors and shows that the square law detector indeed offers the best performance in most cases. In the case of Gaussian noise, the linear law and square law detectors have similar performance, but even then the square law is preferred due to its lower computational costs.
Nevertheless, there are also cases in the literature where the step of using a scaling factor is mentioned, but no further information on which one is used or how it is calculated is provided [84, 79, 10]. Information on the detector used is not provided either. When the formula for the scaling factor is given, the detector used is frequently not mentioned [58], nor is it included in the elaborate diagrams that are provided [26, 78]. This does not necessarily mean that an incorrect combination has been used, but without more information it cannot be checked either.
Sometimes the code of the implementation is publicly available, and the exact steps can be traced. There are cases which suggest that an incorrect scaling factor is used, as the formula for the square law detector is implemented although the squaring of the data is missing [14]. Possibly the data is squared during a pre-processing step that is not included in the implementation, but it can be verified that this is not done in the data set used either [18].
Even though in some cases the procedure is correctly described in the literature [9, 56], the corresponding available code suggests otherwise. In these specific cases, the square law detector does not seem to have been applied in the implementation, although the corresponding scaling factor is used. The author has been informed about this, acknowledged the issue, and indicated that a mistake was probably made in the code [55].
Figure 13: Correct combination of scaling factors and law detectors: (a) original sampled signal, (b) original and squared sampled signals. Figure 13d shows that the region of detection where the thresholds are passed is similar for both the linear and the square law.
Figure 14: Incorrect combination, square law scaling factor used on the linear law detector. The new
incorrect threshold is not passed.
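The effect illustrated in Figures 13 and 14 can also be shown numerically. The sketch below applies the square-law scaling factor of Equation (19) once to correctly squared data and once, incorrectly, to unsquared data; all values are illustrative:

```python
# Numerical illustration of the detector/scaling-factor mismatch:
# the square-law scaling factor paired with unsquared (linear law)
# data raises the threshold relative to the data and misses a target.
N, pfa = 16, 1e-2
alpha_sq = N * (pfa ** (-1.0 / N) - 1.0)    # Equation (19)

background = [1.0] * N        # flat background, linear-law amplitudes
target_amplitude = 3.0

# Correct: square the data, then apply the square-law scaling factor
z_sq = sum(x * x for x in background) / N
detected_correct = target_amplitude ** 2 > alpha_sq * z_sq

# Incorrect: same scaling factor applied to the unsquared data
z_lin = sum(background) / N
detected_incorrect = target_amplitude > alpha_sq * z_lin

assert detected_correct and not detected_incorrect
```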
3 Implementation
This chapter elaborates on the entire path from reflected signal to generated point cloud, following the steps of the developed Python implementation. Radar signals are simulated and used as input. Section 3.1 focuses on the signal processing step, and explains and shows how the velocity and the three-dimensional spatial information are retrieved. To determine the angles of arrival, the Minimum Variance Distortionless Response (MVDR) method is implemented. This method normally determines the angle over one dimension, but has been adjusted such that both the azimuth and elevation angle can be detected at once.
Section 3.2 includes the implementation of the CFAR detection method, and the interference level esti-
mator variations as described in Chapter 2. Their differences and the effects caused by change in sliding
window parameters are shown. Furthermore, to improve how informative the generated point clouds are,
a post-processing step is proposed and its resulting point clouds are included.
How this chapter and covered objectives relate to the general process from radar to application, is shown
in Figure 15.
Figure 15: Relation between this chapter, the two research objectives covered, and the general process
from radar to application.
To validate the computation steps in the signal processing as well as the detection implementation,
simulations have been performed. All individual steps of the signal processing and detection path are
explained along these simulation results. The radar parameters used for the simulation are the same as
for the experimental results in Chapter 4 and are shown in Table 1a.
For the results in this chapter, the reflected signals of three separate objects have been simulated with
various amplitude (A), range from the radar (R), velocity (v), azimuth angle (θ), and elevation angle (ψ).
Table 1b shows the values for these object parameters. It should be noted that the simulations do not
include any interference effects, and the objects are distinctly separated from each other in both spatial
and velocity domains.
3.1.1 Range and Velocity
Three objects are simulated according to the parameters in Table 1. Each object produces a reflected
signal whose frequency depends on its range from the radar. The IF signal measured at the receiver
antenna is a linear combination of all three waves, shown in Figure 16a. The individual frequencies
cannot be determined directly from this combined signal; a Fourier transform first needs to be applied,
which results in peaks at the separate frequencies, shown in Figure 16b.
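As an illustration of this step, the IF tones of three objects and the subsequent range FFT can be sketched in a few lines of NumPy. The chirp slope, sample rate, object ranges, and amplitudes below are illustrative assumptions and do not correspond to Table 1a:

```python
import numpy as np

# Illustrative radar parameters (not the thesis' Table 1a)
c = 3e8            # speed of light [m/s]
S = 30e12          # chirp slope [Hz/s]
fs = 2e6           # IF sample rate [Hz]
N = 512            # samples per chirp (as in the text)

t = np.arange(N) / fs
ranges = [1.0, 2.5, 4.0]   # assumed object ranges [m]
amps = [1.0, 0.6, 0.3]     # assumed reflection amplitudes

# IF signal: each object contributes a tone at f_IF = 2*S*R/c
x = sum(A * np.exp(2j * np.pi * (2 * S * R / c) * t) for A, R in zip(amps, ranges))

# Range FFT: peaks appear at bins proportional to object range
spec = np.abs(np.fft.fft(x))
bin_res = fs / N                   # frequency resolution [Hz]
range_res = bin_res * c / (2 * S)  # corresponding range resolution [m]
peak_bins = np.sort(np.argsort(spec)[-3:])
est_ranges = peak_bins * range_res
print(np.round(est_ranges, 2))
```

The three spectral peaks recover the simulated ranges up to the range resolution of roughly 2 cm for these parameters.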
The IF signal is sampled in 512 data points over one chirp. Although a single chirp lasts less than a
millisecond, change in movement can be detected. Figure 17a shows the range of the three objects over
the duration of one chirp. The objects A, B, and C have traveled 0 m, -0.9 mm, and 0.5 mm respectively.
Chirps are stacked one after another as a frame of 128 successive chirps. Looking at the change in range
over these chirps, the velocity can be detected by performing another Fourier transform. Figure 17b
shows the range and velocity for the three objects.
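The two-step Fourier processing described above can be sketched as a 2D FFT over a frame of stacked chirps. The parameters are again illustrative, and the range migration of the moving object within the frame is neglected for simplicity; only the phase progression over slow time carries the velocity:

```python
import numpy as np

# Illustrative parameters (not Table 1a)
c, fc = 3e8, 60e9                     # speed of light, carrier frequency [Hz]
lam = c / fc
S, fs, N, M = 30e12, 2e6, 512, 128    # slope, sample rate, samples/chirp, chirps/frame
Tc = 750e-6                           # chirp repetition time [s]

t = np.arange(N) / fs
R0, v = 2.5, 1.0                      # assumed range [m] and radial velocity [m/s]

# Stack M chirps: the phase advances by 4*pi*v*Tc/lam per chirp
frame = np.empty((M, N), dtype=complex)
for m in range(M):
    frame[m] = (np.exp(2j * np.pi * (2 * S * R0 / c) * t)
                * np.exp(4j * np.pi * (R0 + v * m * Tc) / lam))

# Range FFT along fast time, Doppler FFT along slow time
rd = np.fft.fftshift(np.fft.fft2(frame), axes=0)
dop_bin, rng_bin = np.unravel_index(np.argmax(np.abs(rd)), rd.shape)
v_res = lam / (2 * M * Tc)            # velocity resolution [m/s]
v_est = (dop_bin - M // 2) * v_res
print(round(v_est, 2))
```

The peak in the resulting range-Doppler map lands at the simulated range bin and at a Doppler bin corresponding to the simulated velocity, up to the velocity resolution of about 0.026 m/s for these parameters.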
Object A has a higher amplitude and does not change position. Objects B and C both have a weaker
signal and move away from or towards the radar. This causes the spread and less bright signal seen in
the figure, because these objects cover more range whereas object A remains at a constant location.
(a) Range over a single chirp (b) Velocity over a single frame
at incident angle θ is
Δτ = [0, d sin(θ)/c], (49)
Δϕ = [0, 2πd sin(θ)/λ], (50)
for d the spacing between the antennas, c the speed of light, and λ the wavelength of the signal. The angle
of arrival can be determined by performing another Fourier transform, now over the aligned antennas.
When an array contains only a few antennas, however, this angle estimation is not very accurate, and
other methods are preferred [74]. There are multiple methods to determine the angle of arrival for only
two aligned antennas. Most of them rely on angle vectors and the spatial covariance matrix, from which
the respective signal strength is determined.
Suppose M antennas receive signals from L separate sources; then the received signal at antenna k ∈ M
from source l ∈ L is

S_k(t) ≈ s_l(t) e^(−i2πdk sin(θ_l)/λ) = s_l(t) a_k(θ_l), (51)

where s_l(t) for l = 0, 1, ..., L − 1 denotes the signal coming from the lth source, and
a_k(θ) = e^(−i2πdk sin(θ)/λ) is the angle vector. The combined received signal from all L sources at
antenna k is then

S_k(t) ≈ Σ_{l=0}^{L−1} s_l(t) a_k(θ_l). (52)
Suppose that the noise n(t) has zero mean, covariance matrix σ²I, and is uncorrelated with s(t). Then the
spatial correlation matrix is

R = E[x(t)x(t)^H] (55)
  = E[(As(t) + n(t))(As(t) + n(t))^H] (56)
  = A E[s(t)s(t)^H] A^H + E[n(t)n(t)^H] (57)
  = A R_s A^H + σ² I. (58)
The most common angle estimation methods can be divided into two categories: extrema search and matrix
shifting. The delay-and-sum (DS) method [22], the minimum variance distortionless response (MVDR)
method [27], and the multiple signal classifier (MUSIC) method [70] are extrema searching techniques.
Estimation of signal parameters via rotational invariance techniques (ESPRIT) is a well-known matrix-shifting
technique [67]. The MUSIC and ESPRIT methods have cubic complexity, whereas the DS and MVDR
methods have quadratic complexity and are therefore preferred [21]. Because MVDR has higher resolution
than DS, the MVDR method is chosen as the angle detection method.
The MVDR method is based on beamforming, and computes the signal strength for all possible directions.
Beamforming means that a set of weights w optimizes a specific parameter, here maximizing the signal
from the angle of interest. Hence consider the weighted signal
y(t) = w^H x(t). (59)
The goal of this method is to minimize the signal power belonging to other angles, while setting the
response for an individual angle to 1. Thus for each possible angle θ, the signal power is minimized w.r.t.
a single constraint,
min_w w^H R w (60)
s.t. w^H a(θ) = 1. (61)
Using Lagrangian multipliers, the signal power at angle θ is computed as [38, 81]

P(θ) = 1 / (a(θ)^H R⁻¹ a(θ)), (62)
R = E[S(θ)^H S(t)]. (63)
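A sketch of this MVDR spectrum, scanning a grid of candidate angles for two simulated sources; the array size, element spacing, source angles, and noise level are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d_over_lam, T = 8, 0.5, 4000          # assumed array and snapshot count
angles_true = np.deg2rad([-25.0, 10.0])  # assumed source angles

def steer(theta):
    return np.exp(-2j * np.pi * d_over_lam * np.arange(M) * np.sin(theta))

# Two uncorrelated sources plus noise, then the sample covariance R
S_sig = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
A = np.column_stack([steer(th) for th in angles_true])
X = A @ S_sig + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
R = X @ X.conj().T / T
Rinv = np.linalg.inv(R)

# MVDR spectrum P(theta) = 1 / (a^H R^-1 a), Eq. (62), over a scanning grid
grid = np.deg2rad(np.linspace(-80, 80, 321))
P = np.array([1.0 / np.real(steer(th).conj() @ Rinv @ steer(th)) for th in grid])

# The two strongest local maxima recover the source angles
loc = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
cand = np.rad2deg(grid[1:-1][loc])
top2 = cand[np.argsort(P[1:-1][loc])[-2:]]
print(np.sort(np.round(top2, 1)))
```

Because of the unit-gain constraint in Equation (61), the spectrum peaks at the source angles while power from all other directions is suppressed.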
For the three simulated objects as described in the previous section, the azimuth angle is computed and
shown as angle with respect to the range in Figure 18a, and in polar coordinates in Figure 18b. Figures
18c and 18d also show the elevation angle of the three objects in both domains.
Figure 18: Range w.r.t. azimuth and elevation angle of three simulated objects
For both angular dimensions the angles are known as a function of range. Because the three objects
have distinct ranges, the azimuth and elevation angles can be matched to the correct object, which is called
pair-matching [48]. However, objects do not always lie at distinct ranges. Therefore a method where both
angular dimensions are measured simultaneously is preferred. Combined-angle method implementations
exist for MUSIC and ESPRIT [20]. The procedure of this extension to combined angle estimation is
adopted and adjusted for the MVDR method.
The concept of the method stays the same, but the computations need to be adjusted. Consider the angle
matrix Aθ for the azimuth angle θ, and Aψ for the elevation angle ψ, both constructed as A according to
Equation (54). For two angle dimensions at the same time, the angle matrix is computed as the Kronecker product
Aθ,ψ = Aθ ⊗ Aψ . (64)
Both angle matrices are of dimension (2 · n), for n the number of angular segments over which the signals
are estimated. The dimension of the combined angle matrix then becomes (2² · n²), which also increases
the computational complexity.
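For two two-antenna arrays, the construction of Equation (64) comes down to a single `np.kron` call. A sketch with an assumed half-wavelength spacing and n angular segments per dimension:

```python
import numpy as np

M_az, M_el, d_over_lam = 2, 2, 0.5   # two antennas per dimension, as in the text
n = 91                                # assumed number of angular segments per dimension

def angle_matrix(n_ant, grid):
    # Columns are angle vectors a_k(theta) over the scanning grid
    k = np.arange(n_ant)[:, None]
    return np.exp(-2j * np.pi * d_over_lam * k * np.sin(grid)[None, :])

grid = np.deg2rad(np.linspace(-45, 45, n))
A_az = angle_matrix(M_az, grid)   # shape (2, n)
A_el = angle_matrix(M_el, grid)   # shape (2, n)

# Combined azimuth-elevation angle matrix via the Kronecker product, Eq. (64)
A_comb = np.kron(A_az, A_el)      # shape (4, n*n)
print(A_comb.shape)  # (4, 8281)
```

The quadratic growth of the column count, from n to n², is exactly the increase in computational complexity noted above, since the MVDR spectrum must now be evaluated over all n² azimuth-elevation combinations.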
the computational complexity. Figure 19 shows the combined angles for each of the simulated objects
from Table 1b separately. It can be seen that the strength of the signals is indeed in descending order.
Figure 19: Azimuth and elevation angles for three simulated objects
To show that the method also works for more challenging situations than three independent signals, a
spatially complex but stationary object is simulated and shown in Figure 20. The spatial dimensions of
the object vary but lie close to each other, which makes the angles more difficult to detect. Both angle
dimensions are estimated for the incoming signal using the combined-angle version of the MVDR
method. Figure 21 shows both angles for the object over variation in range. The shape of the object
can be clearly seen in these angular plots. In addition, although the entire object is simulated at equal
strength, it can be seen that the signal strength degrades as the range increases. This is in line with
expectation, because signal strength in general degrades as it has to travel further.
(a) R = 3.50m (b) R = 3.55m (c) R = 3.60m (d) R = 3.65m
(a) CA, nG = 2, nT = 16 (b) CA, nG = 4, nT = 16 (c) CA, nG = 8, nT = 16
Figure 22: CFAR point clouds for various interference estimators and varying number of guard cells nG ,
for three simulated objects from Table 1b.
when there are no other objects nearby. The interference estimation is based on cells that lie further
from the object, which results in a lower threshold and thus a higher chance of detection. The two
stripe-like detected regions are typical for SOCA-CFAR detection. Although no object is simulated at
these ranges, the low signal strength originating from the other objects is detected as presence. In the
middle there is a region where no presence is detected, because there both halves of the reference window
contain stronger signals from the same region. For the detected stripes, one half contained low signals
and thus resulted in a low threshold.
As expected, GOCA-CFAR results in smaller detected regions than the other two cell-averaging
variants. When the number of guard cells is increased to nG = 4 in Figure 22h, objects B and C are
masked because, similar to the SOCA-CFAR variant in Figure 22e, signals from other objects are included
in the training window and thus in the background interference estimation. Figure 22i shows that with
an even larger number of guard cells, cells near the objects are detected as presence, because the strong
signals from the object are now excluded and the threshold is lowered.
For the OS-CFAR generated point clouds of the three objects, the value of k is chosen as three quarters of
the training window. The different ranges of the three simulated objects are clearly detected in Figure 22j.
When the number of guard cells is increased for the OS-CFAR estimator variant, as shown in Figure 22k,
object A is no longer detected. Because object A has a stronger amplitude than the other two objects and
does not change range due to its zero velocity, its strong signal is concentrated at one location. Objects B
and C have a lower amplitude that is moreover spread over a larger range, which leads to significantly
lower signal strengths. The OS-CFAR estimator bases its statistically derived threshold on these weaker
objects' signals, so the strong signal from object A is not detected. Figure 22l shows that when the number
of guard cells is increased even further, the detected region grows because the training cells contain lower
signal strengths than in the previous situation.
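The interference estimator variants compared above can be illustrated with a one-dimensional sliding-window sketch. The guard/training window structure follows the description in Chapter 2; the threshold factor alpha and the test data are ad hoc assumptions here, not values derived from a false alarm probability:

```python
import numpy as np

def cfar_1d(power, n_guard, n_train, alpha, mode="ca", k_frac=0.75):
    """Sliding-window CFAR over a 1D power profile.

    mode: 'ca' (cell averaging), 'soca'/'goca' (smallest/greatest of),
          'os' (order statistic, k-th ranked reference cell).
    """
    N = len(power)
    detections = np.zeros(N, dtype=bool)
    w = n_guard + n_train
    for i in range(w, N - w):
        lead = power[i - w : i - n_guard]          # leading training cells
        lag = power[i + n_guard + 1 : i + w + 1]   # lagging training cells
        if mode == "ca":
            est = np.mean(np.concatenate([lead, lag]))
        elif mode == "soca":
            est = min(lead.mean(), lag.mean())
        elif mode == "goca":
            est = max(lead.mean(), lag.mean())
        else:  # 'os': k-th ranked value of the full reference window
            both = np.sort(np.concatenate([lead, lag]))
            est = both[int(k_frac * len(both)) - 1]
        detections[i] = power[i] > alpha * est
    return detections

# Exponentially distributed noise floor with one strong target
rng = np.random.default_rng(2)
p = rng.exponential(1.0, 400)
p[200] = 60.0
det = cfar_1d(p, n_guard=4, n_train=16, alpha=10.0)
print(det[200])
```

Swapping `mode` reproduces the qualitative behaviour discussed above: `soca` lowers the threshold near strong neighbours, `goca` raises it, and `os` depends on the choice of k relative to the reference window.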
Section 2.3 covered the incorrect use of the linear law detector in combination with the scaling factor that
belongs to the square law detector. To show the differences in detection performance, detections for the
same simulated objects and sliding window parameters are shown in Figures 23d, 23e, and 23f. When the
square law detector is applied, the difference between weak and strong signals becomes larger and the edge
is less gradual than for the linear law. As Figure 23 also shows, more cells are detected as presence
compared to the linear law.
Figure 23: Detection for increase in number of guard cells, CA-CFAR. Correct use of square law detector
in (a), (b) and (c), linear law detector in combination with incorrect scaling factor in (d), (e), and (f).
For the more complex simulated object from Figure 20, the point clouds for the CA-, SOCA-, GOCA-,
and OS-CFAR interference estimators have also been generated for an increasing number of guard cells,
shown in Figure 24. The object of detection is not a signal coming from a single location, but covers a
larger region over both the range and azimuth dimensions. Having enough guard cells therefore becomes
more important for correct detection. The CA-CFAR point clouds show that the object masks itself for a
small number of guard cells. In contrast to the situation with three individual objects, more guard cells
now do not result in masking, because there are no other strong signals from, for example, other objects.
The SOCA-CFAR estimator results in similar but slightly larger point clouds; the region near the object
is also detected. For a small number of guard cells, the interference estimation always includes signals
from the object itself. For the GOCA-CFAR estimator with nG = 2 in Figure 24g, only the cell belonging
to the strongest signal is detected. Increasing the number of guard cells to nG = 4 improves detection
performance, whereas too many guard cells lead to the same false detection phenomena as explained for
Figures 22f and 22i.
The OS-CFAR interference estimator has been designed for combined clutter-edge and multiple-object
situations, but this simulated situation contains neither. Hence for this simple situation with a more
complex object, using the OS-CFAR estimator does not result in good detection performance.
(a) CA, nG = 2, nT = 16 (b) CA, nG = 4, nT = 16 (c) CA, nG = 8, nT = 16
Figure 24: CFAR point clouds for various interference estimators and varying number of guard cells nG ,
for more complex simulated object from Figure 20.
For this specific situation, setting k to three quarters of the training window is too low, and results in
the region surrounding the simulated object being detected. The centre of the object is not detected,
because there the kth-ranked cell includes signals from the object itself. When the number of guard cells
is increased further, the outer borders of the detected band widen because the value of the kth signal is
lower. At the same time, the undetected band in the middle also grows.
(a) CA-CFAR (b) SOCA-CFAR (c) GOCA-CFAR
Figure 25: 3D figures for CFAR detection with nG = 4, nT = 16, for three simulated objects in Figures
25a, 25b, and 25c, and a more complex simulated object in Figures 25d, 25e, and 25f.
Until now the generated point clouds have been shown as two-dimensional representations; however, they
can just as easily be computed over three dimensions. Figure 25 shows the three-dimensional point clouds
generated for both the three-object simulation and the more complex object situation. It can be seen
that for both simulations the point clouds generated with the GOCA-CFAR estimator contain fewer
points, whereas the SOCA-CFAR estimator detects a larger number of points. The point clouds generated
with the OS-CFAR estimator contain zero detections and are therefore not included in the figure.
only one dimension has not been passed may include useful information on the shape of the object to
be detected. To analyse the effects of this proposed informative step, it has been included in the
implementation.
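The idea of retaining cells that pass detection in all but one dimension can be sketched by combining per-dimension detection masks. The random masks below are placeholders standing in for actual per-dimension CFAR outputs, and the combination rule is a simplified illustration of the proposed post-processing step, not the thesis' exact implementation:

```python
import numpy as np

# Hypothetical per-dimension detection masks over a range-azimuth grid:
# det_range passes the range-dimension CFAR, det_az the azimuth-dimension CFAR.
rng = np.random.default_rng(3)
shape = (64, 64)
det_range = rng.random(shape) > 0.8
det_az = rng.random(shape) > 0.8

ordinary = det_range & det_az      # passed detection in both dimensions
informative = det_range ^ det_az   # passed in exactly one dimension

print(ordinary.sum(), informative.sum())
```

The ordinary point cloud corresponds to the AND of all dimensions; the informatively improved cloud additionally keeps the XOR cells as a separate, visually distinguished layer.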
(a) Detection over both dimensions (b) Detection over azimuth (orange) and range
(blue) separately, and both (green).
Figure 26: CA-CFAR detection over the 2D range-azimuth plane for three simulated objects
Figure 26 shows the ordinary generated point cloud in the range-azimuth plane in 26a, and the proposed
informatively improved version in 26b. Here the improved point cloud does not appear to add much
useful extra information.
(a) Range, azimuth, and elevation (b) Range and azimuth (orange), range and
elevation (blue), and over all three dimensions
(green).
Figure 27a shows the 3D point cloud generated for the three simulated objects, and Figure 27b its
proposed informatively improved version. The latter includes the individual point clouds for when either
the azimuth or the elevation angular dimension has been excluded from the detection. Figures 27c and 27d
show the original 3D point cloud and its informatively improved version for the more complex simulated
object from Figure 20. For the three simulated objects, if only the improved point cloud were provided, it
would not be directly clear where the regions of the original detected point cloud are: parts of the
green point cloud are hidden by the other two. For the simulation of the more complex object this is not
a problem, but that could not have been known beforehand. It is therefore advised not to rely solely
on the point cloud resulting from this proposed post-processing step, but to view it as an addition
to the ordinary point cloud output.
4 Experiment
This chapter discusses the experiment, from the design in section 4.1 to the results and analysis of the
conducted measurements in section 4.2. To test the performance of the different CFAR interference level
estimators in realistic living rooms, a novel experiment had to be set up. The variables, and the choices
made regarding which properties to focus on and which to minimize the influence of, are elaborated on
in section 4.1.1. The set-up is described in section 4.1.2, including the radar sensor board that is used as
well as the layout of the rooms in which the experiment is performed.
The weight of this chapter lies with the obtained results, for which the data processing framework
described in Chapter 3 for the simulations is used. Hence the same angle detection method is used,
and the included CFAR interference level estimators are CA-, SOCA-, GOCA-, and OS-CFAR.
The reflected signal strengths are shown for different situations and rooms in section 4.2.1. Section
4.2.2 includes and discusses the point clouds generated for the different rooms, situations, and
interference level estimators. Furthermore, the informatively improved point clouds proposed in section
3.2.2 have been generated for the experimental data as well, and are shown and elaborated on in section
4.2.3. How this chapter and the covered objective relate to the general process from radar to application is
shown in Figure 28.
Figure 28: Relation between this chapter, the research objective covered, and the general
process from radar to application.
4.1.1 Variables
Living rooms are completely different from each other; they may vary with respect to many characteristics
and properties. Certain properties are likely to affect the reflection of signals and thus may also influence
detection performance. For instance, the shape, size, and building materials of the room directly
relate to multipath effects. To limit this variation, the experiment is conducted in rooms of the same
shape and size and in the same buildings. The measurements are performed at Van Boeijen, an
organisation for people with intellectual disabilities. Their location in Assen includes several buildings
with similar rooms, where residents have their own studio.
Just like ordinary living rooms, the interiors of these studios differ a lot. The two interior properties
that are focused on during this experiment are how filled the room is and the number of strong reflectors.
A highly occupied room may cause more complex multiple-object situations, whereas a large number of
strong reflectors may cause difficulties by masking other objects. Both properties are evaluated objectively
for each room, by letting three individuals fill out a small questionnaire categorizing all rooms.
Besides the interior of the room, the situation and occupancy by people are also of interest. Therefore
five variations in movement activity are tested in each room, ranging from no movement to multiple
movements close to each other. The five movement situations are described as follows:
1. No movement: No person present.
2. Large scale movement: Person walks through the room (> 1m).
3. Small scale movement A: Vertical arm movement (< 1m).
4. Small scale movement B: Horizontal hand movement (< 0.3m).
5. Small scale movements combined: Small scale movements A and B in front of each other.
During the recording of the room without anyone present, the background signals are measured, which
themselves also belong to objects. From this it can be evaluated whether the furniture can be clearly
detected. Additionally, evaluating the large scale movement of a person walking through the room
may provide insight into the detection performance at different locations within the room.
The small scale movements 3, 4, and 5 are visualized in Figure 29. Movement situation 3 varies over
the azimuth and elevation angles, while the movement in range is limited. Movement situation 4 varies
over the range, whereas movement over both angles is limited. By including these different movement
situations, differences in performance caused by movements over different spatial dimensions may also be
observed. By including the combined movement situation 5, movements that cross paths can be evaluated.
4.1.2 Set-up
Figure 30: Infineon BGT60TR13C radar sensor [30], and details of the antenna positions
For this experiment an Infineon BGT60TR13C XENSIV 60 GHz radar sensor has been used. Figure 30
shows the mmWave radar in the red square. The rest of the board is required so that the information
can be read out via a USB-C connection. Next to it is a zoomed-in representation of the antennas on the
mmWave radar. When the radar board is oriented as in the figure, the maximum azimuth angular range
is 80°, and the elevation angle has a maximum range of 50°. The region of interest for this experiment is
focused more on the centre and close to the floor than towards the ceiling. Therefore this orientation of
the radar is preferred. However, when the focus lies on the entire elevation range and the azimuth plane
is limited, the radar should be rotated, effectively exchanging the elevation and azimuth planes.
For reference, a depth camera has been mounted on top of the mmWave radar; the Intel RealSense D455
has been used for this. As can be seen in Figure 31, it does not capture colour like an ordinary camera,
but distance. Both sensors together are placed on a tripod at a height of 1.5 m in the corner of each
room, located at the green marker in Figure 32. The tripod is directed into the room such that most of
the room falls within the 80° azimuth angle.
The movement situations take place at the same location within one room. As mentioned before, multipath
effects can be caused by the walls, hence the positioning with respect to them matters. Not every room
has an empty spot in the exact same location; therefore the location of the movement activities differs per
room and is noted. Similar to the positioning within the room, characteristics of the person performing
the movements may also have an influence. Therefore it is noted who performs each movement, including
their features of interest with respect to reflecting signals, such as wearing a large watch or a belt with a
big buckle.
Figure 32: Map of a building at Van Boeijen [7], including the sensor setup location marked in green
4.2 Results and Analysis
The radar parameters from Table 1a are used, which are the same as for the simulations in the
previous chapter. This leads to a maximum range of 6.5 m; the figures therefore only include results
up to this distance. The maximum angles are defined as twice the half-power beam widths, and depend
on the antenna design and beam patterns. For this sensor this leads to maximum detection angles of
θaz = 80° and θel = 50° for the azimuth and elevation angles respectively.
(a) Empty interior, no presence (b) Full interior, no presence
(c) Empty interior, one person (d) Full interior, one person
(e) Empty interior, two people (f) Full interior, two people
Figure 33: Range-azimuth plots for different presence situations for a room with an empty interior and
almost no strong-reflecting materials, and for a room with a full interior and a lot of strong-reflecting
materials.
4.2.2 Point Clouds for Variations of Interference Estimators
Point clouds have been generated for each of the five movement situations in all thirteen individual
rooms, using the CA-, GOCA-, SOCA-, and OS-CFAR interference estimators. The most interesting
figures are discussed and included in this section; additional figures may be found in Appendix A.2. The
point clouds have been generated with nG = 4 guard cells and nT = 16 training cells on each side.
Figures 34, 35, and 36 show the point clouds generated with the three background interference level
estimators CA-, GOCA-, and SOCA-CFAR for three consecutive frames. The focus lies on the differences
between the three small scale movement situations: the arm and hand movements and their combination.
The interval between two frames is equal to the chirp repetition time Tc of 750 µs, which is less than a
millisecond. Although there is only a small period of time between the consecutive frames, the point
clouds differ per frame for every estimator variant used. The OS-CFAR estimator point clouds have also
been generated, but they are empty and are therefore not included in the figures.
Figure 34: Point clouds for CA-, GOCA, and SOCA-CFAR for the same three consecutive frames. One
person present in the room, small scale arm movement, almost empty interior and minimal
strong-reflecting materials.
The small scale arm movement is shown in Figure 34, together with the differences between the
interference estimator variants. The SOCA-CFAR estimator leads, as expected, to more points than the
other two. The CA- and GOCA-CFAR estimators lead to smaller but similar point clouds, where the
GOCA-CFAR point clouds contain fewer points than the CA-CFAR point clouds. While the
SOCA-CFAR point cloud changes a lot over the frames, one part is consistently detected at the same
location. This is the larger cloud in Figure 34i, which also corresponds to the location of the point clouds
of the other estimators. For both CA- and GOCA-CFAR, points are detected near the sensor in the first
frame which belong to something other than the person making the movement.
Figure 35: Point clouds for CA-, GOCA, and SOCA-CFAR for the same three consecutive frames. One
person present in the room, small scale hand movement, almost empty interior and minimal
strong-reflecting materials.
Comparing the hand movement point clouds in Figure 35 to the arm movement point clouds of Figure
34, it can be seen that the smaller hand movement leads to fewer points. Furthermore, where the arm
movement produced a constant part in the point clouds over the different frames and estimator variants,
this is not the case for the hand movement situation: the point clouds are very different per frame. The
location of the CA-CFAR cloud in Figure 35a does not appear in the other two frames, and only one of
the three sub-clouds of Figure 35b appears in the cloud of Figure 35c.
(a) CA-CFAR (b) CA-CFAR (c) CA-CFAR
Figure 36: Point clouds for CA-, GOCA, and SOCA-CFAR for the same three consecutive frames. Two
people present in the room, small scale arm and hand movements combined, almost empty interior and
minimal strong-reflecting materials.
Figure 36 shows the point clouds generated for the two small scale movements combined, and thus with
two people present. However, when comparing these point clouds to Figures 34 and 35, it is not apparent
that the number of people present has increased. Nonetheless, the shape of a human body could be
deduced from the point clouds in Figures 36c and 36f. Furthermore, for all three estimator variants the
point clouds change considerably over the frames. For the CA- and SOCA-CFAR estimators there is
again a part of the point cloud that remains constant over the frames; this is not the case for the
GOCA-CFAR generated clouds.
(a) CA-CFAR (b) CA-CFAR (c) CA-CFAR
(d) GOCA-CFAR (e) GOCA-CFAR (f) GOCA-CFAR
Figure 37: Point clouds for CA-, GOCA, and SOCA-CFAR for the same three consecutive frames. One
person present in the room, small scale arm movement, full interior and many strong-reflecting
materials.
Figure 37 shows the point clouds generated for the small scale arm movement, in a room with a full
interior and many strong-reflecting materials. The point clouds are similar to those generated for the
same situation in a more empty room, although these point clouds suggest the presence of a second
object. This might be caused by any of the objects present in the room.
(a) CA-CFAR (b) CA-CFAR (c) CA-CFAR
(d) GOCA-CFAR (e) GOCA-CFAR (f) GOCA-CFAR
Figure 38: Point clouds for CA-, GOCA, and SOCA-CFAR for the same three consecutive frames. One
person present in the room, small scale hand movement, full interior and many strong-reflecting
materials.
Figure 38 shows the point clouds for the small scale hand movement, which again leads to fewer points
than the arm movement situation in Figure 37 for the same room. This mirrors the relation seen between
the same movement situations in the room with a more empty interior. Compared to that other room,
measurements in the more crowded room lead to fewer detected points. This may be the result of
furniture blocking the view between the sensor and the person. Another likely possibility is that more
signals from other objects are included in the reference window, resulting in a raised threshold.
(a) CA-CFAR (b) CA-CFAR (c) CA-CFAR
Figure 39: Point clouds for CA-, GOCA, and SOCA-CFAR for the same three consecutive frames. Two
people present in the room, small scale arm and hand movement combined, full interior and many
strong-reflecting materials.
Looking at the point clouds generated for two people and the combined small scale movements as shown
in Figure 39, the shape of the point clouds does not suggest an increase in presence when compared to
Figures 37 and 38. The point clouds are similar to the two previously shown situations. Where the CA-
CFAR plots of the similar situation in the more empty room suggested a second location of presence
in the room, this does not become clear from any of the estimator variant plots for this more crowded
room.
To summarize the comparison of the point clouds generated with the different interference level
estimators CA-, GOCA-, SOCA-, and OS-CFAR: the CA- and GOCA-CFAR estimators result in
comparable point clouds for almost every situation. The GOCA-CFAR point clouds generally contain
fewer points than the CA-CFAR ones, but not to the degree by which the SOCA-CFAR generated point
clouds differ. The SOCA-CFAR point clouds contain many points that also change over the different
frames, and thus probably do not belong to the presence of an actual object. Therefore, although the
SOCA-CFAR estimator has been developed to deal with detection in multiple-target situations, it is not
preferred in this setting.
Although the OS-CFAR estimator generated point clouds have not been included, their detection
performance should not be forgotten. A likely reason why the OS-CFAR point clouds contain zero
detections is that inside a living room many reflected signals are received by the sensor, so setting k to
three quarters of the training window may result in too high a threshold.
Furthermore, for the more empty room with almost no strong-reflecting materials present in Figures
34, 35, and 36, the combined movement situation resulted in larger point clouds. For the room with a
fuller interior and many strong-reflecting materials in Figures 37, 38, and 39, it is the other way
around: the combined movement situation results in smaller point clouds.
(a) Empty interior, original point cloud (b) Empty interior, improved point cloud
(c) Full interior, original point cloud (d) Full interior, improved point cloud
Figure 40: Original point clouds and their informatively improved versions for CA-CFAR estimator
detection, small scale hand movement. (Top: almost empty interior and minimal strong-reflecting
materials; bottom: full interior and many strong-reflecting materials.)
Figure 40 shows the original and informatively improved point clouds as described in section 3.2.2, for the
small scale hand movement in two different room situations. From the original point cloud in Figure
40a, the shape of a human body is difficult to derive. However, the improved point cloud in Figure 40b
does suggest the shape of a body and thus adds information to the original point cloud.
Figure 40d also adds information to the original point cloud of Figure 40c. The improved point
cloud suggests that the two pairs of dots at the same azimuth angle and range but different elevation
angles belong to the same object. This object is the person standing in the room, which would have
been hard to determine from the original point cloud. Furthermore, Figure 40 clearly shows that rooms
with a fuller interior lead to fewer detected points in their point clouds, compared to the larger number
of points included for more empty rooms.
Figure 41 shows the original and improved point clouds for the same situation of the combined small
scale hand and arm movements, in a room with an almost empty interior and minimal strong-reflecting
(a) CA-CFAR, original point cloud (b) CA-CFAR, improved point cloud
(c) GOCA-CFAR, original point cloud (d) GOCA-CFAR, improved point cloud
Figure 41: Original point clouds and their informatively improved versions for small scale hand and arm
movements combined, in an almost empty interior with minimal strong-reflecting materials. (Top:
CA-CFAR estimator; bottom: GOCA-CFAR estimator.)
materials present. The differences between the CA-CFAR and GOCA-CFAR estimators can be seen; for
both estimators the improved point clouds add information to the original point clouds.
Figure 41d suggests the location of the second person in orange, at the same azimuth angle
and range as the separate part of the point cloud in Figures 41a and 41b. Although the second person
in the room is not directly apparent from the original point clouds or the individual improved point
clouds, combining the information from the improved point clouds of different interference estimators
may provide an even more complete picture of the actual situation in the room.
5 Discussion and Future Research
Current mmWave point clouds have their limitations, and indoor detection poses multiple difficulties.
Hence, before mmWave radar sensors can be used in an actual device for monitoring patients, the detection
process needs to become more robust. With this in mind, the five research objectives of this thesis were
determined. This chapter discusses these objectives, draws conclusions, and makes suggestions for
future research on each topic.
Using adaptive thresholds from the CFAR method for object detection is the most common procedure
to generate point clouds from radar signals. However, a complete overview covering all steps and their
variations, their interdependencies, and the assumptions the computations are based on has been missing.
The information is spread over multiple articles, papers, and textbooks. By gathering and structuring
all this information, this thesis provides a clear and complete overview of the CFAR object detection
method.
Not only are the steps and their computations included, but also the theory behind the method. The
detector applied to the data influences the computations in subsequent steps, as does the assumption of a
Gaussian or non-Gaussian distribution for the background interference. Together with the choice of
interference estimator, these three factors determine how the probability of false alarm should be
computed to maintain a constant false alarm rate while optimizing the probability of detection. From this
false alarm rate a scaling factor is computed that is incorporated in the hypothesis test. The detection
by hypothesis test relies on the Neyman-Pearson Lemma, which is explained and proved.
The scaling factor is directly related to the probability of false alarm, and thus also depends on the
detector, the assumed interference distribution, and the estimator used. Due to the lack of information
on the interdependencies of these steps, a detector may be combined with a scaling factor that does not
correspond to it. Such a mismatch can result in decreased detection performance, which is undesired.
The determination of the scaling factor, including its relation to the other steps of the method, is clarified.
Specific attention is given to the incorrect interchanging of the linear-law and square-law detectors, and its
possible effects are shown.
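For the most commonly treated special case, a square-law detector with Gaussian interference and the CA estimator, the scaling factor has a known closed form, α = N(P_fa^{-1/N} − 1) for N training cells. The relation between false alarm rate, scaling factor, and hypothesis test can be sketched as follows; this is a minimal illustration, and the function names and window sizes are our own choices, not those of the thesis implementation:

```python
import numpy as np

def ca_cfar_scaling_factor(n_train: int, p_fa: float) -> float:
    """Closed-form scaling factor for CA-CFAR with a square-law detector
    and Gaussian interference: alpha = N * (P_fa**(-1/N) - 1)."""
    return n_train * (p_fa ** (-1.0 / n_train) - 1.0)

def ca_cfar_detect(power, n_train=16, n_guard=2, p_fa=1e-3):
    """Sliding-window CA-CFAR over a 1-D power profile (e.g. one range
    cut). Returns a boolean detection decision per cell."""
    alpha = ca_cfar_scaling_factor(2 * n_train, p_fa)  # n_train cells per side
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for cut in range(n_train + n_guard, n - n_train - n_guard):
        lead = power[cut - n_guard - n_train : cut - n_guard]
        lag = power[cut + n_guard + 1 : cut + n_guard + 1 + n_train]
        interference = np.concatenate([lead, lag]).mean()  # CA estimate
        # Hypothesis test: cell under test against scaled interference level.
        detections[cut] = power[cut] > alpha * interference
    return detections
```

Note that for large N the factor approaches ln(1/P_fa), about 6.9 for P_fa = 10⁻³; for N = 16 it is already noticeably larger (about 8.6), which shows how the same P_fa maps to different thresholds depending on the estimator configuration.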
How the probability of false alarm, and thus also the scaling factor, is calculated depends strongly
on the assumed background interference distribution. This distribution determines a large part of the
threshold. However, no distribution gives an accurate representation of a realistic living room setting.
Therefore, computations that rely less heavily on the assumed interference distribution
might provide better scaling factors and thereby increased detection performance. Developing a new
method to calculate the scaling factor, dependent on other factors than those in the original CFAR method,
will take considerable work but can result in better point clouds. It is suggested to look into different ways of
including the background interference in the scaling factor determination.
From the generated point clouds it has also become clear that the shape of the detected object
cannot always be easily derived. Including additional information in the point cloud by the proposed
post-processing step may in some cases improve the point cloud. However, it does not always provide a
better picture of the actual situation. Setting a threshold is perhaps not the best method when more
information on the object is preferred. Instead of using CFAR detection methods, it is suggested for
further research to focus on alternative ways to determine presence and absence. Starting from the signal
strength information computed over the different dimensions, a possibility is to look into the shapes of
those signal figures and to recognize patterns across frames.
An implementation has been developed and tested for both simulations and acquired mmWave radar data.
The range and velocity are retrieved according to standard Fourier transform procedures. For determining
the angle of arrival, multiple methods can be considered. The angular resolution depends strongly
on the number of antennas on the sensor, but the choice of angle detection method also
influences it. The MVDR method was chosen for this implementation because it has the highest
angular resolution among the lower-complexity angle detection methods.
The method computes the angle of arrival over one angular dimension at a time. An extension to compute
both azimuth and elevation angles at the same time was not yet available. Therefore, the method
is adjusted according to the steps of other two-dimensional angle detection methods and included in
this implementation. Although the MVDR method is characterized as having low complexity, and thus
lower computational costs than other angle detection methods, the computational costs increase
quadratically for the extension to two angle dimensions at the same time.
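The one-dimensional case can be sketched as follows: the MVDR (Capon) spectrum scans steering vectors against the inverse sample covariance of the antenna snapshots. This is a minimal sketch with an assumed uniform linear array geometry and diagonal loading added for a stable inverse; it is not the thesis implementation itself:

```python
import numpy as np

def mvdr_spectrum(snapshots, n_angles=181, d=0.5):
    """One-dimensional MVDR (Capon) angle spectrum for a uniform linear
    array with element spacing d in wavelengths.
    snapshots: (n_antennas, n_snapshots) complex baseband samples."""
    n_ant, n_snap = snapshots.shape
    # Sample covariance, with diagonal loading for a stable inverse.
    r = snapshots @ snapshots.conj().T / n_snap
    r += 1e-3 * np.trace(r).real / n_ant * np.eye(n_ant)
    r_inv = np.linalg.inv(r)
    angles = np.linspace(-90.0, 90.0, n_angles)
    spectrum = np.empty(n_angles)
    for i, theta in enumerate(angles):
        # Steering vector for a plane wave arriving from angle theta.
        a = np.exp(-2j * np.pi * d * np.arange(n_ant)
                   * np.sin(np.radians(theta)))
        spectrum[i] = 1.0 / np.real(a.conj() @ r_inv @ a)
    return angles, spectrum
```

Extending this to azimuth and elevation simultaneously replaces the length-N steering vector by one over the full two-dimensional antenna grid, which is where the quadratic growth in computational cost mentioned above originates.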
Other angle detection methods were not implemented in this thesis; it is suggested to also look into
the effect of different angle detection methods. Improving this part of the signal processing may benefit
the eventually generated point clouds, since more accurate information can result in more accurate
detection. Additionally, the angular detection can be improved by using different mmWave sensors.
An array of two receiver antennas is the bare minimum for detecting the angle of arrival.
The accuracy of detection and the resolution increase when more receiver antennas are aligned. When the
number of antennas cannot be increased due to cost or physical limitations of the sensor or
device, virtual antennas can be included instead. This increases the angular resolution, but comes at
the cost of higher computational complexity.
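The virtual antenna construction can be illustrated with a small sketch: each transmitter/receiver pair behaves like one virtual element located at the sum of the two physical positions. The 3 Tx × 4 Rx layout below is hypothetical (chosen to resemble common MIMO mmWave evaluation boards), with positions in units of half wavelengths:

```python
import numpy as np

# Hypothetical physical positions in units of lambda/2.
tx_positions = np.array([0, 4, 8])      # 3 Tx, spaced 2 wavelengths apart
rx_positions = np.array([0, 1, 2, 3])   # 4 Rx, spaced lambda/2 apart

# Each Tx/Rx pair acts as one virtual element at the summed position,
# so 3 x 4 physical antennas yield a filled 12-element virtual array.
virtual = np.unique(tx_positions[:, None] + rx_positions[None, :])
print(virtual)
```

With this choice of spacings the 12 virtual elements form a contiguous uniform array, which is what improves the angular resolution without adding physical antennas; the cost is that the angle processing now runs over 12 channels instead of 4.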
Furthermore, an experiment has been designed and conducted in realistic living rooms, to demonstrate
and compare the performance of different interference level estimators for the CFAR detection method.
Living room situations differ from the usual experimental environments in the radar research
field, such as military or automotive settings. It is therefore not surprising that the performance of
the different interference estimators does not match their performance in those other situations, which
was the exact reason to look into this new setting. The CA- and GOCA-CFAR estimators perform
better than the SOCA- and OS-CFAR estimators, although the performance is still not at the desired
level. However, the performance of these estimators on the same data may vary when different signal
processing steps are implemented.
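For reference, the four estimators compared here differ only in how they reduce the training cells around the cell under test to a single interference level. A minimal sketch (the function name and the OS rank k are our own choices):

```python
import numpy as np

def interference_estimates(lead, lag, k=None):
    """Interference-level estimates of the four compared CFAR variants,
    for one cell under test with leading/lagging training cells
    `lead` and `lag` (power samples)."""
    cells = np.concatenate([lead, lag])
    k = k if k is not None else int(0.75 * len(cells))  # typical OS rank
    return {
        "CA":   cells.mean(),                   # average of all cells
        "GOCA": max(lead.mean(), lag.mean()),   # greatest-of half means
        "SOCA": min(lead.mean(), lag.mean()),   # smallest-of half means
        "OS":   np.sort(cells)[k - 1],          # k-th order statistic
    }
```

By construction SOCA ≤ CA ≤ GOCA for equal-sized halves, which makes SOCA the most permissive and GOCA the most conservative estimate at clutter edges; each variant also requires its own scaling factor to maintain the same probability of false alarm.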
The data acquired in this research is limited. It is advised to gather data at multiple scales in order to
obtain a comprehensive, reliable radar data set. Such a large amount of data would be useful for training
the point cloud generator to decide which points are of interest and which are not, instead of relying on a
threshold. A possibility is to label signals belonging to furniture such as wardrobes and couches, which can
then be treated as clutter and ignored during the rest of the detection process. However, whether these
objects are also of interest, and thus should be included in the point cloud, depends on the eventual
application.
References
[1] MD Abouzahra and RK Avent. “The 100-kW millimeter-wave radar at the Kwajalein Atoll”. In:
IEEE Antennas and Propagation Magazine 36.2 (1994), pp. 7–19.
[2] Muhammad Arsalan, Avik Santra, and Christoph Will. “Improved contactless heartbeat estimation
in FMCW radar via Kalman filter tracking”. In: IEEE Sensors Letters 4.5 (2020), pp. 1–4.
[3] Kshitiz Bansal et al. “Pointillism: Accurate 3d bounding box estimation with multi-radars”. In:
Proceedings of the 18th Conference on Embedded Networked Sensor Systems. 2020, pp. 340–353.
[4] Amalia E Barrios. “Considerations in the development of the advanced propagation model (APM)
for US Navy applications”. In: 2003 Proceedings of the International Conference on Radar (IEEE
Cat. No. 03EX695). IEEE. 2003, pp. 77–82.
[5] Bassem R. Mahafza. Radar Systems Analysis and Design Using MATLAB. 2000.
[6] Stephen Blake. “OS-CFAR theory for multiple targets and nonuniform clutter”. In: IEEE transac-
tions on aerospace and electronic systems 24.6 (1988), pp. 785–790.
[7] Van Boeijen. Plattegrond, Assen.
[8] Mahdi Chamseddine et al. “Ghost target detection in 3d radar data using point cloud based deep
neural network”. In: 2020 25th International Conference on Pattern Recognition (ICPR). IEEE.
2021, pp. 10398–10403.
[9] Xingyu Chen et al. “MetaWave: Attacking mmWave Sensing with Meta-material-enhanced Tags”.
In: The 30th Network and Distributed System Security (NDSS) Symposium. Vol. 2023. 2023.
[10] Yuwei Cheng et al. “A novel radar point cloud generation method for robot environment perception”.
In: IEEE Transactions on Robotics 38.6 (2022), pp. 3754–3773.
[11] C.Q. Cutshaw and L.S. Ness. Jane’s Ammunition Handbook: 2003-2004. Jane’s Ammunition Hand-
book. Jane’s Information Group, 2003. isbn: 9780710625380.
[12] Xiaobo Deng, Chao Gao, and Jian Yang. “Sea Clutter Amplitude Statistics Analysis by Goodness-
of-Fit Test”. In: Procedia Engineering 29 (2012), pp. 791–796.
[13] Nedap Healthcare internal document. Possible applications mmWave monitoring. 2023.
[14] Shichen Dong. Human Activity Recognition System Based on Millimeter Wave Radar. 2020.
[15] Mohamed B El Mashade. “Analysis of Cell-Averaging based detectors for χ2 fluctuating targets in
multitarget environments”. In: Journal of Electronics (China) 23 (2006), pp. 853–863.
[16] Mohamed B El Mashade. “Performance analysis of the modified versions of CFAR detectors in
multiple-target and nonuniform clutter”. In: Radioelectronics and Communications Systems 56.8
(2013), pp. 385–401.
[17] Jonah Gamba. Radar signal processing for autonomous driving. Vol. 1456. Springer, 2020.
[18] Ennio Gambi et al. “Millimeter wave radar data of people walking”. In: Data in brief 31 (2020),
p. 105996.
[19] Prashant P Gandhi and Saleem A Kassam. “Adaptive order statistic and trimmed mean CFAR
radar detectors”. In: Fourth IEEE Region 10 International Conference TENCON. IEEE. 1989,
pp. 832–835.
[20] Xin Gao et al. “On the MUSIC-derived approaches of angle estimation for bistatic MIMO radar”.
In: 2009 International Conference on Wireless Networks and Information Systems. IEEE. 2009,
pp. 343–346.
[21] Edno Gentilho Jr, Paulo Rogerio Scalassara, and Taufik Abrão. “Direction-of-arrival estimation
methods: A performance-complexity tradeoff perspective”. In: Journal of Signal Processing Systems
92.2 (2020), pp. 239–256.
[22] Lal Chand Godara. Smart antennas. CRC press, 2004.
[23] Junfeng Guan et al. “Through fog high-resolution imaging using millimeter wave radar”. In: Proceed-
ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020, pp. 11464–
11473.
[24] Francesco Guidi et al. “Environment mapping with millimeter-wave massive arrays: System design
and performance”. In: 2016 IEEE Globecom Workshops (GC Wkshps). IEEE. 2016, pp. 1–6.
[25] V Gregers Hansen and James H Sawyers. “Detectability loss due to "greatest of" selection in a cell-
averaging CFAR”. In: IEEE Transactions on Aerospace and Electronic Systems 1 (1980), pp. 115–
118.
[26] Ghufran M Hatem, JW Abdul Sadah, and Thamir R Saeed. “Comparative study of various cfar
algorithms for non-homogenous environments”. In: IOP Conference Series: Materials Science and
Engineering. Vol. 433. 1. IOP Publishing. 2018, p. 012080.
[27] S. Haykin. Adaptative Filter Theory. 3rd ed. Prentice Hall, New York, 1996.
[28] Xu Huang, Joseph KP Tsoi, and Nitish Patel. “mmWave Radar Sensors Fusion for Indoor Object
Detection and Tracking”. In: Electronics 11.14 (2022), p. 2209.
[29] Eugin Hyun and Jong-Hun Lee. “A new OS-CFAR detector design”. In: 2011 First ACIS/JNU In-
ternational Conference on Computers, Networks, Systems and Industrial Engineering. IEEE. 2011,
pp. 133–136.
[30] Infineon. “BGT60TR13C 60 GHz Radar Sensor Datasheet V2.4.6”. In: (2021).
[31] Inspectie Gezondheidszorg en Jeugd, Ministerie van Volksgezondheid, Welzijn en Sport. Zorg,
jeugdhulp en toezicht in tijden van personeelstekorten. December 2022.
[32] Cesar Iovescu and Sandeep Rao. “The fundamentals of millimeter wave sensors”. In: Texas Instru-
ments (2017), pp. 1–8.
[33] Dejan Ivković, Milenko Andrić, and Bojan Zrnić. “A new model of CFAR detector”. In: Frequenz
68.3-4 (2014), pp. 125–136.
[34] Mengjie Jiang et al. “4D High-resolution imagery of point clouds for automotive MmWave radar”.
In: IEEE Transactions on Intelligent Transportation Systems (2023).
[35] Xinrui Jiang et al. “Millimeter-wave array radar-based human gait recognition using multi-channel
three-dimensional convolutional neural network”. In: Sensors 20.19 (2020), p. 5466.
[36] Willie D Jones. “Keeping cars from crashing”. In: IEEE spectrum 38.9 (2001), pp. 40–45.
[37] Eyung W Kang. Radar system analysis, design, and simulation. Artech House, 2008.
[38] Steven M Kay. Modern spectral estimation. Pearson Education India, 1988.
[39] Matthias Kronauge and Hermann Rohling. “Fast two-dimensional CFAR procedure”. In: IEEE
Transactions on Aerospace and Electronic Systems 49.3 (2013), pp. 1817–1823.
[40] Vincent Y. F. Li and Keith M. Miller. “Target Detection in Radar: Current Status and Future
Possibilities”. In: The Journal of Navigation 50.2 (1997), pp. 303–313.
[41] Xinrong Li et al. “Signal processing for TDM MIMO FMCW millimeter-wave radar sensors”. In:
IEEE Access 9 (2021), pp. 167959–167971.
[42] Teck-Yian Lim et al. “Radar and camera early fusion for vehicle detection in advanced driver
assistance systems”. In: Machine learning for autonomous driving workshop at the 33rd conference
on neural information processing systems. Vol. 2. 7. 2019.
[43] Gui-Ru Liu et al. “A radar-based door open warning technology for vehicle active safety”. In: 2016
International Conference on Information System and Artificial Intelligence (ISAI). IEEE. 2016,
pp. 479–484.
[44] Hui Liu. Robot systems for rail transit applications. Elsevier, 2020.
[45] Yawei Liu and Zhiqiang Li. “Clutter Simulation Overview”. In: 2017 2nd International Conference
on Materials Science, Machinery and Energy Engineering (MSMEE 2017). Atlantis Press. 2017,
pp. 530–534.
[46] Harry D Mafukidze et al. “Scattering centers to point clouds: a review of mmWave radars for
non-radar-engineers”. In: IEEE Access (2022).
[47] Bassem R Mahafza. Radar systems analysis and design using MATLAB. CRC press, 2022.
[48] Laurence Mailaender et al. Advances in angle-of-arrival and multidimensional signal processing for
localization and communications. 2011.
[49] Asem Melebari, Amit Kumar Mishra, and MY Abdul Gaffar. “Comparison of square law, linear
and bessel detectors for CA and OS CFAR algorithms”. In: 2015 IEEE Radar Conference. IEEE.
2015, pp. 383–388.
[50] WL Melvin and JA Scheer, eds. Principles of Modern Radar, Vol. III: Radar Applications. SciTech
Publishing, 2014.
[51] Jerzy Neyman and Egon Sharpe Pearson. “IX. On the problem of the most efficient tests of statistical
hypotheses”. In: Philosophical Transactions of the Royal Society of London. Series A, Containing
Papers of a Mathematical or Physical Character 231.694-706 (1933), pp. 289–337.
[52] Minh Q Nguyen and Changzhi Li. “Radar and ultrasound hybrid system for human computer
interaction”. In: 2018 IEEE Radar Conference (RadarConf18). IEEE. 2018, pp. 1476–1480.
[53] Ross D. Olney et al. “COLLISION WARNING SYSTEM TECHNOLOGY”. In: 1995.
[54] Sameera Palipana et al. “Pantomime: Mid-air gesture recognition with sparse millimeter-wave radar
point clouds”. In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Tech-
nologies 5.1 (2021), pp. 1–27.
[55] Zhengyu Peng. “MmWave CFAR object detection: Unclear use of scaling factor”. E-mail conversa-
tion regarding the radarsimpy code.
[56] Zhengyu Peng et al. “A portable FMCW interferometry radar with programmable low-IF architec-
ture for localization, ISAR imaging, and vital sign tracking”. In: IEEE transactions on microwave
theory and techniques 65.4 (2016), pp. 1334–1344.
[57] Akarsh Prabhakara et al. “High Resolution Point Clouds from mmWave Radar”. In: 2023 IEEE
International Conference on Robotics and Automation (ICRA). IEEE. 2023, pp. 4135–4142.
[58] Farra Anindya Putri et al. “Development of FMCW Radar Signal Processing for High-Speed Rail-
way Collision Avoidance”. In: Jurnal Elektronika dan Telekomunikasi 22.1 (2022), pp. 40–47.
[59] R. Nowak. Statistical Signal Processing, Detection Theory. 2010.
[60] RS Raghavan. “Analysis of CA-CFAR processors for linear-law detection”. In: IEEE Transactions
on Aerospace and Electronic Systems 28.3 (1992), pp. 661–665.
[61] Narasimhan Raman Subramanyan and Ramakrishnan Kalpathi R. “Robust variability index CFAR
for non-homogeneous background”. In: IET Radar, Sonar & Navigation 13.10 (2019), pp. 1775–1786.
[62] Karthik Ramasubramanian and T Instruments. “Using a complex-baseband architecture in FMCW
radar systems”. In: Texas Instruments 19 (2017).
[63] Mark A Richards et al. Principles of modern radar. Citeseer, 2010.
[64] Mark A. Richards. Fundamentals of Radar Signal Processing, 1st ed. McGraw-Hill, 2005.
[65] James A Ritcey. “Performance analysis of the censored mean-level detector”. In: IEEE Transactions
on Aerospace and Electronic Systems 4 (1986), pp. 443–454.
[66] Hermann Rohling. “Radar CFAR thresholding in clutter and multiple target situations”. In: IEEE
transactions on aerospace and electronic systems 4 (1983), pp. 608–621.
[67] Richard Roy and Thomas Kailath. “ESPRIT-estimation of signal parameters via rotational invari-
ance techniques”. In: IEEE Transactions on acoustics, speech, and signal processing 37.7 (1989),
pp. 984–995.
[68] Mochammad Sahal et al. “Comparison of CFAR methods on multiple targets in sea clutter using
SPX-radar-simulator”. In: 2020 International Seminar on Intelligent Technology and Its Applica-
tions (ISITIA). IEEE. 2020, pp. 260–265.
[69] Shuji Sayama and Seishiro Ishii. “Suppression of Log-Normal Distributed Weather Clutter Observed
by an S-Band Radar”. In: (2013).
[70] Ralph Schmidt. “Multiple emitter location and signal parameter estimation”. In: IEEE transactions
on antennas and propagation 34.3 (1986), pp. 276–280.
[71] Merrill Ivan Skolnik. Introduction to radar systems. 1980.
[72] S Tokoro et al. “Electronically scanned millimeter-wave radar for pre-crash safety and adaptive
cruise control system”. In: IEEE IV2003 Intelligent Vehicles Symposium. Proceedings (Cat. No.
03TH8683). IEEE. 2003, pp. 304–309.
[73] E Udo, Agwu Amaechi, and Ogobuchi Okey. Analysis and Computer Simulation of a Continuous
Wave Radar Detection System for Moving Targets. Vol. 5. 2021.
[74] Barry D Van Veen and Kevin M Buckley. “Beamforming: A versatile approach to spatial filtering”.
In: IEEE assp magazine 5.2 (1988), pp. 4–24.
[75] NY Verona. “M 2006 IEEE Radar Conference”. In: Conference on Security Tec (CCST). Vol. 16.
2006, p. 19.
[76] Fang Wang et al. “Quantitative gait measurement with pulse-Doppler radar for passive in-home
gait assessment”. In: IEEE Transactions on Biomedical Engineering 61.9 (2014), pp. 2434–2443.
[77] Yuheng Wang et al. “m-activity: Accurate and real-time human activity recognition via millimeter
wave radar”. In: ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP). IEEE. 2021, pp. 8298–8302.
[78] Yunneng Yuan et al. “Two-dimensional FFT and two-dimensional CA-CFAR based on ZYNQ”. In:
The Journal of Engineering 2019.20 (2019), pp. 6483–6486.
[79] Xuezhi Zeng, Halldór Stefán Laxdal Báruson, and Alexander Sundvall. “Walking Step Monitoring
with a Millimeter-Wave Radar in Real-Life Environment for Disease and Fall Prevention for the
Elderly”. In: Sensors 22.24 (2022), p. 9901.
[80] Youwei Zeng et al. “FarSense: Pushing the range limit of WiFi-based respiration sensing with CSI
ratio of two antennas”. In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous
Technologies 3.3 (2019), pp. 1–26.
[81] Guangcheng Zhang, Xiaoyi Geng, and Yueh-Jaw Lin. “Comprehensive mpoint: A method for 3d
point cloud generation of human bodies utilizing fmcw mimo mm-wave radar”. In: Sensors 21.19
(2021), p. 6455.
[82] Peijun Zhao et al. “CubeLearn: End-to-end learning for human motion recognition from raw mmWave
radar signals”. In: IEEE Internet of Things Journal (2023).
[83] Peijun Zhao et al. “mid: Tracking and identifying people with millimeter wave radar”. In: 2019
15th International Conference on Distributed Computing in Sensor Systems (DCOSS). IEEE. 2019,
pp. 33–40.
[84] Qiangwen Zheng et al. “An improved scheme for high-resolution point cloud map generation based
on FMCW radar”. In: 2020 IEEE 11th Sensor Array and Multichannel Signal Processing Workshop
(SAM). IEEE. 2020, pp. 1–5.
[85] Wei Zhou et al. “Modified cell averaging CFAR detector based on Grubbs criterion in non-homogeneous
background”. In: IET Radar, Sonar & Navigation 13.1 (2019), pp. 104–112.
A Supplementary Figures
A.1 Signal Strength Polar Plots
Figure 42: Range-angle plot, empty interior and almost no strong-reflecting materials in the room.
(a) Range-azimuth, no presence (b) Range-elevation, no presence
Figure 43: Range-angle plots, empty interior and almost no strong-reflecting materials in the room.
(a) Range-azimuth, no presence (b) Range-elevation, no presence
Figure 44: Range-angle plots, empty interior and almost no strong-reflecting materials in the room.
(a) Range-azimuth, no presence (b) Range-elevation, no presence
Figure 45: Range-angle plots, full interior and a lot of strong reflecting materials in the room.
(a) Range-azimuth, no presence (b) Range-elevation, no presence
Figure 46: Range-angle plots, full interior and an average amount of strong-reflecting materials in the
room.
(a) Range-azimuth, no presence (b) Range-elevation, no presence
Figure 47: Range-angle plots, average coverage of interior and a lot of strong-reflecting materials in the
room.
A.2 3D Point Clouds
Figure 48: Point clouds for CA-CFAR for six consecutive frames. One person present, small scale hand
movement, empty interior and almost no strong-reflecting materials.
Figure 49: Point clouds for GOCA-CFAR for six consecutive frames. One person present, small scale
hand movement, empty interior and almost no strong-reflecting materials.
Figure 50: Point clouds for SOCA-CFAR for six consecutive frames. One person present, small scale
hand movement, empty interior and almost no strong-reflecting materials.
Figure 51: Point clouds for CA-CFAR for six consecutive frames. Two people present, small scale arm
and hand movement combined, empty interior and almost no strong-reflecting materials.
Figure 52: Point clouds for GOCA-CFAR for six consecutive frames. Two people present, small scale
arm and hand movement combined, empty interior and almost no strong-reflecting materials.
Figure 53: Point clouds for SOCA-CFAR for six consecutive frames. Two people present, small scale
arm and hand movement combined, empty interior and almost no strong-reflecting materials.