Radar Imaging and Holography
Other volumes in this series:
Volume 1 Optimised radar processors A. Farina (Editor)
Volume 3 Weibull radar clutter M. Sekine and Y. Mao
Volume 4 Advanced radar techniques and systems G. Galati (Editor)
Volume 7 Ultra-wideband radar measurements: analysis and processing
L. Yu. Astanin and A.A. Kostylev
Volume 8 Aviation weather surveillance systems: advanced radar and surface
sensors for flight safety and air traffic management P.R. Mahapatra
Volume 10 Radar techniques using array antennas W. Wirth
Volume 11 Air and spaceborne radar systems: an introduction P. Lacomme (Editor)
Volume 13 Introduction to RF stealth D. Lynch
Volume 14 Applications of space-time adaptive processing R. Klemm (Editor)
Volume 15 Ground penetrating radar, 2nd edition D. Daniels
Volume 16 Target detection by marine radar J. Briggs
Volume 17 Strapdown inertial navigation technology, 2nd edition D. Titterton and
J. Weston
Volume 18 Introduction to radar target recognition P. Tait
Volume 19 Radar imaging and holography A. Pasmurov and S. Zinovjev
Volume 20 Sea clutter: scattering, the K distribution and radar performance K. Ward,
R. Tough and S. Watts
Volume 21 Principles of space-time adaptive processing, 3rd edition R. Klemm
Volume 101 Introduction to airborne radar, 2nd edition G.W. Stimson
Volume 102 Low-angle radar land clutter B. Billingsley
Radar Imaging
and Holography
A. Pasmurov and J. Zinoviev
This publication is copyright under the Berne Convention and the Universal Copyright
Convention. All rights reserved. Apart from any fair dealing for the purposes of research or
private study, or criticism or review, as permitted under the Copyright, Designs and Patents
Act, 1988, this publication may be reproduced, stored or transmitted, in any form or by
any means, only with the prior permission in writing of the publishers, or in the case of
reprographic reproduction in accordance with the terms of licences issued by the Copyright
Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to
the publishers at the undermentioned address:
www.theiet.org
While the authors and the publishers believe that the information and guidance given in this
work are correct, all parties must rely upon their own skill and judgement when making use
of them. Neither the authors nor the publishers assume any liability to anyone for any loss
or damage caused by any error or omission in the work, whether such error or omission is
the result of negligence or any other cause. Any and all such liability is disclaimed.
The moral rights of the authors to be identified as authors of this work have been asserted
by them in accordance with the Copyright, Designs and Patents Act 1988.
List of figures
Introduction
References
Index
Chapter 1
Figure 1.1 The process of imaging by a thin lens
Figure 1.2 A schematic illustration of the focal depth of an optical image: (a) image of point M lying in the optical axis; (b) image of point A; (c) image of point B and (d) image of points A and B in the planes M1, M2 and M3
Figure 1.3 The process of optical hologram recording: 1 – reference wave; 2 – object; 3 – photoplate and 4 – object's wave
Figure 1.4 Image reconstruction from a hologram: 1 – virtual image; 2 – real image; 3 – zero diffraction order and 4 – hologram
Figure 1.5 Viewing geometry in computerised tomography (from Reference 15): Γm – circumference for measurements; Γc – circumference with the centre at point O enveloping a cross section; p – arbitrary point in the circle with the polar coordinates ρ and Φ; A, C and D – wide beam transmitters; B, C′ and D′ – receivers; γ–γ′, δ–δ′ – parallel elliptic arcs defining the resolving power of the transmitter–receiver pairs (CC′ and DD′)
Figure 1.6 A scheme of X-ray tomographic experiment using a collimated beam: 1 – X-rays; 2 – projection angle; 3 – registration line; 4 – projection axis and 5 – integration line
Figure 1.7 The geometrical arrangement of the G(x, y) pixels in the Fourier region of a polar grid. The parameters ϑmax and ϑmin are the variation range of the projection angles. The shaded region is the SAR recording area
Figure 1.8 Synthesis of a radar aperture pattern: (a) real antenna array and (b) synthesised antenna array
Chapter 2
Figure 2.1 Viewing geometry for a rotating cylinder: 1, 2, 3 – scattering centres (scatterers)
Figure 2.2 Schematic illustrations of aperture synthesis techniques: (a) direct synthesis implemented in SAR, (b) inverse synthesis for a target moving in a straight line and (c) inverse synthesis for a rotating target
Figure 2.3 The holographic approach to signal recording and processing in SAR: 1 – recording of a 1D Fraunhofer or Fresnel diffraction pattern of target field in the form of a transparency (azimuthal recording of a 1D microwave hologram), 2 – 1D Fourier or Fresnel transformation, 3 – display
Figure 2.4 Synthesis of a microwave hologram: (a) quadratic hologram recorded at a high frequency, (b) quadratic hologram recorded at an intermediate frequency, (c) multiplicative hologram recorded at a high frequency, (d) multiplicative hologram recorded at an intermediate frequency, (e) quadrature holograms, (f) phase-only hologram
Figure 2.5 A block diagram of a microwave holographic receiver: 1 – reference field, 2 – reference signal cos(ω0t + ϕ0), 3 – input signal A cos(ω0t + ϕ0 − ϕ), 4 – signal sin(ω0t + ϕ0) and 5 – mixer
Figure 2.6 Illustration for the calculation of the phase variation of a reference wave
Figure 2.7 The coordinates used in target viewing
Figure 2.8 2D data acquisition design in the tomographic approach
Figure 2.9 The space frequency spectrum recorded by a coherent (microwave holographic) system. The projection slices are shifted by the value fp0 from the coordinate origin
Figure 2.10 The space frequency spectrum recorded by an incoherent (tomographic) system
Chapter 3
Figure 3.1 A scheme illustrating the focusing properties of a Fresnel zone plate: 1 – collimated coherent light, 2 – Fresnel zone plate, 3 – virtual image, 4 – real image and 5 – zeroth-order diffraction
Figure 3.2 The basic geometrical relations in SAR
Figure 3.3 An equivalent scheme of 1D microwave hologram recording by SAR
Figure 3.4 The viewing field of a holographic radar
Figure 3.5 A schematic diagram of a front-looking holographic radar
Figure 3.6 The resolution of a front-looking holographic radar along the x-axis as a function of the angle ϕ
Chapter 4
Figure 4.1 The geometrical relations in a SAR
Figure 4.2 A generalised block diagram of a SAR
Figure 4.3 The variation of the parameter Q with the synthesis range Ls at λ = 3 cm, = 0.02 and various values of R
Figure 4.4 The dependence of the spatial correlation range of the image on normalised Ls for multi-ray processing (solid lines) at various degrees of incoherent integration De and for averaging of the resolution elements (dashed lines) at various Ge: λ = 3 cm, R = 10 km; 1, 5 – 0 (curves overlap); 2, 6 – 0.25(λR/2)^1/2; 3, 7 – (λR/2)^1/2; 4, 8 – 2.25(λR/2)^1/2
Figure 4.5 The variation of the parameter Qh with the number of integrated signals Ni at various values of Ka
Figure 4.6 The variation of the parameter Qe with the synthesis range Ls at various signal correlation times τc
Figure 4.7 The parameter Q as a function of the synthesis range Ls at various signal correlation times τc
Chapter 5
Figure 5.1 A schematic diagram of direct bistatic radar synthesis of a microwave hologram along arc L of a circle of radius R0: 1 – transmitter, 2 – receiver
Figure 5.2 A schematic diagram of inverse synthesis of a microwave hologram by a unistatic radar located at point C
Figure 5.3 The geometry of data acquisition for the synthesis of a 1D microwave Fourier hologram of a rotating object
Figure 5.4 Optical reconstruction of 1D microwave images from a quadrature Fourier hologram: (a) flat transparency, (b) spherical transparency
Figure 5.5 The dependence of microwave image resolution on the normalised aperture angle of the hologram
Chapter 6
Figure 6.1 The aspect variation relative to the line of sight of a ground radar as a function of the viewing time for a satellite at the culmination altitudes of 31°, 66° and 88°: (a) aspect α and (b) aspect β
Figure 6.2 Geometrical relations for 3D microwave hologram recording: (a) data acquisition geometry; a–b, trajectory projection onto a unit surface relative to the radar motion and (b) hologram recording geometry
Figure 6.3 The sequence of operations in radar data processing during imaging
Figure 6.4 Subdivision of a 3D microwave hologram into partial holograms: (a) 1D partial (radial and transversal), (b) 2D partial (radial and transversal) and (c) 3D partial holograms
Figure 6.5 Subdivision of a 3D surface hologram into partial holograms: (a) radial, (b) 1D partial transversal and (c) 2D partial
Figure 6.6 Coherent summation of partial holograms. A 2D narrowband microwave hologram: (a) highlighting of partial holograms and (b) formation of an integral image
Figure 6.7 Coherent summation of partial holograms. A 2D wideband microwave hologram: (a) highlighting of partial holograms, (b) formation of an integral image
Figure 6.8 The computational complexity of the coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images, (b) hologram samples
Figure 6.9 The relative computational complexity of coherent summation algorithms as a function of the target dimension for a narrowband microwave hologram: (a) transverse partial images/CCA, (b) hologram samples/CCA
Chapter 7
Figure 7.1 Characteristics of an imaging device in the case of partially coherent echo signals: (a) potential resolving power at C2 = 1, (b) performance criterion (1 – dc = 6.98 m, 2 – dc = 3.49 m and 3 – dc = 0)
Figure 7.2 Typical errors in the impulse response of an imaging device along the s-axis: (a) response shift, (b) response broadening, (c) increased amplitude of the response side lobes and (d) combined effect of the above factors
Figure 7.3 The resolving power of an imaging device in the presence of range instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σp = 0.04 m; 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s; (b) σp = 0.05 m, 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s
Figure 7.4 The resolving power of an imaging system in the presence of velocity instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σx,y = 0.1 m/s (other details as in Fig. 7.3), (b) σx,y = 0.2 m/s (other details as in Fig. 7.3)
Figure 7.5 Evaluation of the performance of a processing device in the case of partially coherent signals versus the synthesis time Ts and the space step of path instability correlation dc: 1 – dc = 6.98 m, 2 – dc = 3.49 m
Chapter 8
Figure 8.1 The normalised refractive index spectrum Φn(χ)/Cn² as a function of the wave number χ in various models: 1 – Tatarsky's model-I, 2 – Tatarsky's model-II, 3 – Carman's model and 4 – modified Carman's model
Figure 8.2 The profile of the structure constant Cn² versus the altitude for April at the SAR wavelength of 3.12 cm
Figure 8.3 The profile of the structure constant Cn² versus the altitude for November at the SAR wavelength of 3.12 cm
Chapter 9
Figure 9.1 The mean monthly convoy speed in the NSR changes from V0 (without satellite data) to V1 (SAR images used by the icebreaker's crew to select the route in sea ice). The mean ice thickness (hi) is shown as a function of the season. (N. Babich, personal communication)
Figure 9.2 (a) Photo of grease ice and (b) a characteristic dark SAR signature of grease ice. © European Space Agency
Figure 9.3 Photo of typical nilas with finger-rafting
Figure 9.4 A RADARSAT ScanSAR Wide image of 25 April 1998, covering an area of 500 km × 500 km around the northern Novaya Zemlya. A geographical grid and the coastline are superimposed on the image. © Canadian Space Agency
Figure 9.5 A RADARSAT ScanSAR Wide image of 3 March 1998, covering the boundary between old and first-year sea ice in the area north of Alaska. © Canadian Space Agency
Figure 9.6 (a) Photo of a typical pancake ice edge and (b) a characteristic ERS SAR signature of pancake ice. A mixed bright and dark backscatter signature is typical for pancake and grease ice found at the ice edge. © European Space Agency
Figure 9.7 A RADARSAT ScanSAR Wide image of 8 May 1998, covering the south-western Kara Sea. © Canadian Space Agency
Figure 9.8 An ENVISAT ASAR image of 28 March 2003, covering the ice edge in the Barents Sea westward and southward of Svalbard. © European Space Agency
Figure 9.9 An ERS-2 SAR image of 11 September 2001, covering the Red Army Strait in the Severnaya Zemlya Archipelago. © European Space Agency
Figure 9.10 An ERS-2 SAR image (100 km × 100 km) taken on 24 June 2000 over the Black Sea (region east of the Crimea peninsula) and showing upwelling and natural films
Figure 9.11 SST retrieved from a NOAA AVHRR image on 24 June 2000
Figure 9.12 A fragment of an ERS-2 SAR image (26 km × 22 km) taken on 30 September 1995 over the North Sea near the Norwegian coast and showing swell
Figure 9.13 An ERS-2 SAR image (100 km × 100 km) taken on 28 September 1995 over the North Sea and showing an oil spill, wind shadow, low wind and ocean fronts
Figure 9.14 An ERS-1 SAR image (100 km × 100 km) taken on 29 September 1995 over the North Sea showing rain cells
Figure 9.15 An ERS-2 SAR image (18 km × 32 km) taken on 30 September 1995 over the North Sea showing an internal wave and a ship wake
Figure 9.16 The scheme of the reconstruction algorithm
Figure 9.17 A typical 1D image of a perfectly conducting cylinder
Figure 9.18 The local scattering characteristics for a metallic cylinder (E-polarisation)
Figure 9.19 The local scattering characteristics for a metallic cylinder (H-polarisation)
Figure 9.20 A mathematical model of a radar recognition device
Chapter 6
Table 6.1 The number of spectral components of a PH

Chapter 8
Table 8.1 The main characteristics of the synthetic aperture pattern

Chapter 9
Table 9.1 Technical parameters of SARs borne by the SEASAT and Shuttle
Table 9.2 Parameters of the Almaz-1 SAR
Table 9.3 The parameters of the ERS-1/2 satellites
Table 9.4 SAR imaging modes of the RADARSAT satellite
Table 9.5 The ENVISAT ASAR operation modes
Table 9.6 The LRIR characteristics
Table 9.7 The variants of the sign vectors
Table 9.8 The valid recognition probability (a Bayes classifier)
Table 9.9 The valid recognition probability (a classifier based on the method of potential functions)
The analysis of the current state and tendencies in radar development shows that novel
methods of target viewing are based on a detailed study of echo signals and their
informative characteristics. These methods are aimed at obtaining complete data on a
target, with emphasis on revealing new stable parameters for target recognition. One
way of raising the efficiency of radar technology is to improve available methods of
radio vision, or imaging. Radio vision systems provide a high resolution, considerably
extending the scope of target detection and recognition. This field of radar science and
technology is very promising, because it paves the way from the classical detection
of a point target to the imaging of a whole object.
The physical mechanism underlying target viewing can be understood on a heuris-
tic basis. An electromagnetic wave incident on a target induces an electric current on
it, generating a scattered electromagnetic wave. In order to find the scattering prop-
erties of the target, we must visualise its elements making the greatest contribution
to the wave scattering. This brings us to the concept of a radar image, which can be
defined as a spatial distribution pattern of the target reflectivity. Therefore, an image
must give a spatial quantitative description of this physical property of the target with
a quality not less than that provided by conventional observational techniques.
Radio vision makes it possible to sense an object as a visual picture. This is very
important because we get about 90 per cent of all information about the world through
vision. Of course, a radar image differs from a common optical image. For instance,
a surface rough to light waves will be specular to radio waves (microwaves), and
images of many objects will look like bright spots, or glare. However, the repre-
sentation of information transported by microwaves as visual images has become
quite common. It took much time and effort to get a high angular resolution in the
microwave frequency band because of the limited size of a real antenna. It was not
until the 1950–1960s that a sufficiently high resolution was obtained by a side-looking
radar with a large synthesised antenna aperture. The synthetic aperture method was
then described in terms of the range-Doppler approach.
At about the same time, a new method of imaging in the visible spectrum emerged
which was based on recording and reconstruction of the wave front and its phase,
using a reference wave. A lens-free registration of the wave front (the holographic
technique), followed by the image reconstruction, was first suggested by D. Gabor in
1948 and re-discovered by E. Leith and J. Upatnieks in 1963. The two researchers
suggested a holographic method with a ‘side reference beam’ to eliminate the zeroth
diffraction order. This principle was later used in a new, side-looking type of radar.
A specific feature of holographic imaging is that a hologram records an integral
Fourier or Fresnel transform of the object’s scattering function. The emergence of
holography radically changed our conception of an object’s image. Earlier, humans
had dealt with images produced by recording the distribution of light intensity in a
certain plane. But objects can generate a light field or another kind of electromagnetic
field with all of its parameters modulated: the amplitude, phase, polarisation, etc.
This discovery considerably extended the scope of spatial information that could be
extracted about the object of interest.
It should be noted that holography brought about revolutionary changes only in
optics, because until then optics had possessed no means of recording the phase
structure of an optical field. But the application of holographic
principles to the microwave frequency band proceeded easily, giving excellent results.
This was due to the fact that radio engineering had employed methods of registration
of the electromagnetic wave phase long before the emergence of holography. For
many years, radar imaging developed independently of holography, although some
workers (E.N. Leith, W.E. Kock, D.L. Mensa, B.D. Steinberg) did note that many
intermediate steps in the recording and processing techniques for radar imaging were
quite similar to those of holography and tomography. These researchers, however,
only briefly reviewed the holographic principles just to point out the fundamental
similarity and difference between optical and radar imaging, but they did not make a
comprehensive analysis of this fact in the context of radiolocation.
E.N. Leith and A.L. Ingalls showed that the operation of a side-looking radar
should be treated in terms of a holographic approach. Holograms recorded in the
microwave frequency range were referred to as microwave holograms, and radar sys-
tems based on the holographic principle were called by E.N. Leith quasi-holographic.
In fact, the work done in those years became the basis for designing a special type of
radar to perform imaging. The research into radar imaging was developing quite inten-
sively, and many scientists made their contributions to it: L.J. Cutrona, A. Kozma,
D.A. Ausherman, G. Graf, J.L. Walker, W.M. Brown, D.L. Mensa, D.C. Munson,
B.D. Steinberg, N.H. Farhat, V.C. Chen, D.R. Wehner and others.
These efforts were accompanied by the development of tomographic techniques
for image reconstruction in medicine and physiology (X-ray imaging). Initially,
tomography was treated as a way of reconstructing the spatial distribution of a cer-
tain physical characteristic of an object by making computational operations with
data obtained during the probing of the object. This resulted in the emergence of
reconstructive computerised tomography possessing powerful mathematical meth-
ods. Later, tomographic techniques were suggested capable of reconstructing a
physical characteristic of an object by a mathematical processing of the field reflected
by it.
Naturally, there have been suggestions to combine the available methods of
radar imaging (e.g. the range-Doppler principles) with tomographic algorithms
(D.L. Mensa, D.C. Munson). At present, the work on radar imaging goes on, drawing
on the results of D.C. Munson, who was the first to demonstrate the applicability of
tomographic algorithms to radar data processing.
Chapter 4 considers the radar aperture synthesis during the viewing of partially
coherent and extended targets. The mathematical model of the aperture is also based
on the holographic principle; the aperture is thought to be a filter with a frequency-
contrast characteristic, which registers the space–time spectrum of a target. This
approach is useful for the calculation of incoherent integration efficiency to smooth
out low contrast details on an image.
In Chapter 5 we discuss microwave imaging of a rotating target, using 1D Fourier
hologram theory and find the longitudinal and transverse scales of a reconstructed
image, the target resolution and a criterion for an optimal processing of a Fourier
microwave hologram. The resolution of a visual radar image is found to be consistent
with the Abbe criterion for optical systems. One specific feature is that it is necessary to
introduce a space carrier frequency to separate two conjugate images and an image
of the reference source. Here we have an analogy with synthetic aperture theory, with
the exception that we employ the concept of a complex microwave Fourier hologram.
It is shown that there is no zeroth diffraction order in digital reconstruction. We have
formulated some requirements on methods and devices for synthesising this type of
hologram. This method is easy and useful to implement in an anechoic chamber.
Chapter 6 focuses on tomographic processing of 2D and 3D microwave holograms
of a rotating target in 3D viewing geometry with a non-equidistant arrangement of
echo signal records over the registered aspect variation (for space objects). The
suggested technique of image reconstruction is based on the processing of microwave
holograms by coherent summation of partial holograms. These are classified into 1D,
2D, 2D radial, as well as narrowband and wideband partial holograms. This technique
is feasible in any mode of target motion. The method of hologram synthesis com-
bined with coherent computerised tomography represents a new processing technique
which accounts for a large variation of real hologram geometries in 3D viewing. This
advantage is so far inaccessible to other processing procedures.
Chapter 7 is concerned with methods of hologram processing for a target moving
in a straight line and viewed by a ground radar processing partially coherent echo
signals. The signal coherence is assumed to be perturbed by such factors as a turbulent
medium, elastic vibrations of the target’s body, vibrations of parts of the engines, etc.
We suggest an approach to modelling the track instabilities of an aerodynamic target
and present estimates of the radar resolving power in a real cross-section region.
Chapter 8 focuses on phase errors in radar imaging, evaluation of image quality
and speckle noise.
Finally, possible applications of radar imaging are discussed in Chapter 9. The
emphasis is on spaceborne synthetic aperture radars for surveying the earth surface.
Some novel and original developments by researchers and designers at the Nansen
Environmental and Remote Sensing Centre in Bergen (Norway) and at the Nansen
International Environmental and Remote Sensing Centre in St Petersburg (Russia)
are described. They have much experience in processing holograms from various
SARs: Almaz-1 (Russia), RADARSAT (Canada), ERS-1/2 and ENVISAT ASAR (the
European Space Agency). Of special interest to the reader might be the information
about the use of microwave holography for classification of sea ice, navigation in
the Arctic, global monitoring of ocean phenomena and characteristics to be used
for surveying gas and oil resources. We illustrate the use of the holographic methods
in a coherent ground radar for 2D imaging of the Russian spacecraft Progress and
for the study of local radar responses to objects of complex geometry in an anechoic
chamber, aimed at target recognition.
To conclude, the methods and techniques described in this book are also appli-
cable to many other research fields, including ultrasound and sonar, astronomy,
geophysics, environmental sciences, resources surveys, non-destructive testing,
aerospace defence and medical imaging, that have already started to utilise this rapidly
developing technology. We hope that our book will also be used as an advanced text-
book by postgraduate and graduate students in electrical engineering, physics and
astronomy.
Acknowledgements
The idea to write a book about the application of holographic principles in radio-
location occurred to us at the end of the last century and was supported by the late
Professor V.E. Dulevich. We are indebted to him for his encouragement and useful
suggestions.
We express our gratitude to the staff members of the Nansen Centres (Bergen and
St Petersburg), who provided us with valuable information about the practical applica-
tion of a side-looking radar. We should like to thank V.Y. Aleksandrov, L.P. Bobylev,
D.B. Akimov, O.M. Johannessen and S. Sandven for their help in the preparation of
these materials.
Our deepest thanks also go to our colleagues E.F. Tolstov and A.S. Bogachev for
their excellent description of the criteria for evaluation of radar images. This book
is based on the results of our investigations that have taken a long period of time.
We have collaborated with many specialists who helped to shape our conception of a
coherent radar system. We thank them all, especially S.A. Popov, G.S. Kondratenkov,
P.Ya. Ufimtzev, D.B. Kanareykin and Yu.A. Melnik, whose contribution was par-
ticularly valuable. We also thank our students V.R. Akhmetyanov, A.L. Ilyin and
V.P. Likhachev for their assistance in the preparation of this book. We are also grateful
to L.N. Smirnova, the translator of the book, for her immense help in producing the
English version.
y′/y = −f/x = −x′/f′,   (1.1)
xx′ = ff′.   (1.2)
Figure 1.1 The process of imaging by a thin lens
The relation between the elements of an image and the corresponding elements of an object is known as a linear or transversal lens magnification V defined as
V = y′/y.   (1.3)
Since the lens is described by the equality f = −f′, Eq. (1.2) gives
xx′ = −f′².   (1.4)
Newton’s formulae relate the distances of the object and the image to the respective
focal points. However, it is sometimes more convenient to use their distances to the
respective principal planes. Let us denote these distances as a1 and a2 . Then using
Fig. 1.1 and Eq. (1.2), we can get
1/a2 − 1/a1 = 1/f′.   (1.5)
The linear magnification can be expressed through a1 and a2 as
V = a2/a1.   (1.6)
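As a quick numerical check of Eqs (1.4)–(1.6), here is a minimal Python sketch; the focal length and object distance are illustrative values, not taken from the book, and distances are signed (negative to the left of the lens):

# Newton's lens formulae (1.4)-(1.6) for a thin lens in air (f = -f').
f_back = 50.0    # back focal length f', mm (illustrative)
a1 = -200.0      # object distance from the principal plane, mm

# Lens equation (1.5): 1/a2 - 1/a1 = 1/f'
a2 = 1.0 / (1.0 / f_back + 1.0 / a1)

# Linear magnification (1.6)
V = a2 / a1

# Newton's form (1.4): x, x' are measured from the focal points F, F'
x = a1 + f_back            # object distance from the front focus F
x_prime = -f_back**2 / x   # image distance from the back focus F'

print(f"a2 = {a2:.2f} mm, V = {V:.3f}")               # a2 = 66.67 mm, V = -0.333
print(f"check: x' + f' = {x_prime + f_back:.2f} mm")  # equals a2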
Consider now the concept of focal depth in the image space [80]. When constructing
the image to be produced by a lens, we assumed that the image and the object were
in planes normal to the optical axis. Suppose now that the object AB, say, a bulb
filament, is inclined to the optical axis, as is shown in Fig. 1.2, while a photographic
plate is in the plane M1 normal to the optical axis of the objective lens. In order to
Figure 1.2 A schematic illustration of the focal depth of an optical image: (a) image of point M lying in the optical axis; (b) image of point A; (c) image of point B and (d) image of points A and B in the planes M1, M2 and M3.
find the image on the photoplate, we shall construct rays of light going away from
individual points of the object. The light beams going from the object AB to the
objective lens and from the objective lens to the image are conic with the lens as
the base and the points of the object and the image as the vertices. Imagine that
the image of the point M of an object lying in the optical axis is on a photoplate in
the plane M1 (Fig. 1.2(a) and (d)). Then the beam of rays converging onto this image
will have its vertex on the plate. The object’s extremal points A and B will produce
conic rays with the vertices in front of the photoplate (B′ in Fig. 1.2(c)) and behind it
(A′ in Fig. 1.2(b)). Thus, it is only the point M in the optical axis that will have its
image as a bright point M′ in Fig. 1.2(a). The end points A and B of the line will look
like light circles A′ and B′. The image of the line will look like M1 in Fig. 1.2(d). If
the photoplate is shifted towards A′ (Fig. 1.2(b)) or B′ (Fig. 1.2(c)), we shall have
different images M2 or M3 (Fig. 1.2(d)).
It follows from this representation that the image of a 3D object extended along
the optical axis will have different focal depths on the plate at all the points in the
image space. In practice, however, images of such objects have a good contrast.
Therefore, the objective lens possesses a considerable focal depth. This parameter
determines the longitudinal distance between two points of an object such that the
sizes of their images do not exceed the eye's unit resolution. Therefore, the classical recording
on a photoplate produces a 2D image, which cannot be transformed to a 3D image.
The third dimension may be perceived only due to indirect phenomena such as the
perspective.
Now let us describe the real and virtual optical images and see how the image
of a point object M can be constructed with rays. The rays go away from the object
in all directions. If one of the rays encounters a lens along its pathway, its trajectory
will change. If the rays deflected by the lens intersect when extended along the light
propagation direction, a point image will be formed at the intersection and can be
recorded on a screen or a photoplate. This kind of image is known as real. However,
when the rays intersect only when extended against the light propagation direction,
both the intersection point and the image are said to be virtual. The images in
Fig. 1.2 are real because they are formed by rays intersecting at their extension
along the light propagation.
An optical image possesses orthoscopic and pseudoscopic properties. Suppose
a 2D object has a surface relief; its image will be orthoscopic if it is not reversed
longitudinally: the convex parts of the object look convex on the image. Using the
above approach, we can show that the image formed by a thin lens is orthoscopic. If
an image has a reverse relief, it is termed pseudoscopic; such images are produced
by holographic cameras.
Thus, images produced by classical methods have the following typical charac-
teristics.
• Imaging includes only the recording of incident light intensity, while its wave
phase remains unrecorded. For this reason, this sort of image cannot be
transformed to a 3D image.
• An image has a limited focal depth.
• An image produced by a thin lens is real and orthoscopic.
Figure 1.3 The process of optical hologram recording: 1 – reference wave; 2 – object; 3 – photoplate and 4 – object's wave
where ωo = k sin θ, θ is the angle of incidence of the wave onto the photoplate located in the xOy plane, k = 2π/λ1 is the wave number and λ1 is the wavelength of the coherent light source.
The intensity of the interference pattern on the hologram is
I(x, y) = ao² + a²(x, y) + 2ao a(x, y) cos[ωo x + ϕ(x, y)].   (1.7)
In addition to the constant term ao² + a²(x, y), the hologram function in Eq. (1.7)
contains a harmonic term 2ao a(x, y) cos[ωo x + ϕ(x, y)] with the period
T = 2π/ωo = λ1 / sin θ. (1.8)
The quantity ωo which defines this period is known as the space carrier frequency
(SCF) of a hologram. For example, for a He–Ne laser beam (λ1 = 0.6328 µm)
incident onto a hologram at an angle of 30◦ , the SCF is ωo = 900 lines/mm. The
minimum period of the SCF is reached at θ = π/2 and is equal to the wavelength λ1. The a/ao
ratio is called the hologram modulation index.
It follows from Eq. (1.7) that the amplitude and phase distributions of the object’s
wave appear to be coded by the SCF amplitude and phase modulations, respectively.
As a result, a hologram turns out to be the carrier of space frequency which contains
spatial information, whereas a microwave is the carrier of angular frequency and
contains temporal information. Phase-only holograms record only the phase variation
rather than the amplitude.
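To see how the SCF codes the object wave, one can simulate a 1D hologram cross-section of the form (1.7) and inspect its spatial spectrum. The following numpy sketch uses arbitrary illustrative amplitude and phase profiles; the two sidebands at ±ωo carry the object information:

import numpy as np

# 1D hologram cross-section I(x) = ao^2 + a^2(x) + 2*ao*a(x)*cos(wo*x + phi(x)),
# cf. Eq. (1.7). a(x) and phi(x) are slowly varying illustrative profiles.
wavelength = 0.6328e-3                        # mm (He-Ne)
theta = np.deg2rad(30.0)                      # reference-wave incidence angle
wo = 2 * np.pi * np.sin(theta) / wavelength   # space carrier frequency, rad/mm

x = np.linspace(0, 2.0, 16384)                # mm
ao = 1.0                                      # reference amplitude
a = 0.3 * (1 + 0.5 * np.cos(2 * np.pi * 5 * x))   # object amplitude
phi = 2.0 * np.sin(2 * np.pi * 3 * x)             # object phase

I = ao**2 + a**2 + 2 * ao * a * np.cos(wo * x + phi)

# The spatial spectrum has a baseband term and two sidebands at +/- wo:
# the object information rides on the space carrier frequency.
spec = np.abs(np.fft.rfft(I - I.mean()))
freq = np.fft.rfftfreq(x.size, d=x[1] - x[0]) * 2 * np.pi    # rad/mm
print(f"carrier wo = {wo:.0f} rad/mm; spectral peak near "
      f"{freq[np.argmax(spec)]:.0f} rad/mm")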
The first stage of the holographic process is terminated by recording the quantity I(x, y): a photoplate records a hologram. The amplitude transmittance of an exposed and processed photoplate is
Tn(x, y) = I^(−γ/2),   (1.9)
where γ is the plate contrast coefficient. It is reasonable to take γ = −2 because
the hologram then corresponds to a sine diffraction grating which does not form
diffraction orders higher than the first one. So we have
t(x, y) = Tn(x, y) = I(x, y).
During the reconstruction, a hologram is illuminated by the same reference wave as
was used at the recording stage. The reconstruction occurs due to the light diffraction
on the hologram (Fig. 1.4). Immediately behind the hologram, a wave field with
several components is induced.
Figure 1.4 Image reconstruction from a hologram: 1 – virtual image; 2 – real image; 3 – zero diffraction order and 4 – hologram
The last term in Eq. (1.14) corresponds to Eq. (1.12) but usually it is not analysed
completely. In holographic theory (Eq. (1.7)), one often restricts one’s considera-
tion to the second and third terms. Commonly, the information about the object is
assumed to be distributed uniformly across the hologram aperture; in reality, however,
a hologram is synthesised from a set of microholograms. So the aperture is split into
Figure 1.5 Viewing geometry in computerised tomography (from Reference 15)
the quantity Ei is the part of E directly related to the initial wave front which is the
first to arrive at any point in Γc, while ES is composed of all effects scattered, often
repeatedly, by all the points in the Γc region.
The secondary probing effect must be given as
S(ϑ, Φ) = ∫_{l(A)}^{l(B)} g(ρ, Φ) dl,   (1.16)
where l is a coordinate going along the ray, whose initial and final points are denoted
as l(A) and l(B), respectively.
There are no dimensionality problems with this expression, because the measured
quantity S(ϑ, Φ) and the reconstructed quantity g(ρ, Φ) are 2D. So if S(ϑ, Φ) is
prescribed for a number of (ϑ, Φ) pairs sufficient for the description of g(ρ, Φ)
with the desired accuracy, the true density distribution may be reconstructed such
that the computational algorithm is stable. Equation (1.16) is a governing equation
in conventional tomography. At present, there are various reconstruction techniques
allowing the solution of this integral equation [88].
No doubt, it would be desirable to integrate the true image in the Γc region (in the
image space). For practical considerations, however, the data may be integrated in a
different space, whose properties depend on how the experimental data are related to
the density function g(ρ, Φ). The quantity to be measured is often a Fourier image of
the density distribution, so the data recording is said to be performed in the Fourier
space. An example of this type of recording is that in a radio telescope with a synthetic
aperture [118]. Although the data integration in the image space and the Fourier
space is identical theoretically, the practical algorithms for image reconstruction differ
Figure 1.6 A scheme of X-ray tomographic experiment using a collimated beam
G(X, Y) = ∫∫_{−∞}^{∞} g(x, y) exp[−j(xX + yY)] dx dy.
Figure 1.7 The geometrical arrangement of the G(x, y) pixels in the Fourier region
of a polar grid. The parameters ϑmax and ϑmin are the variation range
of the projection angles. The shaded region is the SAR recording area
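The following numpy sketch illustrates both Eq. (1.16) and recording in the Fourier space: the 1D Fourier transform of a discrete projection of g(x, y) coincides with the central slice of G(X, Y) (the projection-slice theorem). The elliptical test density is an arbitrary assumption:

import numpy as np

# Projection-slice check: the 1D FT of a projection of g(x, y) equals the
# central slice of the 2D FT G(X, Y). g is an illustrative elliptical phantom.
n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
g = ((x / 0.6)**2 + (y / 0.3)**2 < 1).astype(float)

# Projection at angle 0: line integrals along y (discrete form of Eq. (1.16))
proj = g.sum(axis=0)

# 1D spectrum of the projection vs. the Y = 0 slice of the 2D spectrum
slice_1d = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(proj)))
G2 = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g)))
slice_2d = G2[n // 2, :]

err = np.max(np.abs(slice_1d - slice_2d)) / np.max(np.abs(slice_2d))
print(f"relative mismatch between the two slices: {err:.2e}")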
The past decade has witnessed an ever increasing interest in radars with a very high
resolving power. For example, the ERS-1 and ERS-2 radars (side-looking synthetic
aperture radars (SARs) of the European Space Agency [62]) provide microwave
imagery of the earth surface with the resolution of 25 m × 25 m in the azimuth-range
coordinates. An earth area of 100 km × 100 km (100 km is the radar swath width) is
represented by 1.6 × 10^7 pixels. Modern ground radars have large antenna arrays with
an aperture of about 10^4–10^5 λ1, where λ1 is the radar wavelength. They provide an
angular resolution of 10^−4–10^−5 rad [129], so the radar vision field can be subdivided
into 10^4–10^5 beams.
A radar with a linear and angular resolution much higher than that of TV equipment
(7 × 10^5–10^6 pixels) is capable of producing microwave images of extended
targets (land areas and water surfaces) and complex objects (aircraft, spacecraft).
So it is reasonable to give a definition of a microwave image. At present, there is no
generally accepted definition, so we suggest the following formulation. A microwave
image is an optical image, whose structure reproduces on a definite scale the spa-
tial arrangement of scatterers (‘radiant’ points) on a target illuminated by microwave
beams. In addition to the arrangement, scatterers are characterised by a certain radi-
ance. It should be emphasised that the microwave beams can produce 3D images,
whereas the visible range of conventional optical systems gives only 2D images.
Available methods of microwave imaging can be grouped into three classes:
• direct methods using real apertures;
• methods employing synthetic apertures;
• methods combining real and synthetic apertures.
Imaging by direct methods can, in turn, be performed by real antennas or antenna
arrays. Real antennas were used in the early years of radar history. An earth area
was viewed by means of circular scanning or sector rocking of the antenna beam in
the azimuthal plane. Such systems were termed panoramic or sector circular radars.
Modern panoramic radars use 50–100λ1 apertures and their resolution is low. Since
the application of airborne panoramic antenna arrays is a hard task, the only way
to increase the resolution is to use the millimetre wavelength range. One is faced
with a similar problem when dealing with a side-looking real antenna mounted along
the aircraft fuselage. Such antennas may be as long as 10–15 m; at the wavelength
λ1 = 3 cm, their angular resolution is less than 10 min of arc and the linear resolution
of the earth surface is a few dozens of metres, which is too low for some applications.
For this type of antenna, the problem of increasing the aperture size was solved in a
radical way – by replacing a real aperture with a synthesised aperture.
Consider the potentialities of antenna arrays for aerial survey of the earth surface
and for ground imaging of targets flying at low altitudes. Suppose we are to design an
antenna array for aircraft imaging. The target has a characteristic size D and is illu-
minated by a continuous radar pulse. Then, according to the sampling theorem [103],
the echo signal function in the aperture receiver can be described by a series of records
taken at the intervals
δL = Rλ1/D,   (1.21)
where R is the distance to the target. The aperture size necessary for getting a desired
resolution Δl on the target can be defined in terms of Abbe's formula [131]:
Δl = λ1/(2 sin(α/2)) ≈ λ1 R/L,   (1.22)
where α is the aperture angle and L is its length.
The total number of receivers on an aperture of length L is
N = L/δL = DL/(λ1 R).   (1.23)
With Eq. (1.22), we get
N = D/Δl.   (1.24)
Let us illustrate this with a particular problem. Suppose we have λ1 = 10 cm,
R = 600 km, D = 20 m and Δl = 1 m. Then we get L = 60 km, δL = 3 km and
N = 20. A planar aperture of L × L in size must contain n = N² = 400 individual
receivers.
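The relations (1.21)–(1.24) are easy to check numerically; the short Python sketch below reproduces the example values:

# Aperture-sizing relations (1.21)-(1.24), with the book's example values.
wavelength = 0.10        # lambda_1, m
R = 600e3                # distance to the target, m
D = 20.0                 # characteristic target size, m
dl = 1.0                 # desired resolution on the target, m

L = wavelength * R / dl          # required aperture length, Eq. (1.22)
dL = R * wavelength / D          # sampling interval, Eq. (1.21)
N = D / dl                       # receivers per aperture line, Eq. (1.24)

print(f"L = {L/1e3:.0f} km, dL = {dL/1e3:.0f} km, "
      f"N = {N:.0f}, planar array n = {N**2:.0f}")
# -> L = 60 km, dL = 3 km, N = 20, n = 400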
This example shows that the applicability of direct imaging using large antenna
arrays is quite limited. Nevertheless, one of these techniques employing a radio cam-
era designed by B. D. Steinberg is of great interest [129]. The radio camera is based
on a pulse radar with a real large antenna array and an adaptive beamforming (AB)
algorithm. The principal task is to obtain a high resolution with a large aperture avoid-
ing severe restrictions on the arrangement of the antenna elements. The operation of a
self-phasing algorithm requires the use of an additional external phase-synchronising
radiation source with known parameters, which could generate a reference field and
would be located near the target. The radio camera provides an angular resolution
of 10^−4–10^−5 rad [129], and the image quality is close to that of optical systems.
There is one limitation – the radio camera has a narrow vision field. But still, it may
find a wide application in radar imaging of the earth surface, in surveying aircraft
traffic, etc.
To summarise, direct real aperture imaging of remote targets at distances of
hundreds and thousands of kilometres is practically impossible.
We turn now to methods employing a synthesised aperture. The idea of aperture
synthesis born during the designing of a side-looking aperture radar [32,74,86] was
to replace a real antenna array with an equivalent synthetic antenna (Fig. 1.8). An
antenna with a small aperture is to receive consecutive echo signals and make their
coherent summation at various moments of time. For a coherent summation to be
made, the radar must also be coherent, namely, it should possess a high transmitter
frequency stability and have a reference voltage of the stable frequency to compare
echo signals. We shall see below that a reference voltage is similar to a reference
wave in holography, with the only difference that the ‘wave’ is created in the receiver
by the voltage of a coherent generator.

Figure 1.8 Synthesis of a radar aperture pattern: (a) real antenna array and (b) synthesised antenna array
Under the conditions described above, the echo signals received by a real antenna
are saved in a memory unit as their amplitudes and phases. When an aircraft flies
over an earth area Δx = Ls, the signals are summed up at the moment Ts = Δx/V
(the final moment of synthesis), where V is the track velocity of the aircraft. As a
result, the width of the synthesised aperture pattern is
θs = λ1/(2Δx).   (1.25)
Owing to its large size, a synthetic aperture can provide very narrow patterns, so the
track range resolution
δx = θs R,   (1.26)
where R is the slant range to the target, may be very high even at large distances. To
illustrate, if the synthetic aperture length is Δx = 400 m and λ1 = 3 cm, the resolution
may be as high as δx = 6 m at R = 160 km.
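A short numerical check of Eqs (1.25)–(1.26) with the same example values:

# Azimuth resolution of a synthetic aperture, Eqs (1.25)-(1.26).
wavelength = 0.03        # lambda_1, m
dx = 400.0               # synthetic aperture length, m
R = 160e3                # slant range, m

theta_s = wavelength / (2 * dx)   # synthesised beamwidth, rad, Eq. (1.25)
delta_x = theta_s * R             # track-range resolution, m, Eq. (1.26)

print(f"theta_s = {theta_s:.2e} rad, delta_x = {delta_x:.1f} m")
# -> theta_s = 3.75e-05 rad, delta_x = 6.0 m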
Similar principles apply to a stationary ground radar and a moving target. If one
needs to obtain a high angular resolution, one can make use of the so-called inverse
aperture synthesis. We shall show in Chapter 2 that the resolution on the target is then
independent of the distance to it but is determined only by the radar wavelength and
the synthesis angle. As a result, one can obtain a very high angular resolution and
reconstruct the arrangement of the scatterers into a microwave image.
Thus, current approaches to microwave imaging, based on direct and inverse
synthesis of the aperture, provide 2D images which are structurally similar to optical
images. Besides, there are methods combining both approaches. They apply a real,
say, phased aperture and a synthetic aperture along the aircraft track. These techniques
also produce images similar in structure to optical images [2]; they will be discussed
in detail in Chapter 3. However, there are certain differences between the two types
of 2D images. We summarise the most important ones below.
1. The wavelengths in the microwave range are 10^3–10^6 times longer than in the
visible range, and this determines an essential difference in the scattering and
reflection by natural and man-made targets. In the visible range, the scattering by
man-made targets is basically diffusive, and it can be observed when the surface
roughness is of the order of a wavelength. This fact allows a target to be consid-
ered as a continuous body. In the microwave range, the picture is quite different
because there is no diffusion. The signals are reflected by scattering centres,
corner structures and specular surfaces. For this reason, a microwave image of
a man-made target is discrete and is made up of ‘dark’ pixels and those produced
by the strong reflectors we mentioned above. A good example is the microwave
image of an aircraft that was obtained in Reference 130. Reflection by natural
targets produces similar images. However, the reflection spectrum of the earth
surface contains an essential diffusion component.
2. For these reasons, the dynamic range of microwave images varies between 50
and 90 dB, while it rarely exceeds 20 dB in optical images, reaching the value
of 30 dB in bright sunlight.
3. The quality of an image does not depend on the natural luminosity of a target
and depends but slightly on weather conditions.
4. Image quality strongly depends on the geometry of the earth region to be
imaged, especially its slant angles, roughness and bulk features in the surface
layer. So microwave imaging is used for all-weather mapping, soil classifica-
tion, detection of boundaries of background surfaces, etc. There is no unified
optimal angle (in the vertical plane) for viewing geological structures, and
the best values should be adjusted to the local topography. For mountain-
ous and undulated reliefs, for example, a small radiation incidence relative
to the normal is preferable, while the imaging of plains requires the use of
large incidence angles, which increase the sensitivity to surface roughness. For
this reason, images produced by airborne SAR may be inadequate radiomet-
rically (speckle noise) resulting from a large variation in the incidence across
a swath because of a wide aperture pattern. Space SARs possess an approxi-
mately constant radiation incidence across a swath, so there is no speckle on the
image.
5. The density of blackened regions on a negative depends significantly on the
dielectric behaviour of the surface being imaged, in particular, on the presence
of moisture, both frozen and liquid, in the soil.
6. The microwave range gives the opportunity to probe subsurface areas. For exam-
ple, the microwave images of the Sahara desert obtained by a SIR-A SAR
showed the presence of dried river beds buried under the sands, which were
invisible on the desert surface. This opens up new opportunities to archaeologi-
cal surveys. It has been demonstrated experimentally that the probing radiation
depth in dry sand may be as large as 5 m. Besides, a sand stratum possessing
a low attenuation is found to enhance images of subsurface roughness due to
refraction at the air–soil interface. This effect is particularly strong for horizontal
polarisation at large incidence angles.
7. The specific propagation pattern of the long wavelengths in the microwave
range provides quality imagery of lands covered with vegetation.
8. The interaction of subwater phenomena such as internal waves, subsurface
currents, etc., with the ocean surface allows imaging the bottom topography
and various subwater effects.
9. The use of moving target selection allows one to make precise measurements
of the target’s radial velocity relative to the SAR.
10. An important factor in imagery is the proper choice of radiation polarisation.
11. Quite specific is imaging of urban areas and other anthropogenic targets. This is
due to a large number of objects with a high dielectric permittivity (e.g. metallic
objects), surface elements possessing specular reflection, resonance reflectors
and objects with horizontal and vertical planes that form corner reflectors. The
result of the latter is the following effect: streets parallel to the SAR carrier track
produce white lines on the image (the positive), while streets normal to the track
produce dark lines. Moreover, the presence or absence on the image of some
linear elements of the radar scene and an average density of blackening of the
whole image depend on the azimuthal angle, that is, the angle made by the SAR
beam in the plane tangential to the earth surface. This is a serious obstacle to
the analysis of images of urban areas.
12. An image contains speckle noise associated with the coherence of the imaging
process.
All radar targets can be classified into point and complex targets [138]. A point target
is a convenient model object commonly used in radar science and practice to solve
certain types of problems. It is defined as a target located at distance R from a radar
at the viewing point ‘0’, which scatters the incident radar radiation isotropically. For
such a target, the equiphase surface is a sphere with the centre at ‘0’. Suppose a radar
generates a wave described as
f(t) = a(t) exp j[ωo t + Φ(t)],
where f0 = ωo/2π is the carrier frequency, while a and Φ are the amplitude and
phase modulation functions imposed on the carrier frequency.
A point target located at distance R creates an echo signal
g(t) = σ f(t − 2R/c) = σ a(t − 2R/c) exp j[ωo(t − 2R/c) + Φ(t − 2R/c)],   (2.1)
where σ is a complex factor including the target reflectance and signal attenuation
along the track.
The Doppler frequency shift is implicitly present in the variable R. If we assume
that the radial velocity v1 is constant, we shall have
R = R1 + v1 t,   (2.2)
where R1 is the distance to the target at the initial moment of time t = 0.
Equations (2.1) and (2.2) describe a simple model target to be further used for the
analysis of the aperture synthesis and imaging principles.
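A short simulation of the model (2.1)–(2.2) makes the Doppler shift hidden in R explicit. The sketch below assumes, for simplicity, a CW carrier (a(t) = 1, Φ(t) = 0) and illustrative parameter values:

import numpy as np

# Point-target echo, Eqs (2.1)-(2.2): a CW carrier delayed by 2R(t)/c.
c = 3e8
f0 = 10e9                          # carrier frequency, Hz
R1 = 50e3                          # initial range, m
v1 = 100.0                         # radial velocity, m/s

fs = 1e6                           # sampling rate of the baseband record, Hz
t = np.arange(0, 0.1, 1 / fs)
R = R1 + v1 * t                    # Eq. (2.2)

# Complex echo after mixing down by exp(-j*2*pi*f0*t): phase = -4*pi*R/lambda
g = np.exp(-1j * 4 * np.pi * f0 * R / c)

# Estimate the Doppler shift from the phase slope; expected fD = -2*v1*f0/c
fD_est = np.angle(g[1:] * np.conj(g[:-1])).mean() * fs / (2 * np.pi)
print(f"estimated Doppler: {fD_est:.1f} Hz, expected: {-2*v1*f0/c:.1f} Hz")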
In practice, most radar targets refer to the class of complex targets. In spite
of a great variety of particular targets, we can offer a common criterion for their
classification. This criterion is based on the relationship between the maximum target
size and the radar resolving power in the coordinate space of the parameters R, α, β and
Ṙ, which are the range, the azimuth, the elevation angle and the radial velocity of the
target, respectively. An additional important parameter is the number of scattering
centres (scatterers). In accordance with this criterion, all complex targets can be
subdivided into extended compact targets and extended proper targets. A target is
referred to as extended compact if it has a small number of scatterers, its linear
and angular dimensions are much smaller than the radar resolution element, and
the difference between the radial velocities of the extremal scatterers is appreciably
smaller than the velocity resolution element. What is important is that this definition
also holds for targets located at large distances. On the other hand, a target which has
a size much larger than the radar resolution element and a large number of scatterers
should be referred to as extended proper. Earth and water surfaces are examples of
such targets.
We shall first discuss extended compact targets (airplanes, spacecrafts, etc.). In
the high-frequency region, these targets should be represented as a set of scatterers,
or radiant points. The mathematical model of an extended compact target, based on
the concept of scatterers, has the form [138]:
U = Σ_{m=1}^{M} √σm exp(jΦm),   (2.3)
where Zm (α) is the projection of the distance between the mth and the first scatterers
onto the bisectrix of the bistatic angle, k = 2π/λ1 is the wave number of the incident
wave, β is the bistatic angle and ξm is the residual phase contribution of the mth
scatterer, including the contribution of the creeping wave.
For scatterers retaining their position with changing bistatic angle, the mathematical model is
U = Σ_{m=1}^{M} √σm exp{j[2k Zm(α) cos(β/2) + ξm]}.   (2.5)
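A small sketch of the scatterer model: summing the contributions of Eq. (2.5) over aspect angle produces the interference pattern that inverse aperture synthesis later resolves into an image. The scatterer positions and RCS values are illustrative assumptions, and a monostatic geometry (β = 0) is taken:

import numpy as np

# Extended compact target, Eq. (2.5): coherent sum of M scatterers observed
# monostatically (beta = 0, so cos(beta/2) = 1) as the aspect angle varies.
wavelength = 0.03
k = 2 * np.pi / wavelength

z = np.array([0.0, 1.2, 3.0])        # scatterer positions along the target, m
sigma = np.array([1.0, 0.5, 2.0])    # scatterer RCS values, m^2 (illustrative)
xi = np.zeros(3)                     # residual phases, taken as zero here

alpha = np.linspace(-0.05, 0.05, 1001)        # aspect angle, rad
Zm = z[None, :] * np.sin(alpha[:, None])      # projections Zm(alpha)
U = (np.sqrt(sigma) * np.exp(1j * (2 * k * Zm + xi))).sum(axis=1)

print(f"|U| varies between {abs(U).min():.2f} and {abs(U).max():.2f}")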
Eϕ = (ia/2)(exp(ikR)/R) E0x Σ(ϑ),   Eϑ = (ia/2)(exp(ikR)/R) H0x Ψ(ϑ),   (2.6)
where k is the wave number and ϑ is the angle between the viewing direction and the
cylinder symmetry axis, π/2 ≤ ϑ ≤ π:
Σ(ϑ) = Σ(1) + Σ(2) + Σ(3),   (2.7)
Ψ(ϑ) = Ψ(1) + Ψ(2) + Ψ(3),   (2.8)
Σ(1) = f(1)[J1(ζ) + iJ2(ζ)] exp(ikl cos ϑ),   (2.9)
Σ(2) = f(2)[−J1(ζ) + iJ2(ζ)] exp(ikl cos ϑ),   (2.10)
Σ(3) = f(3)[−J1(ζ) + iJ2(ζ)] exp(−ikl cos ϑ),   (2.11)
ζ = 2ka sin ϑ,
where J1(ζ) and J2(ζ) are the first- and second-order Bessel functions, respectively. Indices
1, 2 and 3 correspond to three scatterers on the cylinder (Fig. 2.1).
Similar expressions can be obtained for the functions Ψ(1), Ψ(2) and Ψ(3) by
replacing f(1), f(2) and f(3) by g(1), g(2) and g(3), respectively. The latter are
defined as
f(1), g(1) = (sin(π/n)/n){[cos(π/n) − 1]^−1 ± [cos(π/n) − cos((π − 2ϑ)/n)]^−1},   (2.12)
f(2), g(2) = (sin(π/n)/n){[cos(π/n) − 1]^−1 ∓ [cos(π/n) − cos(2ϑ/n)]^−1},   (2.13)
f(3), g(3) = (sin(π/n)/n){[cos(π/n) − 1]^−1 ∓ [cos(π/n) − cos((π + 2ϑ)/n)]^−1},   (2.14)
where the upper signs refer to f and the lower signs to g, and n = 3/2.
The functions (2.7)–(2.14) can be used to calculate the scattering characteristics (the
RCS diagram, the amplitude and phase scattering diagrams, etc.) for an experi-
mental study of diffraction in an anechoic chamber (AEC). The last two diagrams,
for example, can be found as the modulus and argument of the functions (2.7) and
(2.8). However, the representation of the field as a sum of the fields re-transmitted
by scatterers provides information on individual scatterers. Such characteristics are
referred to as local responses [12]. The RCS diagrams for scatterers on a cylinder
can be found as the squared moduli of the functions (2.9)–(2.11) for scatterers 1, 2 and 3.
The phase responses of scatterers can be derived in the form of arguments of the
complex valued functions (2.9)–(2.11). A scattering model for a cylinder with bistatic
incidence was designed in Reference 12 in the EWM approximation. Besides, it is
shown in References 105 and 109 that the amplitude responses and the positions of
scatterers on a target can be studied experimentally using images reconstructed from
microwave holograms.
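The local responses are straightforward to evaluate numerically. The sketch below computes the f coefficients of Eqs (2.12)–(2.14) and the amplitude responses |Σ(m)| of the three scatterers from Eqs (2.9)–(2.11); the cylinder dimensions are illustrative assumptions, and the sign convention follows the reconstruction above:

import numpy as np
from scipy.special import jv   # Bessel function of the first kind

# Local responses of a finite cylinder, Eqs (2.9)-(2.14), f-coefficient branch.
n = 1.5                       # wedge parameter n = 3/2
wavelength = 0.03             # m
k = 2 * np.pi / wavelength
a, l = 0.1, 0.5               # cylinder radius and half-length, m (illustrative)

theta = np.linspace(np.pi / 2 + 0.1, np.pi - 0.1, 500)

def f_coef(shift, sign):
    # f(m) from Eqs (2.12)-(2.14); 'sign' selects the upper (+) or lower (-) term
    common = 1.0 / (np.cos(np.pi / n) - 1.0)
    edge = 1.0 / (np.cos(np.pi / n) - np.cos(shift / n))
    return np.sin(np.pi / n) / n * (common + sign * edge)

zeta = 2 * k * a * np.sin(theta)
phase = np.exp(1j * k * l * np.cos(theta))

s1 = f_coef(np.pi - 2 * theta, +1) * (jv(1, zeta) + 1j * jv(2, zeta)) * phase
s2 = f_coef(2 * theta, -1) * (-jv(1, zeta) + 1j * jv(2, zeta)) * phase
s3 = f_coef(np.pi + 2 * theta, -1) * (-jv(1, zeta) + 1j * jv(2, zeta)) / phase

for m, s in enumerate((s1, s2, s3), start=1):
    print(f"scatterer {m}: peak amplitude response {np.abs(s).max():.3f}")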
We now turn to models of extended proper targets. Such targets include
• land surface;
• sea surface;
• large anthropogenic objects like urban areas and settlements;
• special standard objects for radar calibration.
An analysis of models of all of these targets would go far beyond the scope of this book.
We give a brief survey of scattering models of sea surface in Chapter 4, including a
model of a partially coherent extended proper target, which is used in the analysis of
microwave radar imagery.
It should be noted that extended compact targets may also be partially coherent
(Chapter 7). In either case, these targets produce parasitic phase fluctuations which
perturb radar imaging coherence.
Target models are used for several purposes: to justify the principles of inverse
aperture synthesis, to interpret microwave images, to obtain local RCS of scatterers
on standard objects, and to calibrate measurements made in AECs.
We have mentioned in Chapter 1 that the use of a synthetic aperture is necessary if one
needs to obtain a high angular resolution of targets at large distances. It has been shown
by some researchers [73,109] that the aperture synthesis is, in principle, possible for
any form of relative motion of a target and a real antenna; what is important is that
the target aspect should change together with the relative displacement.
Today there are two basic methods of aperture synthesis – direct and inverse.
Direct synthesis can be made by scanning a relatively stationary target by a real
antenna (Fig. 2.2(a)). The target is on the earth surface and the antenna is located on
an aircraft. Radar systems with direct antenna synthesis are known as side-looking
synthetic aperture radars (SARs). The authors of Reference 85 have suggested for
them the term quasi-holographic radars (Chapter 3). Methods of aperture synthesis
using linear translational motion of a target or its natural rotation relative to a stationary
ground antenna are called inverse methods and radars based on such methods are
known as inverse synthetic aperture radars (ISARs) (Fig. 2.2(b) and (c)).

Figure 2.2 Schematic illustrations of aperture synthesis techniques: (a) direct synthesis implemented in SAR, (b) inverse synthesis for a target moving in a straight line and (c) inverse synthesis for a rotating target

There are also
combined approaches to field recording. For example, a front-looking holographic
radar (Chapter 3) combines direct synthesis along the track and transversal synthesis
with a one-dimensional (1D) real antenna array (Fig. 3.4). A spot-light mode of
synthesis is also possible: it uses both the linear movement of an airborne antenna
and its constant axial orientation to a ground target (Fig. 3.11). Radars based on this
principle are known as spot-light SAR [100].
Finally, ground radars operating in the inverse synthesis mode and viewing a
linearly moving target can combine a real phased antenna array with aperture synthesis.
Imaging radar signal processing can be considered from different points of view.
Since there is an essential difference between the direct and inverse modes of aperture
synthesis, the processing techniques should be described individually for each type
of radar.
2.3.1 SAR signal processing and holographic radar for earth surveys
The SAR aperture synthesis by coherent integration is treated in terms of
• the antenna approach [74];
• the range-Doppler approach [85,140];
• the cross-correlation approach [85];
• the holographic approach [85,143];
• the tomographic approach [100].
The use of a variety of analytical techniques in radar imaging leads to various
processing designs and physical interpretations of some of its details.
The first four approaches provide a fairly complete analysis of the effects of SAR
parameters on its performance characteristics and the results are generally consistent.
Each approach, however, enables one to see the image recording and reconstruction
in a new light, because each has its own merits and demerits. In this book, we
largely follow the holographic approach to the performance analysis of various SAR
systems, which involves the theories of optical and holographic systems. According
to one of the pioneers of optical and microwave holography, E. N. Leith, a holographic
treatment of SAR performance has proved most fruitful. The recording of a signal is
regarded as that of a reduced microwave hologram of the wave field along the azimuth,
that is, along the flight track. Illumination of such a hologram by coherent light
reconstructs the optical wave field, which is similar to the recorded microwave field
on a certain scale. A schematic diagram illustrating the holographic approach to SAR
signal recording and processing is presented in Fig. 2.3. For a point target, for instance,
an optical hologram is a Fresnel zone plate. When the plate is illuminated by coherent
light, the real and virtual images of the point target are reconstructed (Fig. 3.1).
Thus, a microwave image of a point target can be obtained directly owing to
the focusing properties of a Fresnel zone plate. The processing optics in that case is
Figure 2.3 The holographic approach to signal recording and processing in SAR:
1 – recording of a 1D Fraunhofer or Fresnel diffraction pattern of
target field in the form of a transparency (azimuthal recording of a
1D microwave hologram), 2 – 1D Fourier or Fresnel transformation,
3 – display
of the target around its axis. In its latter modification, the ECP algorithm is used for
3D and stroboscopic microwave imaging [13].
Polar format processing is another effective way to overcome the scatterers’ move-
ment through the resolution elements. It is based on the representation of radar data
in a 3D frequency space.
In our opinion, a very promising way of performing inverse aperture synthesis is holographic processing [109,146]. The possibility of using a holographic approach was
first suggested by E. N. Leith [85]. Not only does it provide a new insight into the
processes occurring in inverse synthesis but it also helps to find novel designs of
recording and reconstruction devices.
The schematic diagram of the holographic approach to ISAR signal recording
and reconstruction is similar to that shown in Fig. 2.3. The first step is to record a
1D quadrature or complex microwave Fourier hologram (the diffraction pattern of
the target field) (Section 2.3.1). The reference signal is a coherent heterodyne pulse.
The second step is the implementation of a 1D Fourier transform. The next step is
the image representation.
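The chain just described is compact enough to simulate numerically. The sketch below is our own illustration (not from the cited references); the wavelength, aspect increment and scatterer layout are assumed values. It records a 1D complex Fourier hologram of a slowly rotating three-point target and reconstructs the cross-range profile with a single FFT:

```python
# A minimal numerical sketch of the three processing steps above.
import numpy as np

lam = 0.03                          # radar wavelength, m (X-band, assumed)
kc = 2 * np.pi / lam                # carrier wave number
dtheta = 1e-4                       # aspect increment per pulse, rad (assumed)
N = 512                             # number of recorded pulses

# cross-range positions (m) and reflectivities of three point scatterers
scatterers = [(-1.0, 1.0), (0.0, 0.7), (1.5, 0.5)]

theta = (np.arange(N) - N / 2) * dtheta
# Step 1: the 1D complex Fourier hologram; for small aspect changes the
# range of a scatterer at cross-range x varies as x*theta, so each
# scatterer contributes one complex exponential over theta.
holo = sum(a * np.exp(-1j * 2 * kc * x * theta) for x, a in scatterers)

# Step 2: the 1D Fourier transform focuses each exponential into a peak.
image = np.fft.fftshift(np.fft.fft(holo * np.hanning(N)))

# Step 3: image representation; map the frequency axis to cross-range.
f = np.fft.fftshift(np.fft.fftfreq(N, d=dtheta))    # cycles per radian
x_axis = -f * np.pi / kc                            # cross-range, m
mag = np.abs(image)
idx = [i for i in range(1, N - 1) if mag[i - 1] < mag[i] > mag[i + 1]]
idx = sorted(idx, key=lambda i: mag[i])[-3:]
print("recovered cross-range positions (m):", np.round(np.sort(x_axis[idx]), 2))
```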
Tomographic processing can be performed using one of the three ways of image
reconstruction:
• reconstruction in the frequency region [9];
• reconstruction in the space region by using a convolution back-projection
algorithm [9];
• reconstruction by summation of partial images (Chapter 6).
The tomographic approach to ISAR analysis will be discussed in Section 2.4.2 and
in Chapter 6.
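As a preview of Chapter 6, the toy sketch below illustrates the second of the three ways listed above, reconstruction in the space region by convolution (filtered) back-projection. The phantom, grid size and nearest-pixel projection model are our own assumptions:

```python
# Filtered back-projection on a toy disc phantom (illustrative only).
import numpy as np

Npix = 65
angles = np.deg2rad(np.arange(0, 180, 2.0))
xs = np.arange(Npix) - Npix // 2
X, Y = np.meshgrid(xs, xs)
phantom = ((X - 8) ** 2 + Y ** 2 < 16).astype(float)   # a small disc

ramp = np.abs(np.fft.fftfreq(Npix))                    # |f| (ramp) filter
recon = np.zeros_like(phantom)
for th in angles:
    # projection P_theta(v): line integrals along u, with
    # v = y*cos(theta) - x*sin(theta), sampled to the nearest pixel
    v = (-X * np.sin(th) + Y * np.cos(th)).round().astype(int) + Npix // 2
    vc = v.clip(0, Npix - 1)
    proj = np.bincount(vc.ravel(), weights=phantom.ravel(), minlength=Npix)
    # convolution step: ramp-filter the projection in the frequency domain
    filt = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
    # back-projection: smear the filtered projection over the image plane
    recon += filt[vc]
recon *= np.pi / len(angles)

print("mean reconstruction inside/outside the disc: %.2f / %.2f"
      % (recon[phantom > 0].mean(), recon[phantom == 0].mean()))
```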
Figure 2.4 Schemes of microwave hologram detection (panels (c)–(f)): (c) a multiplicative detector, (d) an amplitude-phase detector, (e) a pair of quadrature amplitude-phase detectors I and II with a 90° phase shift of the reference signal and (f) a receiver-limiter followed by a phase detector
wave. In the second case a multiplicative hologram is formed [109] which is defined as
The latter can also be formed at high and medium frequencies (Fig. 2.4(c) and (d),
respectively).
In either case it is possible to record a quadrature microwave hologram
Figure 2.6 Illustration for the calculation of the phase variation of a reference wave
Fresnel and Fraunhofer holograms have found application in SAR theory, while
Fourier holograms are used in ISAR theory (Chapters 3 and 5).
Since the process of hologram synthesis implies that the radar is to be coherent,
the question arises as to what requirements must be imposed on the coherence. Let us
first define the concept of coherence in microwave radar theory. A signal is said to be
coherent if it shows no abrupt changes in the basic frequency, or if such changes are
small, of the order of 1–3◦ [14]. If the basic frequency changes are greater than these
values, the signal reflected from a target is called partially coherent. This happens when the coherence is perturbed by a number of factors, the most serious of which is the frequency instability of the transmitter. Within this definition, continuous radiation is always coherent for a period of time
when various instabilities in the transmitter performance can be neglected. When a
radar operates in a pulse mode, coherence is determined by an unambiguous relation
between the initial phase values of the carrier frequency of a train of pulses. The
above definition of coherence also applies to radar signals with known phase jumps
that can be avoided using coherent sensing. Since the first of the factors responsible
for coherence instability is the most serious one, there was a suggestion to introduce
in imaging theory the concept of frequency, rather than coherence, stability [87].
A comprehensive analysis of requirements on the frequency stability was made in
SAR theory by R.O. Harger [55]. A simplified approach is considered in Reference 87.
The latter will be discussed here in more detail in order to explain the physical
mechanism of SAR instability. The treatment of this problem has yielded the following
expression:

παT² ≤ π/4,   (2.25)

where α is the rate of linear frequency variation due to the instability of the radar generator and T is the time for a pulse to reach the target at distance R and to come back. It is clear from Eq. (2.25) that the permissible phase error παT² is π/4 for the time T = 2R/c.
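To give this requirement a numerical feel, the fragment below (our own illustrative ranges, not figures from the book) evaluates the largest admissible drift rate α from Eq. (2.25):

```python
# A back-of-the-envelope check of Eq. (2.25): pi*alpha*T^2 <= pi/4 bounds
# the permissible linear frequency drift alpha over the round trip T = 2R/c.
c = 3.0e8                              # speed of light, m/s
for R in (10e3, 100e3, 1000e3):        # target range, m (assumed values)
    T = 2 * R / c
    alpha_max = 1.0 / (4 * T ** 2)     # Hz/s, from pi*alpha*T^2 = pi/4
    print("R = %6.0f km: T = %.2e s, alpha_max = %.2e Hz/s"
          % (R / 1e3, T, alpha_max))
```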
Therefore, Eq. (2.25) is the criterion for the coherence length in the holographing
of reflecting targets; it should provide the frequency stability of the signal propagation
for a time consistent with the scene depth (a full analogy with optical holography).
Similar stability requirements can be imposed on coherent ISAR, in which coherence
is preserved if the signal phase deviation due to the frequency instability is less than
π/2. Then we have the expression

2π δfc T ≤ π/2,

where δfc is the deviation of the probing signal frequency for the time T. Neglecting the signal delay in the antenna-feeder waveguide, we get
Figure 2.7 The viewing geometry: r̄o defines the position of a scatterer relative to the target's centre of mass O, r̄a the position of the centre of mass relative to the radar and r̄ the scatterer's position relative to the radar
and measures the amplitude and phase of the complex envelope of an echo signal.
The target is assumed to consist of a small number of independent scatterers, whose
position relative to the centre of mass of the target O and the radar is defined by the
respective vectors (Fig. 2.7). The target moves along an arbitrary trajectory, rotating
around its centre of mass. The conditions for the far zone and a uniform field amplitude
of the wave incident on the target surface facing the radar are fulfilled. The algorithm
for the processing of the complex envelope of an echo signal, synthesised by the radar
receiver, is

sv(t) = ∫V g(r̄o) w(t − 2|r̄|/c) exp(−j2kc|r̄|) dr̄o,   (2.31)
where g(r̄o ) is the function of the target reflectivity and kc = 2π/λc is a wave number
corresponding to the wavelength of the radar carrier oscillation. Equation (2.31)
allows the estimation of the ĝ(r̄o ) reflectivity of every scatterer.
The integration of Eq. (2.31) is made over the target space. With the condition
for the far zone, the vector r̄ describing the position of an arbitrary scatterer relative
to the radar can be substituted by its projection on the line of sight:

|r̄| ≈ |r̄a| + r̂,   (2.32)

where r̂ = (r̄o · ū) is the relative range of the scatterer, and ū is a unit vector coinciding with the line of sight and directed away from the
target rotation centre towards the radar.
Generally, both terms of Eq. (2.32) vary during the viewing. However, the con-
tribution to the imaging is made only by the variation in the relative range r̂. On the
contrary, the range variation of the target’s centre of mass |r̄a | produces distortions
in the image. By substituting Eq. (2.32) into Eq. (2.31) and regrouping the terms for
where ∗ denotes convolution, h(r̄o) is the processing system response from a single point target in the space frequency domain, h(r̄o) = F⁻¹{H(f)}, and H(f) is a 3D aperture function.
It is clear from Eq. (2.39) that the value of ĝ(ro ) is a distorted representation of
the target reflectivity g(ro ). The distortion is largely due to the limited frequency
spectrum and the small angle step of the aspect variation.
Equation (2.39) can be transformed in the 3D frequency domain. More often,
however, one needs 2D images, which can be obtained using an appropriate 2D data
acquisition design (Fig. 2.8). Equation (2.38) then has the form:
S(f) = H(f) ∫∫_{−∞}^{∞} g(u, v) exp[−j2(kc + k)v] du dv.   (2.40)
Since the function

Pθ(v) = ∫_{−∞}^{∞} g(u, v) du   (2.41)

represents the projection of the target reflectivity on the v-axis, the target aspect being defined by the angle θ (Fig. 2.8), Eq. (2.40) can be written as
Sθ(f) = H(f) ∫_{−∞}^{∞} Pθ(v) exp[−j2(kc + k)v] dv.   (2.42)
To within the factor H(f), this integral is the Fourier transform of the projection:

Sθ(f) = H(f)Pθ(fp),   (2.43)

where Pθ(fp) is the Fourier transform of the projection Pθ(v) at the space frequency fp = (kc + k)/π.
The substitution of Eq. (2.41) into Eq. (2.43) yields
Pθ(fp) = ∫∫_{−∞}^{∞} g(u, v) exp[−j2π(0·u + fp v)] du dv   (2.44)
or
Pθ(fp) = P(−fp sin θ, fp cos θ),   (2.45)

Figure 2.8 The 2D data acquisition geometry: the (u, v) coordinates are attached to the radar line of sight, the (x, y) coordinates to the target and θ is the current aspect angle
where P(·) is the Fourier transform of the target reflectivity in the (x, y) coordinates. Equation (2.45) represents the formulation of the projection theorem underlying the tomographic imaging algorithms [34,57].
Bearing in mind that v = y cos θ − x sin θ, we go from Eq. (2.43) to the 2D Fourier transform in the (x, y) coordinates related to the target:

S(fx, fy) = H(f) ∫∫_{−∞}^{∞} g(x, y) exp[−j2π(fx x + fy y)] dx dy,   (2.47)

ĝ(x, y) = ∫∫_{−∞}^{∞} S(fx, fy) exp[ j2π(fx x + fy y)] dfx dfy = g(x, y) ∗ h(x, y).   (2.48)
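The projection theorem of Eq. (2.45) is easy to verify numerically. The sketch below (a test Gaussian reflectivity and nearest-pixel projections, both our own assumptions) compares the 1D Fourier transform of one projection with the corresponding central slice of the 2D spectrum:

```python
# Numerical check of the projection-slice theorem, Eq. (2.45).
import numpy as np

N = 128
xs = np.arange(N) - N // 2
X, Y = np.meshgrid(xs, xs)
g = np.exp(-((X - 10) ** 2 + (Y + 5) ** 2) / 50.0)   # test reflectivity g(x, y)

theta = np.deg2rad(30.0)
# projection on the v-axis, with v = y*cos(theta) - x*sin(theta)
v = (-X * np.sin(theta) + Y * np.cos(theta)).round().astype(int) + N // 2
P = np.bincount(v.clip(0, N - 1).ravel(), weights=g.ravel(), minlength=N)

slice_1d = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(P)))    # FT of projection
G2 = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g)))         # 2D FT of g
f = np.arange(N) - N // 2
fx = (-f * np.sin(theta)).round().astype(int) + N // 2         # central slice
fy = (f * np.cos(theta)).round().astype(int) + N // 2
slice_2d = G2[fy.clip(0, N - 1), fx.clip(0, N - 1)]

# the two slices agree up to nearest-pixel discretisation errors
err = np.max(np.abs(slice_1d - slice_2d)) / np.max(np.abs(slice_2d))
print("relative mismatch of the two slices: %.3f" % err)
```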
This approach to imaging can be implemented in the frequency and space domains (see Chapter 6). Note that the radar signal data are recorded in polar coordinates [8], while the imaging devices operate with a rectangular dot matrix. This inconvenience necessitates a cumbersome procedure of data interpolation and a compromise between the interpolation complexity (the greater the complexity, the better the image quality) and the available computation resources. It will be shown
Figure 2.9 The space frequency spectrum recorded by a coherent (microwave holo-
graphic) system. The projection slices are shifted by the value fpo from
the coordinate origin
taken along the transverse range represents the projection of the target reflectivity on
the RLOS.
We have shown in Chapter 2 that the aperture synthesis can be described in different
ways, including a holographic approach. It was first applied by E. N. Leith to a
side-looking synthetic aperture radar (SAR) [85,86]. He analysed the optical cross
correlator, which processes the received and the reference signals, and concluded
that ‘if the reference function is a lens, the record of a point object’s signal can
also be considered as a lens, because the reference function has the same functional
dependence as the signal itself’ [85]. The signal from a point object is a Fresnel lens,
and its illumination by a collimated coherent light beam creates two basic images – a
real image and a virtual image (Fig. 3.1). The author also pointed out that the images
formed by a Fresnel lens were identical to those created by correlation processing. He
drew the conclusion that ‘by reducing the optical system to only three lenses, we are, it
appears, led to abolishing even these, as well as the correlation theory upon which all
had been based’ [85]. This was a radically new concept of SAR. The radar theory and
the principles of optical processing were revised in terms of the holographic approach.
Its key idea is that signal recording is not just a process of data storage, like in antenna
or correlation theories, but it is rather the recording of a miniature hologram of the
wave field along the carrier's trajectory. For this, the recording is made on a two-dimensional (2D) optical transparency (the 'azimuth-range' format), or a complex reflected signal is recorded in two dimensions. The first procedure uses a photographic film, recording the range across the film and the azimuth (the pathway coordinate) along its length. In optical
recording, the image is reconstructed in the same way as in conventional off-axial
holography, that is, along the carrier’s pathway line. If a microwave hologram is
recorded optically, its illumination by coherent light reproduces a miniature optical
representation of the radar wave field. Therefore, the object’s resolution is determined
by the size of the hologram recorded along the pathway line, rather than by the aperture
Figure 3.1 A scheme illustrating the focusing properties of a Fresnel zone plate:
1 – collimated coherent light, 2 – Fresnel zone plate, 3 – virtual image,
4 – real image and 5 – zeroth-order diffraction
of a real radar antenna. The range resolution is provided by the pulse modulation of
radiated signals. Since the holographic approach to SAR is applicable only to its
azimuthal channel, the authors of the work [85] termed it quasi-holographic. In his
later publications on this subject, E. N. Leith pointed out that aperture synthesis should
be described as a microwave analogue of holography to which holographic methods
could be applied, rather than as holography proper.
Thus, a combination of SAR and a coherent optical processor represents a
‘quasi-holographic’ system, whose azimuthal resolution is achieved by holographic
processing of the recorded wave field. Both E. N. Leith and F. L. Ingalls believe [86]
that this representation is most flexible and physically clear. The use of the holo-
graphic approach for SAR analysis has so far been restricted to optical processors
[87]. There is a suggestion to represent the entire SAR azimuthal channel as a holo-
graphic system [143]. In that case the initial stage of the holographic process in this
channel (the formation of a microwave hologram) is the recording of the field scat-
tered by an artificial reference source. The second stage (the image reconstruction)
is described in terms of physical optics.
The role of the reference wave is played by a signal directly supplied to the synchronous detector; this is the so-called 'artificial' reference wave.
We shall describe now the receiving device of the synthetic aperture which records
a hologram on a cathode tube display. Usually, a hologram is recorded by modulating
the tube radiation intensity, with the photofilm moving with velocity vf relative to
the screen. For objects with different ranges Ro from the pathway line, one can
use a pulse mode and vertical display scanning. As a result, the device records a
series of one-dimensional (1D) holograms having different positions along the film
width, depending on the distance to the respective objects. Suppose all the objects
are located at a distance Ro to the pathway line. For simplicity, the radiated signal can
then be taken to be continuous because the pulsed nature of the radiation is important
only for the analysis of range resolution. Figure 3.3 shows an equivalent scheme
of 1D microwave hologram recording. A synthetic aperture is located at point Q
with the coordinates (x , 0) (x = vt, where t is the current moment of time), and a
hypothetical source of the reference wave is at point R(xr , zr ). The source functions in
a way similar to that of the reference wave during the hologram recording (Fig. 1.2).
The point P(xo , zo = −Ro ) belongs to the object being viewed along the xo -axis. If
Figure 3.3 An equivalent scheme of 1D microwave hologram recording: Q(x′, 0) – the current position of the synthetic aperture, P(xo, zo) – a point of the object and R(xr, zr) – a hypothetical source of the reference wave
the object's scattering characteristics are described by the function F(xo) and its size is small as compared with Ro, one can use the well-known Fresnel approximation to define the diffraction field along the x′-axis [103]:

Uo(x′) = Co (exp(ik1Ro)/√(λ1Ro)) ∫_{−∞}^{∞} F(xo) exp[ik1(xo − x′)²/(2Ro)] dxo,   (3.1)
Using the filtering properties of the δ-function and Eq. (3.3), we arrive at the
following equation for the hologram (with the constant phase terms ignored):
h(x′) = Ar Ao cos[ωx x′ − k1(x′)²/Ro + 2k1 x′xo/Ro],   (3.5)
where Ao is the scattered wave amplitude at the receiver input.
If Eq. (3.4) holds, expression (3.5) yields
h(x′) = Ar Ao cos[ωx x′ + 2k1 x′xo/Ro].   (3.6)
Thus, a synthetic aperture forms either a Fraunhofer or a Fourier hologram of a point
object. The former looks like a 1D Fresnel zone plate, in accordance with Eq. (3.5), and
the latter is a 1D diffraction grating with a constant step, in accordance with Eq. (3.6).
During the photographic recording, the holograms are scaled by substituting the x′-coordinate by the x-coordinate, where x = x′/nx and nx = v/vf. A constant term ho ('displacement') is added to Eqs (3.5) and (3.6) for the photographic registration of the bipolar function h(x′).
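The focusing property of such a record is easily demonstrated. In the sketch below (illustrative parameters of our own choosing) the bipolar record of Eq. (3.5) is multiplied by the conjugate quadratic phase, which reduces it to the constant-step grating of Eq. (3.6); a Fourier transform then recovers the azimuth position of the point object:

```python
# Simulating the 1D zone-plate record of Eq. (3.5) and focusing it.
import numpy as np

lam1 = 0.03                 # radar wavelength, m (assumed)
k1 = 2 * np.pi / lam1
Ro = 3000.0                 # range to the object line, m (assumed)
xo = 3.75                   # azimuth position of the point object, m
Ls = 120.0                  # synthetic aperture length, m (assumed)
N = 2048
xp = np.linspace(-Ls / 2, Ls / 2, N, endpoint=False)   # along-track x'

# bipolar record of Eq. (3.5) with the carrier omega_x omitted
h = np.cos(-k1 * xp ** 2 / Ro + 2 * k1 * xo * xp / Ro)

# conjugate quadratic phase: the zone plate becomes the grating of Eq. (3.6)
field = h * np.exp(1j * k1 * xp ** 2 / Ro)
img = np.abs(np.fft.fftshift(np.fft.fft(field)))
fx = np.fft.fftshift(np.fft.fftfreq(N, d=Ls / N))      # cycles per metre
f_peak = fx[np.argmax(img)]
print("reconstructed azimuth position: %.2f m (true %.2f m)"
      % (f_peak * lam1 * Ro / 2, xo))
```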
Images of two-point objects with the same coordinates xo but different ranges R1 and R2 will also be distorted due to the dependence of ξ on Ro. The resolving power from Rayleigh's criterion is

Δx = x1 − x2 = πR/(k1 nx vf T).   (3.16)

With Eq. (3.4), the permissible limit for this parameter in SAR with unfocused processing has the value

Δx = √(πλ1Ro)/4 ≈ 0.44 √(λ1Ro).   (3.17)
Note that a hologram is written on a photofilm (in the case of an optical processor)
or in a memory device (in the case of digital recording) continuously during the flight.
For this reason, the focused or unfocused aperture regime is prescribed only at the
reconstruction stage.
Synthetic aperture radar can also be considered in terms of geometrical optics,
which implies phase structure analysis of a hologram. One of the expressions in (3.2)
can be re-written as
h(x′) = Ar Ao cos(ϕr − ϕo),
where ϕo is the phase of the field scattered by the object. For a point object located
at point P (Fig. 3.3), we can write two expressions taking into account the SAR wave
propagation to the object and back:
ϕo = −2k1 (PQ − PO),
ϕr = −2k1 (RQ − RO),
where RO = Rr is the distance between a hypothetical reference wave source and
the coordinates origin. By expanding ϕr and ϕo into series, we get for the first-order terms

ϕr − ϕo ≅ −(4π/λ1)[(1/(2Rr) − 1/(2Ro))(x′)² − (xr/Rr − xo/Ro)x′].   (3.18)
In a simple case of xo = 0, xr = 0 and Rr = ∞ (a plane reference wave without linear phase shift), we have

ϕr − ϕo = 4π(x′)²/(2λ1Ro).
The space frequency in the interference pattern is

ν(x′) = (1/2π)∂(ϕr − ϕo)/∂x′ = 2x′/(λ1Ro).   (3.19)
At a certain value x′cr = (LS)max/2, the frequency ν may exceed the resolving power of the field recorder, which is defined in this case by the real aperture angle and is equal to νcr = 1/LR. From this we have the condition

(LS)max ≤ λ1Ro/LR = ϑR Ro.   (3.20)
The substitution of (LS)max into Eq. (3.12) gives a classical relation for the attainable limit of SAR resolution:

Δxlim = LR/2.
The pulsed nature of the signal allows determination of such an important radar parameter as the minimum repetition frequency of probing pulses, χmin. Obviously, the pulse mode is similar to hologram discretisation. The distance between two adjacent records, Δx = vf/χ, must meet the condition Δx ≤ [2ν(x′cr)]⁻¹, whence

χmin = 2v/LR.
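For example (our illustrative numbers), a carrier velocity of 200 m/s and a 2-m real antenna give:

```python
# The minimum repetition frequency chi_min = 2v/L_R for assumed values.
v = 200.0      # carrier velocity, m/s (assumed)
LR = 2.0       # real antenna length, m (assumed)
print("chi_min = %.0f Hz" % (2 * v / LR))   # 200 Hz
```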
Here xc and Rc denote the coordinates of the reconstructing wave source, μ = λ2/λ1 and m = nx⁻¹. The image coordinates for a point object are

1/RI = 1/RC ± (2μ/m²)(1/Ro − 1/Rr),

xI/RI = xc/RC ± (2μ/m)(xo/Ro − xr/Rr).
The value k = 0 is for the spherical aberration, k = 1 for the coma and k = 2 for the astigmatism. These relations can be used to find the maximum size of the synthetic aperture, (LS)max, from Rayleigh's formula (wave aberrations at the hologram edges should not be larger than λ2/4). Since the spherical aberration is the largest in the order of magnitude, we obtain

(LS)max = 2[λ1Ro³/(1 − 4μ²/m²)]^(1/4).   (3.22)

For typical conditions of SAR performance, the value of (LS)max calculated from Eq. (3.20) is smaller than (LS)max found from Eq. (3.22), that is, the effect of wave aberrations is inessential.
where g = g(x′) is the trajectory deviation from the x′-axis. At Ro ≫ xo, x′ and g, the binomial expansion ignoring all terms of the g² order gives an approximate expression for ϕo(x′):

ϕo(x′) ≅ −(4π/λ1)[((x′)² − 2xo x′)/(2Ro) − ((x′)⁴ − 4xo(x′)³ + 4xo²(x′)²)/(8Ro³) − zo g/Ro + (zo g(x′)² − 2zo g xo x′)/(2Ro³)].
The phase equation for a wave reconstructing one of the images has a standard form:

ϕI = ϕc ± (ϕo − ϕr),   (3.23)

ϕI = −(2π/λ2)[(x² − 2xI x)/(2RI) − (x⁴ − 4xI x³ + 4xI²x²)/(8RI³)].   (3.24)
The phases ϕc and ϕr are described by expressions similar to (3.24). The phase differences between the respective third-order terms relative to 1/RI in Eqs (3.23) and (3.24) represent aberrations described as

Δϕ = Δϕ⁽³⁾ + Δϕn⁽³⁾.

The aberrations Δϕ⁽³⁾ are defined by Eq. (3.21), and Δϕn⁽³⁾ has the form:
where
and g is the trajectory deviation. Here the quantities D3 , D4 and D5 are aberrations
arising from the trajectory instabilities.
Equation (3.25) describing distortions in the hologram phase structure can be used
to calculate the compensating phase shift directly during the synthesis. For this, SAR
should be equipped with a digital signal processor.
By applying Rayleigh's criterion to each term in Eq. (3.25), one can get the following conditions for maximum permissible deviations of the carrier's trajectory:

g3 ≤ λ2/(4D3) = λ1Ro/(8Zo) = λ1/(8 cos ϑo),   (3.26)

g4 ≤ λ2/(4D4 xmax) = λ1Ro³/(4LS Zo xo),   (3.27)

g5 ≤ λ2/(4D5 xmax²) = λ1Ro³/(Zo LS²).   (3.28)
Besides, if one knows the flight conditions and the carrier's characteristics, Eqs (3.26)–(3.28) can be used to find constraints imposed on the parameter cos ϑo and the maximum size of the synthetic aperture:

cos ϑo ≤ λ1/(8g),

LSmax ≤ λ1Ro²/(4g xo),

LSmax ≤ Ro √(λ1/g).

Normally, SAR meets the conditions LS ≪ Ro and xo ≪ Ro. So D4 and D5 can be neglected, leaving only the factor D3, which severely restricts the trajectory stability (see Eq. (3.26)).
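Condition (3.26) is stringent. The snippet below (assumed wavelength and view angles, our own numbers) shows that the admissible uncompensated deviation g3 is only a few millimetres at centimetre wavelengths:

```python
# Numerical feel for the dominant trajectory constraint, Eq. (3.26).
import numpy as np
lam1 = 0.03                       # wavelength, m (assumed)
for theta_deg in (30.0, 60.0, 80.0):
    g3 = lam1 / (8 * np.cos(np.deg2rad(theta_deg)))
    print("theta_o = %2.0f deg: g3 <= %.1f mm" % (theta_deg, g3 * 1e3))
```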
Effects arising in a synthetic aperture during the viewing of moving targets can be estimated in terms of physical optics. Suppose a point object moves radially (along the z-axis) at velocity vo, such that its displacement is smaller than the range resolution for the synthesis time T. Then, the equation for the hologram, ignoring constant phase terms, is

h(x) ∼ cos[ωx nx x + 2k1(vo/v)nx x − k1 nx²x²/Ro + 2k1 nx xo x/Ro − (k1/Ro)(vo/v)²nx²x²].   (3.29)
The substitution of Eq. (3.29) into (3.7) gives a condition for viewing the focused image:

ρ = ±k2Ro/{2k1nx²[1 + (vo/v)²]}.
Since vo/v ≪ 1, the image can be viewed practically in the same plane as that for an immobile object. Keeping this in mind, we can obtain, after the integration, a function describing one of the reconstructed images:

V(ξ) = Co sin{[ωx nx + 2k1(vo/v)nx + (2k1nx/Ro)(xo − nx ξ)]vfT/2} / {[ωx nx + 2k1(vo/v)nx + (2k1nx/Ro)(xo − nx ξ)]vfT/2}.
The image position is defined by

ξ = xo/nx + ωxRo/(2k1nx) + (Ro/nx)(vo/v).
Clearly, the object’s motion is equivalent to the use of additional carrier frequency
at the recording stage, which causes the image shift. The optical processor deals with
a real image recorded on a photofilm. The recording field on the film is limited by a
diaphragm cutting off the background. The value of vo may become so large that no
image will be recorded because of the shift.
The object's motion in the azimuthal direction (along the x′-axis) at velocity vo is equivalent to a change in the SAR's flight velocity. Then Eq. (3.9) describing the position of the focused image along the z-axis can be re-written as

ρ′ = ±λ1Ro/(2λ2n′x²) = ±λ1Rov²/(2λ2(v − vo)²), where n′x = (v − vo)/vf.
Therefore, the object's motion along the x′-axis changes the focusing conditions by the value

δρ = ρ′ − ρ = 2ρ(vo/v)(1 − vo/(2v))(1 − vo/v)⁻²,   (3.30)

where ρ is found from Eq. (3.9). If the condition vo ≪ v is fulfilled, we have

δρ ≈ 2ρvo/v.   (3.31)

Equation (3.30) yields

vo = v[1 − √(1 − δρ/(ρ + δρ))].
On the other hand, a simple geometrical consideration can give the following relation for the resolving power of SAR along the z-axis (longitudinal resolution):

Δρ = 2(Δx vf)²/(λ2v²) = 2(Δx)²/(λ2nx²).   (3.32)

The focusing depth Δρ is defined as the focal plane shift along the z-axis at which the azimuthal resolution Δx becomes twice as poor as the diffraction limit in Eq. (3.12).
The viewing of a focused image of an object moving at velocity vo requires an additional focusing of the optical processor. The object's velocities that require the focusing can be found from the condition Δρ < δρ, where δρ is given by Eq. (3.31). Using Eqs (3.9) and (3.32), we get

vo > 2(Δx)²v/(λ1Ro).
At lower velocities, there is no need to re-focus the processor, and a poorer image
quality may be assumed to be inessential.
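With typical values (assumed by us for illustration), the threshold velocity is of the order of 1 m/s:

```python
# Azimuthal target velocity above which the processor needs re-focusing,
# v_o > 2*(dx)^2*v/(lam1*Ro); all numbers are illustrative assumptions.
dx = 1.0        # azimuthal resolution, m
v = 200.0       # carrier velocity, m/s
lam1 = 0.03     # wavelength, m
Ro = 10e3       # range, m
print("re-focusing needed above v_o = %.2f m/s"
      % (2 * dx ** 2 * v / (lam1 * Ro)))
```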
To conclude Section 3.1, we should like to emphasise the following. The SAR
operation principles can be described by conventional methods (Chapter 2) that are
still widely used [73] or with a holographic approach representing the side-looking
synthetic aperture and the processor as an integral system for recording and recon-
structing the wave field. The analysis of the aperture synthesis can be based on
the well-elaborated principles of holography as well as on physical and geometrical
optics. The examples we have discussed support the physical clarity of the holo-
graphic approach and its value for SAR analysis. We can get a better insight into the
Figure 3.4 The viewing geometry of a front-looking holographic radar: the transmitter and receiver antennas move along the line of track at velocity V and altitude H above the survey zone
Figure 3.6 The resolution of a front-looking holographic radar along the x-axis as
a function of the angle ϕ
Figure 3.7 The resolution of a front-looking holographic radar along the z-axis as
a function of the angle ϕ
resolution δz even when its signal is continuous. This is due to the fact that a holo-
gram contains information about the three dimensions of the object, including the
longitudinal range (Chapter 2).
Figure 3.8 Generalised schemes of hologram recording (a) and reconstruction (b): the reference wave source is at (xr, yr, zr) and the object at (xo, yo, zo); the reconstructing wave source at (xp, yp, zp) produces a virtual image at (xi, yi, zi) and a real image at (x′i, y′i, z′i)
the right of the hologram. At λ1 = λ2 , zr = zo and zc > 0 both images are virtual,
whereas at λ1 = λ2 , zr = zo and zc < 0 they are real.
One can show with Eqs (3.36) that holographic images of objects more complex
than just a point, for example, consisting of two point sources, can be magnified or
diminished relative to the respective object [50,51].
As the reconstructed wave front is 3D, the transverse (along the x- and y-axes) and
the longitudinal (along the z-axis) magnifications obtained during the reconstruction
can be analysed separately.
From Eq. (3.36), the transverse magnifications are: for the real image (superscript 'r')

Mt^r = ∂xi/∂xo = ∂yi/∂yo = λ2zi/(λ1zo),   (3.37)
Here the superscript is for the real image and the subscript is for the virtual one.
The transverse magnification describes the ratio of the width and height of the image
to the appropriate parameters of the real object.
The longitudinal magnification can be found by differentiating Eq. (3.36) for zi:

Ml^r = ∂zi/∂zo = λ2zi²/(λ1zo²) ≅ (λ1/λ2)(Mt^r)²,   (3.40)

Ml^v = ∂zi/∂zo = −λ2zi²/(λ1zo²) ≅ −(λ1/λ2)(Mt^v)².   (3.41)
Geometrical similarity of the image and the object requires Mt^r = Ml^r, or

λ2zi/(λ1zo) = λ2zi²/(λ1zo²).
Therefore, a geometrical similarity is possible only if the image is reconstructed
at the site the object occupied during the recording.
By substituting the coordinate zi = zo into Eq. (3.36), we can get an expression for the coordinates of the reconstructing source:

1/zp = (1/zo)(1 ∓ λ2/λ1) ∓ λ2/(λ1zr).   (3.42)
Another way of obtaining an undistorted image is to change the scale of the linear
hologram size by a factor of m at the transition from the recording to the reconstruction
[50]. At m < 1, the hologram becomes smaller while at m > 1 it becomes larger.
The coordinates of an image reconstructed from a hologram diminished m times can be found from

xi = ±m(λ2zi/(λ1zo))xo ∓ m(λ2zi/(λ1zr))xr − (zi/zp)xp;

yi = ±m(λ2zi/(λ1zo))yo ∓ m(λ2zi/(λ1zr))yr − (zi/zp)yp;   (3.43)

zi = [1/zp ± m²λ2/(λ1zr) ∓ m²λ2/(λ1zo)]⁻¹.
zo = mzi . (3.48)
The substitution of Eq. (3.48) into Eq. (3.36) yields the coordinates of the reconstructing source; in particular, for zp we have

1/zp = (m/zo)(1 ± m²λ2/λ1) ∓ mλ2/(λ1zr).   (3.49)
Figure 3.9 Recording (a) and reconstruction (b) of a two-point object for finding
longitudinal magnifications: 1, 2 – point objects, 3 – reference wave
source and 4 – reconstructing wave source
Figure 3.10 The focal depth of a microwave image: 1 – reconstructing wave source,
2 – real image of a point object and 3 – microwave hologram
Δzi = Ml^v Δzo   (3.57)

or

Δzi = −(λ1/λ2)(Mt^v)² Δzo.   (3.58)
With the relation for the transverse magnification (3.52), one can write

Δzi = −(λ1/λ2)Mt^v (zo/xo) Δxi.   (3.59)
The last factor in Eq. (3.59) can be written as

zo/xo = ctg αo,   (3.60)
where αo is the aperture angle in the objects' space. Then Eqs (3.59) and (3.60) yield

Δzi = −(λ1/λ2)(Mt^v/tg αo) Δxi.   (3.61)
If the scale of the initial hologram is diminished m times, we have

Δzi = −m²(λ1/λ2)(Mt^v/tg αo) Δxi.   (3.62)
Let us now define the quantity Δxi. Although the resolution along the x- and y-axes is determined by different physical conditions, the resolution elements Δx and Δy must have the same values. Therefore, instead of Δxi one can use δx describing the resolution along the pathway line provided by the aperture synthesis. Then Eq. (3.62) gives

Δzi = −0.45λ1²Mt^v H/(λ2Xs tg αo sin³ϕ).   (3.63)
A characteristic feature of this expression is that Δzi is inversely proportional to the synthetic aperture length Xs.
It is also worth discussing some practical aspects of scaling in a holographic
radar. Unlike SAR, this type of radar has no anamorphism, that is, the image planes
coincide in azimuth and range. So there is no need to use special optics to eliminate
anamorphism. However, the image proportions along the x- and y-axes do not coin-
cide because the scaling coefficient in azimuth, Px , differs from that in range, Py .
According to Reference 81, Px is defined as
Px = v/V , (3.64)
where v is the velocity of the transparency on which the hologram is recorded and V
is the velocity of the antenna array.
Along the y-axis, the scaling coefficient Py is
Py = W/(2a),   (3.65)
where W is the transparency width and 2a is the double length of the antenna array.
As a result, the holographic image appears distorted in its proportions along the x- and y-axes.
The image scale along these axes can be equalised by special optics – spherical or
cylindrical telescopes. The optics suggested in Reference 81 can change the image
scale from 4 to 25 times. Transversal and longitudinal scales of an image can be
equalised by choosing a proper coefficient m. Therefore, the final values of longitu-
dinal magnification and focal depth can be found only after one has selected all the
scaling coefficients Py , Px and m.
To conclude, we summarise specific features of front-looking holographic radar systems.
1. It has been shown in Reference 74 that SAR systems have a serious limitation.
When the view zone approaches the pathway line, the resolution in azimuth
becomes much poorer. This makes it impossible to obtain quality images in
the front view zone. In contrast, a holographic radar provides a high resolution
directly under the aircraft.
2. Another essential advantage of a holographic radar is its high longitudinal resolution along the z-axis, even with continuous radiation, providing 3D relief images.
3. The 3D character of a holographic radar image is a basis for obtaining range
contour lines which can then be recalculated to get surface contours [81]. This
operation mode is ‘purely’ holographic. In fact, it implements the principle of
two-frequency holographic interferometry.
4. A high 3D quality of the image requires the use of a new parameter – the image
focal depth, by analogy with optical systems.
S(t) = exp[ j(ωot + αt²)] at |t| ≤ τ/2, and S(t) = 0 otherwise,   (3.66)
where ωo is the SAR carrier frequency, 2α is the LFM slope and τ is the pulse dura-
tion. Note that the latter condition is not obligatory because the signal may have a
narrow band. It is assumed that the target is in the far zone and the microwave phase
front in the target vicinity is planar. The signal reflected by a unit area of the surface
at the point (xo , yo ) is
ro(t) = A Re[g(xo, yo)S(t − 2R/c)] dx dy,   (3.67)
where A is the amplitude coefficient accounting for the signal attenuation during the
propagation; 2R/c is the time delay of the signal while it covers the distance R in both
directions; g(x, y) = |g(x, y)| exp[ jϕo (x, y)] is the density function, whose physical
sense here is just the distribution of the earth surface reflectivity; and ϕo (x, y) is
the signal phase shift due to the reflection. We also assume that the function g(x, y)
remains constant within the given ranges of radiation frequencies and view angles ϑ.
Normally, when the distance to the target is much larger than the target’s size,
elements of the ellipses in Fig. 3.11 may be regarded as segments of straight lines.
Therefore, with Eqs (1.17) and (3.67) we can write down the total echo signal from
all reflectors located within a narrow band normal to the u-axis and having the width
du at u = uo :
r1(t) = A Re[pϑ(uo)S(t − 2(Ro + uo)/c)] du,
where Ro is the distance between the SAR and the target centre.
The total signal from the area being surveyed is
rϑ(t) = A Re[∫_{−L}^{L} pϑ(u)S(t − 2(Ro + u)/c) du],   (3.68)
where L is the area length along the u-axis and A = const, which is valid at Ro ≫ L.
In contrast to the classical situation presented in Fig. 1.6, the linear integral used for
the projection is taken along the line normal to the microwave propagation direction.
Now we substitute Eq. (3.66) for the LFM pulse into Eq. (3.68), simultaneously
detecting the received signal with a couple of quadrature multipliers, and then we
pass the output signals through low-frequency filters. What we eventually get is the
signal
cϑ(t) = (A/2)∫_{−L}^{L} pϑ(u) exp( j4αu²/c²) exp{−j(2/c)[ωo + 2α(t − τo)]u} du,
where τo = 2Ro/c and −τ/2 + 2(Ro + L)/c ≤ t ≤ τ/2 + 2(Ro − L)/c.   (3.69)
The latter expression is the Fourier transform of the function pϑ(u) exp( j4αu2 /c2 ),
whose exponential factor can be easily eliminated if we find the inverse Fourier
The quantity M × N is equal to the number of pixels on the image, each pixel having the size a × b. According to the sampling theorem, 1/a and 1/b are approximately equal to the size of the P region along the x- and y-axes, while 1/(Ma) and 1/(Nb) should equal the spacing between the grid nodes along the same axes. Thus, the P grid consists approximately of M radial lines and N pixels along each line. Note that in classical tomography, we have M ≅ N and the grid P includes about πM/2 radial lines and N pixels along each line.
We can now estimate the inverse Fourier transform f of the function F across the region R:

f(ma, nb) = ∫∫ F(x, y)E(xma + ynb) dx dy ≈ Σ_{(u,v)∈P} G(u, v)E(uma + vnb),
where A(k) = −(C + kD)tg(ϑo /2), B(k) = −2A(k)/(M − 1), ϑo is the size of the
R region, C and D are some selected real positive numbers.
The values of the f function are found in two steps. First, for −M/2 ≤ m < M/2 and 0 ≤ k < N we find the function

H(m, k) = E(m²aB(k)/2) Σ_{i=0}^{M−1} {G(i, k)E(i²aB(k)/2)} E(−(m − i)²aB(k)/2).
the functions g(x, y) and G(X, Y) written in the polar coordinates [34]:

g(ρ cos Φ, ρ sin Φ) = (1/4π²) ∫_{−π/2}^{π/2} ∫_{−∞}^{∞} G(r cos ϑ, r sin ϑ)|r| exp[ jrρ cos(ϑ − Φ)] dr dϑ,
where Noml is the number of pixels outside the major lobe on a point scatterer’s
image and Niml is the number of pixels inside the major lobe; and (2) the computation
time and complexity, or the number of elementary arithmetic operations to be made.
The value of RMN for the convolution algorithm has been found to be −(30–40) dB.
A similar result is obtained using the convolution algorithm with a high interpo-
lation order (8–16). The computation complexity of the first algorithm is about N 3
(N × N is the number of pixels on the image) and that of the second algorithm is
about kN 2 (k is a constant varying in proportion with the interpolation order). The
computation time with the convolution algorithm is 3–5 times longer than with the interpolation algorithm. Its application is, however, preferred because it allows processing primary data as they arrive (e.g. the internal integral in Eq. (3.74)) in real time for each projection individually. The convolution algorithm can be used for simultaneous (systolic) computations by a set of elementary processors such as a multiplier, an adder and a storage register, which are not tightly coupled to one another.
There have been some attempts to design ‘faster’ tomographic algorithms, using,
for example, the Hankel transform. The principle of this algorithm is as follows.
Because the functions g(ρ, Φ) and G(r, ϑ) are periodic with the period 2π, they can be expanded into a Fourier series:

g(ρ, Φ) = Σ_{n=−∞}^{∞} gn(ρ) exp( jnΦ),

G(r, ϑ) = Σ_{m=−∞}^{∞} Gm(r) exp( jmϑ),

where

gn(ρ) = (1/2π) ∫_{−π}^{π} g(ρ, Φ) exp(−jnΦ) dΦ,

Gm(r) = (1/2π) ∫_{−π}^{π} G(r, ϑ) exp(−jmϑ) dϑ.
where Jn(·) is the nth-order Bessel function of the first kind. This relation is known as the nth-order Hankel transform [103].
Apparently, these relations can be applied to the reconstruction of g from the
known values of G. An important advantage of this algorithm is the use of data in a
polar format without interpolation. The Hankel transform is, however, the most time-consuming step. The available procedures for accelerating the computation are based on the representation of Eq. (3.75) as a convolution and on the use of an asymptotic representation of the Bessel function.
The available tomographic algorithms for image reconstruction in spotlight SAR
also include signal processing designs accounting for the wave front curvature.
These employ more complex transformations than just finding Fourier images. The
'efficiency' of such algorithms should be evaluated taking into account the ill-posed nature of the problem formulation. We should recall that a problem is considered to be ill-posed if it has no solution, or the solution is ambiguous or unstable, that is, it does not change continuously with the input data. It is the second circumstance
that usually takes place in the case being discussed, because experimental data fit
only a small region in the transformation space. Even if we assume that the G(X , Y )
values are known over the whole polar grid, there is generally no sampling theorem
for g(ρ, ) in the polar format.
The tomographic approach allows estimation of all major parameters of the spotlight SAR. In particular, the resolution was estimated as

δx ≅ πc/(2αT),

δy ≅ πc/[2ωo sin(|ϑmin| + |ϑmax|)],
a value coinciding with a conventional radar estimate [100]. The conditions for the
input data discretisation were defined. Besides, requirements for the synthesis were
formulated, providing that one could ignore the deviation of projections from a straight
line and their incoherence due to the wave front curvature in the target vicinity.
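Substituting illustrative numbers (our own, not taken from Reference 100) into these estimates:

```python
# Spotlight SAR resolution from the estimates above, for assumed values.
import numpy as np

c = 3.0e8
f0 = 10e9                        # carrier frequency omega_o/(2*pi), Hz (assumed)
B = 300e6                        # LFM bandwidth 2*alpha*tau/(2*pi), Hz (assumed)
alpha_T = np.pi * B              # the product alpha*T in the formula
span = np.deg2rad(3.0)           # |theta_min| + |theta_max| (assumed)

dx = np.pi * c / (2 * alpha_T)                        # = c/(2B) = 0.5 m
dy = np.pi * c / (2 * 2 * np.pi * f0 * np.sin(span))  # ~0.14 m
print("range resolution   dx = %.2f m" % dx)
print("azimuth resolution dy = %.2f m" % dy)
```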
We have made the above analysis for a 2D case, neglecting the SAR’s altitude. This
circumstance does not, however, violate the generality of our treatment. A correction
for the altitude can be easily made by ‘extending’ the linear range by a factor of Ro /R,
where Ro is the slant range to the target’s centre and R is the slant range projection
onto the earth plane.
We should like to emphasise the following important difference between CAT
systems and SAR operating in a spot-light (telescopic) mode. In order to provide a
high resolution, a CAT radar must cover a much larger range of angles than a SAR,
say, 360◦ against 6◦ . This can be understood in terms of image reconstruction from
data obtained within a limited region of a 2D space–time spectrum. In this sense,
the spectral region utilised by the SAR is shifted relative to the origin by 2ωo /c
(Eq. (3.71)), while the spectral region of a CAT radar is not. We shall try to show why
a high resolution can be achieved by a small aperture in SAR.
We should first recall that resolution corresponds to the width of the major lobe
of the pulse response, normally at 3 dB. The resolving power of both CAT and SAR
systems depends only on the frequency band used in a 2D spectrum and it should
be independent of the carrier frequency ωo , which is the frequency of the band shift.
To illustrate, the range resolution for the shaded region in Fig. 1.7 is inversely propor-
tional to the frequency band width along the X -axis (or the u-axis) and the azimuthal
resolution to that along the Y -axis (or the v-axis).
If the number of point objects is large, the image quality becomes poor due
to signal interference. This effect arises because the pulse response of the system,
usually expressed as a 2D function sin x/x, contains a constant phase factor varying
with the carrier frequency ωo and the position of the point object. As is easy to
see, the quality of a reconstructed image is independent of the ωo variation provided that the function describing the object is complex-valued with a random uncorrelated phase. This means that the phases of signals reflected by different scattering centres are not correlated. The authors evaluated the image quality from a formula similar to that for finding a root-mean-square error. One can suggest
that the process of SAR imaging meets this condition. As a result, the spectrum of
the ‘initial image’ occupies a wide frequency band in Fourier transform space and the
object’s reflectivity can be reconstructed from a limited shifted spectral region. This
circumstance is similar to a fact well known in holography: the image of a diffusely
scattering object can be reconstructed from any fragment of the hologram (Chapter 2).
These aspects of image quality can be treated in a different way. The band width of space frequencies, Δv, which defines the azimuthal resolution, 'grows' with the shift frequency (Fig. 1.7) as

Δv = (|ϑmin| + |ϑmax|)(2ωo/c).

For a CAT radar, ωo = 0 and Δv is

Δv = (|ϑmin| + |ϑmax|)Δu,

where Δu ≅ 4αT/c ≪ 2ωo/c. Therefore, in order to obtain a high azimuthal
resolution, one must have information about the whole range of view angles, 360◦ .
One can eventually say that the principal difference between the CAT and SAR
systems is that the latter is coherent and can process complex signals.
To conclude, the tomographic principle of synthetic aperture operation does not
rely on the analysis of Doppler frequencies of reflected signals. We shall turn to this
factor again in Chapters 5 and 6 when we describe imaging of a rotating object by
an inverse synthetic aperture. It will be shown that the holographic and tomographic
approaches do not need an analysis of Doppler frequencies.
Remote sensing of the earth surface in the microwave frequency range is a rapidly
developing field of fundamental and applied radio electronics [31,77]. It has already
become a powerful method in many earth sciences such as geophysics, oceanology,
meteorology, resources survey, etc. Among microwave sensors, side-looking synthetic aperture radars (SARs) are especially valuable, being capable of providing high-resolution images of a background area at any time, irrespective of weather conditions. Extensive
information has been obtained by airborne radars and radars carried by satellites and
spacecraft: SEASAT-A and SIR (USA), RADARSAT (Canada), Almaz-1 (Russia),
ERS and ENVISAT (European Space Agency), Okean (Russia, Ukraine). A challenge
to the radar scientist is the analysis of synthetic aperture imaging of extended targets.
The various tasks of remote SAR sensing of the earth include the study of the
ocean surface, sea currents, shelf zones, ice fields, and many other problems [62].
Objects to be imaged are wind slicks, oil spills, internal waves, current boundaries,
etc. Some of these targets are characterised by motions with unknown parameters,
so they are considered to be partially coherent. This chapter focuses on theoretical
problems of SAR imaging of such targets while their practical aspects are discussed
in Chapter 9.
In contrast to a conventional radar which measures instantaneous amplitudes of
a signal reflected by a target, the SAR registers the signal phase and amplitude for a
finite synthesis time Ts . The conversion of these data to a radar image requires the
knowledge of the time variation of these characteristics, which can be found if one
knows a priori the time variation of the reflected signal. When the view zone includes
only stationary targets, the prescribed data have the form of the time dependence of
distances between the SAR and the objects being viewed. If the time variation of
the signal phase is unknown, the coherence is violated. This may happen not only
in SAR viewing of the sea surface but also because of sporadic fluctuations of the
carrier trajectory (see Chapter 7). So partial coherence may be associated with the
viewing conditions or with the target itself. The analytical method discussed below
preserves its generality in this case.
Dχ = (1 + S)2 . (4.3)
SCS measurements involve a large ambiguity. From Eqs (4.2) and (4.3) it follows that
the standard deviation of the SCS value is equal to the image intensity. To estimate this
value, it is necessary to find the mean noise intensity and subtract it from the image
intensity. We take χm = S and assume the estimate dispersion to be constant.
If the radiometric resolution γ is found to be on the level of one standard deviation
(the ratio of the mean value plus one standard deviation to the mean value), then for
the distribution described by Eq. (4.1) at zero noise we have
γ = 10 lg(2 + 1/S). (4.4)
Obviously, γ will not be better than 3 dB even at S → ∞. The simplest way to
improve radiometric resolution is to average the viewing results on several neigh-
bouring resolution elements of an extended target (incoherent signal integration).
Then we shall have
γ = 10 lg[1 + (1 + S)/(N^{1/2}S)],   (4.5)
where N is the number of uncorrelated integrated versions of the image.
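For instance (an illustrative calculation with an assumed signal-to-noise ratio), Eq. (4.5) shows how slowly incoherent integration improves the radiometric resolution:

```python
# Radiometric resolution of Eq. (4.5) versus the number of looks N.
import numpy as np
S = 10.0                         # signal-to-noise ratio (assumed)
for N in (1, 4, 16, 64):
    gamma = 10 * np.log10(1 + (1 + S) / (np.sqrt(N) * S))
    print("N = %2d: gamma = %.2f dB" % (N, gamma))
```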
Incoherent signal integration by SAR can be provided only at the expense of spatial
resolution because this is normally done by multi-ray processing or by averaging the
intensities of elements of a highly resolved image. For example, the SEASAT-A
aperture used a four-ray processing which, nevertheless, could not totally remove the
speckle noise [99].
Thus, there is a certain contradiction between spatial and radiometric resolu-
tions [61]. A possible compromise is to choose a proper criterion for image quality.
However, this is not very easy to do for two reasons. First, such a criterion must
account for specific features of the object being viewed, which may happen to be
diverse. Second, one must adapt this criterion to the subsequent processing of the
image – visual, automated, etc. Moore [99], for example, suggested using visual
expertise of the image as a criterion for evaluation of its quality. For a quantitative
analysis he used the spatial grey-level (SGL) volume V = Va VR Vg (N ), where Va and
VR are the azimuth and range resolutions, respectively, and Vg (N ) is the grey-level
resolution defined by the number of uncorrelated integrated realisations, N .
Before proceeding with the discussion of criteria that can optimise the coherent-
to-incoherent signal ratio in the synthetic aperture, we think it is necessary to consider
briefly the available methods to describe SAR mapping of a typical fluctuating
extended target – a rough sea surface.
At present we have much information on rough sea surface viewing by SAR sys-
tems [36,62], both airborne and carried by spacecraft. Most of the publications
describe wave movements and their effect on radar image quality. However, this
issue still remains controversial and is a subject of much debate [56].
When the sea surface is viewed by an airborne or space SAR, the probing radiation
incidence varies from 20◦ to 70◦ . Bragg scattering by small-scale and capillary waves
has the greatest effect on the reflection of electromagnetic radiation. The effect of
large-scale (gravitational) waves on the radar image reveals itself in the modulation
of scattering by small-scale waves. These phenomena are usually described by a
2D model which considers the sea surface as a superposition of Bragg scatterers –
capillary and longer gravitational waves. They can also be described by a facet model,
in which facets represent small-scale scatterers with superimposed capillary waves;
the scatterers move with orbital velocities defined by large-scale waves [59]. The
imaging of large-scale waves is affected by the following factors:

• the hydrodynamic modulation of the small-scale (Bragg) waves by the large-scale waves;
• the tilt modulation, that is, the variation of the local incidence angle along the large-scale wave profile;
• the motion of the sea surface during the aperture synthesis.
The first two processes are important for sea viewing by any radar, whereas the third
process affects only SAR imaging. The effect of moving waves on the image quality
can be found analytically if one bears in mind that the synthesis time (0.1–3 s) is much
shorter than the period of a large-scale wave (8–16 s). Then the functions that describe
the time variation of the facet parameters and scattering coefficients can be expanded
into a Taylor series. The major expansion terms are related to the radial components
(along the slant range) of the orbital velocity and acceleration of the facets. These
components are responsible for two effects: the velocity bunching and the image
defocusing along the azimuth. The velocity bunching is associated with the azimuthal
shift of each facet image because of the radial velocity effect, which represents a
periodic rarefaction and thickening of virtual positions of elementary scatterers along
the large-scale wave pattern. The bunching degree varies with the number of images
of individual facets per unit azimuthal length, which is proportional to

Π = (R/v)(dur/dx),   (4.6)

where R is the slant range, v is the SAR carrier velocity, ur is the radial velocity component and x is the azimuthal coordinate on the sea surface. For small values of |Π|, this effect is linear and is characterised by a linear transfer function; for large |Π| values (>π/2), it becomes nonlinear, leading to image distortions. It is greatest for waves running along the azimuthal coordinate but practically vanishes for radial waves.
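An order-of-magnitude estimate (assumed spaceborne geometry and sea state, our own numbers) shows how easily the nonlinear regime is reached for azimuth-travelling waves:

```python
# The bunching parameter Pi = (R/v)*du_r/dx of Eq. (4.6), illustrative values.
import numpy as np
R = 800e3                        # slant range, m (spaceborne, assumed)
v = 7500.0                       # platform velocity, m/s (assumed)
Lw = 150.0                       # sea wavelength, m (assumed)
ur_amp = 1.0                     # radial orbital velocity amplitude, m/s
dur_dx = 2 * np.pi * ur_amp / Lw # peak slope of u_r(x)
Pi = R / v * dur_dx
print("Pi = %.2f (nonlinear above %.2f)" % (Pi, np.pi / 2))
```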
Image defocusing of large-scale waves is interpreted as being either due to the
radial acceleration of the facets or due to the change in the relative aperture velocity
because of the effect of the azimuthal phase velocity of sea waves [61]. Investigations
have shown that the latter explanation is better substantiated. The major contribution
to the image is made by the amplitude modulation of the reflected signal due to
the surface roughness and facet inclination, whereas the velocity bunching plays
a minor role. As for the image defocusing, it can be removed by correcting the
signal processing conditions, for example, by an additional adjustment of the optical processor or by refining the reference function during digital image reconstruction.
Generally, the sea wave behaviour appears to be quite complex. For this reason,
available models of a probing signal reflected by the sea surface depend on the
particular problem to be solved. Models accounting for the orbital motion of water particles are too sophisticated to be extended to a large class of objects defined as
partially coherent. Besides, they do not readily apply to the analysis of the influence
of aperture parameters on image quality, because imaging is then determined only
by the sea wave characteristics and viewing geometry. Probably, the only factor that
affects the sea imaging by SAR and related to the choice of radar parameters is the
image defocusing. But even here, we deal with the mapping of sea waves, which
is a particular problem that does not represent the whole class of partially coherent
targets.
On the other hand, of academic interest and practical importance are the problems
of background dynamics, various anomalies in the extended target reflectivity (for
the sea, these are slicks, spills of surface-active substances, etc.), as well as the proper
choice of the SAR design for viewing this class of targets. The analysis shows that
the results obtained can be extended to a large number of partially coherent extended
targets.
In principle, the basic characteristics of extended target images, including images
of sea surface, could be found by solving the problem of electromagnetic wave scat-
tering by a moving plane. The methods of dealing with these problems are well known
but they involve cumbersome calculations.
The radar signal model discussed above agrees well with experimental data [112].
Equation (4.11) has a general form allowing the solution of a large range of problems
involved in the analysis of extended target imaging by SAR systems. We shall fur-
ther omit partially coherent background modulation by large-scale waves, assuming
u(x, t) = 1 in order to be able to extend the results to a sufficiently large class of
objects.
The model we have described can provide the basic statistical characteristics of
partially coherent surface images, but we should first outline the imaging model
itself.
Suppose a SAR is borne by a carrier moving uniformly along a straight line with a
velocity v. The carrier position is described by the coordinate y = vt and inclined
range R, while the position of an arbitrary element of the viewed surface is described
by the x-coordinate (Fig. 4.1). The imaging process is subdivided into two stages – the
registration of the reflected signal (hologram recording) and the image reconstruction.
This approach allows one to represent a general block diagram of the synthetic aperture
(Fig. 4.2) with the complex amplitude of the reconstructed image written as a sum of
convolutions:
s = f ∗ w ∗ h + n ∗ h, (4.12)
Figure 4.1 The viewing geometry: the SAR carrier moves along the y-axis (y = vt) at the slant range R from the viewed surface element

Figure 4.2 A general block diagram of the synthetic aperture: the reflected signal is formed by the surface reflectivity f and the radar response w, corrupted by additive noise and processed with the response h
where f is a function of the viewed surface reflectivity; w and h are the impulse
responses of the radar and the processor, respectively; n is the complex amplitude of
additive noise; and ∗ denotes convolution.
The optimal quality of images of point objects is achieved by matching the impulse
responses of the radar and the aperture processor:
h(y) = w∗ (y). (4.13)
This condition cannot, however, provide an optimal image of an extended proper
object [99], since it is impossible to integrate an incoherent signal and to reduce the
speckle noise on the image. On the other hand, the fact that the image intensity g(u) =
s(u)s∗ (u) is usually registered at the aperture processor output allows introducing the
concept of a partially coherent processor in quadratic filtering theory [58]. One can then
account simultaneously for the effects of coherent and incoherent signal integration
by the aperture and eventually obtain the major statistical characteristics of images
of partially coherent extended targets. This type of processor will have the following
impulse response:
Q(y1 , y2 ) = γ (y1 − y2 )h(y1 )h∗ (y2 ), (4.14)
where γ (y1 −y2 ) is a factor characterising the degree of incoherent signal integration.
Then Eq. (4.13) will be valid for any class of targets.
To avoid cumbersome calculations, we shall introduce Gaussian approximations of the functions

w(y) = exp(−ay²/2) exp( jby²/2),   (4.15)
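A short sketch (the parameters a and b are our own assumed values) shows what the Gaussian approximation (4.15) does in practice: correlating w(y) with the matched response h(y) = w*(y) of Eq. (4.13) compresses the chirped response:

```python
# Matched processing of the Gaussian-approximated impulse response (4.15).
import numpy as np

y = np.linspace(-50, 50, 2001)
dy = y[1] - y[0]
a, b = 0.002, 0.05               # envelope width and chirp rate (assumed)
w = np.exp(-a * y ** 2 / 2) * np.exp(1j * b * y ** 2 / 2)
h = np.conj(w)                   # matched impulse response, Eq. (4.13)
# w(y) is even, so convolving with conj(w) equals the correlation integral
out = np.convolve(w, h, mode="same") * dy

mag = np.abs(out)
width_in = 2 * np.sqrt(2 / a)                  # 1/e width of |w(y)|
width_out = np.ptp(y[mag > mag.max() / np.e])  # 1/e width of |out(y)|
print("response width before/after matching: %.1f / %.1f"
      % (width_in, width_out))
```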
Let us turn back to the synthetic aperture shown in Fig. 4.1. In one of the range
channels, the reflected signal can be represented as a random complex field. For
many real surfaces, the function f(y) in the centimetre wavelength range is a Gaussian
random process with a zero average and a correlation function in the form of the
Dirac delta-function obeying Eq. (4.8).
The time relations for the surface changes can be described by the autocorrelation
function of Eq. (4.9) and that of the reflected signal, assuming u(y1 , t1 ) ≡ s0 (y1 , t1 ):
where s is the complex amplitude of the image and h is the impulse response of the
aperture processor.
To smooth out the image fluctuations, one usually uses incoherently integrated
signals. We can now evaluate the effects of two smoothing procedures: multi-ray
processing and averaging of neighbouring resolution elements on the image [2].
Additionally, we shall consider the potentiality of incoherent signal integration on the
hologram. In the first case, when the image is reconstructed by an optical processor,
its intensity is [60]
g1(u) = ∫ g(u, τ) Da(τ) dτ. (4.21)
Here Da (τ ) describes the light distribution across the aperture stop located in front of
the secondary film which records the image, τ is the current exposure of the secondary
film, and u is the reconstructed image coordinate.
In the second case, the image intensity is

g2(u) = ∫ g(u′) Ga(u − u′) du′, (4.22)

where Ga(u − u′) is the weighting function of the averaging over neighbouring resolution elements and y′ = vt is the spatial coordinate on the hologram.
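The effect of the two smoothing procedures on speckle can be checked numerically. In the sketch below the number of looks and the window width are arbitrary test values; multi-ray processing is imitated by averaging independent intensity realisations (cf. Eq. (4.21)) and neighbour averaging by a convolution with a weighting window (cf. Eq. (4.22)).

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_looks = 4096, 4
looks = rng.exponential(scale=1.0, size=(n_looks, N))  # speckle intensity realisations

g1 = looks.mean(axis=0)                      # multi-ray (incoherent) integration, cf. Eq. (4.21)
Ga = np.ones(8) / 8.0                        # rectangular weighting window
g2 = np.convolve(looks[0], Ga, mode="same")  # averaging of neighbours, cf. Eq. (4.22)

# Fully developed speckle has <g>^2 / var(g) = 1; both procedures raise the ratio
for name, g in (("one look", looks[0]), ("multi-ray", g1), ("window", g2)):
    print(f"{name}: {g.mean()**2 / g.var():.2f}")
```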
To simplify the calculations, let us approximate the above functions with the
expressions
where H (η, ω) is a 2D transfer function of the aperture processor and Sh (ω) is the
hologram power spectrum. In turn, H (η, ω) = H (η)H ∗ (ω), where H = F{h} is the
Fourier image of the function h(x). With Eq. (4.24), we get
H(η, ω) = (L² + b²)^(−1/2) exp[−(η² + ω²)L/2(L² + b²)] exp[ j(ω² − η²)b/2(L² + b²)]. (4.26)
The function Sh (ω) represents the Fourier transform of the hologram spatial cor-
relation function, Rh (y ), which can be described, for low intrinsic aperture
noise, as
with
Hence, we have
By substituting Eqs (4.26) and (4.28) into Eq. (4.25) and using the expression
covg(u) = F{Sg(ω)} for the background, we obtain

⟨ga⟩ = ∫ Sg(ω) dω = 2^(1/2)π p[aL(a + L + 2B) + b²(a + L)]^(−1/2),
⟨ga⟩²/σg² = 1. (4.29)

Assuming that the spectrum of the intrinsic aperture noise recorded on the hologram
is uniform and has spectral density Shn(ω) = n, we find the respective parameters of
the image noise:

σn² = n²(π/L),
u_cn = π[2L/(L² + b²)]^(1/2), (4.30)
⟨gan⟩²/σn² = 1.
Figure 4.3 The variation of the parameter Q with the synthesis range Ls at λ = 3 cm,
= 0.02 and various values of R
At the minimal width of the |H(ω)| function, the difference between ⟨ga⟩
and ⟨gan⟩ is also insignificant. This accounts for the maximum of the Q function at
Ls = (λR/2)^(1/2) (Fig. 4.3). A quantitative analysis of Q shows that the influence
of the real aperture pattern on the signal-to-noise ratio is slight and reveals itself only
at large synthesis ranges, Ls ≫ (λR/2)^(1/2).
This relation describes the impulse response of the aperture processor and enables
one to find its transfer function:
with
Following the same procedure and using the last two relations, we can find the
characteristics of the background and noise on the image:
The analysis of these relations shows that the image smoothing is improved, as was
expected, while the correlation functions of the clutter and radar noise images are
practically the same, u_c ≈ u_cn. Figure 4.4 demonstrates the correlation range versus
the normalised quantity Ls for various degrees of incoherent integration De, or for
different aperture stop sizes. It is clear that the image correlation at Ls > (λR/2)^(1/2)
(the focused processing region) will only slightly vary with De, but the correlation
range in incoherent integration will become larger (the defocused processing region).

Figure 4.4 The dependence of the spatial correlation range of the image on normalised Ls for multi-ray processing (solid lines) at various degrees of incoherent integration De and for averaging of the resolution elements (dashed lines) at various Ge; λ = 3 cm, R = 10 km; 1, 5 – 0 (curves overlap); 2, 6 – 0.25(λR/2)^(1/2); 3, 7 – (λR/2)^(1/2); 4, 8 – 2.25(λR/2)^(1/2)
The parameter Q then takes the form
Its quantitative analysis indicates that it does not much affect the signal-to-noise
ratio.
When the resolutions of neighbouring elements are averaged according to
Eq. (4.22), the processor transfer function is expressed as
with
Hence, we have
In this case, we also have u_c ≈ u_cn. Figure 4.4 illustrates this dependence at
various widths of the integrating function Ge . Obviously, the image correlation
range increases in proportion with the integrating window width. The expression
for the coefficient Q coincides with Eq. (4.31), since the statistical properties of the
background and noise images are similar and cannot contribute to the power.
Figure 4.5 The variation of the parameter Qh with the number of integrated signals
Ni at various values of Ka
The major SAR characteristics for viewing low contrast targets such as sea currents,
wind slicks, oil spills, etc., are the spatial resolution and radiometric (contrast) res-
olution determined by the number of incoherent signal integrations [58]. It is clear
that a proper choice of the proportion between spatial and radiometric resolutions
(coherent and incoherent integration) will depend not only on the radar parameters
but on the properties of the target to be viewed. So it is reasonable to consider opti-
misation of SAR performance in the context of partial coherence of signals reflected
by an extended target.
Recall that the process of imaging includes two stages. First, the received signal
is recorded on a radar hologram as u(y′) = ∫ w(y′ − y)f(y) dy, where w(y′ − y) is the
impulse response of the aperture receiver, f(y) is a function describing the spatial
distribution of the target reflectivity, y is the coordinate in the viewed surface plane,
y′ = vt is the SAR carrier coordinate, and t is the current viewing time. Second, the
image field is recorded: g(y″) = ∫ u(y′)h(y″ − y′) dy′, where h(y″ − y′) is the impulse
response of the aperture processor and y″ is the image coordinate.
where So (ω) is the space frequency spectrum of the SCS of the object and KR (ω) is
the FCC of the aperture.
For instance, if the average SCS of the background is σ0 , the distribution of a low
contrast target is described by the function
where m < 1 is a factor defining the target’s initial contrast Kin = (1 − m)/(1 + m)
with respect to the background, A = 2π/l² is a parameter related to the target’s size l,
and the aperture FCC is given by the expression
where z denotes its width. Then using Eq. (4.53), we can write the spatial distribution
of the image intensity:
It is clear that the contrast and target size on the image become distorted but the
knowledge of the explicit quantity KR (ω) can give the real object’s parameters.
For targets whose reflectivity varies with time randomly, the signal received by
the aperture possesses a partial coherence and the hologram function u(y ) is no longer
a convolution integral. In that case it would be unreasonable to use linear filtration
theory. We shall show, however, that statistical methods and physical assumptions
concerning the time fluctuations of objects’ reflectivities can make this convenient
formalism work successfully.
For this, we shall find the aperture response for a low contrast target (m ≪ 1),
whose reflectivity distribution is described by the function introduced above; ⟨σ⟩ and
⟨g⟩ denote the average values of the target’s SCS and image intensity, respectively.
The correlation function of the field in Eq. (4.59) is defined as

Rf = ⟨f(y1, t1)f*(y2, t2)⟩ = [1 + m cos(Ωy1)]⟨f(y1)f*(y2)⟩⟨α(t1 | y1)α*(t2 | y2)⟩. (4.60)
For many real surfaces, f (y) in the centimetre wavelength range is a Gaussian process
with a zero average and a correlation function in the form of the Dirac delta-function
of Eq. (4.8). Assuming the time fluctuations of the signal to be a steady-state random
process, we can use the approximation of Eq. (4.11). Together with Eq. (4.60) and
m ≪ 1, y′ = vt, we shall have

Rf = [1 + m cos(Ωy1) + m cos(Ωy2)] δ(y1 − y2) exp[−(y1 − y2)²B/2]

with B = 2π/(vτ)².
The average image intensity is

⟨g⟩ = ∫∫ h(y1)h*(y2)Ru(y1, y2) dy1 dy2, (4.61)
Using the Gaussian approximations of the impulse responses in Eqs (4.15) and (4.16),
we obtain, instead of Eq. (4.61), ⟨g⟩ = ⟨g0⟩ + 2⟨Δg⟩, where ⟨g0⟩ = 2^(1/2)π[aL(a +
L + 2B) + b²(a + L)]^(−1/2) is the average intensity of the fluctuating background image and

⟨Δg⟩ = m⟨g0⟩ exp{−(Ω²/4)(a + L)(a + L + 2B)/[aL(a + L + 2B) + b²(a + L)]}. (4.62)
For real viewing, we have b² ≫ aL and b² ≫ LB, which reduces Eq. (4.62) to

Kout = 2m exp[−Ω²(a + L + 2B)/4b²], (4.63)
KR(Ω) = exp[−Ω²(a + L + 2B)/4b²].
There is a certain relationship between the FCC and the azimuthal resolution of the
aperture. The latter can be found from the width of the averaged impulse response to
a fluctuating point target:

δa = ∫ ⟨g(y″)⟩/⟨g(0)⟩ dy″. (4.64)

The signal reflected by this target can be prescribed as f(y, y′) = δ(y)α(y′), where
α(y′) describes the time fluctuations of the signal, whose correlation properties are
defined by Eq. (4.11). With Eqs (4.61) and (4.64), we get

δa = [π(a + L + 2B)/b²]^(1/2).
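For orientation, δa can be evaluated numerically. The parametrisation below – a = 2π/La², L = 2π/Ls², B = 2π/(vτ)², b = 4π/(λR) – is our reading of the Gaussian model, and all numbers are sample values; the point is that δa degrades once the 2B (fluctuation) term dominates.

```python
import numpy as np

lam, R, v = 0.03, 10e3, 200.0     # wavelength (m), range (m), carrier speed (m/s); sample values
La, Ls = 300.0, 120.0             # real-beam footprint and synthesis length (m); assumed

a = 2 * np.pi / La**2             # assumed width parameter of w(y)
L = 2 * np.pi / Ls**2             # assumed width parameter of h(y)
b = 4 * np.pi / (lam * R)         # assumed chirp rate of the azimuthal signal

for tau in (np.inf, 0.4, 0.2, 0.1):                  # signal correlation times (s)
    B = 0.0 if np.isinf(tau) else 2 * np.pi / (v * tau)**2
    delta_a = np.sqrt(np.pi * (a + L + 2 * B)) / b   # resolution formula derived above
    print(f"tau = {tau} s: delta_a = {delta_a:.2f} m")
```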
Figure 4.6 The variation of the parameter Ωe with the synthesis range Ls at various
signal correlation times τc
where K0(Ω) = exp[−Ω²(a + L)/(4b²)] is the aperture FCC in the absence of signal
fluctuations and Kτ(Ω) = exp[−Ω²B/(2b²)] is multiplicative noise arising from
fluctuations in the radar channel.
Therefore, a SAR can be described as a set of two filters – a filter of space
frequencies K0(Ω) and a narrow band space–time filter Kτ(Ω), whose bandwidth is
determined by the time of the surface fluctuation correlation. The image has a spatial
intensity spectrum SI(Ω) = S0(Ω)KR(Ω). On the other hand, one can consider that the
aperture measures the space–time spectrum S0τ(Ω) = S0(Ω)Kτ(Ω) if one assumes
its FCC being independent of the target’s properties and describes the radar with the
function K0(Ω).
Figure 4.7 The parameter Q as a function of the synthesis range Ls at various signal
correlation times τc
To conclude, the parameters of radar apertures for viewing fluctuating targets can
be optimised by matching the characteristics K0 () ≈ Kτ (). The latter equality
provides the imaging of a surface with nearly as much detail as possible potentially
for a particular type of object. This equality can be obtained by choosing the value of
Ls equal to Ls = vτ , which means that the synthesis time should not be longer than the
time of the signal correlation. As a result, the aperture resolution appears to be limited
to δa = λR/2Ls, but this choice of Ls provides the N = R/Ls number of image
realisations. The aperture contrast resolution, defined by the number of incoherent
integrations N , is in turn independent of the signal coherence time τ . So the choice
of Ls > vτ does not provide the desired spatial resolution but it decreases N , making
the contrast resolution poorer.
The potentiality of the SAR in viewing low contrast targets can be conveniently
described by the parameter Q = Ndh /(2δa ) equal to unity at zero fluctuations. If
the fluctuations are present, Q essentially depends on the chosen synthesis range Ls
(Fig. 4.7). For example, the signal fluctuations at Ls < vτ do not noticeably affect
the image quality and Q = 1. At Ls > vτ , the aperture performance proves to be
inferior to its potentiality (Q < 1), since the real aperture resolution does not fit the
chosen value of Ls but is rather defined by the signal correlation time τ .
We can draw the following conclusions from these results:
• To describe the imaging of fluctuating targets, one can make use of linear filtration
theory, representing the radar as a filter with a certain FCC. The aperture can be
considered as a device measuring the space–time spectrum of the object being
viewed.
• One can suggest that the time fluctuations of the signal in the viewing channel
create multiplicative noise decreasing the azimuthal resolution of the aperture.
• This approach provides a reasonable compromise between the potential azimuthal
resolution and the aperture contrast resolution. This compromise can be achieved
by choosing the synthesis time equal to the signal correlation time.
The overall analysis of the results presented in this chapter shows that the available
methods for describing the properties of sea surface images can be supplemented
by a more general approach to SAR viewing of partially coherent objects. The con-
cept of partial coherence allows one to cover a much larger class of targets and to
describe the basic principles of their imaging. The advantages of this approach are
as follows: first, it is based on a fairly general model of the radar signal. Expres-
sion (4.10) accounts for general and specific features of the viewing of fluctuating
targets. We shall show in the following chapters that the correlation function of time
fluctuations in Eq. (4.13) can be used, for example, to describe trajectory instabilities
of the SAR carrier. Second, this approach provides an analytical description of the
major statistical characteristics of images of partially coherent targets; these, in turn,
enable one to evaluate image quality. Finally, the relative simplicity of the mathematical
calculations and the clear physical sense of the results obtained make this approach
a convenient tool for solving practical tasks associated with SAR design and with
remote sensing of partially coherent targets.
The possibility of using the rotation of an object to resolve its scattering centres was,
probably, first shown by W. M. Brown and R. J. Fredericks [21]. Independently,
microwave video imaging of rotating objects was demonstrated theoretically and
experimentally by other researchers [109].
An analysis of three approaches (in terms of the antenna, range-Doppler and
cross-correlation theories) was made in References 104 and 146 for the imaging of
rotating targets. Here we discuss this problem in terms of a holographic approach.
We shall start with the basic principles of inverse synthesis of microwave holograms
of an object rotating around the centre of mass. The analysis will be based on the
holographic approach discussed in Sections 1.2 and 2.4.
Lens-free optical Fourier holography [131] implies that an optical hologram is
recorded when the amplitude and phase of the field scattered by the object are fixed in
a certain range of bistatic angles 0 < β < β0 (Fig. 5.1). In the microwave range, this
is equivalent to the displacement of the radar receiver along arc L of radius R0 from
point A to point B, while the transmitter remains immobile. A coherent background
must be created by a reference supply located in the object plane. Since such a supply
is unfeasible, the coherent background is created by an artificial reference wave in
the radar receiver (Chapter 2). In further analysis, we shall use a model object made
up of scattering centres described by Eq. (2.3). Then a direct synthesis along arc L
of radius R0 by a bistatic radar system (Fig. 5.1) can produce a classical microwave
Fourier hologram [109], with a subsequent image reconstruction as a 1D distribution
of the scattering centres and their effective scattering surfaces.
To discuss the principles of inverse synthesis and formation of a 1D microwave
Fourier hologram, we shall make use of the well-known relation for uni- and bistatic
radars [69]. According to Kell’s theorem, at small bistatic angles β the bistatic radar
cross-section (RCS) for the angle α (Eq. 2.5) and the bistatic angle β is equal to the
unistatic RCS measured along the bisectrix of the angle β at a frequency reduced by
a factor of cos(β/2) (Chapter 2).
Kell’s theorem and the fact that the rotation of a transmitter–receiver unit around
the object can be replaced by the rotation of the object round its axis passing through
the centre of mass normal to the radar viewing line lead one to the conclusion that
such a unit, fixed at the point C (Fig. 5.2), can synthesise a 1D microwave Fourier
hologram identical to a lens-free optical Fourier hologram. This approach was first
discussed by S. A. Popov et al. [109].
In order to find analytical relations for the classical and synthesised Fourier holo-
grams, let us consider the schematic diagram in Fig. 5.3. To simplify the calculations,
we shall deal only with one, the kth, scattering centre, with the coordinates

rkx = rk sin θk cos ϕk, rky = rk sin θk sin ϕk, rkz = rk cos θk. (5.1)
Figure 5.3 The geometry of data acquisition for the synthesis of a 1D microwave
Fourier hologram of a rotating object
With Eq. (5.1), the input receiver signal can be described as a function of the
object rotation angle:

u̇r(ϕ) = u0 Σ_{k=1}^{N} σk exp[−j(4π/λ1) d(rk, R0)] exp(jω0ϕ/Ω), (5.2)

where

d(rk, R0) ≅ R0 {1 − (rk/R0)[sin γ sin θk cos(ϕ + ϕk) + cos γ cos θk]}, (5.3)

λ1 = 2πc/ω0 is the radar wavelength; σk is the amplitude coefficient accounting
for the reflection characteristics of the kth scattering centre; γ = arctan(xo/zo) is the
angle between the vector R0 and the positive z-axis; xo, yo, zo are the observation point
coordinates; O1 is the observation point; and R0 = |R0| = (xo² + zo²)^(1/2) is the distance
between the observation point and the centre of mass of the object.
In order to derive the hologram function in the way shown in Chapter 2, it is
reasonable to use the multiplication procedure performed by an amplitude–phase
detector, followed by averaging. Multiplication of the received signal by the artificial
reference signal yields the hologram function H(ϕ) of Eqs (5.5) and (5.6), with

βk = (2π/λ1) rk (cos γ cos θk + sin γ sin θk cos ϕk) (5.7)

and

lk(ϕ) = sin γ sin θk [ϕ sin ϕk + (ϕ²/2) cos ϕk − (ϕ³/6) sin ϕk]. (5.8)
Consider now the microwave hologram function of the same object (Fig. 5.1),
obtained by a classical method. In this method, the radar receiver scans, with an
angular velocity Ω, the surface of a cylinder of radius R0 sin γ, having the generatrix
parallel to the z-axis. The transmitter is at the point A, and the resulting hologram
function Hcl(ϕ) is given by Eq. (5.9), where the functions βk and lk(ϕ) are similar to
those of Eqs (5.7) and (5.8).
A comparison of Eqs (5.6) and (5.9) shows that the function Hcl(ϕ) differs from
the function H(ϕ) for the synthesised hologram of the same object in having the
factor 1/2 in the second term of the cosine argument. It is clear that the argument of
the synthesised hologram changes twice as fast and, hence, it has twice as high a
resolution: it looks like a classical hologram recorded in a field with a wavelength
half the real one. This effect is due to the simultaneous scanning by several elements
of the transmitter–object–receiver system. It is easy to see that a microwave hologram
recorded by a simultaneous receiver–transmitter scanning of a fixed object along the
arc L (Fig. 5.1) is totally identical to the HA(ϕ) hologram. In the case of inverse
scanning, however, the rotation of the object alone is equivalent to the movement of
two devices – the transmitter and the receiver.
We shall show below that the constant initial phase βk does not affect the structure
of microwave radar imagery. We shall use a simplified expression for the synthesised
Fourier hologram:
H1(ϕ) ≅ Σ_{k=1}^{N} σk cos[(4π/λ1) rk sin θk cos(ϕk + ϕ)], (5.10)
where rk , θk , ϕk are the spherical coordinates of the kth centre. Equation (5.10) was
derived from Eq. (5.5) on the assumption of γ = 90◦ and is valid for the far-zone
approximation.
Since the H1 (ϕ) function basically coincides with Hcl (ϕ), the image reconstruc-
tion from a synthesised Fourier hologram can be made in visible light, using the same
techniques as those of optical Fourier holography [131].
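A hologram of the type of Eq. (5.10) is straightforward to model; the sketch below uses arbitrary test values for the wavelength and for the scatterer parameters σk, rk, ϕk (θk = 90°).

```python
import numpy as np

lam1 = 0.03                            # radar wavelength (m); test value
sigma = np.array([1.0, 0.7, 0.5])      # amplitude coefficients sigma_k
r = np.array([0.15, 0.40, 0.60])       # radii r_k of the scattering centres (m)
phi_k = np.array([0.0, 1.2, 2.5])      # initial angles phi_k (rad)

phi = np.linspace(-0.25, 0.25, 4096)   # object rotation angle (rad)
# Quadrature Fourier hologram, Eq. (5.10), with theta_k = 90 deg
H1 = sum(s * np.cos(4 * np.pi / lam1 * rk * np.cos(pk + phi))
         for s, rk, pk in zip(sigma, r, phi_k))
```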
Sometimes, a microwave hologram recorded on a flat transparency is placed in
the front focal plane of the lens L (Fig. 5.4(a)). When the transparency is illuminated
by a plane coherent light wave, two real conjugate images of the object, M and M′,
are formed near the rear focal plane of the lens. An alternative is to use a spherical
transparency of radius F0 , illuminated by a coherent light beam converging at the
sphere centre (Fig. 5.4(b)). The two variants are identical in the sense that the opera-
tions to be performed are the same. Practically, it is convenient to use the first variant
but to analyse the second one.
If a microwave hologram is recorded on an optical transparency uniformly moving
with velocity vt, the angular coordinate α = vtτ/F0 on the transparency in the
reconstruction space will be related to the angular coordinate ϕ = Ωτ on the hologram
in the recording space:
I0 = A ∫_{−α0}^{α0} exp[−j(2π/λ2) d(u, v, α)] dα,

I±1 = (A/2) ∫_{−α0}^{α0} exp[jψ±1(u, v, α)] dα,

ψ±1(u, v, α) = ±(4π/λ1) rk cos(µα + ϕk) − (2π/λ2) d(u, v, α),

d(u, v, α) = [F0² + 2F0(v cos α − u sin α) + u² + v²]^(1/2),
where |I (u)|2 is the light intensity distribution across the scattering centre image and
uM is the coordinate of the maximum intensity of the image focusing.
Equation (5.17) describes the receiver pulse response to the point object. Then,
neglecting all the terms in Eq. (5.13) except for the first one and using the scale
relations of Eq. (5.16), we can define the resolving power in the object plane as

Δx(λ1, ψS) = Δu/mx = λ1/4ϕ0 = λ1/2ψS, (5.18)
where ψS is the object angle variation during the recording. Therefore, when the holo-
gram angles are small, the resolving power of the object varies with the wavelength
and the synthesised aperture angles, rather than with the distance to the object or the
reconstruction parameters.
With the scale relations from Eq. (5.16), we find for µ = 1

my = mx = 2λ2/λ1.

Then the criterion described by Eq. (5.18) can yield the resolution of a video
microwave image:

Δu(α0) = Δx(λ1, ψS) mx = λ2/2ϕ0. (5.19)
It follows from Eq. (5.19) that the resolution of a microwave image obtained by
inverse synthesis and optimal processing is fully consistent with the Abbe criterion
for optical devices (Chapter 1).
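As a quick numerical check of Eq. (5.18), with sample values λ1 = 3 cm and ψS = 0.1 rad the attainable resolution is λ1/2ψS = 15 cm:

```python
lam1, psi_s = 0.03, 0.1    # wavelength (m) and hologram angle (rad); sample values
print(lam1 / (2 * psi_s))  # Eq. (5.18): 0.15 m
```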
Consider now distortions arising from the reconstruction of a microwave image.
These are defined by the high-order terms of Eq. (5.13) for the following reason. When
an image is viewed in one plane, some of the scattering centres are shifted relative to
this plane, that is, they are defocused. With the quadratic term of Eq. (5.13), the field
distribution in a defocused point image is defined as

I+1(p, t0) = A(α0/t0) exp[jπ(4rx/λ1 − p²/2)] {C(t0 + p) + C(t0 − p) + j[S(t0 + p) + S(t0 − p)]}, (5.20)

where t = [2(vM − v)/λ2]^(1/2) describes the viewing plane shift relative to the focusing
plane, p = uM/(λ2t), t0 = α0t and S(z), C(z) are the Fresnel integrals.
The resolution of a defocused microwave image is described by the function

Δ(t0) = ∫_{−∞}^{∞} |I+1(p, t0)|²/|I+1(0, t0)|² dp (5.21)

shown in Fig. 5.5. Obviously, the best resolution Δ̂ = 1.2 is achieved at a certain
optimal value of t0 = t̂0 = 1 and an optimal aperture size

α̂0 = [2(vM − v)/λ2]^(−1/2). (5.22)
At v = 0, when the viewing plane is superimposed with the focal plane of the lens,
we can use Eq. (5.15) to get

α̂0 = (2µyM/λ1)^(−1/2) = (µτmax)^(−1/2), (5.23)

where τmax = 2Lmax/λ1 is the maximum longitudinal dimension of the object,
expressed in half-wavelengths.
As the size of the object or the aperture increases, the influence of the high-order
terms of Eq. (5.13) becomes more pronounced resulting in distortions and a lower
resolution. These factors impose constraints on the synthesised aperture size.
The image reconstruction of microwave Fourier holograms has some specificity
associated with the way the artificial reference wave is created. If the reference
signal phase is not modulated, the phase of the coherent reference background along
the hologram is constant, a situation equivalent to the position of a point object
at the rotation centre. So during the reconstruction, the three images – that of the
reference source and the two conjugate images of the object – overlap. To separate
these images, one should introduce a space carrier frequency (SCF) by changing the
phase of the reference signal at a constant rate, as in Eq. (5.24), where rmax is the
vector radius of the scattering centre located at the maximum distance from the object
rotation centre.
The reference wave phase can be modulated by a phase shifter or by introducing
translational motion along the viewing line, in addition to the rotational motion. In
the latter case, the translational velocity v must satisfy the inequality v > Ωrmax.
The second quadrature hologram is

H2(ϕ) ≅ Σ_{k=1}^{N} σk sin[(4π/λ1) rk sin θk cos(ϕk + ϕ)]. (5.25)
According to Eq. (2.23), the holograms H1 (ϕ) and H2 (ϕ) can form a complex Fourier
hologram:
H(ϕ) = H1(ϕ) + jH2(ϕ) = Σ_{k=1}^{N} σk exp[j(4π/λ1) rk sin θk cos(ϕk + ϕ)], (5.26)
where u and Φ are the amplitude and phase (in the recording plane) of the total
field scattered by the object. The argument ϕ of the H function has been replaced
by the linear x-coordinate, since a 1D microwave hologram is recorded on a flat
transparency.
The image reconstruction by a plane wave in a paraxial approximation is reduced
to the Fourier transformation of the hologram function, assuming for simplicity that
the recording and the reconstruction are performed at the same wavelength:
V(ωx) = ∫_{−∞}^{∞} H(x) exp(−jωx x) dx, (5.28)
where ωx is the space frequency corresponding to the coordinate in the image plane.
The substitution into Eq. (5.28) of the expressions for the quadrature holograms
in Eqs (5.10) and (5.25), re-written as Eq. (5.27), gives
V1(ωx) = (1/2) [∫_{−∞}^{∞} u exp(jΦ) exp(−jωx x) dx + ∫_{−∞}^{∞} u exp(−jΦ) exp(−jωx x) dx], (5.29)
It is seen that each quadrature hologram gives two conjugate images described by the
appropriate terms in Eqs (5.29) and (5.30).
In a complex hologram, the first quadrature component gives two conjugate
images in Eq. (5.29), while the second component reconstructs the images
V2(ωx) = (1/2) [∫_{−∞}^{∞} u exp(jΦ) exp(−jωx x) dx − ∫_{−∞}^{∞} u exp(−jΦ) exp(−jωx x) dx]. (5.31)
The first terms in Eqs (5.29) and (5.31) are identical, while the second terms differ
in the phase by the value π . A combined reconstruction after summing up the fields
in Eqs (5.29) and (5.31) yields one pair of conjugate images that enhance each other
and another pair of images that annihilate each other; so we eventually have
V(ωx) = ∫_{−∞}^{∞} u exp(jΦ) exp(−jωx x) dx. (5.32)
The complex-valued function V (ωx ) describes the only image reconstructed from a
complex hologram [145]. The image intensity can be defined as
To illustrate, consider the case when the object is a point and the parameters θ1 and
ϕ1 are equal to π/2. For small values of ϕ (ϕ < 1 rad) and ϕ = Ωx/vt, where vt is
the velocity of the recording transparency, Eq. (5.26) reduces to

H(x) ≅ u exp[j(4π/λ1)(rΩ/vt)x]. (5.34)
Since the hologram is recorded in a finite time interval, τ ∈ [−T/2, T/2], Eq. (5.28)
yields

V(ωx) = ∫_{−vtT/2}^{vtT/2} H(x) exp(−jωx x) dx. (5.35)

The substitution of Eq. (5.34) into Eq. (5.35) and the integration give

V(ωx) = 2σ sin{[(4π/λ1)(rΩ/vt) − ωx] vtT/2} / {(4π/λ1)(rΩ/vt) − ωx}. (5.36)
Clearly, this function is of the sin z/z type and has a maximum at ωx =
(4π/λ1)r(Ω/vt), which corresponds to the image of the point.
Digital reconstruction reduces to the calculation of the integral in Eq. (5.28) and
has no zeroth order. So a complex hologram can be formed without introducing
the carrier frequency, which decreases the amount of data to be processed: a single
quadrature hologram requires, at least, twice as many discrete counts because of the
high carrier frequency.
Optical reconstruction produces the zeroth order, in addition to a single image,
because of the presence of the reference level of Hr (Eq. (2.20)). During the process-
ing of a complex hologram recorded without the carrier frequency, the zeroth order
overlaps the image. Their spatial separation can be made by just introducing the car-
rier frequency. Then the use of a complex hologram makes no sense, since one does not
have to remove the conjugate image. Besides, the optical reconstruction of a complex
hologram is hard to make due to the strict requirements on the adjustment of the two-
channel processing suggested in Reference 35. Thus, complex microwave holograms
should be recorded without introducing the carrier frequency and reconstructed only
digitally.
Figure 5.6 Microwave images reconstructed from Fourier holograms: (a) quadra-
ture hologram, (b) complex hologram with carrier frequency, (c) com-
plex hologram without carrier frequency and (d,e,f) the variation of the
reconstructed image with the hologram angle ψs (complex hologram
without carrier frequency)
The carrier frequency was not needed in this case, as is clearly seen in Fig. 5.6(c),
showing the image reconstructed from a complex hologram recorded without the
carrier frequency.
Figure 5.6(d–f) presents the variation of the reconstructed image with the holo-
gram angle. The comparison of these results supports the above conclusion that
there is an optimal size of the synthesised aperture. As the angle ψS becomes larger,
the resolution increases to a certain limit, beyond which distortions arise in the
image structure. The resolving power of this technique estimated from the results
of the digital simulation is ∼λ1 .
Currently, there are two methods used in microwave Fourier holography. One is
based on the recording of a single quadrature phase-amplitude hologram of the type
described by Eq. (5.10) with the carrier frequency and optical image reconstruction.
The other method records a complex hologram of the type described by Eq. (5.26)
without introducing the carrier frequency but using a digital image reconstruction.
The application of the first method involves some problems associated with the
use of an anechoic chamber (AEC), because the linear displacement of the object
needed for introducing the carrier frequency leads to decompensation of the chamber. So we
Figure 5.7 The algorithm of digital image reconstruction: the quadrature holograms H1 and H2 are input and normalised, the synthesis interval is selected, the data are interpolated, and a fast Fourier transform yields V, from which the intensity W = |V|² is computed
recommend the second technique when one uses an anechoic chamber. We shall discuss
some of the results obtained by the second method.
Figure 5.7 illustrates the algorithm of digital image reconstruction, which operates
as follows. The setting of discrete data is followed by their normalisation, that is, the
data are reduced to the variation range [−1, 1]. The hologram is usually recorded for
a full 2π rad rotation; so for the subsequent processing, one selects a series of counts
in such a way that their number describes the optimal aperture and their position in
the array corresponds to the required aspect. An interpolation unit makes it possible
to reduce the number of signal records to 2^m, where m is a natural number. The image
reconstruction is performed by a Fourier transform unit using the FFT algorithm for
the complex-valued function H(x). Arrays of Re(V) and Im(V) numbers that define
the image, whose intensity is found as W = Re²(V) + Im²(V), are produced at the
unit output.
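A minimal digital sketch of this pipeline follows. The quadrature records H1 and H2 are assumed to be given as real arrays (for instance, modelled as in Eqs (5.10) and (5.25)); the normalisation, interval selection and interpolation steps mirror Fig. 5.7, while the window bounds are arbitrary.

```python
import numpy as np

def reconstruct(H1, H2, start, length):
    """Sketch of the Fig. 5.7 pipeline; sizes and bounds are illustrative."""
    # Normalisation of the quadrature records to [-1, 1]
    H1 = H1 / np.abs(H1).max()
    H2 = H2 / np.abs(H2).max()
    # Complex hologram of Eq. (5.26) and selection of the synthesis interval
    H = (H1 + 1j * H2)[start:start + length]
    # Interpolation to 2**m samples for the FFT
    m = int(np.ceil(np.log2(H.size)))
    x_old = np.linspace(0.0, 1.0, H.size)
    x_new = np.linspace(0.0, 1.0, 2**m)
    H = np.interp(x_new, x_old, H.real) + 1j * np.interp(x_new, x_old, H.imag)
    # Image reconstruction, Eq. (5.28), and intensity W = Re^2(V) + Im^2(V)
    V = np.fft.fftshift(np.fft.fft(H))
    return V.real**2 + V.imag**2
```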
Figure 5.8 presents the results of digital processing of 1D complex Fourier holograms
recorded experimentally in an anechoic chamber. The image intensity is
plotted in relative units along the y-axis and its linear dimension along the x-axis.
The object is a metallic sphere of radius 0.3λ1 , rotating along a circumference of
radius 3λ1 . The positions of the point image in Fig. 5.8(a–c) are different and vary
with the object aspect ψ0 as shown schematically in each figure.
Figure 5.8 Images of a point object reconstructed from experimental 1D complex Fourier holograms: the intensity W (rel. un.) versus the linear coordinate r (cm); (a) ψ0 = π/12, ψS = π/6
The methods we have discussed have some advantages and limitations. The
recording of single quadrature holograms is made in one channel but requires that
the carrier frequency should be introduced in one way or another. The recording of
complex holograms does not require the carrier frequency but it is more complicated
because the channels must have a strict quadrature character, their parameters must
be identical, and the measurements must be well synchronised. However, the record-
ing errors associated with these characteristics of a two-channel system can be easily
eliminated by the processing. (We have mentioned above that complex microwave
Fourier holograms should be processed only digitally.) The image reconstruction
from quadrature holograms can be made both digitally and optically. The possibil-
ity of recording a hologram in a form suitable for digital processing increases the
dynamic range of the system. It does not then need the use of sophisticated units.
Second, since the radar is a coherent system, it seems important to define the
discretisation step δθ of the θ angle as the target aspect changes. The criterion for
choosing a δθ value can be formulated as follows: the phase shift of the echo signal
from the point scatterer most remote from the target centre of mass should not be
larger than π when the target aspect changes by δθ . This criterion is written as
This expression is valid for relatively narrowband signals, whose spectral width is
much less than the carrier frequency. Otherwise, one should substitute λc in Eq. (6.2)
by the wavelength of the highest frequency component in the signal spectrum.
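Reading this criterion literally – a two-way phase change (4π/λc)rmaxδθ not exceeding π for the most remote scatterer – gives δθ ≤ λc/4rmax; this reconstruction of Eq. (6.2) and the numbers below are ours:

```python
lam_c, r_max = 0.03, 5.0       # carrier wavelength (m), maximum scatterer radius (m); sample values
d_theta = lam_c / (4 * r_max)  # our reading of Eq. (6.2)
print(d_theta)                 # 1.5e-3 rad: the largest admissible aspect step
```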
It is worth noting that the method of synthesising the so-called unfocused aperture
is a particular case of the above processing algorithm for the frequency domain. The
movement of a point scatterer along an arc is approximated by the movement along a
tangent to it. By substituting v = y cos θ − x sin θ into Eq. (2.40) and using sin θ ≈ θ
and cos θ ≈ 1 − θ 2 /2, we get
S(f) = H(f) ∫∫_{−∞}^{∞} g(x, y) exp[j(kc + k)θ²y] · · · dx dy.
If we eliminate the squared phase term, it will be clear that the ĝ(x, y) function can be
reconstructed by an inverse Fourier transform (IFT) over the rectangular raster which
has replaced the respective region of the polar raster. This approximation works well
only if the aspect variation during the data acquisition was small.
Let us discuss now the processing algorithm for the space domain, or the convo-
lution algorithm. For this, Eq. (2.48) will be transformed from the Cartesian to polar
coordinates:
ĝ(x, y) = ∫_0^π dθ ∫_{−∞}^{∞} Sθ(fp)|fp| exp[j2πfp r cos(θ − ϕ)] dfp. (6.3)
The inner integral in Eq. (6.3) represents the IFT of the product of fp and the function
defined by expression (2.43). The result is the convolution of the quantity F −1 {Sθ ( fp )}
with the so-called kernel function q(v) = F −1 {|fp |}. If one uses the window function
F( fp ) to reduce the effect of high-frequency spectral noise, one gets
The result of the integration with respect to the variable fp in Eq. (6.3) using Eq. (6.4)
is known as a convolutional projection. It can be used for making a back projection
procedure:
ĝ(x, y) = ∫_0^π ξθ[r cos(θ − ϕ)] dθ. (6.5)
This procedure implies the integration of the contribution of each convolutional projection
ξθ(·) to the resulting image. The substitution of the integral in Eq. (6.5) by the
Riemann sum gives

ĝ(xi, yj) = Σ_{m=0}^{M−1} ξθ[r(xi, yj, mδθ)] δθ, (6.6)

where

r(xi, yj, mδθ) = (xi² + yj²)^(1/2) cos[mδθ − arctan(xi/yj)]. (6.7)
The latter expression is used to find (by interpolation) the contribution of the convo-
lutional projection obtained at the mth target aspect to each of the (xi , yj ) pixels of
the rectangular image grid.
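A compact sketch of the convolution (back projection) algorithm of Eqs (6.3)–(6.7) follows; the band-limited |fp| ramp, the grid and the normalisation are illustrative choices.

```python
import numpy as np

def backproject(projections, thetas, n):
    """Convolutional back projection, cf. Eqs (6.3)-(6.7); sizes illustrative.

    projections: (M, K) array, one record per aspect theta_m.
    Returns an (n, n) estimate of g(x, y) on the square [-1, 1] x [-1, 1].
    """
    M, K = projections.shape
    ramp = np.abs(np.fft.fftfreq(K))            # |f_p| kernel, cf. Eq. (6.4)
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs)
    t = np.linspace(-1.0, 1.0, K)               # projection axis
    img = np.zeros((n, n))
    for m, theta in enumerate(thetas):
        # convolutional projection xi_theta: filtering in the frequency domain
        xi = np.fft.ifft(np.fft.fft(projections[m]) * ramp).real
        # r cos(theta - phi) = x sin(theta) + y cos(theta), cf. Eq. (6.7)
        s = X * np.sin(theta) + Y * np.cos(theta)
        img += np.interp(s, t, xi) * (np.pi / M)  # Riemann sum, cf. Eq. (6.6)
    return img
```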
An important advantage of the convolution algorithm is the possibility of processing
data as they become available, because the contribution of every projection to
the final image is computed individually.
If the transmitter signal contains a finite number L of discrete frequencies, Eq. (6.3)
will take the form

ĝ(x, y) = Σ_{l=1}^{L} (4πfpl/c) ∫_0^π Sθ(fpl) exp[j2πfpl r cos(θ − ϕ)] dθ (6.8)
and the processing algorithm reduces to summing up 1D integrals with respect to
the variable θ . We can make computations with formula (6.8) in two ways. One is to
calculate the integral for every value of (xi , yj ) and the other is to solve the subintegral
expression for the M number of aspects for every frequency value, followed by
interpolation, as in the common convolution algorithm.
Thus, radar imaging of extended compact targets by inverse aperture synthesis
can be made by using a number of algorithms well known in computerised tomog-
raphy. The application of the convolution algorithm of the back projection method
allows a reduction in the imaging time, as compared with the time of reconstruction
in the frequency domain, due to the processing of individual echo signals. The inter-
polation can be omitted in the case of discrete-frequency transmitter signals, giving
an additional reduction in the processing time.
Another important feature of an imaging radar is its coherence, so it provides more
information than conventional systems using computerised tomography. On the other
hand, coherence must be maintained in all of the radar units during the operation. This
circumstance also imposes restrictions on the minimum repetition rate of transmitter
pulses.
It has been shown in Chapter 5 that inverse aperture synthesis is the most promising
technique for imaging extended proper and extended compact targets with a high
angular resolution. The fact that such targets can be imaged during their arbitrary
motion makes it possible to use this technique in available radar systems (Chapter 9).
The conditions for microwave hologram recording are primarily determined by the
application of the images to be obtained. For example, if radar responses are studied
in an anechoic chamber (AEC) (Chapter 9), it is sufficient to use a 2D geometry
with an equidistant arrangement of the aspect angles. The target rotates uniformly
around the axis normal to the line of sight. By deviating the rotation axis from this
normal after every measurement run, one can, in principle, obtain 2D images even
with monochromatic radar pulses.
Figure 6.1 The aspect variation relative to the line of sight of a ground radar as a
function of the viewing time for a satellite at the culmination altitudes
of 31◦ , 66◦ and 88◦ : (a) aspect α and (b) aspect β
It follows from Eq. (2.34) that signal noise due to the presence of coordinate information
can be corrected by the receiver. The correction consists in selecting the time
strobe position in accordance with the delay 2|Ro|/c and in introducing the phase
factor exp[j2πf0(2|Ro|/c)] in the reference signal during the coherent sensing.
As a result of the compensation for the radial displacement of the satellite, the
family of spectra of video pulses must be represented as a microwave hologram. For
this, we go from time frequencies to space frequencies to get

S(fpo + fp) = F{Sv(ct/2)} = H(fp) ∫_V g(rno) exp[−j2π(fpo + fp)d(t)] drno, (6.10)
where F{·} is the Fourier transform operator, W ( fp ) = F{w(v)} is the space frequency
spectrum of the transmitter pulse, fpo = 2fo /c is the space frequency corresponding to
the spectral carrier frequency, 2fl /c < fp < 2fu /c is the space frequency determined
over the whole frequency bandwidth of the transmitter pulse, H ( fp ) = W ( fp )K( fp )
is the aperture function, and K( fp ) is the transfer function of the filter for the range
processing of video pulses.
The above analytical description of video pulse spectra in terms of space fre-
quencies has not changed the r̂no (t) function, which is still considered to be a time
function at the synthesis step. Now the pair of angular coordinates θ , B in the 3D
frequency space (Fig. 6.2(b)) will be compared at every moment t of the synthesis
step. The microwave hologram function can be presented as a 3D Fourier transform
in the spherical coordinates fp , θ , B:
S(fp) = H(fp) ∫_V g(rno) exp(−j2πfp · rno) drno, (6.11)
where fp = ( fpo +fp )e(θ, B) is the radius vector of the space frequency in the frequency
domain.
The geometrical relations for the recording of such a hologram will be derived for
two typical cases of ground radar viewing of orbiting satellites. Fig. 6.2(a) shows the
viewing geometry and Fig. 6.2(b) illustrates fragments of the holograms obtained.
The angular position of the radar line of sight (RLOS) is described by the azimuthal
angle θ = α − 3π/2 and the polar angle β with respect to the whole body-related
coordinate system xyz. The line of sight is represented in space as a line across a
unit sphere with the centre at the coordinate origin. The arrangement of the hologram
pixels in the frequency domain is defined relative to the fx fy fz coordinates by the
angles θ, B and the radial fp coordinate (Fig. 6.2(b)). The hologram recording should
meet the conditions θ = θ ∗ and B = β ∗ , where θ ∗ and β ∗ are the estimates of the θ
and β angles.
In the first of the above cases, a narrowband radar tracks a satellite, stabilised
by the body-related coordinates along the three axes, during its translational motion
along the orbit. The line of sight turns relative to the satellite to describe a curve
on the unit sphere (the left side of Fig. 6.2(a)), which represents an arc in the xy plane
if the radar is located in the orbital plane, or a 3D curve in all other cases. If the radar
transmits a continuous wave, the hologram reproduces the shape of this line on the
sphere fpo in the frequency domain (Fig. 6.2(b)).
If a radar transmits a pulsed signal with the repetition rate Fr or if a continuous echo
signal is appropriately discretised, a hologram will represent a series of individual
Figure 6.2 Geometrical relations for 3D microwave hologram recording: (a) data
acquisition geometry; a–b, trajectory projection onto a unit surface
relative to the radar motion and (b) hologram recording geometry
samples separated by δfψ = fpoθ̇* cos β*/Fr, where θ̇* = dθ*(t)/dt is the angular
velocity of the satellite rotation in the orbital plane.
In the second case, one gets a wideband hologram of a satellite stabilised by
rotation of the body-related coordinates around the z-axis (the right side of Fig. 6.2(b)).
During the tracking, the angle between the line of sight and the rotation axis changes
slowly by the value Δβ = β2 − β1 with β̇ ≪ θ̇. The interception of the unit sphere
surface by the line of sight forms a spiral confined between two conic surfaces with the
half angles π/2 − β1 and π/2 − β2 at the vertex. The resulting hologram represents a
multiplicity of real beams that form a spiral band (Fig. 6.2(b)). The band is transversely
bounded by two spherical surfaces and is ‘fitted’ between two conical surfaces, with
B1 = β1 and B2 = β2. The radii of the spheres are equal to the lower fpl and upper fpu
space frequencies of the hologram. Figure 6.2(b) shows a fragment of such a hologram
bounded by the azimuthal step Δθ, while the satellite makes θ̇Δt/2π rotations
during the synthesis time step Δt. The adjacent hologram slices synthesised
during consecutive rotations are spaced by the frequency step δfu = 2πfpoβ̇*/θ̇*.
Under the condition

δfu^(−1) ≥ D,
where D is the maximum linear size of a satellite, the resolution can be achieved
by the synthesis in the plane intercepting the z-axis. The resulting 3D wideband
hologram containing, at least, several slices will be referred to as a surface hologram.
A surface hologram is usually synthesised by a wideband radar, when tracking a
satellite stabilised along the three axes, or when dealing with a model target in an
AEC. In the latter case, a hologram lies entirely in the fx –fy plane.
Every beam of a wideband microwave hologram corresponds to a single echo
signal and is made up of a certain number of discrete pixels, L, since digital hologram
processing implies discretisation of the echo pulse spectrum.
It is clear from the foregoing that the conditions for recording a hologram of a target
performing a complex movement relative to an imaging radar are the compensation
for its radial displacement and the recording of the video signal spectrum in a form
adequate for the respective aspect variation, that is, in a spherical or polar geometry.
where Ho(fp) is a non-zero aperture function within the chosen boundaries fph of the
hologram (Fig. 6.2(b)):

Ho(fp) = rect(fp/fph) = {1, fp ∈ Vf; 0, fp ∉ Vf}, (6.13)
and Hr ( fp ) = exp[j2π( fpo + fp )|ra |] is the transfer function of the compensation step
of the target radial displacement.
The process of image reconstruction from a hologram described by Eq. (6.11) can
be represented as
ĝ(rno) = F^(−1){S(fp)Hf(fp)} = ∫_{Vf} S(fp)Hf(fp) exp(j2πfp · rno) dfp, (6.14)
where ho (rno ) = F −1 {Ho ( fp )} is a perfect impulse response which only describes the
image noise due to the finite diffraction limit, or to the limited size of the aperture
function Ho ( fp ).
Thus, the processing of an echo signal during the imaging includes two stages
(Fig. 6.3). The signal preprocessing is aimed at synthesising a Fourier hologram,
whose size and shape are determined by the transmitter pulse parameters and the
target aspect variation. The structure and composition of processing operations 1–5
are conventional radar operations and can be varied with the type of transmitter
[Fig. 6.3: echo signal → preprocessing (step 1): coherent detection (1), range processing (2), DFT (4), annihilation of target radial displacement (6) using the range estimate, spherical (polar) recording (7) using the aspect estimate → microwave hologram → radar image]
Figure 6.3 The sequence of operations in radar data processing during imaging
signal, the processing techniques used, and the tracking conditions. For example,
a monochromatic pulse does not require operations 2 and 4. When a signal with a
LFM is subjected to correlated processing, operations 1 and 2 coincide, and oper-
ation 4 becomes unnecessary. The compensation for the radial displacement of a
satellite during hologram recording in field conditions is a fairly complex problem
[8,10]. In an AEC, the latter operation reduces to the introduction of the phase factor
exp[j2πfpl(2Ro/c)], where fpl is the space frequency of the first spectral component
of the hologram and Ro is the distance between the antenna phase centre and the target
rotation centre [8,10]. Obviously, the phase factor is constant for a particular AEC.
A necessary operation specific to ISAR systems at the preprocessing stage is the
recording of the target aspect variation. It is assumed that each pixel on the hologram
is compared by a digital recorder with the family of coordinates defining its position
in the frequency domain fx fy fz (in the frequency plane fx –fy ) (see Fig. 6.2).
It is worth discussing a possible application of available processing algorithms
for image reconstruction from a microwave hologram.
The experience gained from the application of inverse aperture synthesis for imag-
ing aircraft and spacecraft as well as from the study of local radar characteristics has
stimulated the development of algorithms for processing echo signals by coherent
radars. A fairly detailed analysis of the algorithms can be found in Reference 8 and
in Chapter 2 of this book, so we shall discuss only the possibility of applying them
to the aspect variation of real targets.
It has been shown in Section 2.3.2 and in References 9 and 10 that the condi-
tions for tracking real targets differ from the conditions in which available algorithms
operate. First, discrete aspect pixels are not equidistant because of a constant rep-
etition rate of the transmitter pulses. Second, the angle between the RLOS and the
target rotation axis changes during the viewing. An inevitable result of the latter is
the consideration of a 3D character of the problem. Attempts at applying the 2D
algorithms discussed above to the processing of 3D data lead to essential errors in the
images [8]. The level of errors rises with increasing relative size of a target (the ratio
of the maximum target size to the carrier radiation wavelength) and with increasing
deviation from 90◦ of the angle formed by the line of sight and the target rotation axis.
To conclude, radar imaging should consider the viewing geometry, which requires
the use of a radically new approach to data processing. The approach should provide
3D microwave holograms and be able to overcome a non-equidistant arrangement of
echo pixels representing the aspect variation of space targets.
It has been shown earlier that image reconstruction from a microwave hologram
should generally include a 3D IFT of the hologram function. The obtained estimate
of ĝ(rno ) is a distorted representation of the target reflectivity function.
If there is no processing noise and the radial displacement has been perfectly
compensated, an error may be due to a limited bandwidth of the transmitter pulse
Figure 6.4 Partial holograms: ΔΨ ≈ Δθ cos B and ΔΨ ≈ ΔB; radial and transversal partial holograms
2D transversal PHs can be separated only from volume holograms. They are regions of spherical
surfaces with fpo = const.
If the angular discretisation of a hologram is uniform, the maximum angles
of 1D transverse, 2D and 3D PHs are chosen from the following considerations.
When a spherical coordinate grid (or a polar grid for plane holograms) is replaced
by a rectangular grid, the phase noise at the PH edges should not exceed π/2.
This criterion leads to the following restrictions:

ΔΨ ≤ (λ/D)^(1/2), (6.15)
ΔΨ ≤ c/(DΔf). (6.16)
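Both restrictions are easy to evaluate; the wavelength, target size and bandwidth below are sample values, and the smaller (more rigid) angle is the one to use.

```python
import numpy as np

c = 3e8
lam, D, df = 0.03, 10.0, 500e6  # wavelength (m), target size (m), bandwidth (Hz); sample values
print(np.sqrt(lam / D))         # Eq. (6.15): about 0.055 rad
print(c / (D * df))             # Eq. (6.16): 0.06 rad
```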
Figure 6.5 Subdivision of a 3D surface hologram into partial holograms: (a) radial,
(b) 1D partial transversal and (c) 2D partial
When choosing the PH angle, one should always follow the more rigid of
the above criteria. The restriction on the PH size is introduced in order to keep
the deviation of the hologram samples from the rectangular grid nodes within a
prescribed limit. The PH angles can be easily calculated analytically at a con-
stant or slightly varying value of one of the angles of the spherical coordinates
describing the PHs (Fig. 6.4(a)). In that case the PH boundaries will be close to
the coordinate surfaces. If both angles θ and B change markedly (Fig. 6.5), the
angular step ΔΨ should be found in the plane tangent to the PH.
Stage 2. Every PH should be subjected to a DFT providing a radar image with
the same dimensionality as that of the PH, while the resolution is determined by
its size.
Stage 3. The contributions of partial images to the integral image are computed.
When the dimensionalities of a PH and a partial image are the same, the pixels
of the latter are interpolated to those of the integral image. If the dimensionality
of the integral image is higher, the major procedure for the computation is that of
back projection [127].
Consider algorithms for the reconstruction of 2D images by processing narrow
and wideband surface holograms (Fig. 6.5) produced by a ground radar viewing a
three-axially stabilised satellite. With such algorithms we shall try to justify the specific features
of coherent summation of partial components: (1) the possibility of highlighting
partial regions of various shapes on a PH and their independent processing and
(2) the possibility to increase the resolution of the integral image as the individual
contributions of the partial components are accumulated and the diffraction limit
corresponding to the initial hologram size is achieved.
The above analysis allows the following conclusions to be drawn. The most general
approach to radar imaging of a satellite by inverse aperture synthesis, no matter
how it moves and what probing radiation is used, includes two stages of echo
signal processing. The preprocessing involves some conventional operations, the
compensation for the phase noise specific to coherent radars, and data recording
allows the aspect variation to produce a microwave hologram. The second stage is to
reconstruct the image by a special digital processing of PHs.
A procedure specific to preprocessing is the compensation for the phase shift due
to the radial displacement of a space target. In the case of an AEC, this operation is
replaced by the introduction of constant phase factors in the wideband echo signal. The
use of monochromatic transmitter pulses does not require this operation (Chapter 5).
The complex pattern of aspect variation of low orbit satellites requires a 3D
hologram with a non-equidistant arrangement of the aspect samples. Since there are
no adequate methods for processing such holograms, we have designed a way of image
reconstruction by coherent summation of PHs. This reduces the digital processing of
a hologram of complex geometry to a number of simple operations. A hologram
is subdivided into PHs, from which partial images are reconstructed using a fast
Fourier transform (FFT). The contributions of the partial images to the integral image
are computed.
We should first change Eq. (2.38) generally relating the hologram and image functions
to the Cartesian coordinates necessary for a DFT:
The substitution of Eq. (6.18) into Eq. (2.38) reduces it to the conventional 3D Fourier
transform. However, it is impossible to apply it directly to a microwave hologram
recorded in spherical coordinates (Fig. 6.2(b)). The transition to pixels located at rect-
angular grid nodes is considered as an interpolation problem. Even a first-order inter-
polation for a 2D case would require large computational resources. Besides, any noise
arising from the interpolation would lead to large errors in the reconstructed image.
The procedure of coherent summation of partial components will simplify this
problem if we use the reverse order of computational operations: a number of DFT
operations and the interpolation of their results (partial images) to the rectangular
grid nodes of the integral image. Of special practical importance is the case when a
PH and its partial image have a lower dimensionality than the integral image. This
is due to a higher computation efficiency of the algorithms used. The interpolation
ĝ(r, ν) = ∫_{θi}^{θf} ∫_{fpl}^{fpu} S(fpo + fp, θ)|fp| exp[j2π(fpo + fp)r cos(ν − θ)] dfp dθ, (6.25)

where θi and θf are the initial and final values of the angle θ of the hologram (Figs 6.6
and 6.7), and fpl = fpo − Δfp/2 and fpu = fpo + Δfp/2 are the lower and upper boundaries
of the space frequency band along the hologram radius.
It is easier to start the analysis of processing algorithms with a simple case of
narrowband microwave holograms. The limit of expression (6.25) at Δfp → 0 is

ĝ(r, ν) = fpo ∫_{θi}^{θf} S(fpo, θ) exp[j2πfpo r cos(ν − θ)] dθ. (6.26)
This expression coincides with the formula for the CCA for a narrowband signal
[94]. When an image is reconstructed by this algorithm, circular convolution is per-
formed for every sample of the polar coordinate r in the image space with respect
to the parameter θ of the hologram function and the phase factor. The contribution
of all hologram samples to every (r, ν) node of the image polar grid is computed. If
the satellite aspect changes non-uniformly, the samples are arranged along the holo-
gram circumference with a variable step, so a discrete circular convolution becomes
impossible.
Let us single out a series of adjacent regions on a hologram, or PHs, shown in
Fig. 6.6(a), with an angle satisfying the condition of Eq. (6.15). The convolution step
of Eq. (6.26) over the whole hologram angle can be represented as a sum of integrals,
each taken over a limited angle step Δθ:

ĝ(r, ν) = fpo Σ_{m=1}^{M} ∫_{Δθ} Sm(fpo, θ) exp[j2πfpo r cos(ν − θ)] dθ, (6.27)

where Sm(fpo, θ) is the mth PH and M is the total number of such holograms.
We now introduce the Cartesian xm ym coordinates (Fig. 6.6(b)) for each mth
PH with the origin O coinciding with that of the rectangular x–y coordinates of the
integral image. The xm -axis is parallel to the tangent to the arc connecting the mth
PH pixels at its centre. Since the microwave hologram in question is 2D, let us
introduce the azimuthal coordinate fpθ = fpo θ to describe it in the frequency fx –fy
plane (Fig. 6.6(a)), in addition to the radial polar coordinate fp . With xm = r sin θm and
ym = r cos θm , the transformation of the phase factor under the integral of Eq. (6.27)
will give
ĝ(x, y) = Σ_{m=1}^{M} ∫_{fθm−Δfθ/2}^{fθm+Δfθ/2} S(fpo, θ) exp(j2π fθ xm) dfθ Φm,  (6.28)

where dfθ = fpo dθ is the differential of the space frequency fθ, while fθm is the space frequency corresponding to the mth PH centre.
It is clear from Eq. (6.33) that the complex phase factors varying with the xm , ym
coordinates and located at the integral image point corresponding to the position of
the scatterer response have the maximum values equal to unity. The contribution
of the PH to the integral image is defined by the product of the local radar target
characteristic of the scatterer and the sin(x)/x-type of function. Therefore, the PHs
are summed equiphasically at the point xn = rno sin ϑn, yn = rno cos ϑn, while at other points of the image they mutually cancel.
The width of the major lobe of the scatterer in the partial image (a function of the
sin(x)/x-type) is determined by the PH length fθ or by its angle θ (Fig. 6.6(a)).
The limiting value of the response width in the partial image derived from Eq. (6.14) is expressed by the inequality δx ≥ 0.5(λD)^{1/2}. Since D ≫ λ, the major lobe width is much greater than the transmitter pulse wavelength.
It follows from this treatment that the mth partial component of the integral image
may be regarded as a 2D plane wave superimposed on the image plane. The wave front
is normal to the ym -axis and its period is equal to the half wavelength of the trans-
mitter pulse. The initial wave phase (along the xm -axis) is determined by the phasor
exp[ jπ(xm − xmn )fθ ] in such a way that a positive half-wave always arrives at the
scatterer’s xmn , ymn position. The wave amplitude along the xm -axis is described by a
sin(x)/x function with a maximum at the point xm . For this reason, the partial com-
ponent has a ‘comb’ elongated by the back projection of the partial image parallel to
the ym -axis.
Note that the resolution of the integral image is defined by the transmitter pulse wavelength rather than by the response width in the partial image. The reduction in the PH size
from the maximum value prescribed by Eq. (6.15) to a single sample should not affect
the result of summation in a PH. Therefore the synthesised aperture can be focused
accurately over the whole image field. Keeping in mind
lim_{Δfθ→0} ∫_{fθm−Δfθ/2}^{fθm+Δfθ/2} S(fpo, θ) exp(j2π fθ xm) dfθ = fpo S(fpo, θ) dθ,  (6.34)
we obtain from Eq. (2.4) the algorithm for coherent summation of a PH made up of
individual samples of the initial hologram:
ĝ(x, y) = fpo Σ_{m=1}^{M} Sm(fpo, θ) Φm.  (6.35)
The coherent summation algorithm for hologram samples essentially represents a
particular case for 1D transverse (azimuthal) partial images described by Eq. (6.28).
However, each has its own specificity.
The major advantage of the algorithm for hologram samples is the absence of
phase errors due to either the PH approximation or the non-equidistant distribu-
tion of samples. As a consequence, this algorithm is applicable to the processing
of microwave holograms with any known sample arrangement. On the other hand,
the coherent summation algorithm for partial images does not require excessive com-
puter resources because the exhaustive search of the raster pixels in the integral image
during the computation of the partial contribution is made for a group of PH samples
rather than for every single hologram sample.
Figures 6.8 and 6.9 compare the computational complexity of the two algorithms as a function of the target size for a narrowband microwave hologram. The criterion for the degree of complexity is taken to be the algorithmic time of the programme realisation. The unit of measure of the algorithmic time is, in turn, taken to be 1 flop, that is, the time for one elementary summation/multiplication of two floating-point operands. So we have 1 Mflop = 10⁶ flops. The estimations
of the computational complexity and the programme realisation time have been made
for a 2D image of 512 × 512 raster pixels in size and 2D microwave holograms with
a 120◦ angle.
When going from a narrowband hologram to a wideband one, we can just suggest
that the number of spectral components increases from 1 to L. As the size of a one-digit
image and the hologram discretisation step are inversely proportional to each other,
the minimal number of spectral components at a given pulse frequency bandwidth
must be proportional to the target size. Table 6.1 presents the L values for various
PHs as a function of the maximum target size. The computations have been made for a 0.04 m carrier (centre) wavelength of the transmitter pulse spectrum and the ratio of the image field size to the maximum target length k = 1.5. One can easily see that
the number of azimuthal PH samples rises with the target size as long as the limiting
PH angle obeys the inequality (6.15).
When a target is rather large and the relative frequency bandwidth is µ = Δf/f0 (the lower right-hand side of Table 6.1), the inequality (6.16) imposes a more rigid restriction on the PH size. Then both the PH size and its discretisation step decrease inversely with the target size. Therefore, the number of PH azimuthal samples at a given transmitter pulse bandwidth Δf remains constant with increasing target size D.
We shall start the discussion of digital processing of 2D wideband holograms with the algorithm for coherent summation of 1D azimuthal partial images.
[Figure 6.8: the computational complexity versus the target dimension (m) for a narrowband hologram: (a) Kpar im, Mflop·10³; (b) Khol sam, Mflop·10³]
where fpl, fθl are the radial and azimuthal space frequencies and Φml is the coherent processing phasor. By summing up the L number of PHs in each of the M number of partial angle steps, we get
ĝ(x, y) = Σ_{m=1}^{M} Σ_{l=1}^{L} fpl ∫_{fθm−Δfθ/2}^{fθm+Δfθ/2} Sm(fpl, θ) exp(j2π fθ xm) dfθ Φml.  (6.37)

The processing sequence is as follows:
• the L number of azimuthal PHs are selected in each mth partial angle step;
• the DFT is applied to each PH to get the L number of 1D partial images;
• the L number of partial images in every mth group are back projected and the obtained contributions are multiplied by the coherent processing phasor Φml.
The analysis of Eq. (6.37) shows that the consecutive multiplication of the partial-image contributions by the phasor Φml can be supplemented with a DFT. The result is a new processing algorithm – the coherent summation algorithm for 2D partial images:
ĝ(x, y) = Σ_{m=1}^{M} ∫_{−Δfp/2}^{Δfp/2} {∫_{fθm−Δfθ/2}^{fθm+Δfθ/2} |fp| Sm(fp, θ) exp(j2π fθ xm) dfθ} exp(j2π fp ym) dfp Φm.  (6.38)

The algorithm comprises the following operations:
• the M number of 2D PHs with an angle defined by the conditions of Eqs (6.15)
and (6.16) are selected in the initial microwave hologram;
• each PH is subjected to a 2D DFT to produce the M number of 2D partial images.
All of these have a common centre which coincides with the integral image centre
and are rotated by the angle θ relative to one another;
• the contribution of each partial image to the integral image is calculated using
a 2D interpolation and the result is multiplied by the coherent processing
phasor.
The last operation generally requires large computer resources, so we shall refer to the coherent summation algorithm for 2D partial images further on only for the sake of theoretical completeness.
The advantages of coherent summation of individual samples discussed above
for narrowband holograms are fully valid for wideband holograms as well.
Equations (6.37) and (6.34) yield

ĝ(x, y) = Σ_{m=1}^{M} Σ_{l=1}^{L} fpl Sm(fpl, θ) Φml Δθ.  (6.39)
Among the wideband processing algorithms, the one described by Eq. (6.39) is the simplest, but it requires a large number of arithmetic operations because the processing is made online. The computational efficiency of this algorithm can be raised by using a 1D DFT along the mth hologram beam:
ĝ(x, y) = Σ_{m=1}^{M} ∫_{−Δfp/2}^{Δfp/2} S(fp, θ)|fp| exp(j2π fp ym) dfp Φm Δθ.  (6.40)
[Figure 6.9: the complexity ratios versus the target dimension (m): (a) Kpar im/KCCA; (b) Khol sam/KCCA]
It is clear that the time for a wideband hologram processing by the above algo-
rithms, estimated from the product of the computational complexity and the time
for an elementary multiplication/summation operation, is excessively long, so one
should consider the possibility of separate, independent processing of PHs in order
to considerably reduce this parameter.
[Figure 6.10: the ratio Khol sam/Kpar im versus the target dimension (m) for relative bandwidths 0.02, 0.04, 0.06 and 0.08]
[Figure 6.11: the ratio Kpar im/Kzad par im versus the target dimension (m) for relative bandwidths 0.02, 0.04, 0.06 and 0.08]
Figure 6.12 The transformation of the partial coordinate frame in the processing
of a 3D hologram by coherent summation of transverse partial images
line. However, a digital simulation of the aspect variation of a real target has shown
that the phase error is negligible. Therefore, we shall assume the PH angle to be
defined by the conditions of Eqs (6.15) and (6.16).
To describe the positions of PH samples in the fxm –fym plane, we introduce the
angle ψ and write down the partial Cartesian coordinates of the pixels as fxm =
fp sin ψ, fym = −fp cos ψ. An acceptable processing algorithm can be obtained if
the fxm –fym plane is superimposed with the fx –fy plane which corresponds to the x–y
plane in the image space containing the integral image. The superposition operation
will be made by two consecutive rotations of the partial coordinates (Fig. 6.12):
• the rotation by the angle ξm = arctg(dB/dθ)|θ=θm round the f″ym-axis gives the polar f′xm, f′ym, f′zm coordinates, whose f′xm-axis lies in the fx–fy plane;
• the rotation of the polar f′xm, f′ym, f′zm coordinates by the angle Bm round the f′xm-axis gives the sought-for polar f″xm, f″ym, f″zm coordinates.
These transformations of the polar coordinates result in the following expression for
the scalar product at the mth partial angle step:
fp rno = −fp bm x′m sin ψ + fpe y′m cos ψ  (6.44)

with

x′m = xm cos ζm + ym sin ζm,
y′m = −xm sin ζm + ym cos ζm,
bm = (1 − sin²ξm cos²Bm)^{1/2},
fpe = fp cos Bm.

In turn,

sin ζm = sin ξm sin Bm/bm,
cos ζm = cos ξm/bm.
Thus, the variation of the polar angle B during hologram recording introduces two
specific features in the coherent summation algorithm for transverse partial images.
One is the necessity of an additional rotation of the partial xm, ym coordinates by the angle ζm round the zm-axis. The other is a change in the partial image scale along the xm- and ym-axes by factors of bm and cos Bm, respectively.
Let us now derive an expression for the coherent summation algorithm for trans-
verse partial images in the case of wideband pulses. This can be done by substituting
Eq. (6.44) into Eq. (6.14) and reducing the result to the form:
ĝ(x, y) = Σ_{m=1}^{M} Σ_{l=1}^{L} fple ∫_{fψm−Δfψ/2}^{fψm+Δfψ/2} S(fp, θ, B) exp(j2π fψ x′m) dfψ Φmlθ,  (6.45)
When a target moves in a straight line normal to the radar line of sight, the inverse
synthesis of a tracking aperture can be regarded in terms of Doppler information
processing, in a way similar to the processing aimed at a high azimuthal resolution
by a side-looking radar. Clearly, an inverse aperture can then be considered as a
linear antenna array performing a periodic time discretisation of the radiation wave
front. This is the so-called antenna approach, and its capabilities are discussed in
Reference 139. The author analysed an equivalent array made up of (2N + 1) records
of target movement across a real ground antenna beam of sufficient width. It was
shown that the azimuthal resolution Δℓ at range R0 along the ϕ direction could be defined as

Δℓ = λR0/[2VTr(2N + 1) cos ϕ],  (7.1)

where λ is the transmitter pulse wavelength, V is the target velocity, ϕ is the angle between the line directed to the target and the normal to the synthesising aperture, and Tr is the repetition period of the transmitter pulses.
Inverse aperture synthesis for a linearly moving target can also be examined
in terms of a holographic approach. This was first done by H. Rogers in studies of the ionosphere [85], making use of D. Gabor's ideas of holography. Rogers described
a method for hologram recording of microwaves reflected by ionospheric inho-
mogeneities. The principle of this method is as follows. When an ionospheric
inhomogeneity moves, the resulting diffraction pattern on the earth surface also moves
across the receiver aperture. A signal that has been sensed is recorded on a photofilm
as a hologram. What is actually recorded is the wave front, and one can reconstruct
the inhomogeneity image from the hologram. For these reasons, E. Leith considered
Rogers’ device to be truly holographic rather than quasi-holographic.
Holographic concepts were successfully introduced in radar imaging by
W. E. Kock [71] who showed that echo signals from a linearly moving target, recorded
by the receiver of a coherent continuous pulse radar, were structurally equivalent to a hologram.
Assuming that f(x) is the distribution of the complex scattering amplitude (the target
reflectivity) along the cross range x-coordinate, ϕ is an angle characterising the aspect
variation, and z(ϕ) is an echo signal, we have
z(ϕ) = ∫ f(x) exp[−j(4π/λ)ϕx] dx.  (7.2)
After the reconstruction of the radar image, which reduces to a Fourier transform of the echo signal (7.2) with the weight function w(ϕ), the image intensity is

|ν(s)|² = ∫∫ f(x1)f*(x2) U(s − x1, s − x2) dx1 dx2 + η(x),  (7.3)

U(s1, s2) = ∫∫ w(ϕ1)w*(ϕ2) exp{j[ψ(ϕ1) − ψ(ϕ2)] + (j4π/λ)(s1ϕ1 − s2ϕ2)} dϕ1 dϕ2,  (7.4)
λ
where s is the cross range coordinate in the image plane, the sign ∗ indicates complex
conjugation, η(x) is complex noise on the image, and U (s1 , s2 ) is the cross correlation
function of the hologram.
The statistical characteristics |ν(s)|2 and U (s1 , s2 ) will be analysed on the
assumption that f (x) is a sum of the δ-functions of point scatterers and ψ(ϕ) is
defined by the normal distribution law. Consider the average U (s1 , s2 ) value over
the phase fluctuations ψ(ϕ), taking them to be Gaussian. With the formula for the
characteristic function and the expansion of ρ(ϕ1 −ϕ2 ) into a Taylor series at σ 2 1,
we get
exp{ j|ψ(ϕ1 ) − ψ(ϕ2 )|} = exp{−σ 2 [1 − ρ(ϕ1 − ϕ2 )]}
∼
= exp{−σ 2 /22 (ϕ1 − ϕ2 )2 },
where σ² is the phase noise dispersion, ρ(ϕ1 − ϕ2) is the correlation factor and Θ² is the quantity inverse to the second derivative of the correlation factor at zero, which describes the angle correlation step of the target aspect variation.
Assuming w(ϕ) = exp[−ϕ²/(2θ²)], where θ describes the angle step of the synthesis, we find
U(s1, s2) = {λ²C / (64π[(ds² + dc²)² − dc⁴]^{1/2})} × exp{−[(ds² + dc²)/(2[(ds² + dc²)² − dc⁴])] · [(s1 − s2)² + (2ds²/(ds² + dc²)) s1 s2]},  (7.5)
where C1 is the same factor of the exponent as in Eq. (7.5), A is the signal amplitude,
and x is the scatterer coordinate.
For a target composed of a multiplicity of scatterers, each scatterer will be represented by a peak in the image described by Eq. (7.6). The image position of each scatterer along the s-coordinate corresponds to its real position along the x-coordinate in the target plane. Moreover, every pair of scatterers will be represented in the image function by
an interference term
U(s − x1, s − x2) = C1 Re{A1A2* exp[−(s − (x1 + x2)/2)²/(ds² + 2dc²) − (x1 − x2)²/(4ds²)]},  (7.7)
The additional term in Eq. (7.7) defines a peak located halfway between the images of the respective scatterers; it has the same width as the peak of any single scatterer, and its magnitude is governed by the ratio of the interscatterer distance to the resolution step at zero phase noise. If this ratio is large, the interference term due to the superposition of the side lobes of individual pixel images is negligible compared with the average image intensity.
Under the conditions of partial signal coherence, the real resolution can be found from the 0.5 level of the maximum intensity |ν(s)|²:

d′s = 2Δs|_{|ν(s)|²=0.5} = C2(ds² + 2dc²)^{1/2} = C2 ds[1 + 2(σTs/Tc)²]^{1/2},  (7.8)
where C2 is a constant defined by the function w(ϕ); in the exponential and uniform approximations, C2 ≅ 1.66 and C2 = 1, respectively. Obviously, if ds decreases by the value Δs, the real resolution d′s will improve only by Δ′s (Fig. 7.1(a)):

Δ′s = C2(ds² + 2dc²)^{1/2} − C2[(ds − Δs)² + 2dc²]^{1/2}  (7.9)
and with increasing Ts the gain in the real resolution will become still smaller.
Equation (7.9) can be reduced to

a ds² + b ds + c = 0, where a = 4(p² − Δs²); b = 4Δs(Δs² − p²); c = p²(2Δs² + 4dc²) − p⁴ − Δs⁴; p = Δ′s/C2.
We can now calculate the ds and Ts values that may be considered most suitable for the synthesis at given Δs and Δ′s:

ds opt = Δs/2 + (Δs²/4 − c/a)^{1/2},  (7.10)

Ts opt = λro/(2V sin α · ds opt).  (7.11)
At the values of λ = 0.1 m, ro = 50 km, V = 600 m/s, α = 90°, Δs = 0.1 m, Δ′s = 0.05 m and C2 = 1, we find Ts opt = 1.83 s for Tc = 1.5 s and Tc = 3 s, respectively (dc = 6.98 m and dc = 3.49 m).
Formula (7.11) defines the synthesis time for a partially coherent signal, optimal in the sense that a longer synthesis would require greater computer resources but would not essentially improve the image quality determined by the real resolution d′s or by the Δ′s/Δs
[Figure 7.1: (a) the real resolution d′s (m) versus the synthesis time Ts (s); (b) the ratio Δ′s/Δs (rel. un.) versus Ts (s)]
ratio (Fig. 7.1(b)). This ratio quantitatively describes the gain in the angular radar
resolution owing to the synthesis of partially coherent signals, as compared with that
for perfect viewing conditions (dc → 0).
In the next section, we shall estimate the synthesis conditions by numerical
simulation. The key factor in the imaging model to be described is target path
fluctuations.
where Y2x[n] and Y2y[n] are the current values of random velocity deviations along the x- and y-axes, respectively; the mean square deviation of the velocity is σx,y = 0.1 or 0.2 m/s at Tc = 1.5 or 3 s. The values of σx,y and Tc are
presented here courtesy of A. Bogdanov, O. Vasiliev, A. Savelyev and M. Chernykh
who measured them in real flight conditions. Their experimental data on coherent
radar signals in the centimetre wave range are also described in Reference 28.
The current angle between the antenna pattern axis and the vector Vr[n] in this model is

α′[n] = α + arctg{Y2y[n]/(V + Y2x[n])}.  (7.14)
With Eqs (7.13) and (7.14) combined with the viewing conditions of model II, we
have computed the real current range rT [n] to the target.
To make the next step in the modelling of a radar image, we assume that the predetermined path component of a point target is normal to the antenna pattern axis, that is, α = 90°, that the transmitter pulses have a spectral width Δfc = 75 MHz, and that their other parameters are chosen taking account of the well-known restrictions for the removal of image inhomogeneities [104].
The range image of a target was formed by coherent correlation processing of
every echo signal. For every pixel on the range image, the nth (n = 1, . . . , 256)
value of a complex echo signal was recorded to form a microwave hologram [138].
The reference function was formed ignoring the errors in the estimated parameters of
target motion. The reconstructed image |ν(r, s)|² was 2D in the r- and s-coordinates (range and cross range). The simulation showed that the phase noise due to path instabilities did not affect the range image of a target. Therefore, we shall further treat only its cross-range section at a fixed range.
A visual analysis of impulse responses during the imaging of partially coherent
echo signals (Tc = 3 s, Ts = 1.5 s) indicates that phase fluctuations largely pro-
duce the following types of noise (Fig. 7.2). First, there is a shift of the impulse
response along the s-axis in the image field (Fig. 7.2(a)). Second, the peak of the
s, m s, m
Figure 7.2 Typical errors in the impulse response of an imaging device along the
s-axis: (a) response shift, (b) response broadening, (c) increased ampli-
tude of the response side lobes and (d) combined effect of the above
factors
major impulse response becomes broader (Fig. 7.2(b)). Third, the side lobes of the
impulse response become larger to form some additional features commensurable in
their intensities with the major peak (Fig. 7.2(c)).
Combinations of the three effects on the final image are also possible (Fig. 7.2(d)).
It is worth noting that the first effect can be eliminated during the image processing
by relating the window centre to the nth pixel with maximum intensity.
The presence of distorting effects necessitates finding ways to measure a real
resolution step. A conventional way of estimating resolution is by measuring the
impulse response of the processing device at the level 0.5 of the maximum intensity
|ν(s)|2 . In that case, analysis is made of all the images along the s-axis, independent
of phase noise.
Another way of measuring a resolution step is that all additional features on a
point target image at the 0.5 level are considered to be side lobes, irrespective of their
intensity, and can be removed in advance.
Figures 7.3 and 7.4 present the estimates of an average resolution step ds for
models I and II of path instabilities, respectively. The average value was calculated
from 100 records of path instability of a point target for every discrete time moment
Ts (Ts = 0.1, . . . , 2.9 s). The estimation of a resolution step within model I fails to predict the degree of the partial coherence effect on the radar image, since we know nothing about a perfect image a priori. The analysis of Fig. 7.3 has shown that the resolution step error is fairly large at σTs/Tc ≥ 1, where σ = 2πσp/λ. It is the appearance of false features above the 0.5 level with increasing synthesis time that leads to an overestimation of the resolution step computed from the impulse response
Figure 7.3 The resolving power of an imaging device in the presence of range instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σp = 0.04 m; 1 and 1′ (2 and 2′) – first (second) way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s; (b) σp = 0.05 m, notation as in (a)
width and, hence, to a larger error in the target size measurement. Such an error is
inherent in this method of resolution evaluation.
In the model of range instabilities (model I), the d′s(Ts) curves in Fig. 7.3 show a reasonable agreement with the theoretical curves in Fig. 7.1(a). The curve behaviour in Fig. 7.4 differs from the calculated dependences and from the model computations shown in Fig. 7.3 in that the d′s(Ts) curve has a minimum. The latter is due to an error in the method of estimating a resolution step, since the calculated d′s(Ts) curve does not indicate the presence of extrema.
The simulation results (curve 1 in Fig. 7.4(a)) can be used to find the synthesis
time intervals for a particular type of signal (or a particular imaging algorithm):
I – totally coherent, II – partially coherent and III – incoherent. One can choose
various imaging algorithms for available statistical characteristics of path instabilities
and for a particular time Ts . For instance, it is reasonable to use incoherent processing
algorithms at synthesis times for which a signal can be considered as incoherent [78].
For shorter intervals I and II, one should use coherent processing algorithms and
evaluate their performance in terms of the criterion s /s (Fig. 7.5).
Figure 7.4 The resolving power of an imaging system in the presence of velocity instabilities versus the synthesis time Ts and the method of resolution step measurement: (a) σx′ = σy′ = 0.1 m/s (other details as in Fig. 7.3), (b) σx′ = σy′ = 0.2 m/s (other details as in Fig. 7.3)
[Figure 7.5: the ratio Δ′s/Δs (rel. un.) versus the synthesis time Ts (s), curves 1 and 2]
The resolution estimate obtained by the second method is close to the theoretical
value. However, this approach has a serious limitation because a real target possesses
a large number of scatterers. The positions of respective intensity peaks on a radar
image are unknown a priori, so the application of this technique may lead to a loss of
information on adjacent scatterers on an image. This method proves to work well if
one knows in advance that the target being viewed is a point object or that a range pixel
corresponds to a single scatterer. In that case, the imaging device can be ‘calibrated’
by evaluating the phase noise effect on it.
The discrepancy between the simulation results presented in Figs 7.3 and 7.4
may be interpreted as follows. Model I of target path instabilities simulates random
phase noise associated only with the displacement of range aperture pixels. Model II
introduces greater phase errors in the echo signal, because the aperture is synthesised
by non-equidistant pixels, which are additionally range-displaced. This model seems
to better represent the real tracking conditions, since it accounts for random target
yawing in addition to random range displacements.
The analytical expressions given earlier and the simulation results on partially
coherent signals with zero compensation for the phase noise can provide the real
resolving power of an imaging device. Today, there are no generally accepted criteria
for evaluation of the performance of radar devices for imaging partially coherent
signals. The results discussed in this chapter allow estimation of the device perfor-
mance in the ideal case of dc → 0; on the other hand, they enable one to evaluate
the efficiency of computer resources to be used in terms of the possible gain in the
resolving power.
Track instabilities of real aerodynamic targets and other factors introducing phase
noise give rise to numerous defects on an image. So the application of conventional
ways of estimating the resolving power of imaging systems leads to errors. However,
there is an optimal synthesis time interval which provides the best angular resolution
with a minimal effect of phase fluctuations. Therefore, when phase noise cannot
be avoided, which is usually the case in practice, it is reasonable to make use of a
statistical database on fluctuations of motion parameters for various classes of targets
and viewing conditions. The processing model we have suggested can be helpful in the
evaluation of the optimal time of aperture synthesis in particular viewing conditions.
The viewing conditions also require a specific processing algorithm to be used,
so radar-imaging devices should also be classified into coherent, partially coherent or
incoherent. The simulation results presented in Fig. 7.4 do not question the validity of
analytical relations (7.4), (7.5) and (7.7) but rather define their applicability, because
a signal becomes incoherent when a fluctuating target is viewed for a long time.
Possible sources of phase fluctuations of an echo signal, which negatively affect the
aperture synthesis, are turbulent flows in the troposphere and ionosphere. Fluctua-
tions of the refractive index due to tropospheric turbulence impose restrictions on aperture synthesis at centimetre wavelengths. Ionospheric turbulence affects the far decimetre wavelengths. Phase fluctuations decrease the resolving power of a synthetic aperture, leading to a lower image quality.
whirls (globules) arise and their size may exceed Lo . Such whirls are produced owing
to the energy of translational flow movement, for example, to the wind power. This
power is then given off to whirls of size Lo , and so on. Eventually, the energy is
dissipated because of viscous friction in the smallest whirls of size lo known as the
inner-scale size of turbulence. In this way, huge whirls gradually split into smaller
ones, and this process goes on until the power of rotational motion of the smallest
whirls transforms to heat in overcoming the viscous force. For this reason, a region
where huge whirls transform to small ones is called an inertia region. Within such
a region, the instantaneous distribution of the refractive index n(r ) is an unsteady
random function. However, the difference

n(r1) − n(r2)

behaves as a steady random function of the separation r = |r1 − r2|. In other words, n(r) appears to be a random function with steady first increments. Random processes of this kind, like those discussed in the books [132,133], can be conveniently described by structure functions. The one for the refractive index distribution has the form:

Dn(r) = Cn² r^{2/3},  lo ≪ r ≪ Lo,  (8.8)
where Cn2 is a structure constant of the refractive index. Equation (8.8) describes the
so-called 2/3 law by Obukhov and Kolmogorov for the refractive index distribution.
Numerous measurements made in the near-earth troposphere [132,133] showed a
good agreement between the fluctuation characteristics of n and the 2/3 law. The
value of lo in the troposphere is found to be ∼1 mm. The quantity Lo is a function
of direction and altitude. Therefore, one may assume that the horizontal extension
of large whirls near the earth surface will have the same order of magnitude as the
altitude, as far as the maximum altitudes lie in the range from 100 to 1000 m [110].
Figure 8.1 The normalised refractive index spectrum Φn(χ)/Cn² as a function of the wave number χ in various models: 1 – Tatarsky's model-I, 2 – Tatarsky's model-II, 3 – the Karman model, 4 – the modified Karman model
where χo ∼ 2π/Lo, χm ∼ 2π/lo and χ is the spatial wave number. It has been found experimentally that the Φn(χ) spectrum has the form χ^{−11/3} in the inertia region where the wave numbers are larger than χo. Figure 8.1 shows the normalised spectra for three regions: the region of whirl origin (χ < 2π/Lo), the inertia region (2π/Lo ≪ χ ≪ 2π/lo) and the dissipation region (χ ≥ 2π/lo). It is seen that the spectral density Φn(χ) in the region χ ≥ 2π/lo decreases much faster than might be expected from the χ^{−11/3} formula. But in what way Φn(χ) decreases in this region is still unclear theoretically. One usually deals with three kinds of spectra in the dissipation region. One obeys the χ^{−11/3} law, another drops abruptly at χ = χm, implying that Φn(χ) = 0 at χ ≥ χm, and, finally, the third changes on addition of the factor exp(−χ²/χm²).
The second case obeys Eq. (8.9) in practice. We have termed the respective model
spectrum Tatarsky’s model-I. It has been successfully employed in Reference 133 and
some other studies. In Reference 132, V. Tatarsky used the following expression:

Φn(χ) = 0.063 δn1² Lo³ (1 + χ²Lo²)^{−11/6}  at χ ≪ 2π/lo,  (8.11)
bear in mind the following factors. First, the spectra are valid in the inertia region of
a locally uniform and isotropic turbulence. Sometimes, the turbulence spectrum may
strongly differ from the above models. Second, the spectrum at χ ≤ χo is, at best, an
approximation, even though one may use the Karman spectra. At χ ≥ χm, the model spectra are only good approximations. Note that the spectrum of the form (8.11) transforms to that of (8.9) at χ²Lo² ≫ 1. In addition to the three types of spectra, there is a spectrum of the form:

Φn(χ) = α exp(−χ²/χm²) / (1 + χ²Lo²)^{11/6},  (8.15)

with χmLo ≥ 5.92 × 10³.
Keeping in mind this fact and

Γ(11/6)/[π^{3/2} Γ(1/3)] ≈ 0.06,

we get

Φn(χ) = 0.06 δn1² Lo³ (1 + χ²Lo²)^{−11/6} exp(−χ²/χm²)  (8.16)

or

Φn(χ) = 0.06 Cn² Lo^{11/3} (1 + χ²Lo²)^{−11/6} exp(−χ²/χm²).  (8.17)
It would be reasonable to call a spectrum of the type (8.16) or (8.17) the modified Karman spectrum. If relation (8.12) is fulfilled, this spectrum will coincide with those described by Eqs (8.10) and (8.14) at large values of χ, while in the small-χ range it coincides with the Karman spectrum shown in Fig. 8.1. The choice of a particular type
of spectrum varies with the problem to be solved. Fluctuations of some electromag-
netic wave parameters, such as phase and amplitude, are often sensitive to a certain
turbulence spectrum, or to large- or small-scale whirls. Keeping this important fact
in mind, one should analyse carefully the applicability of the chosen spectrum before
using it.
The best way of verifying a model is to compare the results obtained with available
experimental data. Although the models of (8.9) and (8.10) are rather approximate at
χ < (2π/Lo ), they still provide a good agreement with measurements (e.g. of phase
fluctuations). Moreover, they can give the results in an analytical form. On the other
hand, the models of (8.11) and (8.15) are more accurate for large whirls but they are
unable to give clear analytical results. These circumstances have predetermined the
applicability of the models of (8.9) and (8.10). In the study of phase fluctuations, both models yield similar analytical expressions.
It is of importance to discuss in some detail a vertical profile model of the structure constant. This constant describes the degree of refractive index non-uniformity, because it relates the quantities D(r) and r (see Eq. (8.8)). The structure constant Cn² is related to the tropospheric parameters δn1² and Lo. For radiation propagation along an oblique path, the turbulence 'intensity' changes with the altitude, and the Cn² values will be different at different altitudes. The structure function of n(r) will then be

Dn(r) = Cn²(h) r^{2/3},

where Cn²(h) is a structure constant varying with altitude. To obtain quantitative results, one first has to find the Cn²(h) variation. The theoretical treatment of the problem of parameter fluctuations for a plane wave in a turbulent troposphere [132] included the following Cn²(h) models:
Cn² = Cn0² exp(−h/h0),  (8.18)

Cn² = Cn0²/[1 + (h/h0)²],  (8.19)

where Cn0² is the structure constant of the refractive index near the earth surface and h is the altitude.
Figure 8.2 The profile of the structure constant Cn2 versus the altitude for April at
the SAR wavelength of 3.12 cm
criterion based on the assumption of a normal error distribution. The Cn records that differed from the average by more than the maximum possible statistical spread at the 0.98 confidence level were eliminated from further analysis. The plots thus obtained were approximated by exponential functions using the least-squares method. As a result, the following analytical dependences were derived for the structure constant profile at the wavelength of 3.12 cm:
(a) the Cn²(h) model for April:

Cn²(h) = Cn0² exp(−h/h0)  (8.20)

with Cn0² = 3.69 × 10⁻¹⁵ cm^{−2/3} and h0 = 2.17 × 10⁵ cm;
Figure 8.3 The profile of the structure constant Cn2 versus the altitude for November
at the SAR wavelength of 3.12 cm
We can see that the refractive index fluctuations decrease with altitude. The major
contribution to the fluctuations is made by a tropospheric stratum 3 km thick above the
earth. The contribution of the other 7 km thickness (the total thickness of the tropo-
sphere is taken to be 10 km) is five times smaller. It is known that the fluctuation
of n increases with rising humidity. The most intense fluctuations are observed at
the air–cloud interface and inside the clouds. This model, however, ignores these
effects because of the lack of experimental data. But some data are available on
the effect of humidity and clouds on the dispersion δn2 of the refractive index val-
ues. Therefore, the model of the vertical δn2 profile allows estimation, in a first
approximation, of the cloud effect on phase fluctuations.
To conclude, it seems reasonable to extend the results for λ = 3.12 cm waves to other centimetre wavelengths, since the Smith–Weintraub formula (8.2) indicates only a slight dependence of n on the wavelength λ within the centimetre frequency band.
Bξ(r) = (2π)⁻³ ∫∫∫_{−∞}^{∞} Φξ(χ) exp(−jχ·r) d³χ,  (8.23)
It was noted above that the power index varies between 2 < P < 3, whereas the power index for the troposphere is P = 8/3 (Kolmogorov's spectrum).
The turbulence parameter is described as

CS = 8π^{3/2} [Γ((P + 1)/2)/Γ((P − 1)/2)] χo^{P−2} ⟨ΔNe²⟩,  (8.26)

where Γ(·) is the gamma function and ⟨ΔNe²⟩ is the mean square value of the fluctuation component of the electron density. For a typical fluctuation distribution in the ionosphere, CS ∼ 10²¹ (in MKS units). The quantity ⟨ΔNe²⟩ varies remarkably with the ionospheric conditions, so CS fluctuates from 6.5 × 10¹⁹ at P = 2.9 to 1.3 × 10²³ at P = 1.5 [22]. The ionosphere has a thickness of about 200 km. The maximum electron density lies in the NmF2 stratum at an altitude between 250 and 350 km.
The outer-scale size of a turbulent whirl along the shortest distance (ionospheric
whirls are anisotropic) is about 10 km. The respective value for a turbulent troposphere
is about 1 km.
Ls ≈ βH ,
where ρ is the distance between the points at which the phase fluctuations are to be measured, for example, ρ = de. To find an analytical expression for D(ρ), consider a 2D spectrum of wave phase fluctuations in a turbulent troposphere. Using the smooth perturbation method, the authors of Reference 133 derived a simple formula relating the phase fluctuation parameters to the spectral density of the refractive index fluctuations Φn(χ). The 2D spectral density Fϕ(χ, 0) and Φn(χ) have the simplest relation, because the former is a 2D Fourier transform of the respective phase structure function in the plane x = const normal to the wave propagation direction. For a plane
[Figure: the SAR viewing geometry – the carrier track with synthesis length Ls, altitude H, velocity V, the slant range R to a point target, the track-line projection onto the earth and the swath width]
where ρ is the distance between the points at which the structure function is to be measured in the plane x = L. It follows from Eq. (8.28) that the 2D spectrum Fϕ(χ, 0) is similar to the spectrum of the refractive index fluctuations Φn(χ) multiplied by the filtering function (in square brackets). Therefore, the wave propagation through a turbulent medium is similar to the linear filter effect in circuit theory.
The filtering function of phase fluctuations is only slightly sensitive to the parameter variations. For example, at χ = 0, Fϕ(χ, 0) is equal to 2πk²L, changing smoothly towards πk²L with increasing χ. Therefore, the filtering occurs relatively uniformly. The maximum product of the filtering function and Φn(χ) for typical SARs is observed at small values of χ, that is, in large whirls. For this reason, phase fluctuations and phase correlation are most sensitive to the outer-scale size of turbulence, Lo.
With Eq. (8.29) and the turbulence models of (8.9) and (8.10), we can arrive at
an expression for a uniform turbulence and a plane wave:
Dϕ(ρ) = α k² Cn² L ρ^{5/3},  (8.30)

where

α = 2.91 at ρ ≥ (λL)^{1/2},
α = 1.46 at lo ≪ ρ ≪ (λL)^{1/2}.
The last two expressions show that phase fluctuations are equally affected by all whirls, irrespective of their distance to the observation point. Moreover, when ρ passes through the value (λL)^{1/2}, which is usually somewhere at the beginning of the path, the factor in front of Dϕ(ρ) increases twofold. Therefore, the experimental structure function Dϕ(ρ) must show a positive rise at ρ = (λL)^{1/2}.
It is interesting to follow how Dϕ(ρ) changes when a plane wave is replaced by a spherical one. The formula relating the mean square value of the phase difference fluctuation to the base ρ for spherical and plane waves [132] is

⟨(ϕ1 − ϕ2)²⟩sp = [Dϕ(ρ)]sp = ∫₀¹ Dϕ(ρt) dt.
[Dϕ(ρ)]sp = (3/8) · 1.46 k² ρ^{5/3} ∫₀^L Cn²(h) dh,  (lo ≪ ρ ≪ (λL)^{1/2}),  (8.34)

[Dϕ(ρ)]sp = (3/8) · 2.91 k² ρ^{5/3} ∫₀^L Cn²(h) dh,  (ρ > (λL)^{1/2}).  (8.35)
The initial expression for the structure function evaluation in a SAR is Eq. (8.35), because there is the relation ρ = de > (λL)^{1/2}.
where Ls = VTs, Ts is the synthesis time, V is the track velocity of the radar carrier and βo = 1.09.
Equation (8.36) also allows finding the standard deviation of the phase difference fluctuations at the synthetic aperture ends:

σϕ(ρ) = [Dϕ(ρ)]^{1/2}.  (8.37)
We shall now examine how phase errors due to tropospheric turbulence affect the res-
olution limit and optimal length of a synthetic aperture. W. Brown and Y. Riordan [23]
have calculated both parameters for the case of phase errors, with the structure func-
tion obeying a power law. It was stated that the phase difference [ϕ(r + ρ) − ϕ(r)]
has a Gaussian distribution, and this is supported experimentally. For the above type
of phase errors, the expression for the aperture resolution along the track is found
to be

ρx = λR/(4πρo)  (8.38)

with ρo = 0.985/b. The quantity b is to be calculated from the equation for the structure function of a phase error:
function of a phase error:
Dϕ(ρ) = (bρ)^n,  n = 5/3.  (8.39)

Then Eqs (8.38) and (8.39) yield

ρx = (λR/4π) · [Dϕ(ρ)]^{3/5}/ρ.  (8.40)
Using the equation for the structure function of a phase error (8.36) and ρ = de, we get

ρx = λ^{−1/5} R C0 (Cn0²)^{3/5} (h0)^{3/5} [1 − exp(−ht cosec ϑ/h0)]^{3/5},  (8.41)

where C0 = const. This equation shows that ρx varies only slightly with λ, decreasing slowly (as λ^{−1/5}) with increasing λ.
The optimal synthetic aperture length affected by a turbulent troposphere [23] can be found as

Lopt = 13.4/b.  (8.42)
Then Eqs (8.42) and (8.39) give

Lopt = d0 λ^{6/5} / {(Cn0²)^{3/5} (h0)^{3/5} [1 − exp(−ht cosec ϑ/h0)]^{3/5}}  (8.43)

with d0 = const.
The mean square value of the phase error between the optimal aperture centre and its extreme point is

σϕ = [Dϕ(Lopt/2)]^{1/2},  (8.44)

where Dϕ and Lopt are to be calculated from Eqs (8.36) and (8.43).
It was shown in the Appendix to Reference 114 that a good approximation for the structure function of phase fluctuations is the expression:

D(y) ≅ Cδ² |y|^{2ν−1},  0.5 < ν < 1.5.  (8.45)

The phase structure constant Cδ² is defined as

Cδ² = (Cp/2π) · 2Γ(1.5 − ν)/[Γ(ν + 0.5)(2ν − 1)2^{2ν−1}],  0.5 < ν < 1.5,  (8.46)

δ² = 2π^{1/2} re² λ² lp CS G χo^{−2ν+1} Γ(ν − 1/2)/[4π Γ(ν + 1/2)],  (8.47)
where the factor G was borrowed from the Appendix to Reference 113. This factor accounts for:
• the velocity of the scanning beam motion relative to the electron density whirls (νo);
• the geometrical parameter due to the electron density anisotropy;
• the effective velocity of the scanning beam across the earth surface (Vef);
• the synthesised aperture length Ls.
characterises the maximum SAP relative to the background created by the side
lobes.
3. The maximum side lobe level is

bm = Ims/Im,  (8.51)

where Ims and Im are the maximum side lobe and major lobe levels, respectively. This parameter is effective in sensing microwave-contrast targets against a weakly reflecting background. The integral and maximum levels of the side lobes, as well as the major lobe width, vary with the weighting function used in the SAR (Table 8.1). The relative width in the Table is the SAP width normalised to that for a uniform weighting function.
4. The azimuthal sample characteristic is

ka = ρβ/ρ′,  (8.52)

where ρ′ is the step between the azimuthal counts of an image digital signal. According to the sampling theorem, the sample characteristic must meet the condition ρ′ < ρβ.
This parameter denotes the number of digital signal counts per azimuthal resolution element and describes the radar capability to reconstruct an image. The larger the sample characteristic, the greater the image contrast. However, a larger coefficient entails a greater complexity of the image reconstruction design. The optimal value of this parameter is taken to be ka = 1.2.
5. Image stability characterises the ability of an image digital reconstruction device
to sense and count the relative positions of partial frame centres and to provide the
proper scale over all the sample characteristics when partial frames are matched
and superimposed.
6. The gain in the signal-to-noise ratio in coherent and incoherent integration is calculated from the variations of this parameter at the processor output. It is assumed that the echo and image signals are integrated linearly in both coherent [17] and incoherent integration [59], whereas noise is integrated in quadrature. Therefore, the total gain in the signal-to-noise ratio Kg is

Kg = (Nn)^{1/2},  (8.53)

where n is the number of echo counts over a synthesis step in one range channel and N is the number of incoherently integrated partial frames.
In real flight conditions, the actual aperture characteristics differ from the potential
ones. The reason for this is the noise from processing and micronavigation devices,
as well as the limitations of imaging systems.
7. The intrinsic aperture noise level is the mean image signal level when there is
only noise at the aperture input and its gain corresponds to the mean image
signal. This parameter covers the total effect of the aperture noise during the
synthesis.
8. The radar swath width is determined by the screen parameters (the number of
lines and the number of pixels in a line) and by the discretisation step in range
and azimuth. An acceptable number of image pixels on a screen normally varies
from 512 × 512 to 1024 × 1024.
9. Geometrical distortions of an image are defined as the standard deviation of the
positions of reference scatterers relative to their actual positions. The central
reference mark is superimposed with the real reference. The standard deviation
value is affected by the range, the view angle, altitude, the distance between the
reference and the image centre, as well as by the imaging time.
10. The imaging time is an important parameter of an aperture operating in real time.
A typical test ground for the study of aperture characteristics is a statistically uniform surface with trihedral corner reflectors (Fig. 8.5) arranged at different distances from each other (for evaluation of the aperture sharpness). The reflectors possess different reflectivities, so one can measure the dynamic range of the system. In addition to a uniform background, a test ground usually includes some common objects such as roads, fields, smooth surfaces, railways, etc.
In order to understand better the difference between the potential and real char-
acteristics of a synthetic aperture and a SAR as a whole, we shall make use of test
results with digital image reconstruction (the AN/APQ-102A modification) [53]. Its
potential resolution was 12.2 m along the azimuth and range coordinates. The dis-
cretisation step for evaluation of a real azimuthal resolution was taken to be 3.04 m.
Figure 8.6 shows an azimuthal signal from two corner reflectors. When the valley
Figure 8.5 A schematic test ground with corner reflectors for investigation of SAR
performance
between their images was 2 dB, the azimuthal resolution was found to be 21.28 m, or
7 pixels in an image line.
Part of the test ground image was obtained by a 14-fold incoherent integration
with the mean signal value of 0.671 and a standard deviation of 0.201. The evaluated
speckle was found to be 0.3, which is a sufficiently low level.
The dark level was typically 23 dB of the grey-level value. Hence, the SAR
dynamic range is 33 dB, with the contrast of adjacent samples being 2.8 or 4.5 dB. For
a synthetic aperture with strongly suppressed side lobes, this parameter was 6–10 dB.
The large standard deviation in this case is due to the use of corner reflectors with a
large RCS.
Figure 8.7 shows a histogram of the noise distribution at the aperture output, and
one may suggest that the probability density has a Rayleigh pattern. The mean value
of 0.21 was taken to be the dark level. One of the dark regions exhibits a Rayleigh
distribution with a mean value of 0.42. A screen with 384 × 360 pixels covered a
view zone of 4.8 × 4.5 km. The errors in the measurement of the range positions of the corner reflectors were 14 and 18 m at a distance of 1600 m from the image centre, whereas the radar was at 14.5 km from it. The azimuth measurement error was ∼50 m under the same conditions.
Figure 8.8 The grey-level (half-tone) resolution versus the number of incoherently
integrated frames N
An important experimental finding was the critical volume Vc for a single frame synthesised by the aperture (N = 1). For the majority of objects, the linear size of a square resolution element giving a 37 per cent interpretability was found to be 9.14 m.
lines, city and country roads, etc. Exceptions were the boundaries of water bodies
and vegetation covers showing a 37 per cent interpretability even at the lowest linear
resolution in azimuth and range (13.72 m). Since the grey-level resolution at N = 1
(Fig. 8.8) is 22, it is easy to find the critical volume:
Vc = pa pr pg ≈ 9.142 × 22 ∼ 1850. (8.57)
With this, the final interpretability expression takes the form:
U = 4 exp{−pa pr pg /1850}. (8.58)
Note that the calculation of the critical volume used the linear resolution of 9.14 m.
Figure 8.9 shows the interpretability plotted against the linear resolution pa = pr = p
for different numbers of incoherent integrations.
When analysing the plots in Fig. 8.9, one should bear in mind that both the measurements and the calculations were based on some a priori assumptions. For example, the half-tone scale was chosen on the assumption that a photograph had the maximum interpretability, that is, an infinite number of incoherent integrations (N = ∞) and the corresponding half-tone resolution pg (Fig. 8.8). An image synthesised without incoherent integration (N = 1) was thought to have the poorest half-tone resolution, but the resolution was to be finite (pg < ∞), since the image preserved some, though very low, interpretability. It was established experimentally that the poorest half-tone resolution was equal to 22 (Fig. 8.8).
The interpretability was evaluated by three qualified and experienced interpreters
of radar and optical images, using the four-level scale (from 0 to 4) mentioned above.
The interpreters worked with prints of 20.32 cm × 25.40 cm in size. The resolution elements varied in shape from square to rectangular (with a side ratio of up to 1:10), and the number of incoherent integrations varied from 1 to ∞. All the experiments
Figure 8.9 The image interpretability versus the linear resolution pa = pr = p for different numbers of incoherent integrations N
were carried out using a square-law detector because the detection was performed on photographic film, which is a square-law medium. It can be demonstrated theoretically, however, that the experimental data can also be useful in linear detection of image signals if the half-tone resolution is calculated by another approximate formula:

pgl ≈ (√N + 0.6175)/(√N − 0.6175).  (8.59)
The major result of this series of investigations [99] was the experimental support of
the idea that image interpretability depended only on the half-tone volume resolution,
or on the product of the azimuthal, range and half-tone resolutions. Therefore, this
parameter varies with the area rather than the shape of a resolution element (square
or rectangular). On the other hand, it depends on the resolution element area and the
number of incoherent integrations. So one can make a compromise when choosing
the resolution in azimuth pa , in range pr and in half-tones pg [99]. Identical inter-
pretabilities can be achieved by using different combinations of these parameters.
This conclusion proved to be quite unexpected and may play an important role in
solving some applied problems when one has to choose between the complexity and
the cost of aperture processing techniques.
Indeed, if this conclusion is correct, it is worth making an effort to achieve a
high image interpretability by improving low-cost resolutions. To illustrate, a higher
range resolution and an incoherent integration in spaceborne SARs can be achieved
in a simpler way than a higher azimuthal resolution. For example, one can fix the
azimuthal resolution but improve the range resolution or increase the number of
incoherent integrations.
We shall give a good example to illustrate the effectiveness of resolution redistri-
bution with reference to a side-looking synthetic aperture. In this type of aperture, the
azimuthal resolution depends linearly on the number of incoherent integrations N :
pa = N po = N λro/(2Lm),
Figure 8.10 The dependence of the half-tone resolution on the number of incoherent
integrations over the total real antenna pattern
where λ is the wavelength, ro is the oblique range, Lm is the maximum possible length of the aperture, and po = λro/(2Lm) is the best aperture resolution. If we now fix the range resolution, the minimum of the product pa pg over N will show the optimal combination of azimuthal resolution and incoherent integration (Fig. 8.10). This optimum is found to lie at N = 3; hence, pa = 3po.
The integral criterion for image evaluation from the half-tone volume resolution
is convenient and relatively simple. But when using it in practice, one should bear in
mind that the available amount of statistical data is insufficient, so the estimations of
image quality may be quite subjective.
Synthetic aperture radar remote sensing of the earth is becoming increasingly popular in many areas of human activity (Section 9.1). The analysis of images may be made in terms of a qualitative or a quantitative approach [2].
A qualitative analysis is largely made by conventional methods of visual interpretation of aerial photography, combined with the researcher's knowledge and experience. Although radar images have much in common with aerial photographs (Chapter 1), the physical mechanisms of their synthesis set limits on the applicability of interpretation methods elaborated for optical imagery. Additional difficulties arise from the presence of speckle noise.
A quantitative analysis is based on the measurement of target characteristics for various backgrounds and objects [2], followed by computerised processing of video
information. The latter is normally used to solve the following tasks. One often
has to improve image quality and interpretation procedures at the pre-processing
stage, which includes various corrections, noise reduction, contrast enhancement,
highlighting contours, etc. It may also be necessary to compress and code images to
with the mean value Ī = 2σo² and the dispersion σI² = 4σo⁴, while the phase θ of the image pixels is equiprobable in the range from −π to +π.
Another reflection model is applied when a resolution element has one bright
point together with other point scatterers, such that the total echo signal contains
one dominant signal of much higher intensity along with many random independent
signals of nearly the same lower intensity. Then the amplitude of the total signal is
described by the Rice distribution, or by a generalised Rayleigh distribution. This
kind of model is called the Rice reflection model.
The distribution of the intensity probability density at single pixels is

pI(x) = [1/(2σo²)] exp[−(x + so)/(2σo²)] Io(√(x so)/σo²)  (8.62)

with the mean value Ī = 2σo² + so and the dispersion σI² = 4σo⁴(1 + 2r), where so is the square amplitude of the highest-intensity component of the signal, r = so/(2σo²) and Io(·) is a modified zero-order Bessel function of the first kind. The distribution of the phase probability density is

pθ(x) = (1/2π) exp(−a²/2) + [a cos x/√(2π)] Φ(a cos x) exp(−a² sin² x/2),  (8.63)

where

a = √so/σo,  Φ(t) = (1/√(2π)) ∫_{−∞}^{t} exp(−τ²/2) dτ
to the SAR carrier track is F = (1/2)F(k) exp(jxk). For randomly arranged point scatterers, the signal received by the aperture is defined as

F′(k) = (1/2π) Σ_{l=1}^{L} F(k) exp(jxl k).
more complex models. The authors consider the possibility of employing Wiener and Kalman filtering algorithms, homomorphic processing and various heuristic techniques to suppress speckle.
However, a lack of objective criteria for evaluation of image quality by visual
perception creates additional difficulties. For this reason, nearly all the researchers
cited below compare the processing results with expertise, which makes a comparative
analysis of the suggested algorithms quite problematic.
The first attempts to suppress speckle by a posteriori techniques used the Wiener
filtering algorithm which varies with the signal [2]. The workers analysed an additive,
signal-dependent noise model and a multiplicative noise model. In the former,
a distorted image is described by the expression:
where h(x, y) is the space impulse response, f is commonly a non-linear function and
n(x, y) is noise independent of the signal s(x, y). By introducing the designations
n′(x, y) = s′(x, y) × n(x, y) and s′(x, y) = f [s(x, y) ∗ h(x, y)], we transform Eq. (8.65) to
where n(x, y) is signal-independent multiplicative noise. The Wiener filter has the
transfer function M(µ, ν) = Φ_zs(µ, ν)/Φ_zz(µ, ν) and minimises the standard deviation
of the filtering error, provided that z(x, y) and s(x, y) are wideband, spatially uniform
random fields; Φ_zs and Φ_zz are the respective power density spectra. With Eq. (8.65),
the first noise model gives the following transfer function of a Wiener filter:
It is clear from (8.69) that at n(x, y) = 0 the filter transfer function is M2(µ, ν) = 0.
Suppose we have n1(x, y) = n(x, y) − n̄; then

M_2(\mu, \nu) = \frac{\Phi_{ss}(\mu, \nu)\, H^*(\mu, \nu)/\bar{n}}{\Phi_{ss}(\mu, \nu)\, |H(\mu, \nu)|^2 + (1/\bar{n}^2)\, \Phi_{n_1 n_1}(\mu, \nu) \otimes \left[\Phi_{ss}(\mu, \nu)\, |H(\mu, \nu)|^2\right]}. \qquad (8.70)
Obviously, at n̄ = 1 the filters with the transfer functions (8.68) and (8.70) are equivalent.
Modelling has shown that a Wiener filter for signal-dependent noise with the
characteristics M1 and M2 is better than that for additive, signal-independent noise.
But the essential limitations of the former are the need for a large amount of a priori
information about the signal and the noise, as well as vast computations. Kalman
filtering algorithms [2] suffer from similar disadvantages.
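Because Eqs (8.65)–(8.69) survive here only in part, the sketch below shows only the generic frequency-domain Wiener deconvolution underlying these filters, with an assumed flat ratio of signal to noise spectra; it is not a reproduction of the signal-dependent filters M1 and M2 discussed above:

    import numpy as np

    def wiener_deconvolve(z, h, snr):
        # Wiener filter for z = s * h + n (additive, signal-independent noise).
        # z: observed image; h: point spread function, same shape as z and
        # centred at index [0, 0]; snr: assumed constant ratio Phi_ss / Phi_nn.
        Z = np.fft.fft2(z)
        H = np.fft.fft2(h)
        M = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # transfer function
        return np.real(np.fft.ifft2(M * Z))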
The possibility of homomorphic image processing is discussed in Reference 2.
Homomorphic processing is taken to mean any conversion of the observable quantities
that transforms the signal fluctuations into additive, signal-independent noise.
Within the multiplicative speckle model, Eq. (8.64) yields

p(I) = \frac{N^N I^{N-1}}{\Gamma(N)\, \bar{I}^N} \exp\left(-\frac{N I}{\bar{I}}\right) \qquad (8.71)

with σI² = Ī²/N. The homomorphic transformation then reduces to taking the
logarithm. The distribution density of the quantity D = ln I is described as
z̄ = s̄n̄ = s̄

and

σz² = M[(sn − s̄n̄)²] = M[s²]M[n²] − s̄²n̄².
If the signal intensity averaged over the processing window is constant, these
expressions become

M[s²] = s̄² and σz² = s̄²(M[n²] − n̄²) = s̄²σn², or σn = σz/z̄.
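In code, the relation σn = σz/z̄ is just the coefficient of variation of the observed intensity over a visually uniform patch; a minimal sketch (the function name is illustrative):

    import numpy as np

    def estimate_sigma_n(z_uniform):
        # Estimate the multiplicative speckle std from a uniform image patch,
        # using sigma_n = sigma_z / z_bar under the model z = s*n with n_bar = 1.
        z = np.asarray(z_uniform, dtype=float)
        return z.std() / z.mean()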
This model is consistent with the data obtained from the analysis of uniform surface
imagery. The standard deviation σn is found to be about 0.28, which reflects the
multi-look processing and the other algorithms used to improve images synthesised
by the SEASAT-A SAR. Using the local statistics technique for a selected window
(usually with 5 × 5 or 7 × 7 resolution elements), one can find the moving local
average z̄ and the dispersion σz². Then one gets
\bar{s} = \bar{z}/\bar{n}, \qquad \sigma_s^2 = \frac{\sigma_z^2 + \bar{z}^2}{\sigma_n^2 + \bar{n}^2} - \bar{s}^2. \qquad (8.73)
The expansion of z into a Taylor series, retaining only the first-order terms, yields

z = n̄s + s̄(n − n̄). (8.74)
According to Eqs (8.73) and (8.74), the minimisation of the mean square error of
speckle suppression leads to the following formula for ŝ:
ŝ = s̄ + k(z − n̄s̄) (8.75)

with

k = \frac{\bar{n}\, \sigma_s^2}{\bar{s}^2 \sigma_n^2 + \bar{n}^2 \sigma_s^2}.

Then at n̄ = 1, one gets

ŝ = s̄ + k(z − s̄), \qquad k = \frac{\sigma_s^2}{\bar{s}^2 \sigma_n^2 + \sigma_s^2}. \qquad (8.76)
The heuristic algorithm derived from the local statistics approach is especially effective
for speckle suppression on images of uniform and isotropic surfaces, and it does not
smear the contours of extended targets. This algorithm has provided good results
when processing imagery from the SEASAT-A SAR. Its major advantages
are simplicity and adaptive properties associated with the computation of the local
statistics. It has, however, a serious limitation: it cannot predict the error behaviour
during the speckle suppression. Besides, the necessity of computing the local average
and, especially, the dispersion in a common 7 × 7 window considerably reduces the
algorithm efficiency.
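A minimal sketch of Eqs (8.73) and (8.76) with n̄ = 1 is given below; the window size and the value σn = 0.28 quoted above are assumptions, and the moving local statistics are computed with uniform filtering:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_stats_filter(z, sigma_n=0.28, win=7):
        # Local-statistics speckle filter: s_hat = s_bar + k*(z - s_bar), Eq. (8.76).
        z = np.asarray(z, dtype=float)
        z_mean = uniform_filter(z, win)                   # moving local average
        z_var = uniform_filter(z * z, win) - z_mean**2    # moving local dispersion
        # Eq. (8.73) with n_bar = 1: s_bar = z_bar, sigma_s^2 from observed stats
        s_mean = z_mean
        s_var = np.maximum((z_var + z_mean**2) / (sigma_n**2 + 1.0) - s_mean**2, 0.0)
        k = s_var / (s_mean**2 * sigma_n**2 + s_var + 1e-12)  # gain of Eq. (8.76)
        return s_mean + k * (z - s_mean)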
In order to decrease the computational costs inherent in local statistics algo-
rithms, some workers have suggested using a sigma-filter. For a moving window
of (2m₁ + 1) × (2m₂ + 1) in size (m₁ and m₂ are integers) with the central
resolution element z_{ij}, the signal ŝ_{ij} is found from the formula:

\hat{s}_{ij} = \sum_{k=i-m_1}^{i+m_1} \sum_{l=j-m_2}^{j+m_2} \delta_{kl}\, z_{kl} \Bigg/ \sum_{k=i-m_1}^{i+m_1} \sum_{l=j-m_2}^{j+m_2} \delta_{kl}, \qquad (8.77)

where

\delta_{kl} = \begin{cases} 1, & (1 - 2\sigma_n)\, z_{ij} \le z_{kl} \le (1 + 2\sigma_n)\, z_{ij}, \\ 0, & \text{otherwise}. \end{cases}
It is clear that a filter with the characteristic (8.77) will be more cost-effective than that
with (8.76). An 11 × 11 window was used in Reference 2 to estimate σn. It was found
that two passes of a sigma-filter were sufficient to get a satisfactory suppression
of speckle noise without smearing the contours. When the number of passes was
increased to four or more, the image was damaged.
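A direct, unoptimised sketch of a one-pass sigma-filter following Eq. (8.77); the window half-sizes and σn are assumed values:

    import numpy as np

    def sigma_filter(z, sigma_n=0.28, m1=2, m2=2):
        # Average the window pixels lying within the two-sigma intensity interval
        # (1 - 2*sigma_n)*z_ij <= z_kl <= (1 + 2*sigma_n)*z_ij of the central pixel.
        z = np.asarray(z, dtype=float)
        out = np.empty_like(z)
        rows, cols = z.shape
        for i in range(rows):
            for j in range(cols):
                w = z[max(i - m1, 0):i + m1 + 1, max(j - m2, 0):j + m2 + 1]
                lo, hi = (1 - 2 * sigma_n) * z[i, j], (1 + 2 * sigma_n) * z[i, j]
                sel = w[(w >= lo) & (w <= hi)]     # pixels with delta_kl = 1
                out[i, j] = sel.mean() if sel.size else z[i, j]
        return out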
The following modification of the sigma-filter was discussed in Reference 2 for
filtering impulse noise together with speckle suppression. One chooses the thresh-
old B. If the number of elements to be removed in accordance with Eq. (8.77) is smaller
than or equal to the threshold B, the average of four neighbouring elements is ascribed
to the estimated position of the moving window. The choice of a threshold is critical
because it affects the contours. It is pointed out in this work that the threshold value
for a 7 × 7 window should be less than 4 and for a 5 × 5 window less than 3. The
use of a sigma-filter with an 11 × 11 window followed by another sigma-filter with a
3 × 3 window at the threshold B = 1 proved to be most effective. A small window
allows suppression of impulse noise in the vicinity of sharp contours. Other filter
modifications are also possible. This type of filter was compared with a filter having the
characteristic (8.76), as well as with median and averaging filters. It was concluded from
expert assessment that a sigma-filter provides better results. Its disadvantage is that one
cannot estimate a priori the behaviour of the speckle suppression error. Important
merits of this type of filter are its simplicity, high computational efficiency and adaptive
properties. These characteristics make the filter suitable for application in digital
image processing in a real-time mode.
The local statistics method can also be implemented with a linear filter minimising
the mean square error of the filtering. In addition to the algorithms described above,
there is a large number of heuristic algorithms for speckle suppression. Among these
are algorithms for median filtering, averaging over a moving window with various
weighting functions, algorithms for a nonlinear transformation of the initial image, the
reduction of an image histogram to a symmetric form, etc. Most heuristic algorithms
are simple to use and have a fairly high computational efficiency, but all of them share
a serious drawback: they practically ignore the specifics of SAR imaging, so that
while suppressing noise they partly suppress the useful signal. It is usually hard to
estimate the speckle suppression error when using such algorithms.
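For comparison, the median filtering mentioned above is available off the shelf; a brief, hedged snippet on synthetic single-look speckle (exponentially distributed intensity):

    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(2)
    z = rng.exponential(1.0, (128, 128))   # synthetic single-look intensity speckle
    z_med = median_filter(z, size=5)       # 5 x 5 moving-window median

    # Coefficient of variation before and after filtering (smaller = smoother)
    print(z.std() / z.mean(), z_med.std() / z_med.mean())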
To conclude, image processing covers a wide range of tasks and problems, many of
which have not been dealt with in this chapter. Among these are processing based
on the properties of the human visual analyser, criteria for image quality and image
optimisation, quantitative evaluation of the information contained in an image, etc.
Owing to the rapid development of cybernetics, information theory, iconics and
computer science, these areas of investigation are constantly testing new approaches.
For example, concepts of artificial intelligence have been tested in the processing of
remote sensing data on the earth, in the use of radar imagery as a database for visual
interpretation and in the fusion of images obtained in different wavelength ranges.
The results obtained from such studies can provide more information about the earth
and other planets.
1 Sections 9.1.1 and 9.1.2 were written by V. Y. Alexandrov, O. M. Johannessen and S. Sandven, Nansen
International Environmental and Remote Sensing Centre, St Petersburg, Russia, and Nansen Environmental
and Remote Sensing Centre, Bergen, Norway. Section 9.1.3 was written by D. B. Akimov, Nansen International
Environmental and Remote Sensing Centre, St Petersburg, Russia.
Table 9.1 Technical parameters of SARs borne by the SEASAT and the Shuttle
(SEASAT, SIR-A, SIR-B and SIR-C/X-SAR)
The first European Space Agency ERS-1 satellite with a C-band SAR aboard
operated successfully from its launch in July 1991 until 1996 and provided a large
amount of global and repeated observations of the environment. The focus was on
ocean studies and sea ice monitoring [62,64]. In the high-resolution imaging mode,
the ERS-1 SAR provides three-look, noise-reduced images with a spatial resolution of
26 m in range (across-track) and 30 m in azimuth (along-track) (Table 9.3). Because of
the absence of onboard data storage, a network of ground receiving stations enabled a
wide coverage by SAR images. ERS-2, a second satellite of this series, was launched
in April 1995 and since mid-August 1995 both satellites operated in a tandem mode,
when ERS-2 imaged the same area as ERS-1 one day later.
The RADARSAT launched by the Canadian Space Agency in November 1995
was the first SAR satellite with a clear operational objective to deliver data on various
earth objects. Using the onboard data storage, it provides a much wider coverage than
the ERS SAR [77]. Processed SAR data could be delivered to users within several
hours after acquisition. The RADARSAT operates in the C-band and HH-polarisation,
and in several imaging modes with different combinations of the swath width and
resolution (Table 9.4). One of its main applications is sea ice monitoring [42].
The advanced SAR (ASAR) onboard the European Space Agency ENVISAT
satellite has been acquiring images since 2002 [43]. While its major
parameters are similar to those of the RADARSAT, the ASAR can also operate in
multipolarisation modes using two out of five polarisation combinations: VV, HH,
VV/HH, HV/HH and VH/VV. The five major modes are: global, wide swath, image,
alternating polarisation and wave modes (Table 9.5). In the image and alternating
polarisation modes the ASAR gives high-resolution data (30 m and 3 look) in a rela-
tively narrow swath (60–100 km), which can be located at different distances from the
subsatellite track at the incidence angles from 15◦ to 45◦ . The alternating polarisation
mode provides two versions of the same scene, at HH, VV and/or cross-polarisation.
The wide swath mode provides a 420 km swath with a spatial resolution of 150 m
and 12 looks. In the global monitoring mode, the ASAR continuously gives a 420 km
swath with a spatial resolution of 1000 m and 8 looks.
Operation mode          Image mode    Alternating/         Wide swath     Global         Wave mode
parameter                             cross-polarisation   mode           monitoring

Polarisation            VV or HH      VV/HH, HH/HV         VV or HH       VV or HH       VV or HH
                                      or VV/VH
Spatial resolution      28 × 28       29 × 30              150 × 150      950 × 980      28 × 30
(along- × across-
track) (m)
Radiometric             1.5           2.5                  1.5–1.7        1.4            1.5
resolution (dB)
Swath width (km)        up to 100     up to 100            400            ≥400           5 (vignette,
                        (seven        (seven               (five          (five          seven
                        subswaths)    subswaths)           subswaths)     subswaths)     subswaths)
Incidence angle (°)     15–45         15–45                15–45
At present, SAR data from the ERS, RADARSAT and ENVISAT satellites are
widely used in earth observations and monitoring of various natural objects and
phenomena. With its fine-scale resolution, a SAR is capable of observing a number
of unique oceanic phenomena [117]. These include wind and waves [46,75], ocean
circulation [63], internal waves [33], oil spills [40,41], shallow sea bathymetry [6], etc.
Imaging radars are also used in a number of land applications, such as the study of soil
moisture [84], forestry [97] and the study and monitoring of urban areas [135]. The
use of satellite SAR data for monitoring Arctic sea ice is briefly described below.
efficiency, although the radar iceberg identification remains problematic even with
modern techniques. The RADARSAT ScanSAR wide data provide a daily coverage of
the Canadian Arctic, and higher resolution modes are used for sea ice monitoring near
the ports, in several selected routes and in the rivers. SAR images are synthesised
at the receiving stations Prince Albert and Gatineau and are transmitted to the Ice
Centre within 2.5 h to be processed and transmitted to the icebreakers of the Canadian
Coast Guard and the department of ice operations for visualisation and analysis. Sea
ice monitoring is the most successful online application of the RADARSAT data in
Canada, which provides the best combination of geographic coverage and resolution
saving about 6 million dollars annually compared with airborne radar surveys [38].
From February 1996 until the end of 2003, the Canadian Ice Service (CIS) used approximately 25,000 scenes
for this purpose [42]. During 2003, a special service carried out iceberg detection
and monitoring from satellite SAR imagery, and the International Ice Patrol was the
user of this information [42]. Now the RADARSAT ScanSAR imagery is the main
data source for sea ice mapping in the Greenland waters. Wind conditions may be
an important limitation to the operational use of radar satellite imagery in this area.
Small (<50 m across) yet thick ice floes in concentrations of less than 7/10 are frequently
undetectable on radar images, as they are obscured by the strong backscatter from the
sea waves. Therefore, active research into filtering and enhancement techniques has
been undertaken to improve discrimination between ice and water [48,49].
The ENVISAT ASAR imagery with almost the same swath as that of the
RADARSAT ScanSAR in the VV- and HH-polarisations is an example of further
development of SAR technology. The wide swath mode of the ENVISAT satellite
is especially suitable for sea ice monitoring, providing a practically daily cover-
age of most of the Arctic with a high spatial resolution. In mid-2003, the Canadian
Ice Service began to receive the ENVISAT ASAR data to be used as an additional
source to the RADARSAT-1 data for routine production of ice charts, bulletins and
forecasts [43].
The Nansen Centres in Bergen and St Petersburg, in collaboration with the
European Space Agency and Murmansk Shipping Company, have done a series of
projects to demonstrate the possibilities of SAR data for sea ice monitoring and for
supporting navigation in the Northern Sea Route (NSR) [64–66]. The NSR, a major
Russian transport corridor in the Arctic, includes the routes suitable for ice navigation
between the entries to the Novaya Zemlya straits and the meridian north of Cape
Zhelaniya in the west, and the region of the Bering Strait in the east.
In August 1991, just after the launch of the ERS-1 satellite, SAR imagery was transmitted
in near-real time aboard the French vessel L’Astrolabe via the INMARSAT
communication system during her voyage from Europe to Japan, to help select her route
in ice [66]. During the period from July 1993 to September 1994, the European Space
Agency provided approximately 1000 SAR scenes for sea ice monitoring. Three spe-
cific demonstration campaigns in the NSR in the periods of freeze-up, winter and
late summer, revealed the ERS SAR capability to map the key ice parameters. The
SAR imagery was successfully used to solve tasks of navigation through hard ice. In
1996 the ESA and the Russian Space Agency initiated their first joint project, named
ICEWATCH, with the overall objective of integrating SAR data into the Russian sea
ice monitoring system to support ice navigation in the NSR [65]. During January–
February 1996, an experiment was made aboard the icebreakers Vaygach and Taymyr,
when the ERS-1 and ERS-2 SARs were operating in a ‘Tandem mission’, giving a
unique opportunity to have SAR coverage over the same area with only a 1-day inter-
val. However, the narrow 100 km swath of the ERS SAR resulted in a substantial
spatial and temporal discontinuity in coverage [64].
In August–September 1997, the RADARSAT ScanSAR data were used to sup-
port the icebreaker Sovetsky Soyuz operations in the Laptev Sea [119]. With its
wide swath, the ScanSAR provided a much better coverage than the ERS SAR, and
the selection of scenes along a given ship route was simplified significantly. The
ScanSAR data proved to be a very useful supplement to conventional ice maps and
could contribute significantly to the ice information. Starting from April 1998, the
ScanSAR and the ERS-2 SAR data were acquired and analysed to support the expedi-
tions aboard the icebreaker Sovetsky Soyuz from Murmansk to the Yenisey Gulf [4]
and the EC ARCDEV expedition with the Finnish tanker Uikku and the icebreaker
Kapitan Dranitsyn from Murmansk to Sabeta in the Ob River [107]. Throughout these
expeditions, ScanSAR imagery received aboard the icebreakers was used to detect important
ice parameters, such as the ice types, old and fast ice boundaries, flaw polynyas,
wide leads, single ice floes and large areas of rough ice, and to solve tactical tasks
of navigation. Areas of level and deformed fast ice were identified in the Ob estu-
ary, and an optimal sailing route was selected through the areas with level ice [107].
These expeditions clearly showed that ScanSAR imagery is particularly important
for supporting navigation in difficult ice conditions, such as those in the Kara Sea
during April–May 1998.
During the summer of 2003, the ENVISAT Wide Swath ASAR imagery was
acquired and transmitted aboard the icebreaker Sovetsky Soyuz during her voyage in
the Kara Sea, together with visible AVHRR NOAA images. The satellite images and
ice maps were displayed in the electronic cartographic navigation system, such that
the navigator could see the current icebreaker location overlaid on a satellite image
and ice chart in order to select the sailing route.
A series of demonstration campaigns conducted in the NSR since 1991 have
shown that high-resolution light- and weather-independent SAR imagery can be effec-
tively used for sea ice monitoring. The sea ice conditions were interpreted and found
quite useful for selecting a sailing route. The speed of convoys significantly depends
on the ice conditions, varying from about 11–14 knots in polynyas to 4–6 knots in
areas of medium and thick level first-year (FY) ice and to 2 knots in heavily ridged ice [4].
The onboard use of satellite SAR imagery significantly increases the convoy speed
in the pack ice (Fig. 9.1). High-latitude telecommunication systems are the main
‘bottleneck’ in using SAR imagery aboard the icebreakers operating in the NSR.
Images must be averaged and compressed to about 100–200 kB for digital transmission.
During the first half of 2004, the ENVISAT ASAR imagery was used for sea ice mon-
itoring of the NSR on an experimental basis. Preliminarily processed images were
transferred by e-mail to the Murmansk Shipping Company and then were transmitted
via the TV channels of the Orbita system to the nuclear icebreakers Yamal, Sovet-
sky Soyuz, Arktika, Vaygach and Taymyr. The icebreaker navigators could interpret
Figure 9.1 The mean monthly convoy speed in the NSR changes from V0 (without
satellite data) to V1 (SAR images used by the icebreaker’s crew to select
the route in sea ice). The mean ice thickness (hi) is shown as a function
of the season. (N. Babich, personal communication)
them, adequately selecting the easiest sailing through level thin ice and along leads
and polynyas with prevailing nilas and grey ice. As a result, the speed of the convoys
increased by 40–60 per cent on average.
Figure 9.2 (a) Photo of grease ice and (b) a characteristic dark SAR signature of
grease ice (annotated: grease ice, open water). © European Space Agency
15–30 cm, respectively. During winter, young ice is quite common in polynyas and
fractures. It has a relatively high backscatter coefficient [102] and can be distinguished
from both nilas and first-year ice due to its bright SAR signature (Fig. 9.4). The first-
year ice, which is subdivided into thin (30–70 cm), medium (70–120 cm) and thick
(over 120 cm) first-year ice, has a typical dark tone. It is difficult to separate thin,
medium and thick first-year ice using only their SAR signatures, so knowledge of sea
ice conditions in different Arctic regions is used to partly solve this problem. Old ice,
which has survived melting during at least one summer, is often reliably discriminated
from first-year ice due to its brighter tone, rounded floes and distinctive texture
(Fig. 9.5). When old and first-year ice breaks into small ice floes with size less than
the SAR spatial resolution, their separation is impossible. SAR signatures of second-
year and multiyear ice are quite similar, and it is hard to distinguish these types of
ice [102].
The backscatter from the ice of the same age depends on its prevailing forms (floe
size) and surface roughness. Pancake ice has a rough surface due to characteristic
raised pancake rims at the plate edges that lead to a high backscatter and a bright
tone in a SAR image (Fig. 9.6). Areas of small ice floes unresolved by radar may
have a specific bright SAR signature. When the size of ice floes greatly exceeds the
radar spatial resolution, they can be detected in SAR imagery. Single ice floes of
even relatively small size can be detected from their dark tone on a bright radar image
of a wind-roughened water surface, whereas their detection on a calm water surface is
more difficult. The analysis of ice floes becomes complicated when they touch each
other [120,128]. The backscatter of deformed ice is much higher than that of level ice,
Figure 9.4 A RADARSAT ScanSAR Wide image of 25 April 1998, covering an area
of 500 km × 500 km around the northern Novaya Zemlya. A geographical
grid and the coastline are superimposed on the image; nilas, young ice
and first-year ice are annotated. © Canadian Space Agency
therefore, areas of weakly, moderately and strongly deformed ice are detectable in
ERS, RADARSAT and ENVISAT SAR imagery (Fig. 9.7). Identification of strongly
deformed ice hazardous to navigation is particularly important.
Detection of open water areas among sea ice, such as fractures, leads and polynyas,
is necessary for selection of an icebreaker’s route. Shore and flaw polynyas can be
detected reliably, and their width, as well as the type of sea ice can be determined. For
example, flaw polynya along the western coast of Novaya Zemlya is clearly evident
in RADARSAT ScanSAR imagery (Fig. 9.4) together with a number of fractures
covered with nilas (dark tone) or young ice (light tone). It was found that the detection
of 100-m wide leads in compact first-year ice is feasible in ScanSAR images.
In winter, fast ice covers large areas in the coastal zones of the Eurasian Arctic
Seas. The SAR signature of fast ice is similar to that of drifting ice, and it changes with
the surface roughness and, to some degree, with salinity. Level fast ice has a uniformly
dark tone, and its boundary can often be identified in SAR images (Fig. 9.7).
The ice edge is the boundary between open water and sea ice of any type and
concentration; it may be either compact or divergent, separating open ice from water.
Figure 9.5 A RADARSAT ScanSAR Wide image of 3 March 1998, covering the
boundary between old and first-year sea ice in the area north of Alaska
(annotated: multiyear ice, first-year ice, mainland). © Canadian Space
Agency
The ice edge may be well-defined or diffuse, straight or meandering, with ice eddies
and ice tongues, extending into open water [67]. Ice tongues at the ice edge in the
Barents Sea are evident in ENVISAT ASAR imagery (Fig. 9.8). With frequent SAR
images, one can investigate the ice edge development in much detail [120]. The sea
ice concentration and ice edge location are the most important parameters during the
summer; they can be derived from SAR images together with large ice floes, stripes
of ice in water, ice drift vectors and areas of convergence/divergence [119].
A high-resolution SAR is considered to be an optimal remote sensing instrument
for the detection of icebergs. The backscatter coefficient of an iceberg significantly
exceeds that of sea ice and a calm sea surface, so icebergs that are much larger than
the radar spatial resolution appear as bright spots. In some cases, iceberg shadows and
tracks in the sea ice can be detected [125]. Identification of smaller icebergs is
complicated by the speckle noise of SAR systems. Areas of iceberg distribution in
Franz Josef Land, east of Severnaya
Zemlya, and in the northwest Novaya Zemlya have been identified from ERS and
RADARSAT SAR data. ERS-2 SAR imagery of Severnaya Zemlya (Fig. 9.9) shows
a number of icebergs as bright spots in the Red Army Strait.
Recent studies have shown that the sea ice classification can be improved by
using the ENVISAT alternating polarisation mode. Cross-polarisation will improve
Figure 9.6 (a) Photo of a typical pancake ice edge and (b) a characteristic ERS SAR
signature of pancake ice. A mixed bright and dark backscatter signature
is typical for pancake and grease ice found at the ice edge. © European
Space Agency
the potential for distinguishing ice from open water, which can sometimes be diffi-
cult to do only with HH or VV polarisation. In addition to the backscatter variation
in single polarisation data, a proper combination of VV and HH dual polarisa-
tion and cross-polarisation imagery provides additional information on the sea ice
parameters [54,101,122].
Figure 9.7 A RADARSAT ScanSAR Wide image of 8 May 1998, covering the south-
western Kara Sea (annotated: moderately hummocked ice, strongly
hummocked ice, fast ice, open water). © Canadian Space Agency
Some of the sea ice parameters cannot be found from SAR imagery. For example,
it is quite difficult to distinguish thin, medium and thick first-year ice, or second-year
and multiyear ice types. It is impossible to determine the snow depth on sea ice and
some other parameters. In some cases large ridges and narrow leads covered with
grey ice may have similar SAR signatures.
9.1.2.3 Conclusions
The studies have clearly shown that a satellite SAR is a powerful instrument for sea
ice monitoring, and SAR data are widely used for this purpose in countries with a
perennial or seasonal ice cover. Modern SARs provide a practically daily coverage of
the Arctic regions. The most important sea ice parameters can be derived from SAR
imagery, and their use increases the safety of navigation and speeds of convoys in
severe Arctic ice conditions.
Figure 9.8 An ENVISAT ASAR image of 28 March 2003, covering the ice edge in
the Barents Sea westward and southward of Svalbard. © European Space
Agency
coastal zone, surface roughness characteristics and surface polluted zones of different
nature. SAR data help to monitor ocean dynamic processes, frontal boundaries,
convergence zones, etc.
The normalised radar cross-section (NRCS) is a measure of intensity of the echo
signal. In the range of the microwave frequencies, a radar is sensitive to small per-
turbations of the ocean surface. The NRCS is directly related to the sea roughness,
that is, to statistical properties of the sea surface. This allows a radar to detect a larger
number of near-surface phenomena than any other remote sensing tool. On the other
hand, this makes the radar data extremely hard to interpret, especially quantitatively,
and requires the use of sophisticated models.
When dealing with the ocean, one has to consider surface velocities. The motion
associated with travelling waves affects significantly the SAR imaging mechanisms.
In particular, an azimuthal image shift is caused by the motion of the target in the range
direction: the associated radial velocity has little effect on the pulse compression, but
it is large enough to affect the aperture synthesis, so that range-directed target motion
produces both the azimuthal shift and a reduction in the signal amplitude. Wave motion
in the azimuthal direction is
also a source of image degradation but is of less importance. It is known as azimuth
Figure 9.9 An ERS-2 SAR image of 11 September 2001, covering the Red Army
Strait in the Severnaya Zemlya Archipelago (an outlet glacier is
annotated). © European Space Agency
defocusing and is due to the difference between the Doppler history of the target and
the reference signal.
A satellite-borne SAR can monitor large- and small-scale structural fluctuations
through the description of the energy distribution of the ocean waves in the spec-
tral domain. The latter is formally described by the wave action balance equation
for the spectrum evolution under the combined influence of wind forcing, dissipa-
tion, resonant wave–wave interaction, the presence of surfactants and surface current
velocity gradients. The possibility of identifying oceanic processes is directly related
to changes in the surface scattering characteristics which depend on these processes.
For this reason, the detection becomes impossible when no wind is present.
When these phenomena are known, an imaging model can be used to derive
the wave spectrum from the image spectrum. Unfortunately, the mechanisms respon-
sible for the spectrum modulation are not fully understood. The analysis of a SAR
image is always complicated by interpretation ambiguity. The reason is that one
and the same NRCS contrast may be caused by the variation in different physical
parameters. Moreover, one and the same phenomenon may manifest itself in some
observation conditions and not in others. One of the generally recognised features
of radar imagery is the fact that surface phenomena are more clearly observed in the
horizontal polarisation than in the vertical one.
A simultaneous study of synchronous SAR images and other data sources (e.g.
infrared and visible images, weather maps) helps in getting a correct interpretation.
It should be added that since the influence of current velocity gradients, sea surface
temperature, surfactant concentration and other environmental parameters on the
wind wave spectrum depends upon the wavelength, a radar using a combination of
different wavelengths may be quite useful in revealing the mechanisms responsible
for the NRCS contrast.
A number of mechanisms have been suggested which are responsible for man-
ifestation of dynamic ocean phenomena in radar images. It is assumed that the
wave–current interaction reveals most processes having the scale of the current non-
uniformity of about 0.1–10 km. The following phenomena fall into this category:
internal waves, current boundaries, convergence zones, eddies and deep-sea convection.
The degree of ocean front manifestation in a SAR image is strongly determined by
the atmospheric boundary layer and by its transformation over the sea surface temperature
non-uniformities. In any case, the comparative significance of a mechanism depends
on the whole set of factors, including the observed process, wind conditions, regional
specificity and unknown circumstances (e.g. Reference 16).
Below we give several examples of how different ocean phenomena may become
apparent in SAR images. The ERS-2 SAR image in Fig. 9.10, taken on 24 June
2000 over the Black Sea (east of the Crimea peninsula), illustrates the manifestation
of temperature fronts, zones of upwelling and slicks of natural films. The fronts
are clear from both the bright and dark departures from the background NRCS. As
was mentioned before, a correct image interpretation needs additional information.
Figure 9.11 shows the sea surface temperature (SST) from the NOAA AVHRR data a
few hours after ERS-2 passage. It gives the temperature distribution helpful in image
interpretation. The spatial resolution of the infrared image is 1 km as compared with
100 m provided by a SAR. An upwelling is clearly visible in the upper right corner
as a black area partially covered with clouds (with an SST of about 16°C). The black square is the
position of the SAR image and the black curved lines are the distinctive features
taken from the SAR image. There appears to be a remarkable correlation between the
features in the SST and NRCS fields. The insignificant shift is due to the difference
in the time of imaging.
The dark region in the upper left corner of the SAR image shows upwelling, when
strong winds force the warm water of the upper layer away from the shore and the cold
deep water comes up from below. Upwellings are known to occur quite often near
the region of the Crimean shoreline. A patch of cold water manifests itself through
a modulation of the so-called friction velocity. This quantity may be described as
an ‘effective wind’, because it is the friction velocity that determines the energy flux
from the wind to the waves. The stratification of the atmospheric boundary layer over
cold water is more stable than over the surrounding warm water. This results in a lower
friction velocity, which means that a wind of the same speed (at a given height)
generates weaker waves over cold water than over warm water. The surface roughness
of the upwelling zone is therefore decreased, reducing its NRCS. Other conditions being equal, cold
water will appear darker than warm water on a radar image (e.g. Reference 16). This
feature allows a radar to sense the temperature non-uniformities of the sea surface in
general.
There are dark stretched features all over the SAR image. The accumulation of
surfactants is assumed to be the cause of these areas of low backscatter. It may take
place in regions of high biological activity. When natural (organic) substances reach
Figure 9.10 An ERS-2 SAR image (100 km × 100 km) taken on 24 June 2000 over
the Black Sea (region east of the Crimea peninsula) and showing
upwelling and natural films
the surface, they tend to be adsorbed at the air–water interface and remain there as a
microlayer. Waves travelling across a film-covered surface compress and expand the
film, giving rise to surface tension gradients, which lead to vertical velocity gradients
within the surface layers. This induces viscous damping and attenuation of short
Bragg waves. As a result, the scattered signal returning to the SAR is very much
reduced. Natural films are usually dissolved at wind speeds above 7 m/s. Because
currents easily redistribute them, such slicks often configure into spatial structures
related to the surface current circulation pattern.
Figure 9.12 illustrates how very long ocean waves, the swell, are imaged by a
SAR. This image was obtained on 30 September 1995 over the North Sea; the
land on the right is the Norwegian coast.
We have pointed out that ocean surface roughness of the centimetre scale is
due to the local wind (wind stress). Small-scale roughness is modulated by large-
scale structures (longer waves or swells). Three mechanisms are considered to be
Figure 9.11 SST retrieved from a NOAA AVHRR image on 24 June 2000.
responsible for the longer wave imaging: the tilt modulation, the hydrodynamic effect
and velocity bunching. The first mechanism is that long waves tilt the resonant ripples
so that the local incident angle changes, modifying the backscatter. The hydrodynamic
interaction between the long waves and the scattering ripples leads to the accumulation
of scatterers on the up-wind face of the swell. This effect is greatest (as for the tilt
modulation) for range travelling waves, and there is no modulation if the ripples
are perpendicular to the swell. These first two mechanisms, responsible for swell
manifestation, reveal themselves in both synthetic and real aperture imagery. The
latter – the so-called velocity bunching effect – is responsible for swell manifestation
in the case of long waves travelling close to the azimuthal direction; this effect is
observable only in SAR images.
A SAR creates a high-resolution image by recording the phase and amplitude
of the electromagnetic radiation reflected by the scatterers and by processing it with
a compression filter. The filter is designed to match the phase perfectly for a static
target. For the dynamic ocean surface, the motion of each scatterer within the scene
distorts the expected phase function with two important implications. First, the linear
component of the target motion shifts the azimuth of the imaged location of each
target. This leads to a strong wave-like modulation in the SAR image due to a periodic
forward and backward shift of the scatterer positions. This mechanism is exactly
what is known as the velocity bunching. The other implication of the distorted phase
function is the degradation of the image azimuthal resolution due to higher order
components of the target motion (e.g. Reference 56).
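The linear part of this effect is commonly written as an azimuth displacement Δx = (R/V)vr, where R is the slant range, V the platform velocity and vr the radial velocity of the scatterer; this relation and the ERS-like numbers below are standard illustrative assumptions rather than values taken from the text:

    # Azimuthal displacement of a moving scatterer: dx = (R / V) * v_r
    R = 850e3      # slant range, m (ERS-like assumption)
    V = 7.5e3      # platform velocity, m/s (ERS-like assumption)
    v_r = 1.0      # radial velocity of the scatterer, m/s
    dx = R / V * v_r
    print(f"azimuth shift: {dx:.0f} m")   # about 113 m per 1 m/s of radial motion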
The SAR image enables one to study swell transformation as it approaches the
coast. The wavelength decreases as the swell comes to shallow water, so the wave-
length is about 350 m at point A while near the coast at point B it is only 90 m
(Fig. 9.12). Another observable feature is the swell refraction on the sea bottom
relief. This effect is due to the fact that the wave velocity decreases with decreasing
depth. The wave crests rotate so as to be parallel to the isobaths. It is clearly visible at
points B and C that the swell goes parallel to the curved shore line, though initially it
was not. Finally, at point D we can see an interference pattern produced by two swell
systems going in approximately perpendicular directions.
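The observed shortening of the swell (from about 350 m at point A to 90 m near point B) is consistent with the linear dispersion relation ω² = gk tanh(kh): at a fixed wave period the wavelength decreases with depth. A sketch, assuming the 350 m wavelength corresponds to deep water:

    import numpy as np
    from scipy.optimize import brentq

    g = 9.81
    L0 = 350.0                        # deep-water wavelength, m (from the image)
    T = np.sqrt(2 * np.pi * L0 / g)   # period from the deep-water limit w^2 = g*k
    omega = 2 * np.pi / T

    def wavelength(h):
        # Solve omega^2 = g * k * tanh(k * h) for the local wavelength 2*pi/k.
        f = lambda k: g * k * np.tanh(k * h) - omega**2
        return 2 * np.pi / brentq(f, 1e-5, 10.0)

    for h in (200.0, 50.0, 10.0, 4.0):
        print(h, round(wavelength(h), 1))  # wavelength shrinks as depth decreases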
Figure 9.13 shows the manifestation of the ocean features mentioned above and
some new ones. This SAR image was acquired on 28 September 1995 over the
North Sea.
Figure 9.13 An ERS-2 SAR image (100 km × 100 km) taken on 28 September 1995
over the North Sea and showing an oil spill, wind shadow, low wind
and ocean fronts
The first distinctive feature, marked as ‘A’ in Fig. 9.13, can definitely be identified
as an oil spill. Oil slicks are seen as patches of different shapes with very low NRCS
and relatively sharp borders. Quite often, the spill source (ship or oil drill platform)
is visible nearby. As compared to natural films, oil films have a higher viscosity,
damping short waves more effectively and remaining observable at higher winds
when natural slicks would disappear. Another characteristic to distinguish between
oil and natural films is that the latter nearly never appear as single localised features
but tend to cover vast areas of intricate patterns produced by currents. Anthropogenic
oil spills on the sea surface may originate from leaks from ships, offshore oil plants
and ship wrecks. In the case of ship wreck, a SAR can contribute to oil spill detection
and monitoring, keeping track of the drift and spread of the slicks.
Usually, the shorter the radar wavelength, the stronger is the backscattering
reduction due to the oil presence. The reduction in the radar backscattering also depends
on the incidence angle; the optimum range of angles is defined by the radar wavelength.
One of the strongest obstacles to oil spill detection is the state of the sea. At low
(2–3 m/s) wind speeds, SAR images of the ocean become dark because the Bragg
scattering waves are not present. In this case almost no features can be distinguished
on the sea surface. At high winds, most kinds of oil are dispersed into the water
column by the wind waves and also become unobservable (e.g. Reference 39).
The second feature in Fig. 9.13 (‘B’) is a clearly outlined dark zone near the shore,
which seems to be aligned with the dominating wind. The mountainous
coastal landscape and the sharp outline allow attributing this feature to wind sheltering
by land. It can be seen that the NRCS becomes larger as the distance from the shore
along the wind direction increases and the sea roughness becomes better developed.
The dark areas ‘C1’ and ‘C2’ have blurred contours and may be interpreted as low
wind zones.
Besides this, one can see numerous manifestations of the current boundaries
(‘D1’, ‘D2’, ‘D3’). At moderate wind speeds (3–10 m/s), the SAR is capable of
revealing the current boundaries, meanders and eddies. The NRCS variation in the
vicinity of the current boundary/front is associated with several phenomena, including
changes of the stability of the atmospheric boundary layer, wave–current interaction
and surfactant accumulation. The exact view of the ocean front on a radar image is
affected by many factors: the radar parameters, the observation geometry, the wind
conditions, surface current and temperature gradients, etc. Nevertheless, some simple
rules of thumb exist. One of them was already mentioned: cold water looks darker
than warm water. Another is that convergent current fronts usually appear bright,
while divergent fronts appear dark. It is assumed that the features ‘D1’ and ‘D3’
are the ocean fronts where the non-uniform current distribution is combined with
SST changes. A lack of additional sources of information (e.g. IR images) leaves the
interpretation ambiguous, since a dark area can also be associated with low winds.
Sometimes, atmospheric phenomena may be observable on SAR images, when
they affect the near-surface wind. Depending on the observation conditions, such
phenomena increase or decrease the radar backscattering by intensifying or damping
the Bragg waves. One example is present in the ERS-1 SAR image of Fig. 9.14, taken
on 29 September 1995 over the North Sea. There are several rain cells of different
size scattered throughout the scene. The falling rain drops entrain the air to form a
downward flux of cold air. When hitting the ocean surface, the flux transfers cold air
mass away from the cell centre to form a wind squall – a line of abrupt increase in
the wind speed. The rain cells become visible because the background wind at their
boundaries is summed with the wind produced by the cold-air outflow from the rain. As a result, the
wind squall on the lee side of the cell increases the background wind, decreasing it on
the opposite side. Thus, one half of the rain cell becomes brighter than the background
while the opposite side becomes darker. The distinct boundaries between the wind
squalls and the surrounding background water are called squall lines. When the rain
is heavy, the centre of a rain cell may appear dark because the falling drops create a
turbulence in the upper water layer, damping the Bragg waves. Such phenomena are
typical of subtropical regions but may be encountered anywhere else [62].
Figure 9.15 shows an ERS-2 SAR image taken on 30 November 1995 over the
North Sea. Points ‘A’, ‘B’ and ‘C’ are examples of internal waves on the SAR
Figure 9.14 An ERS-1 SAR image (100 km × 100 km) taken on 29 September 1995
over the North Sea showing rain cells
imagery. Internal waves are one of the most interesting ocean features revealed by
SAR imagery. At the beginning of SAR history their detection was entirely unex-
pected. At present, they are found on SAR images in many regions of the World Ocean
at various wind speeds and water depths. They appear as dark crests (troughs) against
a lighter background or as light ones against a dark background. The crests always
occur as packets called trains. In this image, three trains can be observed. Often,
internal wave crests run parallel to the bottom topography, since the waves are caused
by the interaction between the tidal currents and abrupt topographic features. The
distance between individual dark and light bands varies from several hundred metres
to a few kilometres, decreasing from a leading wave to a trailing edge (e.g. [126]).
Orbital motions induced by an internal wave train generate an intermittent pattern
of convergent and divergent zones on the sea, which moves with the phase velocity
of the internal wave. Convergent zones are generated behind the internal wave crest
and divergent zones are behind the troughs. It is these zones that make internal
waves visible on radar imagery. There are a few commonly accepted explanations of
Figure 9.15 An ERS-2 SAR image (18 km × 32 km) taken on 30 September 1995
over the North Sea showing an internal wave and a ship wake
how this may happen. According to one point of view, surfactants are accumulated
in the convergence zones, which results in short wave damping and makes these
zones appear dark on radar images. Another theory states that convergence zones
appear bright because these are zones of enhanced roughness due to intensified wave
breaking there. The question of which imaging mechanism dominates and under what
conditions is still open.
The next distinctive feature clearly observable on the image (‘D’), is a ship wake.
The ship itself is seen as an extremely bright spot because of many metallic structures
that serve as corner reflectors. The wake is a narrow V-shaped feature associated with
the moving ship. It appears on radar images only in low wind conditions, owing to the
short lifetime of the Bragg waves and typical ship speeds. The major result of
the ship movement is the appearance of the stern wake. This turbulent wake damps
the Bragg waves, producing an area of dark return, which is sometimes surrounded
by two bright lines. The lines of high backscatter originate from the Bragg waves
induced by vortices from the ship’s hull. However, there is generally a large diversity
of ship wake patterns including combinations of dark and bright stripes on the SAR
images and depending on the observational and sea conditions.
Thus, during the last decades the role of SAR data in earth observations has
increased considerably, and the SAR has become a major remote sensing tool for
environmental monitoring. Improvement of image interpretation techniques, automated
data interpretation, improvement of high-latitude telecommunication systems
and a convenient presentation of the information products to the user are necessary
for further development of SAR earth monitoring.
The imaging techniques we have discussed in Chapters 5 and 6 did not use holographic
or tomographic principles but were developed within a purely radar approach in the
United States about 40 years ago. The first device was designed and constructed by
the Westinghouse company and represented a narrowband radar with a discrete vari-
ation of the carrier frequency and a synthesised spectrum. At about the same time, the
Willow Run Laboratory in the United States initiated work on constructing a radar for
aircraft imaging; the model radars were tested on an open test ground. Somewhat later,
two experimental types of radar were designed for spacecraft identification. One was
constructed at the US Air Force Research Center in collaboration with the General
Electric Company and the Syracuse Research Corporation (the design of the data pro-
cessor). The other type of radar was made by the Aerospace Corporation; it had a carrier
frequency of 94 GHz, a radiation bandwidth of 1 GHz and a pulse base of 10⁶.
The first high-quality images of low-orbit satellites were obtained by the ALCOR radar
with the range resolution of 50 cm in the early 1970s. Further efforts by the designers
(the Lincoln Laboratory, the Massachusetts Institute of Technology and the Syracuse
Research Corporation) to improve this system within a global program for space object
identification resulted in the creation, in the late 1970s, of a long-range imaging radar
(LRIR) [20,52,83] with better characteristics (Table 9.6).
The major advantages of this radar system are a high-frequency stability, a pulse
repetition rate higher than the maximum Doppler frequency of an echo signal, and a
controlled repetition rate necessary for time discretisation of transmitted and received
pulses. Besides, an LRIR system provides imaging of targets in far-off orbits (including
geostationary orbits) and with high rotation rates.
The Doppler-range method of echo signal processing for 2D imaging of the
Russian orbiting stations Salut-7 and Kosmos-1686 was implemented in a radar with
a probing pulse bandwidth of 1 GHz [91]. A theoretical and experimental investigation of
the imaging of stabilised low-orbit satellites was described in Reference 124, using
narrowband probing pulses. The processing algorithms were based on holographic
principles. The authors believe that current interest in microwave holography is due
to the fact that many available radar systems can acquire a new function – 2D imaging
of space targets – without being radically modernised. An echo signal in such radars
is processed by inverse synthesis of microwave holograms owing to the target angle
variation during the satellite motion along its orbit. The algorithm uses an original
technique for synthesising a 2D image, in the view-flight path plane, from 1D images
obtained along a lengthy target path. The summation of partial 1D images produces
It is clear from this analysis that a closed test ground is preferable for making response
measurements for various targets, especially for aircraft and spacecraft. These facil-
ities employ large AECs providing a high accuracy of all matrix elements for a real
target, and there is no need to use scaling.
On the other hand, many applied radar problems, especially the estimation of
efficiencies of methods and devices for target detection and recognition, often require
a numerical simulation of the whole radar channel, including the microwave path,
tracking conditions and so on. To do this, one should combine analogue and digital
simulation means, including a radar measurement ground (the analogue component)
and a computer with appropriate software packages (the digital component). If such
equipment is designed for the measurement of reflected signals with their amplitudes
and phases, it essentially represents a radar capable of microwave hologram recording,
in other words, of inverse aperture synthesis. For imaging, it is sufficient to include
in the software the image reconstruction algorithms described in this book.
The next procedure at the imaging stage is the measurement of local responses,
or scattering matrices and their elements, to obtain data on individual target scatter-
ers [12,138]. Objects of simple geometry, whose local responses can be calculated
precisely, can be used as standards for calibration of measuring devices. Practically,
it is reasonable to use cylinders as standard targets. An illustration of the calculation
of local responses for cylinders by the EWM suggested by P. Ufimtzev is given in
Chapter 2.
The typical measurement facilities include:
• an AEC;
• devices for pulse generation and transmission and for reception of echo signals
of various frequencies, including superwideband pulses;
• equipment for making measurements, such as a rotating support, a target rotation
control device, etc.;
• hard- and software to control measurement runs, to keep records of the incoming
and operational data, processors, etc.
A typical measurement run comprises the following stages:
• preparatory operations;
• preliminary measurements;
• major measurements;
• control measurements;
• data processing.
The preparatory stage is aimed at preparing the measuring devices for a success-
ful performance. Preliminary measurements are to provide information on the device
ability to make the necessary measurements, to choose the appropriate operation
mode and to calibrate the devices. The aim of the major measurements is to produce
microwave holograms of the target with a prescribed accuracy. Control measurements
are made in order to check the validity of the data obtained. If the amplitude and phase
errors fit into the admissible limits for this particular run, the major measurements are
considered to be valid and are fed into a processor together with the calibration data.
Primary processing is performed to bring relative data to their absolute values,
that is, to calibrate the measurements and to evaluate the errors. The final results are
set into a local database for classified storage. Further processing can be made by
various algorithms for the reconstruction of images of different dimensionalities (by
using holographic and tomographic processing of the scattering matrix elements) in
order to analyse and measure the local responses.
In addition, the analogue–digital software can be used for the following tasks:
• to process the results of measurement in order to get statistical data on the scatter-
ing characteristics of the target (average values, dispersion, integral distributions,
histograms and so on) for given target angles;
• to compute the angular positions of the target during its motion with respect to
the ground radar in order to simulate the dynamic behaviour of the echo signal
and the radar viewing devices;
• to simulate the target recognition devices by using various methods to find the
target recognition parameters (from images, too) and to design decision-making
schemes.
As a result, one can get online information about various probable characteristics
necessary for the target detection and recognition.
Methods for direct imaging and for measurement of local responses in an AEC are
described in detail in Reference 138. So we shall restrict ourselves to a brief review
of the measurement procedures and some of the results obtained.
The best way of producing an image in an AEC is to record multiplicative Fourier
holograms and to subject them to a digital processing. The recording can be based
on one of the schemes shown in Fig. 2.4, and the reconstruction can be made by the
algorithm presented in Fig. 9.16.
The input data are two quadrature components hr1(ϕ) and hr2(ϕ) of a 1D complex
microwave hologram hr(ϕ) and the calibration results (the calibration curve). The
sampling step Δϕ for the functions hr1(ϕ) and hr2(ϕ) should meet the condition Δϕ ≤
λ/lmax, where lmax is the maximum linear size of the target.
We can synchronise the quadrature components by using the subroutine for jus-
tifying the data file. Normally, a microwave hologram is recorded as the target is
rotated through 2π rad, and further processing is performed for a sequence of samples
whose number corresponds to the optimal size of the synthetic aperture and whose
position in the data file corresponds to the required target aspect.
The chosen sequence is normalised, because a microwave hologram can be mea-
sured with different receiving channel gain, depending on the recorded signal value.
This should be taken into account when measuring a local response in the RCS
units. In order to visualise the scatterers and to measure their relative intensities at a
given aspect angle, we should reduce the domains of the functions hr1 (ϕ) and hr2 (ϕ)
to [−1,1].
For a direct image reconstruction, one is to use a fast Fourier transform (FFT),
which is simple to apply when the number of initial readouts is 2^m, where m is
an integer. The algorithm of Fig. 9.16 comprises the formation of the quadrature
components of the complex radio-hologram, the choice of the synthesis interval and
the object aspect angle, normalisation, interpolation and data output.
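A compact sketch of this processing chain is given below; the function name is illustrative, and radix-2 zero-padding stands in for a dedicated interpolation subroutine:

    import numpy as np

    def reconstruct_1d_image(h1, h2, start, span):
        # h1, h2: sampled quadrature components h_r1(phi), h_r2(phi);
        # start: position of the required target aspect in the data file;
        # span: number of samples (optimal synthetic-aperture size).
        h = np.asarray(h1, float) + 1j * np.asarray(h2, float)  # complex hologram
        seg = h[start:start + span]
        seg = seg / np.max(np.abs(seg))            # normalise to [-1, 1]
        n = 1 << int(np.ceil(np.log2(seg.size)))   # pad to 2**m samples for the FFT
        return np.abs(np.fft.fftshift(np.fft.fft(seg, n)))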
to the view line (Fig. 2.1). The analysis of these images has shown that the scatterers
are localised just at the cylinder edges. Scatterers 2 and 3 at the ends of the cylinder
generating line are well resolved. The images of 1 and 2 merge because they are sep-
arated by a distance smaller than the resolution limit of the method. The difference
in the intensities of individual points can be interpreted in terms of the EWM or the
GTD. The dashed lines in Fig. 9.17 show the EWM intensities; the GTD computations
yield similar results. Our findings agree well with experimental data. The
polarisation properties of the scatterers manifest themselves in the varying image
intensity due to the changes in the illumination polarisation. Such images can be
used to estimate the target size and, with a more detailed analysis, its geometry, the
‘brightest’ construction elements and surface patches.
Figures 9.18 and 9.19 present the measured local scattering characteristics for a
metallic cylinder, the RCS diagram for a selected scatterer, and the simulation results
(Sections 5.2 and 5.3). The estimated standard deviation for the experimental local
responses was 1.8 dB. In addition to a methodological error of 0.5 dB, the total error
includes components due to the background echo signals in the AEC, imperfect polar-
isation channel insulation, etc. It is obvious that the theory, simulation and experiment
gave similar results within the accuracy of the total measurement error. Such measure-
ments provide data on local scattering characteristics of targets of complex geometry.
The results presented can be used for calibration of measuring setups.
Recognition of targets is a very important task in radar science and practice. By recog-
nition we mean the procedure of attributing the object being viewed to a certain class
in a prescribed alphabet of target classes, using the radar data obtained. According to
the general theory of pattern recognition, radar target recognition should include the
following stages:
• compiling a classified alphabet of radar targets to be recognised;
• viewing of targets;
• determination (measurement) of some target responses from the recorded echo
signal parameters to compile target descriptions, or patterns;
• identification and selection of informative signs (features) from the compiled lists;
• target classification, or attribution of a particular target to one of the classes on the
basis of the discriminating signs.

Figure 9.18 Local responses 10 log(snE /πa2 ), dB: experiment and simulation for scattering centres 1 and 2
The problem of making up an alphabet of target classes and selecting informative signs
to describe each class reliably is quite complicated and is to be solved by qualified and
experienced specialists. Of course, classification may be based on various principles.
One of them is to group targets in terms of their function and application. For example,
successful air traffic management requires a classification of aircraft: heavy and
light passenger planes, military planes, helicopters, etc.
Each class of radar targets can be described by a definite set of discriminating char-
acteristics to be used for classification: configuration, the presence of well-defined
and readily observable parts, dynamic parameters (e.g. altitude, flight velocity), etc.
A specific feature of all radar targets is that the radar input senses a target pattern in the
echo signal domain. The size scale of this domain and the physical meaning of each
of its components differ considerably from those of the parameter vectors of the target
Figure 9.19 RCS diagrams, dB: (a) 10 log(s1H /πa2 ); (b) 10 log(s2H /πa2 ); (c) 10 log(s3H /πa2 ); experiment and simulation

class and each characteristic individually. No matter how many identification signs
a target possesses, one can get information only about those characteristics that are
contained in the recorded echo signal parameters. We believe that a holographic
The set of sign vectors was stored in the recognition device and used to create
teaching and testing standards of sign vectors. The vectors were normalised so that
vectors made up of signs of different physical nature could be compared. Reduced-size
sign vectors were also created for further use. Table 9.7 presents the vectors
constructed over the entire sign domain in order to minimise the vector size and to
compare their information content in subsequent recognition. The minimum vector
size was 3 and the maximum was 9.
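The text does not specify how the normalisation was performed; a common choice, shown here purely as a sketch, is to scale every sign to zero mean and unit variance over the teaching set so that signs of different physical nature become comparable.

```python
import numpy as np

def normalise_sign_vectors(X):
    """Scale each sign (feature) to zero mean and unit variance.
    A sketch only: the normalisation actually used in the book is not stated."""
    X = np.asarray(X, dtype=float)   # rows: objects, columns: signs
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0          # guard against constant signs
    return (X - mu) / sigma
```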
A sequence of recognition sign vectors arrives at the classifier input. We
employed a Bayes classifier and a nonparametric classifier based on the method of
potential functions. The former is optimal in the sense that it minimises the average
risk of wrong decisions. The teaching of the Bayes classifier included the evalua-
tion of the unknown parameters of the conditional probability distribution p(x|Ai) of
the sign vector x in class Ai, which was taken to be normal. This decision rule
is Bayes-optimal at equal error costs for a more general distribution as well; in prac-
tice, the difference between the actual and the normal distribution is usually
neglected if the former is smooth and has a single maximum [12]. The other classifier
was used when there was no information on the sign vector distribution function. It
was assumed that the general decision function was known and its parameters were
estimated from the teaching samples [12].
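The two decision rules can be sketched as follows. The diagonal covariance in the Bayes classifier and the particular kernel in the potential-function rule are simplifying assumptions made to keep the example short; the book does not give these details.

```python
import numpy as np

class GaussianBayes:
    """Bayes classifier with normal class-conditional densities p(x|Ai);
    a diagonal covariance is assumed here purely for brevity."""

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes = np.unique(y)
        self.mu = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.var = {c: X[y == c].var(axis=0) + 1e-9 for c in self.classes}
        self.prior = {c: float(np.mean(y == c)) for c in self.classes}
        return self

    def predict(self, X):
        def log_post(x, c):           # log p(x|Ai) + log prior, up to a constant
            return (np.log(self.prior[c])
                    - 0.5 * np.sum(np.log(2 * np.pi * self.var[c])
                                   + (x - self.mu[c]) ** 2 / self.var[c]))
        return np.array([max(self.classes, key=lambda c: log_post(x, c))
                         for x in np.asarray(X, float)])

def potential_classify(X_train, y_train, x, gamma=1.0):
    """Nonparametric rule via potential functions: the class whose teaching
    points create the largest summed potential at x wins. The kernel
    1 / (1 + gamma * ||x - xi||^2) is one common choice, assumed here."""
    X_train, y_train = np.asarray(X_train, float), np.asarray(y_train)
    d2 = np.sum((X_train - np.asarray(x, float)) ** 2, axis=1)
    pot = 1.0 / (1.0 + gamma * d2)
    return max(np.unique(y_train), key=lambda c: pot[y_train == c].sum())
```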
Each experimental run provided a K × K matrix of decisions at the classifier output
(K is the number of classes). The element kij of the matrix is the number of objects
of the ith class attributed to the jth class. From this matrix we can estimate the
probability of correct recognition, the probability of false alarm, etc.
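For instance, the probability of correct recognition for each class is the diagonal element of the matrix divided by the row sum. A small sketch, with invented matrix values (false-alarm definitions vary with the task, so only the correct-recognition rate is computed):

```python
import numpy as np

def recognition_rates(decisions):
    """Per-class correct-recognition probabilities from a K x K decision
    matrix whose element k_ij counts objects of class i attributed to class j."""
    D = np.asarray(decisions, dtype=float)
    totals = D.sum(axis=1)            # objects presented per class
    p_correct = np.diag(D) / totals   # fraction correctly attributed
    return p_correct

# e.g. two classes, 100 runs each (illustrative numbers, not from the book):
# recognition_rates([[92, 8], [11, 89]]) -> array([0.92, 0.89])
```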
The model suggested was used to test the recognition capabilities for various
objects. We also planned to estimate the efficiency of recognition, to compare
the information content of different sign vectors and to investigate the stability of
the classification algorithms with respect to the size of the teaching sample. For this, we
employed metallic cones with a spherical apex (class 1) and a spherical base (class 2) of
about the same length. The probabilistic structure of the sign domain was estimated by
constructing experimental holograms. The unimodality of the resulting distributions
was tested to justify the use of the Bayes classifier. The size of the experimental series
was 100 in all the runs.
Table 9.8 compares the valid recognition probability for objects of both classes
and the size of the teaching sequence at different sign vectors for the case of a
Bayes classifier. One can see that the largest vectors made up of local responses are
most effective. The geometrical characteristics gave poorer results, as was expected,
because the objects in both classes were of about the same size. When the number of
teaching vectors is decreased, the recognition efficiency tends to fall.
Table 9.8 Valid recognition probability versus the type of sign vector and the polarisation (1–5)
Table 9.9 shows similar results for a classifier based on the method of potential
functions. The recognition efficiency is higher but the time necessary for the teaching
is an order of magnitude longer.
The sequence of operations in this model can be used as a procedure for estimating
the recognition efficiency of various targets at the stage of designing the
radar or the targets. The model makes pre-tests at the design stage more efficient
because one can
• obtain statistical data on possible recognition of various targets in a short time at
lower cost;
• get teaching or experimental sequences of practically any size;
• evaluate the effective parameters of antirecognition devices during direct statis-
tical experiments, etc.
58 ILYIN, A. L., and PASMUROV, A. Ya.: ‘Fluctuated objects and SAR charac-
teristics’, Izvestiya vysshykh uchebnykh zavedeniy – Radioelectronica, 1989, 32
(2), pp. 65–68 (in Russian)
59 ILYIN, A. L., and PASMUROV, A. Ya.: ‘Mapping of partial coherence
extended targets by SAR’, Zarubezhnaya Radioelectronica, 1985, 6, pp. 3–15
(in Russian)
60 ILYIN, A. L., and PASMUROV, A. Ya.: ‘Radar imagery characteristics of fluc-
tuated extended targets’, Radiotekhnika i Electronica, 1987, 31 (1), pp. 69–76
(in Russian)
61 IVANOV, A. V.: ‘On the synthetic aperture radar imaging of ocean surface
waves’, IEEE Journal of Oceanic Engineering, 1982, OE-7 (2), pp. 96–103
62 JOHANNESSEN, J., DIGRANES, G., ESPEDAL, H., JOHANNESSEN, O. M.,
and SAMUEL, P.: ‘SAR ocean feature catalogue’ (ESA Publications Division,
ESTEC, Noordwijk, The Netherlands, 1994)
63 JOHANNESSEN, J. A., SHUCHMAN, R. A., JOHANNESSEN, O. M.,
DAVIDSON, K. L., and LYZENGA, D. R.: ‘Synthetic aperture radar imaging
of upper ocean circulation features and wind fronts’, Journal of Geophysical
Research, 1991, 96 (9), pp. 10411–22
64 JOHANNESSEN, O. M., SANDVEN, S., PETTERSSON, L. H. et al.: ‘Near-
real time sea ice monitoring in the Northern Sea Route using ERS-1 SAR
and DMSP SSM/I microwave data’, Acta Astronautica, 1996, 38 (4–8),
pp. 457–65
65 JOHANNESSEN, O. M., VOLKOV, A. M., BOBYLEV, L. P. et al.: ‘ICE-
WATCH – Real-time sea ice monitoring of the Northern Sea Route using
satellite radar (a cooperative earth observation project between the Russian and
European Space Agencies)’, Earth Observations and Remote Sensing, 2000,
16 (2), pp. 257–68
66 JOHANNESSEN, O. M., and SANDVEN, S.: ‘ERS-1 SAR ice routing
of L’Astrolabe through the Northeast Passage’, Arctic News-Record, Polar
Bulletin, 8 (2), pp. 26–31
67 JOHANNESSEN, O. M., CAMPBELL, W. J., SHUCHMAN, R. et al.:
‘Microwave study programs of air–ice–ocean interactive processes in the sea-
sonal ice zone of the Greenland and Barents Seas’, in ‘Microwave remote sensing
of sea ice’ (American Geophysical Union, Washington, DC., 1992, Geophysical
Monograph No. 68), pp. 261–89
68 JOHANNESSEN, O. M., SANDVEN, S., DROTTNING, A., KLOSTER, K.,
HAMRE, T., and MILES, M.: ‘ERS-1 SAR sea ice catalogue’ (European Space
Agency, SP-1193, 1997)
69 KELL, R. E.: ‘About bistatic RCS evaluation using results of monostatic RCS
measurements’, Proceedings of the IEEE, 1965, 53 (8), pp. 1126–32
70 KELLER, J. B.: ‘Geometrical theory of diffraction’, Journal of the Optical
Society of America, 1962, 52 (2), pp. 116–30
71 KOCK, W. E.: ‘Pulse compression with periodic gratings and zone plate
gratings’, Proceedings of the IEEE, 1970, 58 (9), pp. 1395–96
72 KONDRATENKOV, G. S.: ‘The signal function of a holographic radar’,
Radiotekhnika, 1974, 29 (6), pp. 90–92 (in Russian)
136 TITOV, M. P., TOLSTOV, E. F., and FOMKIN, B. A.: ‘Mathematical modelling
in aviation‘, in BELOCERKOVSKY, S. M. (Ed.): ‘Problems of cybernetics’
(Nauka, Moscow, 1983), pp. 139–45
137 UFIMTZEV, P. Ya.: ‘Method of edge waves in physical diffraction theory’
(Sovetskoe Radio, Moscow, 1962) (in Russian)
138 VARGANOV, M. E., ZINOVIEV, J. S., ASTANIN, L. Yu. et al.: ‘Aircraft radar
characteristics’ (Radio i Svyaz, Moscow, 1985) (in Russian)
139 WIRTH, W. D.: ‘High resolution in azimuth for radar targets moving on a straight
line’, IEEE Transactions on Aerospace and Electronic Systems, 1980, AES-16
(1), pp. 101–3
140 WALKER, J. L.: ‘Range-Doppler imaging of rotating objects’, IEEE Transac-
tions on Aerospace and Electronic Systems, 1980, AES-16 (1), pp. 23–52
141 YEH, K. C., and LIU, C. H.: ‘Radio wave scintillation in the ionosphere’,
Proceedings of the IEEE, 1982, 70 (4), pp. 324–60
142 YU, F. T. S.: ‘Introduction to diffraction, information processing, and hologra-
phy’ (The MIT Press, Cambridge, MA, 1973)
143 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Holographic principles appli-
cation for SAR analysis’, in POTEKHIN, V. A. (Ed.): ‘Image and signal
optical processing’ (USSR Academy of Sciences, Leningrad, 1981), pp. 3–15
(in Russian)
144 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Evaluation of SAR phase fluctu-
ations caused by turbulent troposphere’, Radiotekhnika i Electronica, 1975, 20
(11), pp. 2386–88 (in Russian)
145 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Method for recording and pro-
cessing of 1D Fourier microwave holograms’, Pisma v Zhurnal Tekhnicheskoy
Fiziki, 1977, 3 (1), pp. 28–32 (in Russian)
146 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Methods of inverse aperture synthe-
sis for radar with narrow-band signals’, Zarubezhnaya Radioelectronica, 1985,
3, pp. 27–39 (in Russian)
1D One-dimensional
2D Two-dimensional
3D Three-dimensional
AB Adaptive beamforming
AEC Anechoic chamber
CAT Computer-aided tomography
CBP Convolutional backprojection method
CCA Circular convolution algorithm
CIS Canadian Ice Service
DFT Discrete Fourier transform
ECP Extended coherent processing
ESA European Space Agency
EWM Edge waves method
FCC Frequency contrast characteristics
FFT Fast Fourier transform
GSSR Goldstone solar system radar
GTD Geometrical theory of diffraction
IFT Inverse Fourier transform
ISAR Inverse synthetic aperture radar
LFM Linear frequency modulation
LRIR Long-range imaging radar
NRCS Normalised radar cross-section
NBM Narrowband mode
PH Partial hologram
PRR Pulse repetition rate
RCS Radar cross-section
RLOS Radar line of sight
SAP Synthetic antenna pattern
SAR Synthetic aperture radar
SCF Space carrier frequency
SCS Specific cross-section
SGL Spatial grey level
SST Sea surface temperature
WBM Wideband mode
WMO World Meteorological Organization
coherent radar 21–3
  holographic processing 36–41
  tomographic processing 41–8
coherent signal 40–1
coherent summation of partial components 126–31
  1D 139
  2D viewing geometry 131–42
  3D viewing geometry 141–5
  complexity 136–7, 140–2, 145
complex microwave Fourier hologram 110–15
complex targets 27–8
computer-aided tomography 74, 76
computerised tomography 14–20, 48
  remote-probing 15
  remote-sensing 15
  see also tomographic processing
contrast 94–9, 175, 177
convolution back-projection algorithm 18–19, 73–5, 118–19
correlated processing 35, 49
correlation function 96
critical volume 179
cross range resolution 148–51
cross-correlation approach 33–4
cylinder 29–31, 219, 221–4
  local scattering characteristics 223–4
dark level 175, 177
deformed ice 201, 204
density distribution 14–16, 19
diffraction 29, 116
diffraction-limited image 127
digital processing 112–16, 145
direct synthesis 31–2
distortion 176
Doppler frequency shift 27
Doppler-range method: see range-Doppler method
dynamic range 175, 177
earth surface imaging 20, 34, 60
  satellite SARs 191–215
earth surface survey 34, 70–1, 79
echo signal 27, 46, 148, 182–3
edge wave method 29
electron density fluctuations 166–7
ENVISAT 193–4, 196–7, 205
ERS-1 20, 193, 195, 197, 212
ERS-2 20, 193, 195, 197, 206, 208, 210–11, 214
  mesoscale ocean phenomena 208, 210–11, 214
  sea ice 206
extended coherent processing 35–6
extended targets 28–9, 31, 79–85
  compact 28–9
  partially coherent 85–6
  proper 28, 31
fast ice 201, 204
first-year ice 200–2
flop 136
focal depth 8–10, 14, 67–70
focal length 7
focal point 7
focused aperture 54
focusing depth 59
forestry 195
Fourier microwave hologram 39–40, 52–3
  complex 110–15
  rotating target 101–9
  simulation 112–16
Fourier space 16, 18
Fourier transform 18
Fraunhofer microwave hologram 39–40, 52–3
frequency stability 40–1
frequency-contrast characteristic 95–8
Fresnel lens 49
Fresnel microwave hologram 39–40, 52
Fresnel zone plate 33, 50
Fresnel-Kirchhoff diffraction formula 38
friction velocity 207
front-looking holographic radar 60–70
  hologram recording 60–3
  image reconstruction 62–7
  resolution 61–2
gain in the signal-to-noise ratio 174–5
geological structures 24
geometric accuracy 80
geometrical theory of diffraction 29
globules 158–9
Goldstone Solar System Radar 41
grease ice 198–9
grey-level resolution 178–81
half-tone resolution 178–81
Hankel transform 75–6