Radar, Sonar and Navigation Series 19

Radar Imaging and Holography

A. Pasmurov and J. Zinoviev

Increasing information content is an important scientific problem in the development of
modern observation systems. Radar, or microwave, imaging, a technique which combines
radar techniques with digital or optical information processing, can be used for this purpose.
Drawing on their own research, the authors provide an overview of the field and explain
why a unified approach based on wave-field processing techniques, including holographic
and tomographic approaches, is necessary in high-resolution radar design. Such techniques
use the complex field incident on an observation surface to produce a hologram, which can
be used to reconstruct an image of the object or to restore some of its physical parameters.
This makes it possible to extract the size, coordinates and radar cross-section of individual
scattering centres.

The book focuses on holography and tomography for quasi-monochromatic and broadband
signals, and gives detailed coverage of the basic physical methods, inverse problems and
mathematical principles. It also discusses new areas in imaging radar theory, holographic
radar, the estimation and improvement of radar image quality and, finally, various practical
applications in the fields of spaceborne and airborne radar, air traffic control, medical
diagnostics and remote sensing.

Alexander Ya. Pasmurov is Executive Director of the A.S. Popov Institute of Radio
Broadcasting Reception and Acoustics, St Petersburg, Russia.

Julius S. Zinoviev is Scientific Adviser of the A.S. Popov Institute of Radio Broadcasting
Reception and Acoustics, St Petersburg, Russia.

The Institution of Engineering and Technology
www.theiet.org
ISBN 0 86341 502 4
ISBN 978-0-86341-502-9
IET Radar, Sonar and Navigation Series 19
Series Editors:  Dr N. Stewart
Professor H. Griffiths

Radar Imaging
and Holography
Other volumes in this series:
Volume 1 Optimised radar processors A. Farina (Editor)
Volume 3 Weibull radar clutter M. Sekine and Y. Mao
Volume 4 Advanced radar techniques and systems G. Galati (Editor)
Volume 7 Ultra-wideband radar measurements: analysis and processing
L. Yu. Astanin and A.A. Kostylev
Volume 8 Aviation weather surveillance systems: advanced radar and surface
sensors for flight safety and air traffic management P.R. Mahapatra
Volume 10 Radar techniques using array antennas W. Wirth
Volume 11 Air and spaceborne radar systems: an introduction P. Lacomme (Editor)
Volume 13 Introduction to RF stealth D. Lynch
Volume 14 Applications of space-time adaptive processing R. Klemm (Editor)
Volume 15 Ground penetrating radar, 2nd edition D. Daniels
Volume 16 Target detection by marine radar J. Briggs
Volume 17 Strapdown inertial navigation technology, 2nd edition D. Titterton and
J. Weston
Volume 18 Introduction to radar target recognition P. Tait
Volume 19 Radar imaging and holography A. Pasmurov and J. Zinoviev
Volume 20 Sea clutter: scattering, the K distribution and radar performance K. Ward,
R. Tough and S. Watts
Volume 21 Principles of space-time adaptive processing, 3rd edition R. Klemm
Volume 101 Introduction to airborne radar, 2nd edition G.W. Stimson
Volume 102 Low-angle radar land clutter B. Billingsley
Radar Imaging
and Holography
A. Pasmurov and J. Zinoviev

The Institution of Engineering and Technology


Published by The Institution of Engineering and Technology, London, United Kingdom

First edition © 2005 The Institution of Electrical Engineers


New cover © 2009 The Institution of Engineering and Technology

First published 2005

This publication is copyright under the Berne Convention and the Universal Copyright
Convention. All rights reserved. Apart from any fair dealing for the purposes of research or
private study, or criticism or review, as permitted under the Copyright, Designs and Patents
Act, 1988, this publication may be reproduced, stored or transmitted, in any form or by
any means, only with the prior permission in writing of the publishers, or in the case of
reprographic reproduction in accordance with the terms of licences issued by the Copyright
Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to
the publishers at the undermentioned address:

The Institution of Engineering and Technology


Michael Faraday House
Six Hills Way, Stevenage
Herts, SG1 2AY, United Kingdom

www.theiet.org

While the authors and the publishers believe that the information and guidance given in this
work are correct, all parties must rely upon their own skill and judgement when making use
of them. Neither the authors nor the publishers assume any liability to anyone for any loss
or damage caused by any error or omission in the work, whether such error or omission is
the result of negligence or any other cause. Any and all such liability is disclaimed.

The moral rights of the authors to be identified as authors of this work have been asserted
by them in accordance with the Copyright, Designs and Patents Act 1988.

British Library Cataloguing in Publication Data


Pasmurov, Alexander Ya.
Radar imaging and holography
1. Radar 2. Imaging systems 3. Radar targets 4. Holography
I. Title II. Zinoviev, Julius S. III. Institution of Electrical Engineers
621.3’848

ISBN (10 digit) 0 86341 502 4


ISBN (13 digit) 978-0-86341-502-9

Typeset in India by Newgen Imaging Systems (P) Ltd, Chennai


First printed in the UK by MPG Books Ltd, Bodmin, Cornwall
Reprinted in the UK by Lightning Source UK Ltd, Milton Keynes
Contents

List of figures ix

List of tables xvii

Introduction 1

1 Basic concepts of radar imaging 7


1.1 Optical definitions 7
1.2 Holographic concepts 10
1.3 The principles of computerised tomography 14
1.4 The principles of microwave imaging 20

2 Methods of radar imaging 27


2.1 Target models 27
2.2 Basic principles of aperture synthesis 31
2.3 Methods of signal processing in imaging radar 33
2.3.1 SAR signal processing and holographic radar for
earth surveys 33
2.3.2 ISAR signal processing 34
2.4 Coherent radar holographic and tomographic processing 36
2.4.1 The holographic approach 36
2.4.2 Tomographic processing in 2D viewing geometry 41

3 Quasi-holographic and holographic radar imaging of point targets on the earth surface 49

3.1 Side-looking SAR as a quasi-holographic radar 49
3.1.1 The principles of hologram recording 50
3.1.2 Image reconstruction from a microwave hologram 53
3.1.3 Effects of carrier track instabilities and object’s motion
on image quality 57

3.2 Front-looking holographic radar 60


3.2.1 The principles of hologram recording 60
3.2.2 Image reconstruction and scaling relations 62
3.2.3 The focal depth 67
3.3 A tomographic approach to spotlight SAR 70
3.3.1 Tomographic registration of the earth area projection 70
3.3.2 Tomographic algorithms for image reconstruction 72

4 Imaging radars and partially coherent targets 79


4.1 Imaging of extended targets 80
4.2 Mapping of rough sea surface 82
4.3 A mathematical model of imaging of partially
coherent extended targets 85
4.4 Statistical characteristics of partially coherent target images 87
4.4.1 Statistical image characteristics for zero incoherent
signal integration 88
4.4.2 Statistical image characteristics for incoherent signal
integration 90
4.5 Viewing of low contrast partially coherent targets 94

5 Radar systems for rotating target imaging (a holographic approach) 101

5.1 Inverse synthesis of 1D microwave Fourier holograms 101
5.2 Complex 1D microwave Fourier holograms 110
5.3 Simulation of microwave Fourier holograms 112

6 Radar systems for rotating target imaging (a tomographic approach) 117

6.1 Processing in frequency and space domains 117
6.2 Processing in 3D viewing geometry: 2D and 3D imaging 119
6.2.1 The conditions for hologram recording 120
6.2.2 Preprocessing of radar data 124
6.3 Hologram processing by coherent summation of partial
components 126
6.4 Processing algorithms for holograms of complex geometry 130
6.4.1 2D viewing geometry 131
6.4.2 3D viewing geometry 141

7 Imaging of targets moving in a straight line 147


7.1 The effect of partial signal coherence on
the cross range resolution 148
7.2 Modelling of path instabilities of an aerodynamic target 151
7.3 Modelling of radar imaging for partially coherent signals 152

8 Phase errors and improvement of image quality 157


8.1 Phase errors due to tropospheric and ionospheric turbulence 157
8.1.1 The refractive index distribution in the troposphere 157
8.1.2 The distribution of electron density fluctuations in
the ionosphere 166
8.2 A model of phase errors in a turbulent troposphere 167
8.3 A model of phase errors in a turbulent ionosphere 172
8.4 Evaluation of image quality 173
8.4.1 Potential SAR characteristics 173
8.4.2 Radar characteristics determined from images 175
8.4.3 Integral evaluation of image quality 177
8.5 Speckle noise and its suppression 181
8.5.1 Structure and statistical characteristics of speckle 182
8.5.2 Speckle suppression 184

9 Radar imaging application 191


9.1 The earth remote sensing 191
9.1.1 Satellite SARs 191
9.1.2 SAR sea ice monitoring in the Arctic 195
9.1.3 SAR imaging of mesoscale ocean phenomena 204
9.2 The application of inverse aperture synthesis for radar
imaging 215
9.3 Measurement of target characteristics 217
9.4 Target recognition 222

References 231

List of abbreviations 241

Index 243

List of figures

Chapter 1
Figure 1.1 The process of imaging by a thin lens 8
Figure 1.2 A schematic illustration of the focal depth of an optical image:
(a) image of point M lying on the optical axis; (b) image of
point A; (c) image of point B and (d) image of points A and B
in the planes M1, M2 and M3 9
Figure 1.3 The process of optical hologram recording: 1 – reference
wave; 2 – object; 3 – photoplate and 4 – object’s wave 11
Figure 1.4 Image reconstruction from a hologram: 1 – virtual image;
2 – real image; 3 – zero diffraction order and
4 – hologram 13
Figure 1.5 Viewing geometry in computerised tomography (from
Reference 15): m – circumference for measurements;
c – circumference with the centre at point O enveloping
a cross section; p – arbitrary point in the circle with the polar
coordinates ρ and ϕ; A, C and D – wide beam transmitters;
B, C′ and D′ – receivers; γ–γ′, δ–δ′ – parallel elliptic arcs
defining the resolving power of a transmitter–receiver pair
(CC′ and DD′) 15
Figure 1.6 A scheme of X-ray tomographic experiment using a collimated
beam: 1 – X-rays; 2 – projection angle;
3 – registration line; 4 – projection axis and
5 – integration line 17
Figure 1.7 The geometrical arrangement of the G(x, y) pixels in
the Fourier region of a polar grid. The parameters ϑmax and
ϑmin are the variation range of the projection angles. The
shaded region is the SAR recording area 18
Figure 1.8 Synthesis of a radar aperture pattern: (a) real antenna array
and (b) synthesised antenna array 22

Chapter 2
Figure 2.1 Viewing geometry for a rotating cylinder: 1, 2, 3 – scattering
centres (scatterers) 30
Figure 2.2 Schematic illustrations of aperture synthesis techniques:
(a) direct synthesis implemented in SAR, (b) inverse synthesis
for a target moving in a straight line and (c) inverse synthesis
for a rotating target 32
Figure 2.3 The holographic approach to signal recording and processing
in SAR: 1 – recording of a 1D Fraunhofer or Fresnel
diffraction pattern of target field in the form of a transparency
(azimuthal recording of a 1D microwave hologram), 2 – 1D
Fourier or Fresnel transformation, 3 – display 34
Figure 2.4 Synthesis of a microwave hologram: (a) quadratic hologram
recorded at a high frequency, (b) quadratic hologram recorded
at an intermediate frequency, (c) multiplicative hologram
recorded at a high frequency, (d) multiplicative hologram
recorded at an intermediate frequency, (e) quadrature
holograms, (f) phase-only hologram 37
Figure 2.5 A block diagram of a microwave holographic receiver:
1 – reference field, 2 – reference signal cos(ω0 t + ϕ0 ),
3 – input signal A cos(ω0 t + ϕ0 − ϕ), 4 – signal sin(ω0 t + ϕ0 )
and 5 – mixer 39
Figure 2.6 Illustration for the calculation of the phase variation of
a reference wave 39
Figure 2.7 The coordinates used in target viewing 42
Figure 2.8 2D data acquisition design in the tomographic approach 45
Figure 2.9 The space frequency spectrum recorded by a coherent
(microwave holographic) system. The projection slices are
shifted by the value fpo from the coordinate origin 46
Figure 2.10 The space frequency spectrum recorded by an incoherent
(tomographic) system 47

Chapter 3
Figure 3.1 A scheme illustrating the focusing properties of a Fresnel zone
plate: 1 – collimated coherent light, 2 – Fresnel zone plate,
3 – virtual image, 4 – real image and 5 – zeroth-order
diffraction 50
Figure 3.2 The basic geometrical relations in SAR 51
Figure 3.3 An equivalent scheme of 1D microwave hologram recording
by SAR 51
Figure 3.4 The viewing field of a holographic radar 60
Figure 3.5 A schematic diagram of a front-looking holographic radar 61
Figure 3.6 The resolution of a front-looking holographic radar along
the x-axis as a function of the angle ϕ 61

Figure 3.7 The resolution of a front-looking holographic radar along


the z-axis as a function of the angle ϕ 62
Figure 3.8 Generalised schemes of hologram recording (a) and
reconstruction (b) 63
Figure 3.9 Recording (a) and reconstruction (b) of a two-point object for
finding longitudinal magnifications: 1, 2 – point objects,
3 – reference wave source and 4 – reconstructing wave
source 67
Figure 3.10 The focal depth of a microwave image: 1 – reconstructing
wave source, 2 – real image of a point object and
3 – microwave hologram 68
Figure 3.11 The basic geometrical relations for a spotlight SAR 70

Chapter 4
Figure 4.1 The geometrical relations in a SAR 85
Figure 4.2 A generalised block diagram of a SAR 85
Figure 4.3 The variation of the parameter Q with the synthesis range Ls at
λ = 3 cm, = 0.02 and various values of R 90
Figure 4.4 The dependence of the spatial correlation range of the image
on normalised Ls for multi-ray processing (solid lines) at
various degrees of incoherent integration De and for averaging
of the resolution elements (dashed lines) at various
Ge: λ = 3 cm, R = 10 km; 1, 5 – 0 (curves overlap);
2, 6 – 0.25(λR/2)^1/2; 3, 7 – (λR/2)^1/2; 4, 8 – 2.25(λR/2)^1/2 91
Figure 4.5 The variation of the parameter Qh with the number of
integrated signals Ni at various values of Ka 93
Figure 4.6 The variation of the parameter Qe with the synthesis range Ls
at various signal correlation times τc 97
Figure 4.7 The parameter Q as a function of the synthesis range Ls at
various signal correlation times τc 98

Chapter 5
Figure 5.1 A schematic diagram of direct bistatic radar synthesis of
a microwave hologram along arc L of a circle of radius R0 :
1 – transmitter, 2 – receiver 102
Figure 5.2 A schematic diagram of inverse synthesis of a microwave
hologram by a monostatic radar located at point C 103
Figure 5.3 The geometry of data acquisition for the synthesis of a 1D
microwave Fourier hologram of a rotating object 103
Figure 5.4 Optical reconstruction of 1D microwave images from
a quadrature Fourier hologram: (a) flat transparency,
(b) spherical transparency 106
Figure 5.5 The dependence of microwave image resolution on
the normalised aperture angle of the hologram 109

Figure 5.6 Microwave images reconstructed from Fourier holograms: (a)


quadrature hologram, (b) complex hologram with carrier
frequency, (c) complex hologram without carrier frequency
and (d,e,f) the variation of the reconstructed image with the
hologram angle ψs (complex hologram without carrier
frequency) 113
Figure 5.7 The algorithm of digital processing of 1D microwave complex
Fourier holograms 114
Figure 5.8 A microwave image of a point object, reconstructed digitally
from a complex Fourier hologram, as a function of the object's
aspect ϑ0 (ψs = π/6): (a) ϑ0 = π/12, (b) ϑ0 = 5π/2 and
(c) ϑ0 = 3π/4 115

Chapter 6
Figure 6.1 The aspect variation relative to the line of sight of a ground
radar as a function of the viewing time for a satellite at
the culmination altitudes of 31°, 66° and 88°: (a) aspect α and
(b) aspect β 121
Figure 6.2 Geometrical relations for 3D microwave hologram recording:
(a) data acquisition geometry; a–b, trajectory projection onto
a unit surface relative to the radar motion and (b) hologram
recording geometry 123
Figure 6.3 The sequence of operations in radar data processing during
imaging 125
Figure 6.4 Subdivision of a 3D microwave hologram into partial
holograms: (a) 1D partial (radial and transversal), (b) 2D
partial (radial and transversal) and (c) 3D partial holograms 128
Figure 6.5 Subdivision of a 3D surface hologram into partial holograms:
(a) radial, (b) 1D partial transversal and (c) 2D partial 129
Figure 6.6 Coherent summation of partial holograms. A 2D narrowband
microwave hologram: (a) highlighting of partial holograms
and (b) formation of an integral image 132
Figure 6.7 Coherent summation of partial holograms. A 2D wideband
microwave hologram: (a) highlighting of partial holograms,
(b) formation of an integral image 133
Figure 6.8 The computational complexity of the coherent summation
algorithms as a function of the target dimension for a
narrowband microwave hologram: (a) transverse partial
images, (b) hologram samples 137
Figure 6.9 The relative computational complexity of coherent summation
algorithms as a function of the target dimension for a
narrowband microwave hologram: (a) transverse partial
images/CCA, (b) hologram samples/CCA 140

Figure 6.10 The relative computational complexity of coherent summation


algorithms of hologram samples and transverse partial images
versus the coefficient µ in the case of a wideband hologram 141
Figure 6.11 The relative computational complexity of coherent summation
algorithms for radial and transverse partial images versus the
coefficient µ in the case of a wideband hologram 142
Figure 6.12 The transformation of the partial coordinate frame in the
processing of a 3D hologram by coherent summation of
transverse partial images 143

Chapter 7
Figure 7.1 Characteristics of an imaging device in the case of partially
coherent echo signals: (a) potential resolving power at C2 = 1,
(b) performance criterion (1 – dc = 6.98 m, 2 – dc = 3.49 m
and 3 – dc = 0) 151
Figure 7.2 Typical errors in the impulse response of an imaging device
along the s-axis: (a) response shift, (b) response broadening,
(c) increased amplitude of the response side lobes and
(d) combined effect of the above factors 153
Figure 7.3 The resolving power of an imaging device in the presence of
range instabilities versus the synthesis time Ts and the method
of resolution step measurement: (a) σp = 0.04 m; 1 and 1′
(2 and 2′) – first (second) way of resolution step measurement;
1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s; (b) σp = 0.05 m,
1 and 1′ (2 and 2′) – first (second) way of resolution step
measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s 154
Figure 7.4 The resolving power of an imaging system in the presence of
velocity instabilities versus the synthesis time Ts and the
method of resolution step measurement: (a) σx,y = 0.1 m/s
(other details as in Fig. 7.3), (b) σx,y = 0.2 m/s (other details
as in Fig. 7.3) 155
Figure 7.5 Evaluation of the performance of a processing device in the
case of partially coherent signals versus the synthesis time Ts
and the space step of path instability correlation dc:
1 – dc = 6.98 m, 2 – dc = 3.49 m 155

Chapter 8
Figure 8.1 The normalised refractive index spectrum Φn(χ)/Cn² as
a function of the wave number χ in various models:
1 – Tatarsky's model I, 2 – Tatarsky's model II, 3 – Kármán's
model and 4 – the modified Kármán model 160
Figure 8.2 The profile of the structure constant Cn² versus the altitude for
April at the SAR wavelength of 3.12 cm 164
Figure 8.3 The profile of the structure constant Cn² versus the altitude for
November at the SAR wavelength of 3.12 cm 165

Figure 8.4 A geometrical construction for a spaceborne SAR tracking


a point object A through a turbulent atmospheric stratum of
thickness ht 168
Figure 8.5 A schematic test ground with corner reflectors for
investigation of SAR performance 176
Figure 8.6 A 1D SAR image of two corner reflectors 177
Figure 8.7 A histogram of the noise distribution in a SAR receiver 178
Figure 8.8 The grey-level (half-tone) resolution versus the number of
incoherently integrated frames N 179
Figure 8.9 The dependence of the image interpretability on the linear
resolution ρa = ρr = ρ 180
Figure 8.10 The dependence of the half-tone resolution on the number of
incoherent integrations over the total real antenna pattern 181

Chapter 9
Figure 9.1 The mean monthly convoy speed in the NSR changes from V0
(without satellite data) to V1 (SAR images used by the
icebreaker’s crew to select the route in sea ice). The mean ice
thickness (hi ) is shown as a function of the season. (N. Babich,
personal communications) 198
Figure 9.2 (a) Photo of grease ice and (b) a characteristic dark SAR
signature of grease ice. © European Space Agency 199
Figure 9.3 Photo of typical nilas with finger-rafting 200
Figure 9.4 A RADARSAT ScanSAR Wide image of 25 April 1998,
covering an area of 500 km × 500 km around the northern
Novaya Zemlya. A geographical grid and the coastline are
superimposed on the image. © Canadian Space Agency 201
Figure 9.5 A RADARSAT ScanSAR Wide image of 3 March 1998,
covering the boundary between old and first-year sea ice in
the area to the north of Alaska. © Canadian Space Agency 202
Figure 9.6 (a) Photo of a typical pancake ice edge and (b) a characteristic
ERS SAR signature of pancake ice. A mixed bright and dark
backscatter signature is typical for pancake and grease ice
found at the ice edge. © European Space Agency 203
Figure 9.7 A RADARSAT ScanSAR Wide image of 8 May 1998,
covering the south-western Kara Sea. © Canadian Space
Agency 204
Figure 9.8 An ENVISAT ASAR image of 28 March 2003, covering
the ice edge in the Barents Sea westward and southward of
Svalbard. © European Space Agency 205
Figure 9.9 An ERS-2 SAR image of 11 September 2001, covering
the Red Army Strait in the Severnaya Zemlya Archipelago.
© European Space Agency 206

Figure 9.10 An ERS-2 SAR image (100 km × 100 km) taken on 24 June
2000 over the Black Sea (region to the east of the Crimea
peninsula) and showing upwelling and natural films 208
Figure 9.11 SST retrieved from a NOAA AVHRR image on 24 June
2000. 209
Figure 9.12 A fragment of an ERS-2 SAR image (26 km × 22 km) taken on
30 September 1995 over the North Sea near the Norwegian
coast and showing swell 210
Figure 9.13 An ERS-2 SAR image (100 km × 100 km) taken on
28 September 1995 over the North Sea and showing an oil
spill, wind shadow, low wind and ocean fronts 211
Figure 9.14 An ERS-1 SAR image (100 km × 100 km) taken on
29 September 1995 over the North Sea showing rain
cells 213
Figure 9.15 An ERS-2 SAR image (18 km × 32 km) taken on
30 September 1995 over the North Sea showing an internal
wave and a ship wake 214
Figure 9.16 The scheme of the reconstruction algorithm 221
Figure 9.17 A typical 1D image of a perfectly conducting cylinder 222
Figure 9.18 The local scattering characteristics for a metallic cylinder
(E-polarisation) 223
Figure 9.19 The local scattering characteristics for a metallic cylinder
(H-polarisation) 224
Figure 9.20 A mathematical model of a radar recognition device 226

List of tables

Chapter 6
Table 6.1 The number of spectral components of a PH 136

Chapter 8
Table 8.1 The main characteristics of the synthetic aperture pattern 174

Chapter 9
Table 9.1 Technical parameters of SARs borne by the SEASAT and
Shuttle 192
Table 9.2 Parameters of the Almaz-1 SAR 192
Table 9.3 The parameters of the ERS-1/2 satellites 193
Table 9.4 SAR imaging modes of the RADARSAT satellite 194
Table 9.5 The ENVISAT ASAR operation modes 194
Table 9.6 The LRIR characteristics 216
Table 9.7 The variants of the sign vectors 227
Table 9.8 The valid recognition probability (a Bayes classifier) 228
Table 9.9 The valid recognition probability (a classifier based on
the method of potential functions) 228

Introduction

Analysis of the current state and trends of radar development shows that novel
methods of target viewing are based on a detailed study of echo signals and their
informative characteristics. These methods are aimed at obtaining complete data on a
target, with emphasis on revealing new, stable parameters for target recognition. One
way of raising the efficiency of radar technology is to improve the available methods of
radio vision, or imaging. Radio vision systems provide a high resolution, considerably
extending the scope of target detection and recognition. This field of radar science and
technology is very promising, because it paves the way from the classical detection
of a point target to the imaging of a whole object.
The physical mechanism underlying target viewing can be understood on a heuris-
tic basis. An electromagnetic wave incident on a target induces an electric current on
it, generating a scattered electromagnetic wave. In order to find the scattering prop-
erties of the target, we must visualise its elements making the greatest contribution
to the wave scattering. This brings us to the concept of a radar image, which can be
defined as a spatial distribution pattern of the target reflectivity. Therefore, an image
must give a spatial quantitative description of this physical property of the target with
a quality not less than that provided by conventional observational techniques.
Radio vision makes it possible to sense an object as a visual picture. This is very
important because we get about 90 per cent of all information about the world through
vision. Of course, a radar image differs from a common optical image. For instance,
a surface rough to light waves will be specular to radio waves (microwaves), and
images of many objects will look like bright spots, or glare. However, the repre-
sentation of information transported by microwaves as visual images has become
quite common. It took much time and effort to get a high angular resolution in the
microwave frequency band because of the limited size of a real antenna. It was not
until the 1950–1960s that a sufficiently high resolution was obtained by a side-looking
radar with a large synthesised antenna aperture. The synthetic aperture method was
then described in terms of the range-Doppler approach.
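The resolution gain from aperture synthesis can be illustrated with the standard textbook estimates (not a derivation from this book): a real antenna of size D resolves roughly λR/D in cross range at range R, whereas a synthetic aperture of length L resolves roughly λR/(2L), the factor 2 coming from the two-way phase path. A minimal sketch, with illustrative X-band numbers chosen by us:

```python
# Hedged sketch of cross-range resolution: real vs synthetic aperture.
# Formulas are the standard first-order estimates; all numbers below
# (wavelength, range, antenna size, aperture length) are illustrative.

def real_aperture_resolution(lam, rng, d):
    """Cross-range resolution of a real antenna: beamwidth ~ lam/d times range."""
    return lam * rng / d

def synthetic_aperture_resolution(lam, rng, length):
    """Cross-range resolution after synthesis over length L; the two-way
    propagation phase doubles the effective aperture, hence the factor 2."""
    return lam * rng / (2.0 * length)

lam = 0.03   # 3 cm wavelength (X band)
rng = 800e3  # 800 km slant range (spaceborne geometry)
d = 10.0     # 10 m real antenna
length = 4000.0  # 4 km synthetic aperture

print(real_aperture_resolution(lam, rng, d))            # 2400.0 (metres)
print(synthetic_aperture_resolution(lam, rng, length))  # 3.0 (metres)
```

The three-orders-of-magnitude gap between the two numbers is why a side-looking radar with a synthesised aperture, rather than a physically larger antenna, made microwave imaging practical.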
At about the same time, a new method of imaging in the visible spectrum emerged
which was based on recording and reconstruction of the wave front and its phase,
using a reference wave. A lens-free registration of the wave front (the holographic
technique), followed by the image reconstruction, was first suggested by D. Gabor in
1948 and re-discovered by E. Leith and J. Upatnieks in 1963. The two researchers
suggested a holographic method with a ‘side reference beam’ to eliminate the zeroth
diffraction order. This principle was later used in a new, side-looking type of radar.
A specific feature of holographic imaging is that a hologram records an integral
Fourier or Fresnel transform of the object’s scattering function. The emergence of
holography radically changed our conception of an object’s image. Earlier, humans
had dealt with images produced by recording the distribution of light intensity in a
certain plane. But objects can generate a light field or another kind of electromagnetic
field with all of its parameters modulated: the amplitude, phase, polarisation, etc.
This discovery considerably extended the scope of spatial information that could be
extracted about the object of interest.
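The statement that a hologram records an integral Fourier (or Fresnel) transform of the object's scattering function, with the reference wave preserving the phase, can be sketched numerically. The following is a minimal 1D illustration of the side-reference-beam idea, not an algorithm from this book; the grid size, the two scatterers and the reference offset are all arbitrary assumptions:

```python
import numpy as np

# Hedged sketch: a 1D Fourier hologram of a two-scatterer object recorded
# with an off-axis ("side") reference beam. All positions are illustrative.
N = 256
g = np.zeros(N, dtype=complex)   # object scattering function
g[10], g[20] = 1.0, 0.5          # two point scatterers

r = np.zeros(N, dtype=complex)   # off-axis reference, modelled as a point
r[100] = 1.0                     # source offset by 100 samples

# Fields on the recording plane are Fourier transforms of the source plane.
O, R = np.fft.fft(g), np.fft.fft(r)

# The hologram stores only intensity, yet the interference terms
# O*conj(R) and conj(O)*R keep the phase of the object wave.
H = np.abs(O + R) ** 2

# Reconstruction: an inverse transform separates the zero diffraction order
# (near index 0) from the true and conjugate (twin) images, which appear
# shifted by the reference offset.
rec = np.fft.ifft(H)

print(round(abs(rec[166]), 3))  # true image of scatterer 1 -> 1.0
print(round(abs(rec[90]), 3))   # twin image of scatterer 1 -> 1.0
print(round(abs(rec[176]), 3))  # true image of scatterer 2 -> 0.5
```

Because the reference is off-axis, the zero order, the true image and the conjugate twin land in separate regions of the output, which is exactly why the 'side reference beam' removes the overlap that limited Gabor's in-line scheme.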
It should be noted that holography brought about revolutionary changes only in
optics, because until then optics had possessed no means of recording the phase
structure of an optical field. The application of holographic principles to the
microwave frequency band, by contrast, proceeded easily and gave excellent results.
This was because radio engineering had employed methods of registration
of the electromagnetic wave phase long before the emergence of holography. For
many years, radar imaging developed independently of holography, although some
workers (E.N. Leith, W.E. Kock, D.L. Mensa, B.D. Steinberg) did note that many
intermediate steps in the recording and processing techniques for radar imaging were
quite similar to those of holography and tomography. These researchers, however,
reviewed the holographic principles only briefly, to point out the fundamental
similarities and differences between optical and radar imaging, without making a
comprehensive analysis in the radar context.
E.N. Leith and A.L. Ingalls showed that the operation of a side-looking radar
should be treated in terms of a holographic approach. Holograms recorded in the
microwave frequency range were referred to as microwave holograms, and radar
systems based on the holographic principle were called quasi-holographic by E.N. Leith.
In fact, the work done in those years became the basis for designing a special type of
radar to perform imaging. The research into radar imaging was developing quite inten-
sively, and many scientists made their contributions to it: L.J. Cutrona, A. Kozma,
D.A. Ausherman, G. Graf, J.L. Walker, W.M. Brown, D.L. Mensa, D.C. Munson,
B.D. Steinberg, N.H. Farhat, V.C. Chen, D.R. Wehner and others.
These efforts were accompanied by the development of tomographic techniques
for image reconstruction in medicine and physiology (X-ray imaging). Initially,
tomography was treated as a way of reconstructing the spatial distribution of a cer-
tain physical characteristic of an object by making computational operations with
data obtained during the probing of the object. This resulted in the emergence of
reconstructive computerised tomography possessing powerful mathematical meth-
ods. Later, tomographic techniques were suggested capable of reconstructing a
physical characteristic of an object by a mathematical processing of the field reflected
by it.
Naturally, there have been suggestions to combine the available methods of
radar imaging (e.g. the range-Doppler principles) with tomographic algorithms
(D.L. Mensa, D.C. Manson). At present, the work on radar imaging goes on,
combining the principles of microwave holography, range-Doppler methods of
reflected field recording and tomographic image reconstruction.
In Russia, the theory of a side-looking radar has been developed by many
workers: Yu.A. Melnik, N.I. Burenin, G.S. Kondratenkov, A.P. Reutov, Yu.A. Feoktistov, E.F. Tolstov, L.B. Neronsky and others. Major contributions to the theory of inverse aperture synthesis and tomographic image processing have been made by S.A. Popov, B.A. Rozanov, J.S. Zinoviev, A.Ya. Pasmurov, A.F. Kononov, A.A. Kuriksha, A. Manukyan and others.
This book presents systematised results on the application of direct and inverse
aperture synthesis for radar imaging by holographic and tomographic techniques.
The focus is on the research data obtained by the authors themselves. The book is
primarily intended for engineers, designers and researchers, who are working in radar
design and maintenance and are interested in the fundamental problem of extracting
useful information from radar data.
The book consists of three parts: introductory Chapters 1 and 2, theoretical
Chapters 3–8 and concluding Chapter 9.
The first two chapters will be useful to a reader who has but limited knowledge of
optical holography, microwave holography and tomography. They cover the material
available in the literature, but the information is presented in such a way that the reader
will be able to better understand the chapters that follow. Besides, Chapter 1 treats
the equation for an optical hologram in a non-trivial way to explain the speckle struc-
ture of a radar image. Chapter 2 explains the physical difference between coherent
(microwave holographic) and incoherent (tomographic) imaging. The mathematical
relations presented can be regarded as an extension of the classical projection slice theorem to coherent imaging. This allows application of the analytical methods of reconstructive computerised tomography for further development of coherent
imaging theory.
Chapters 3–8 represent an attempt to treat the imaging radar operation in terms of
holography, microwave holography and tomography, without resorting to the Doppler
approach. Most of this material comprises the authors' results published during the past 30 years.
Chapter 3 discusses the holographic approach as applied to a side-looking radar.
Its azimuthal channel is treated as a holographic system, in which the formation of
a microwave hologram represents the recording of a field scattered from an artifi-
cial reference source, and the image reconstruction is described in terms of physical
optics. We show that the use of a subcarrier frequency by turning the antenna beam
away from the direction normal to the track velocity vector leads to a distorted
image. The holographic approach can readily evaluate a permissible deviation of
the carrier’s pathway from a straight line and find various radar parameters, using
conventional geometrical and optical methods. The holographic analysis of a front-
looking radar on the basis of a generalised hologram geometry shows that the image
is three-dimensional (3D); we describe the conditions for recording an undistorted
image in the longitudinal and transversal directions. We also introduce the concept
of a focal depth and explain the pseudoscopic character of an image. The appli-
cation of tomographic principles to a spot-light radar is largely discussed using
the results of D.C. Munson, who was the first to demonstrate their applicability to data
processing.
Chapter 4 considers the radar aperture synthesis during the viewing of partially
coherent and extended targets. The mathematical model of the aperture is also based
on the holographic principle; the aperture is thought to be a filter with a frequency-
contrast characteristic, which registers the space–time spectrum of a target. This
approach is useful for the calculation of incoherent integration efficiency to smooth
out low contrast details on an image.
In Chapter 5 we discuss microwave imaging of a rotating target, using 1D Fourier
hologram theory and find the longitudinal and transverse scales of a reconstructed
image, the target resolution and a criterion for an optimal processing of a Fourier
microwave hologram. The resolution of a visual radar image is found to be consistent
with the Abbe criterion for optical systems. One specific feature is that it is necessary to
introduce a space carrier frequency to separate two conjugate images and an image
of the reference source. Here we have an analogy with synthetic aperture theory, with
the exception that we employ the concept of a complex microwave Fourier hologram.
It is shown that there is no zeroth diffraction order in digital reconstruction. We have
formulated some requirements on methods and devices for synthesising this type of
hologram. This method is easy and useful to implement in an anechoic chamber.
Chapter 6 focuses on tomographic processing of 2D and 3D microwave holograms
of a rotating target in 3D viewing geometry with a non-equidistant arrangement of
echo signal records in the registration of its aspect variation (for space objects). The
suggested technique of image reconstruction is based on the processing of microwave
holograms by coherent summation of partial holograms. These are classified into 1D,
2D, 2D radial, as well as narrowband and wideband partial holograms. This technique
is feasible in any mode of target motion. The method of hologram synthesis com-
bined with coherent computerised tomography represents a new processing technique
which accounts for a large variation of real hologram geometries in 3D viewing. No other processing procedure offers this advantage as yet.
Chapter 7 is concerned with methods of hologram processing for a target moving
in a straight line and viewed by a ground radar processing partially coherent echo
signals. The signal coherence is assumed to be perturbed by such factors as a turbulent
medium, elastic vibrations of the target’s body, vibrations of parts of the engines, etc.
We suggest an approach to modelling the track instabilities of an aerodynamic target
and present estimates of the radar resolving power in a real cross-section region.
Chapter 8 focuses on phase errors in radar imaging, evaluation of image quality
and speckle noise.
Finally, possible applications of radar imaging are discussed in Chapter 9. The
emphasis is on spaceborne synthetic aperture radars for surveying the Earth's surface.
Some novel and original developments by researchers and designers at the Nansen
Environmental and Remote Sensing Centre in Bergen (Norway) and at the Nansen
International Environmental and Remote Sensing Centre in St Petersburg (Russia)
are described. They have much experience in processing holograms from various
SARs: Almaz-1 (Russia), RADARSAT (Canada), ERS-1/2 and ENVISAT ASAR (the
European Space Agency). Of special interest to the reader might be the information
about the use of microwave holography for classification of sea ice, navigation in the Arctic, and global monitoring of ocean phenomena and characteristics to be used
for surveying gas and oil resources. We illustrate the use of the holographic methods
in a coherent ground radar for 2D imaging of the Russian spacecraft Progress and
for the study of local radar responses to objects of complex geometry in an anechoic
chamber, aimed at target recognition.
To conclude, the methods and techniques described in this book are also appli-
cable to many other research fields, including ultrasound and sonar, astronomy,
geophysics, environmental sciences, resources surveys, non-destructive testing,
aerospace defence and medical imaging, that have already started to utilise this rapidly
developing technology. We hope that our book will also be used as an advanced text-
book by postgraduate and graduate students in electrical engineering, physics and
astronomy.

Acknowledgements

The idea to write a book about the application of holographic principles in radio-
location occurred to us at the end of the last century and was supported by the late
Professor V.E. Dulevich. We are indebted to him for his encouragement and useful
suggestions.
We express our gratitude to the staff members of the Nansen Centres (Bergen and
St Petersburg), who provided us with valuable information about the practical applica-
tion of a side-looking radar. We should like to thank V.Y. Aleksandrov, L.P. Bobylev,
D.B. Akimov, O.M. Johannessen and S. Sandven for their help in the preparation of
these materials.
Our deepest thanks also go to our colleagues E.F. Tolstov and A.S. Bogachev for
their excellent description of the criteria for evaluation of radar images. This book
is based on the results of our investigations that have taken a long period of time.
We have collaborated with many specialists who helped to shape our conception of a
coherent radar system. We thank them all, especially S.A. Popov, G.S. Kondratenkov,
P.Ya. Ufimtzev, D.B. Kanareykin and Yu.A. Melnik, whose contribution was particularly valuable. We also thank our students V.R. Akhmetyanov, A.L. Ilyin and
V.P. Likhachev for their assistance in the preparation of this book. We are also grateful
to L.N. Smirnova, the translator of the book, for her immense help in producing the
English version.
Chapter 1
Basic concepts of radar imaging

1.1 Optical definitions

At present, there is a certain class of microwave radars capable of imaging various
types of extended targets. These are usually termed imaging radars. Before giving a
definition of a ‘microwave image’, we should like to draw the reader’s attention to two
circumstances. First, a microwave image is always viewed by a radar operator in the
visible range, while the imaging is performed in the microwave range. Second, this
book considers radar imaging based on a combination of holographic and tomographic
approaches. Therefore, we should first recall the basic concepts necessary for the
description of imaging by conventional photographic and holographic devices in the
visible spectral range.
Let us construct an image of an object (AB) formed by a thin lens (Fig. 1.1) [19]. The lens thickness can be neglected, and one can assume that the principal planes of the object AB and its image A′B′ coincide and pass through the lens centre (line MN). The other designations are the focal lengths HF, HF′, f, f′ and the distances x, x′ separating the object and its image from the respective focal points F and F′. The straight line AA′ connecting the vertices of the object and the image passes through the centre of the lens H. If we draw an auxiliary ray AF intercepting the principal plane at the point N and an auxiliary ray AM parallel to the optical axis, we can find the image of the point A at the point A′, where the refracted rays MA′ and NA′ intercept. If we draw the normal A′B′ from the point A′ to the optical axis, we shall get the optical image of the object AB. The similarity conditions yield the governing equations for an optical image, or Newton's formulae:

y′/y = −f/x = −x′/f′, (1.1)

xx′ = ff′. (1.2)

Figure 1.1 The process of imaging by a thin lens

The relation between the elements of an image and the corresponding elements of an object is known as the linear or transversal lens magnification V defined as

V = y′/y. (1.3)

Since the lens is described by the equality f = −f′, Eq. (1.2) gives

xx′ = −f′². (1.4)
Newton’s formulae relate the distances of the object and the image to the respective
focal points. However, it is sometimes more convenient to use their distances to the
respective principal planes. Let us denote these distances as a1 and a2. Then, using Fig. 1.1 and Eq. (1.2), we can get

1/a2 − 1/a1 = 1/f′. (1.5)

The linear magnification can be expressed through a1 and a2 as

V = a2/a1. (1.6)
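As an illustration of Eqs (1.4)–(1.6), the short Python sketch below computes the image position and magnification for a hypothetical thin lens; the 50 mm focal length, the object distance and the sign convention (distances negative to the left of the lens) are our own illustrative assumptions, not values from the text:

```python
# Thin-lens sketch for Eqs (1.4)-(1.6); the numbers and the sign
# convention (distances negative to the left of the lens) are
# illustrative assumptions only.
f = 50.0      # focal length f' in mm
a1 = -200.0   # object 200 mm to the left of the lens

a2 = 1.0 / (1.0 / f + 1.0 / a1)   # Eq. (1.5): 1/a2 - 1/a1 = 1/f'
V = a2 / a1                       # Eq. (1.6): linear magnification

x = a1 + f    # object distance from the front focal point F
x2 = a2 - f   # image distance from the back focal point F'

print(a2, V)                        # about 66.67 mm and -0.333
print(abs(x * x2 + f * f) < 1e-9)   # Eq. (1.4): xx' = -f'^2 holds
```

Under this convention a positive a2 corresponds to a real, inverted and reduced image on the far side of the lens, consistent with Fig. 1.1.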
Consider now the concept of focal depth in the image space [80]. When constructing
the image to be produced by a lens, we assumed that the image and the object were
in planes normal to the optical axis. Suppose now that the object AB, say, a bulb
filament, is inclined to the optical axis, as is shown in Fig. 1.2, while a photographic
plate is in the plane M1 normal to the optical axis of the objective lens. In order to

Figure 1.2 A schematic illustration of the focal depth of an optical image: (a) image of point M lying in the optical axis; (b) image of point A; (c) image of point B and (d) image of points A and B in the planes M1, M2 and M3.

find the image on the photoplate, we shall construct rays of light going away from
individual points of the object. The light beams going from the object AB to the
objective lens and from the objective lens to the image are conic with the lens as
the base and the points of the object and the image as the vertices. Imagine that
the image of the point M of an object lying in the optical axis is on a photoplate in
the plane M1 (Fig. 1.2(a) and (d)). Then the beam of rays converging onto this image
will have its vertex on the plate. The object’s extremal points A and B will produce
conic rays with the vertices in front of (B′) in Fig. 1.2(c) and behind the photoplate
(A′) (Fig. 1.2(b)). Thus, it is only the point M in the optical axis that will have its image as a bright point M′ in Fig. 1.2(a). The end points A and B of the line will look like light circles A′ and B′. The image of the line will look like M1 in Fig. 1.2(d). If the photoplate is shifted towards A′ (Fig. 1.2(b)) or B′ (Fig. 1.2(c)), we shall have different images M2 or M3 (Fig. 1.2(d)).
It follows from this representation that the image of a 3D object extended along
the optical axis will have different focal depths on the plate at all the points in the
image space. In practice, however, images of such objects have a good contrast.
Therefore, the objective lens possesses a considerable focal depth. This parameter determines the maximum longitudinal distance between two points of an object at which the sizes of their images do not exceed the eye's unit resolution. Therefore, the classical recording
on a photoplate produces a 2D image, which cannot be transformed to a 3D image.
The third dimension may be perceived only due to indirect phenomena such as the
perspective.
Now let us describe the real and virtual optical images and see how the image
of a point object M can be constructed with rays. The rays go away from the object
in all directions. If one of the rays encounters a lens along its pathway, its trajectory
will change. If the rays deflected by the lens intercept, when extended along the light
propagation direction, a point image will be formed at the interception and can be
recorded on a screen or a photoplate. This kind of image is known as real. However,
when the rays are extended along the direction opposite to the light propagation
direction, both the interception point and the image are said to be virtual. The images
in Fig. 1.2 are real because they are formed by rays intercepting at their extension
along the light propagation.
An optical image may be orthoscopic or pseudoscopic. Suppose a 2D object has a surface relief; its image will be orthoscopic if the relief is not reversed longitudinally: the convex parts of the object look convex on the image. Using the
above approach, we can show that the image formed by a thin lens is orthoscopic. If
an image has a reverse relief, it is termed pseudoscopic; such images are produced
by holographic cameras.
Thus, images produced by classical methods have the following typical charac-
teristics.
• Imaging includes only the recording of incident light intensity, while its wave
phase remains unrecorded. For this reason, this sort of image cannot be
transformed to a 3D image.
• An image has a limited focal depth.
• An image produced by a thin lens is real and orthoscopic.

1.2 Holographic concepts

Holography is a lens-free way of recording images of 3D objects into 2D recording media [29]. This process includes two stages. The first stage is called hologram
recording, during which the interference between the diffraction field from an object
Figure 1.3 The process of optical hologram recording: 1 – reference wave; 2 – object; 3 – photoplate and 4 – object's wave

and a reference field is recorded on a photoplate or another photosensitive material.
A necessary condition is that both fields should be coherent. In their original experi-
ments, the pioneers of holography used mercury sources that were later replaced by
lasers. The interference pattern registered on a photoplate was called a hologram.
The second stage is that of image reconstruction including the illumination of the
processed photoplate with a wave identical to the reference wave. Suppose, for sim-
plicity, that the reference wave is plane (Fig. 1.3) and propagates at an angle θ to the
z-axis (x, y, z are coordinates in the hologram plane). The object’s wave is described
by a complex function

u(x, y) = a(x, y) exp(−jϕ(x, y))

and the reference wave by the function

uo (x, y) = ao exp(−jωo x),

where ωo = k sin θ, θ is the angle of incidence of the wave onto a photoplate located in the xOy plane, k = 2π/λ1 is the wave number, and λ1 is the wavelength of the coherent light source.
The intensity of the interference pattern on the hologram is

I(x, y) = |uo(x, y) + u(x, y)|² = ao² + a²(x, y) + [exp{j[ϕ(x, y) − ωo x]} + exp{−j[ϕ(x, y) − ωo x]}] ao a(x, y)
= ao² + a²(x, y) + 2ao a(x, y) cos[ϕ(x, y) − ωo x]. (1.7)
In addition to the constant term ao² + a²(x, y), the hologram function in Eq. (1.7) contains a harmonic term 2ao a(x, y) cos[ϕ(x, y) − ωo x] with the period

T = 2π/ωo = λ1/sin θ. (1.8)
The quantity ωo which defines this period is known as the space carrier frequency
(SCF) of a hologram. For example, for a He–Ne laser beam (λ1 = 0.6328 µm) incident onto a hologram at an angle of 30◦, the SCF is about 790 lines/mm. The
minimum period of the SCF, reached at θ = π/2, is equal to the wavelength λ1. The a/ao
ratio is called the hologram modulation index.
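As a quick numerical check of Eq. (1.8), the sketch below (our own illustration) computes the fringe period and the corresponding spatial frequency for the He–Ne example values:

```python
import math

# Fringe period of the hologram, Eq. (1.8): T = lambda_1 / sin(theta).
lam = 0.6328e-3               # He-Ne wavelength lambda_1, in mm
theta = math.radians(30.0)    # reference-beam incidence angle

T = lam / math.sin(theta)     # fringe period in mm
nu = 1.0 / T                  # spatial frequency in lines/mm

print(T * 1e3, nu)   # period about 1.27 um, about 790 lines/mm
```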
It follows from Eq. (1.7) that the amplitude and phase distributions of the object’s
wave appear to be coded by the SCF amplitude and phase modulations, respectively.
As a result, a hologram turns out to be the carrier of space frequency which contains
spatial information, whereas a microwave is the carrier of angular frequency and
contains temporal information. Phase-only holograms record only the phase variation
rather than the amplitude.
The first stage of the holographic process is terminated by recording the quan-
tity I (x, y). A photoplate records a hologram. The transmittance of an exposed and
processed photoplate is
Tn(x, y) = I^(−γ), (1.9)

where γ is the plate contrast coefficient. It is reasonable to take γ = −2 because the hologram then corresponds to a sine diffraction grating which does not form diffraction orders higher than the first one. So we have

t(x, y) = √Tn(x, y) = I(x, y).
During the reconstruction, a hologram is illuminated by the same reference wave as
was used at the recording stage. The reconstruction occurs due to the light diffraction
on the hologram (Fig. 1.4). Immediately behind the hologram, a wave field is induced
with the following components:

U(x, y) = exp(−jωo x) t(x, y) = exp(−jωo x) I(x, y) = exp(−jωo x)[ao² + a²(x, y)] + ao a(x, y) exp{−jϕ(x, y)} + ao a(x, y) exp{jϕ(x, y)} exp(−2jωo x). (1.10)
With this, the second stage of the holographic process is terminated.
Three terms in Eq. (1.10) describe waves that form three different images
(Fig. 1.4). The first wave preserves the direction of the reconstructing (plane) wave
and represents the zero diffraction order, or light background. The second wave
ao a(x, y) exp[−jϕ(x, y)] reproduces the object’s wave to an accuracy of the ampli-
tude factor ao , providing a virtual image of the object observed behind the hologram.
At an angle (−2θ ) relative to the normal to the hologram, a complex conjugate
wave propagates, producing a real image in front of the hologram. It can be shown
(Chapter 3) that the virtual image is orthoscopic and the real image is pseudoscopic.
Of importance is the fact that the virtual image is 3D.
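The separation of the three terms of Eq. (1.10) can be demonstrated with a toy 1D simulation (our own construction, in which the object wave is itself a plane wave, so that every term of Eq. (1.10) collapses to a single spectral line; the sample count and frequencies are arbitrary bin-aligned choices):

```python
import numpy as np

# Toy 1D hologram: both the object and reference waves are plane waves.
N = 1024
x = np.arange(N, dtype=float)
f_o = 256 / N                 # reference-wave (carrier) frequency
f_s = 64 / N                  # object-wave frequency (phi = 2*pi*f_s*x)
a_o, a = 1.0, 0.3

u_ref = a_o * np.exp(-2j * np.pi * f_o * x)
u_obj = a * np.exp(-2j * np.pi * f_s * x)
I = np.abs(u_ref + u_obj) ** 2            # recorded hologram, Eq. (1.7)

U = np.exp(-2j * np.pi * f_o * x) * I     # reconstruction, Eq. (1.10)
spec = np.abs(np.fft.fft(U))
freqs = np.fft.fftfreq(N)

# Three strongest spectral lines, in ascending frequency order:
# the conjugate (real) image at f_s - 2*f_o, the zero order at -f_o,
# and the virtual image at -f_s.
top = np.sort(freqs[np.argsort(spec)[-3:]])
print(top)   # -0.4375 (conjugate), -0.25 (zero order), -0.0625 (virtual)
```

With bin-aligned frequencies the spectrum contains exactly the three components of Eq. (1.10), which is the spectral picture behind the three images of Fig. 1.4.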
Figure 1.4 Image reconstruction from a hologram: 1 – virtual image; 2 – real image; 3 – zero diffraction order and 4 – hologram

Consider the basic properties of a holographic image, in particular, the hologram information structure. Suppose the object to be imaged is a discrete ensemble of N coherently radiating points with the coordinates rq. The object's field on the hologram
aperture can be described by a sum [108]

Ur = Σ_{q=1}^{N} aq exp(−jkrq) = Σ_{q=1}^{N} αq (1.11)

and the respective intensity distribution by an expression

|Ur|² = Σ_{q=1}^{N} Σ_{p=1}^{N} αq αp*, (1.12)

where the asterisk denotes a complex conjugate quantity. The reference beam on the hologram aperture will be given as

Uo = ao exp(−jkro) = αo, (1.13)
where ro are the reference beam coordinates. Then the intensity of the interference
pattern can be written as

|Ur + Uo|² = ao² + Σ_{q=1}^{N} αo* αq + Σ_{p=1}^{N} αo αp* + Σ_{q=1}^{N} Σ_{p=1}^{N} αq* αp. (1.14)

The last term in Eq. (1.14) corresponds to Eq. (1.12) but usually it is not analysed
completely. In holographic theory (Eq. (1.7)), one often restricts one’s considera-
tion to the second and third terms. Commonly, the information about the object is
assumed to be distributed uniformly across the hologram aperture; in reality, however,
a hologram is synthesised from a set of microholograms. So the aperture is split into
a multiplicity of microapertures having various information significance. Such partial
microholograms may correspond to the object’s field with varying polarisation.
The hologram structure has three space-frequency levels. The first level is asso-
ciated with the diffraction characteristics of individual radiating scatterers, more
exactly, with their scattering patterns and the distance to the hologram plane. The
second level is associated with the interference of overlapping fields of different radi-
ating scatterers, a factor described by the last term in Eq. (1.14). Both levels determine
the structure of the object’s field. The third level is due to the interference between
the object’s speckle field and the reference beam field; this is a holographic structure
possessing the highest space frequencies.
A scatterer reflects waves in all directions; therefore, every point of the hologram
receives information about the object as a whole (the third information level). That is
why we can easily explain the experiment with a hologram broken into pieces: any
piece can reconstruct the whole image because it contains information from all the
scatterers. If a piece is small, the image quality will be poor since some details are
lost because of a poorer resolution. The result is a characteristic speckle pattern due to
the greater effect of the second-order elements on the hologram.
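The speckle statistics implied by Eqs (1.11) and (1.12) can be illustrated with a small Monte Carlo sketch (our own toy model: unit-amplitude scatterers whose path phases are independent and uniformly distributed at every aperture point):

```python
import numpy as np

# Toy speckle model for Eqs (1.11)-(1.12): at every aperture point the
# resultant field is a sum of N unit-amplitude contributions with
# independent random phases standing in for the path terms k*r_q.
rng = np.random.default_rng(0)
N = 50        # number of point scatterers
M = 20000     # number of aperture (hologram) sample points

phases = rng.uniform(0.0, 2.0 * np.pi, size=(M, N))
U_r = np.exp(-1j * phases).sum(axis=1)   # Eq. (1.11) with a_q = 1
I = np.abs(U_r) ** 2                     # Eq. (1.12)

# Fully developed speckle: mean intensity N, contrast (std/mean) near 1
print(I.mean() / N, I.std() / I.mean())
```

The unit contrast of the intensity fluctuations is the hallmark of fully developed speckle produced by the interference of many scatterer fields.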
Thus, holographic images have the following specific features.
• The holographic method of image recording registers the field phase in addition
to the amplitude.
• Reconstructed images are 3D.
• Holographic images possess a considerable focal depth.
• Holographic images may be orthoscopic or pseudoscopic.

1.3 The principles of computerised tomography

Computerised tomography is generally defined as a method of reconstructing the true
image (density distribution) of an object, using special computational procedures
with data registered when the object is subjected to probing [15]. Generally, probing
is an arbitrary physical phenomenon (radiation, wave propagation, etc.) used for the
study of objects’ structure, and density means the distribution of an arbitrary physical
characteristic of the object to be reconstructed. The true image is an image, in which
the reconstructed density at any point in space is ideally independent of the true
densities beyond the point vicinity, or of the minimum object’s volume resolvable by
a measuring system. Since a probing wave interacts with the object and this interaction
is ‘integrated’ along its passage through the object, it is clear why tomography is said
to be a method of image reconstruction from integrated data such as beam sums,
projections and so on. Therefore, computerised tomography is a way of producing
2D images of slices of 3D objects by means of digital processing of a multiplicity of
1D functions (projections) obtained at various vision angles. There are three important
aspects of this technique. First, it is the problem of reconstructed image singularity,
that is, the degree to which the object is describable by available data. Second, it is
necessary to know whether the reconstruction process is resistant to errors and noise
in the initial data. Finally, one must design an algorithm for image reconstruction.
Figure 1.5 Viewing geometry in computerised tomography (from Reference 15): Γm – circumference for measurements; Γc – circumference with the centre at point O enveloping a cross section; P – arbitrary point in the circle with the polar coordinates ρ and Φ; A, C and D – wide beam transmitters; B, C′ and D′ – receivers; γ–γ, δ–δ – parallel elliptic arcs defining the resolving power of transmitter–receiver pairs (CC′ and DD′)

The principle of computerised tomography can be conveniently illustrated with a 2D case [15]. We shall first consider tomographic procedures for the reconstruction of density across a body's slice. Let us introduce a circumference Γc (Fig. 1.5) enveloping a body, more exactly, the cross section of a real 3D object. The inner Γc region can be termed the image space because it includes the object to be imaged. The medium outside this region is assumed to be free, which means that a probing wave interacts only with the object. If the probing sources are located outside the Γc region, the method is called remote-probing computerised tomography, as opposed to remote-sensing tomography when the sources are located within this region. The latter is, however, of no interest to radar imaging. The probing effects are commonly measured outside the Γc region. It is clear from information theory that measurements made along a certain circumference Γm (Fig. 1.5) embracing Γc will be quite sufficient.
Suppose a probing radiation transmitter is located in the circumference Γm. To prescribe the density at an arbitrary point P in the Γc region, we introduce the polar coordinate function g = g(ρ, Φ) and express the total probing effect E = E(ρ, Φ, t) as a sum of the incident Ei = Ei(ρ, Φ, t) and secondary ES = ES(ρ, Φ, t) effects: E = Ei + ES. The problem then reduces to the reconstruction of the density distribution across the Γc region. Obviously, when a target is probed by electromagnetic radiation,
the quantity Ei is the part of E directly related to the initial wave front which is the first to arrive at any point in Γc, while ES is composed of all effects scattered, often repeatedly, by all the points in the Γc region.
The secondary probing effect must be given as

ES(ϑ, t) = L{g(ρ, Φ); E(ρ, Φ, t); w}, (1.15)

where ES(ϑ, t) is the amplitude value of ES(ρ, Φ, t) in Γm, L{. . .} is an integral operator determined in the Γc region, and w is the distance between the point P and the receiver B. It is easy to see that a tomographic problem is a classical inverse source problem, since the function g(ρ, Φ) is to be reconstructed from the known values of ES(ϑ, t) and the source in Γm. Note that here we are faced with the problem of dimensionality, because ES(ϑ, t) measurements are 2D, while the resulting effect E(ρ, Φ, t) is 3D. Because of this discrepancy, inversion algorithms become numerically unstable and sensitive to any error in the initial data.
The solution to the inverse source problem is always approximate. An approx-
imation most important to tomography involves geometrical optics allowing the
representation of probing effects as rays. This provides an optimal formulation of
the inverse problem related directly to conventional computerised tomography which
reconstructs images from linear trajectories. This can be illustrated with Fig. 1.5. The
signal recorded at point B can be represented as a function of the variables ϑ and Φ to show that this signal varies with the position of the point B in Γm and with the direction of radiation incidence:

S(ϑ, Φ) = ∫_{l(A)}^{l(B)} g(ρ, Φ) dl, (1.16)

where l is a coordinate going along the ray, whose initial and final points are denoted
as l(A) and l(B), respectively.
There are no dimensionality problems with this expression, because the measured quantity S(ϑ, Φ) and the reconstructed quantity g(ρ, Φ) are 2D. So if S(ϑ, Φ) is prescribed for a number of ϑ and Φ pairs sufficient for the description of g(ρ, Φ) with the desired accuracy, the true density distribution may be reconstructed such
that the computational algorithm is stable. Equation (1.16) is a governing equation
in conventional tomography. At present, there are various reconstruction techniques
allowing the solution of this integral equation [88].
No doubt, it would be desirable to integrate the true image in the Γc region (in the image space). For practical considerations, however, the data may be integrated in a different space, whose properties depend on how the experimental data are related to the density function g(ρ, Φ). The quantity to be measured is often a Fourier image of
the density distribution, so the data recording is said to be performed in the Fourier
space. An example of this type of recording is that in a radio telescope with a synthetic
aperture [118]. Although the data integration in the image space and the Fourier
space is identical theoretically, the practical algorithms for image reconstruction differ
Figure 1.6 A scheme of an X-ray tomographic experiment using a collimated beam: 1 – X-rays; 2 – projection angle; 3 – registration line; 4 – projection axis and 5 – integration line

essentially. Many of the available algorithms for the reconstruction of an unknown 2D function g are based on the projection slice theorem [57,95]. It can be formulated with reference to Fig. 1.6 by introducing, in the image space, two rectangular coordinate systems xOy and uOv, rotated by the angle ϑ relative to each other. The projection of the g function at the angle ϑ is described as

Pϑ(u) = ∫_{−∞}^{∞} g(u cos ϑ − v sin ϑ, u sin ϑ + v cos ϑ) dv, (1.17)

where Pϑ(u) calculated at constant u = uo is a 1D integral along the respective straight line parallel to the v-axis, so that the Pϑ(u) function describes a set of integrals for all ϑ values. The projection theorem states [57] that the 1D Fourier image of a projection Pϑ(u) made at the angle ϑ represents a 'slice' of the 2D Fourier transform of the g(x, y) function at the ϑ angle to the X-axis:
Pϑ(U) = G(U cos ϑ, U sin ϑ)   (1.18)

with

Pϑ(U) = ∫_{−∞}^{∞} Pϑ(u) e^{−juU} du,

G(X, Y) = ∫∫_{−∞}^{∞} g(x, y) e^{−j(xX+yY)} dx dy.
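The theorem can be checked numerically in a few lines (a sketch using NumPy; the random test array is an arbitrary choice, and the discrete FFT stands in for the continuous transforms — at ϑ = 0 and ϑ = 90° the discrete identity holds exactly, while intermediate angles would require interpolation):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
g = rng.random((64, 64))            # toy density g(x, y), indexed as g[x, y]

# Projection at theta = 0 (Eq. 1.17): P0(u) is the sum of g(u, v) over v
P0 = g.sum(axis=1)

# Projection slice theorem (Eq. 1.18): the 1D FFT of the projection equals
# the corresponding central slice of the 2D FFT of g
assert np.allclose(np.fft.fft(P0), np.fft.fft2(g)[:, 0])

# The same holds at theta = 90 degrees, with the slice along the other axis
P90 = g.sum(axis=0)
assert np.allclose(np.fft.fft(P90), np.fft.fft2(g)[0, :])
```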

Figure 1.7 The geometrical arrangement of the G(X, Y) pixels in the Fourier region on a polar grid. The parameters ϑmax and ϑmin are the variation range of the projection angles. The shaded region is the SAR recording area

In classical X-ray tomography, a body is probed by a collimated radiation beam (Fig. 1.6), while the Pϑ(u) function is measured by a set of sensors located along a
straight line normal to the radiation direction. The set of Pϑi (u) projections for various
ϑ angles is formed by rotating the object or the power transmitters and receivers. Then
one usually uses a convolution back-projection (CBP) algorithm to be discussed later.
An alternative is to use a Fourier transform. The latter approach is convenient when
data recording is made in the Fourier space and the pixel values of the Pϑi (U ) Fourier
images are known. According to the projection theorem, these pixels also represent
the pixels of the G(X , Y ) function along a line at the ϑ angle to the X -axis. Therefore,
the Pϑi (U ) values obtained for a set of ϑ angles prescribe the G(X , Y ) pixels on a polar
grid (Fig. 1.7). By using an interpolation algorithm, one can go over to the G(X , Y )
pixels on a rectangular grid and use an inverse Fourier transform to reconstruct the
density g(x, y). There have been attempts to compute g(x, y) directly from the G(X , Y )
pixels on a polar grid to avoid using an interpolation algorithm [100].
Let us now discuss common approaches used in computerised tomography and
radar imaging. In the latter, the target position is determined from the time delay
of the radar echo and the antenna orientation. The range resolution is usually much
higher than the angular resolution. Suppose a wide-beam transmitter and a receiver of electromagnetic radiation are located at the points C and C′, respectively (Fig. 1.5). The geometrical positions of the scatterers whose echo-signals arrive at the point C′ simultaneously form an ellipse with the foci at C and C′. More exactly, this is a band between two 'concentric' ellipses, its width characterising the resolution limit of the system. Part of this band is denoted as γ–γ′. In the case of imaging one point (when the points C and C′ coincide), the ellipses degenerate into circles. The total scattering


intensity is proportional to the band-averaged density of scatterers. Since the distance between the ellipses is equal to the resolution, it is sufficient to integrate the density along an average ellipse. The signal recorded at the point C′ can be written as

S(C, C′; γ) = ∫_γ g(ρ, ϑ) dl,   (1.19)

where γ denotes an average ellipse and dl is an element of the ellipse length.


As in the case described by Eq. (1.16), there is no problem of dimensionality. The scatterers located in the vicinity of a certain point Q can be identified by changing the positions of the transmitter and the receiver. Figure 1.5 shows one of these positions, denoted as D and D′, and the respective band δ–δ′.
The true density distribution can be reconstructed from a sufficiently large number
of measurements made at different points. It is clear that the cases described by
Eqs (1.16) and (1.19) differ only in the integration direction.
In X-ray tomography, the function ρ(x, y) describes an unknown distribution of
the X-ray attenuation coefficient across a transversal slice (Fig. 1.6) to be measured.
The Pθ (u) projection values are obtained in a multi-beam system represented as an
array of X-ray transmitters and receivers located at the θ angle to the x-axis (Fig. 1.6).
The intensity of the received radiation decreases exponentially as the beams pass along the line of ρ(x, y) integration. Therefore, the projection Pθ(u) of this function is

Pθ(u) = −log [Iθ(u)/Io],   (1.20)
where Io is the X-ray source intensity and Iθ (u) is the intensity registered by the
receivers. The set of Pθi (u) projections can be obtained by rotating the object or
the array of transmitters and receivers by discrete angles θ = θi. The distribution g(x, y) of the attenuation coefficient is usually reconstructed from the measured Pθi(u) projections using the CBP method [127]. It enables one to estimate the spatial distribution of the inner physical parameters of the target.
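The whole chain — exponential attenuation, the projection of Eq. (1.20) and a back-projection reconstruction — can be sketched as follows. This uses a generic filtered back-projection with a Ram–Lak ramp filter, not the specific CBP algorithm of [127]; the phantom, the angle set, the grid size and the use of scipy.ndimage.rotate are all arbitrary choices made for illustration:

```python
import numpy as np
from scipy.ndimage import rotate

size = 64
mu = np.zeros((size, size))            # attenuation coefficient over the slice
mu[24:40, 28:36] = 0.05                # a denser block inside the 'body'

Io = 1.0                               # X-ray source intensity
angles = np.linspace(0.0, 180.0, 60, endpoint=False)

# Forward model: beams cross the rotated slice; the received intensity decays
# exponentially, and Eq. (1.20) recovers the line integrals
projections = []
for ang in angles:
    line_integrals = rotate(mu, -ang, reshape=False, order=1).sum(axis=0)
    I_theta = Io * np.exp(-line_integrals)
    projections.append(-np.log(I_theta / Io))

# Reconstruction: ramp-filter each projection and smear it back at its angle
ramp = np.abs(np.fft.fftfreq(size))    # Ram-Lak filter in the frequency domain
recon = np.zeros((size, size))
for proj, ang in zip(projections, angles):
    filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
    recon += rotate(np.tile(filtered, (size, 1)), ang, reshape=False, order=1)
recon *= np.pi / (2 * len(angles))

# The dense block reappears brighter than the empty background
assert recon[32, 32] > recon[5, 5]
```

The ramp filter compensates for the denser sampling of the Fourier plane near the origin that the polar arrangement of Fig. 1.7 implies.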
It will be shown below that a radar can register the Pθ (v) signal. For a given θ
angle, its intensity is proportional to the scattering density ρθ (u, v) integrated along
the u-coordinate, that is, it is a tomographic projection onto the v-axis.
So the Pθ (v) function is a 1D function of the variable v with the parameter θ defining
the projection orientation. One can see that there is an essential difference between
X-ray tomography and synthetic aperture radar (SAR) imaging. In the latter, a linear
integral used to obtain a projection is taken in the direction normal to the microwave
propagation, whereas in X-ray tomography it is taken along the X-ray propagation
direction.
It will be shown in Chapter 6 that the Doppler and holographic methods of SAR
signal processing can provide such projections. Another important specificity is that
these projections include a phase factor to describe the time of the double path of
the signal between a target and a radar antenna. Thus, a projection produced by SAR
is a coherent tomogram that carries much more information about a target. This is
especially evident when one uses holographic projections (see Chapter 6). On the


other hand, a tomographic processing of projections is capable of reconstructing the arrangement of scatterers on a target, or, in fact, its shape.

1.4 The principles of microwave imaging

The past decade has witnessed an ever increasing interest in radars with a very high
resolving power. For example, the ERS-1 and ERS-2 radars (side-looking synthetic
aperture radars (SARs) of the European Space Agency [62]) provide microwave
imagery of the earth surface with the resolution of 25 m × 25 m in the azimuth-range
coordinates. An earth area of 100 km × 100 km (100 km is the radar swath width) is
represented by 1.6×10⁷ pixels. Modern ground radars have large antenna arrays with an aperture of about 10⁴–10⁵λ1, where λ1 is the radar wavelength. They provide an angular resolution of 10⁻⁴–10⁻⁵ rad [129], so the radar vision field can be subdivided into 10⁴–10⁵ beams.
A radar with a linear and angular resolution much higher than that of TV equipment (7×10⁵–10⁶ pixels) is capable of producing microwave images of extended targets (land areas and water surfaces) and complex objects (aircraft, spacecraft).
So it is reasonable to give a definition of a microwave image. At present, there is no
generally accepted definition, so we suggest the following formulation. A microwave
image is an optical image, whose structure reproduces on a definite scale the spa-
tial arrangement of scatterers (‘radiant’ points) on a target illuminated by microwave
beams. In addition to the arrangement, scatterers are characterised by a certain radi-
ance. It should be emphasised that the microwave beams can produce 3D images,
whereas the visible range of conventional optical systems gives only 2D images.
Available methods of microwave imaging can be grouped into three classes:
• direct methods using real apertures;
• methods employing synthetic apertures;
• methods combining real and synthetic apertures.
Imaging by direct methods can, in turn, be performed by real antennas or antenna
arrays. Real antennas were used in the early years of radar history. An earth area
was viewed by means of circular scanning or sector rocking of the antenna beam in
the azimuthal plane. Such systems were termed panoramic or sector circular radars.
Modern panoramic radars use 50–100λ1 apertures and their resolution is low. Since
the application of airborne panoramic antenna arrays is a hard task, the only way
to increase the resolution is to use the millimetre wavelength range. One is faced
with a similar problem when dealing with a side-looking real antenna mounted along
the aircraft fuselage. Such antennas may be as long as 10–15 m; at the wavelength
λ1 = 3 cm, their angular resolution is less than 10 min of arc and the linear resolution
of the earth surface is a few dozens of metres, which is too low for some applications.
For this type of antenna, the problem of increasing the aperture size was solved in a
radical way – by replacing a real aperture with a synthesised aperture.
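The '10 min of arc' figure quoted above can be checked with the simple λ/L beamwidth estimate (a sketch; a 12 m antenna from the 10–15 m range is assumed, and aperture-taper factors are ignored):

```python
import math

lam1, L_ant = 0.03, 12.0          # wavelength (m), real antenna length (m)
theta = lam1 / L_ant              # lambda/L beamwidth estimate, rad
arcmin = math.degrees(theta) * 60 # convert to minutes of arc
assert arcmin < 10                # 'less than 10 min of arc', as stated
```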
Consider the potentialities of antenna arrays for aerial survey of the earth surface
and for ground imaging of targets flying at low altitudes. Suppose we are to design an


antenna array for aircraft imaging. The target has a characteristic size D and is illu-
minated by a continuous radar pulse. Then, according to the sampling theorem [103], the echo-signal function in the aperture receiver can be described by a series of samples taken at the intervals

δL = Rλ1/D,   (1.21)
where R is the distance to the target. The aperture size necessary for getting a desired resolution Δ on the target can be defined in terms of Abbe's formula [131]:

Δ = λ1/(2 sin(α/2)) ≅ λ1R/L,   (1.22)
where α is the aperture angle and L is its length.
The total number of receivers on an aperture of length L is

N = L/δL = DL/(λ1R).   (1.23)
With Eq. (1.22), we get

N = D/Δ.   (1.24)
Let us illustrate this with a particular problem. Suppose we have λ1 = 10 cm, R = 600 km, D = 20 m and Δ = 1 m. Then we get L = 60 km, δL = 3 km and N = 20. A planar aperture of L × L in size must contain n = N² = 400 individual receivers.
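These numbers follow directly from Eqs (1.21)–(1.24); a quick check (SI units throughout):

```python
import math

lam1, R, D, delta = 0.10, 600e3, 20.0, 1.0  # wavelength, range, target size, resolution

L = lam1 * R / delta      # aperture length, from Eq. (1.22)
dL = R * lam1 / D         # receiver spacing, Eq. (1.21)
N = D / delta             # receivers along the aperture, Eq. (1.24)

# L = 60 km, dL = 3 km, N = 20 and N^2 = 400, matching the text
assert math.isclose(L, 60e3) and math.isclose(dL, 3e3)
assert N == 20.0 and N**2 == 400.0
```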
This example shows that the applicability of direct imaging using large antenna
arrays is quite limited. Nevertheless, one of these techniques employing a radio cam-
era designed by B. D. Steinberg is of great interest [129]. The radio camera is based
on a pulse radar with a real large antenna array and an adaptive beamforming (AB)
algorithm. The principal task is to obtain a high resolution with a large aperture avoid-
ing severe restrictions on the arrangement of the antenna elements. The operation of a
self-phasing algorithm requires the use of an additional external phase-synchronising
radiation source with known parameters, which could generate a reference field and
would be located near the target. The radio camera provides an angular resolution
of 10⁻⁴–10⁻⁵ rad [129], and the image quality is close to that of optical systems.
There is one limitation – the radio camera has a narrow vision field. But still, it may
find a wide application in radar imaging of the earth surface, in surveying aircraft
traffic, etc.
To summarise, direct real aperture imaging of remote targets at distances of
hundreds and thousands of kilometres is practically impossible.
We turn now to methods employing a synthesised aperture. The idea of aperture synthesis, born during the design of the side-looking aperture radar [32,74,86], was to replace a real antenna array with an equivalent synthetic antenna (Fig. 1.8). An
antenna with a small aperture is to receive consecutive echo signals and make their
coherent summation at various moments of time. For a coherent summation to be
made, the radar must also be coherent, namely, it should possess a high transmitter


Figure 1.8 Synthesis of a radar aperture pattern: (a) real antenna array and (b) synthesised antenna array

frequency stability and have a reference voltage of the stable frequency to compare
echo signals. We shall see below that a reference voltage is similar to a reference
wave in holography, with the only difference that the ‘wave’ is created in the receiver
by the voltage of a coherent generator.
Under the conditions described above, the echo signals received by a real antenna
are saved in a memory unit as their amplitudes and phases. When an aircraft flies over a path Δx = Ls, the signals are summed up at the moment Ts = Δx/V (the final moment of synthesis), where V is the track velocity of the aircraft. As


a result of coherent signal processing, which is similar to the processing by a real antenna (Fig. 1.8(a)), a synthetic aperture pattern θs, similar to a real aperture pattern θr, is formed. Thus, the real aperture length is replaced by the synthesised aperture length Δx (Δx = Ls). The width of this aperture pattern is

θs = λ1/(2Δx).   (1.25)
Owing to its large size, a synthetic aperture can provide very narrow patterns, so the
track range resolution

δx = θs R, (1.26)

where R is the slant range to the target, may be very high even at large distances. To
illustrate, if the synthetic aperture length is Δx = 400 m and λ1 = 3 cm, the resolution may be as high as δx = 6 m at R = 160 km.
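A one-line check of this example against Eqs (1.25) and (1.26):

```python
import math

lam1, dx_syn, R = 0.03, 400.0, 160e3  # wavelength, synthetic length, slant range (m)
theta_s = lam1 / (2 * dx_syn)         # synthetic aperture pattern width, Eq. (1.25)
delta_x = theta_s * R                 # track-range resolution, Eq. (1.26)
assert math.isclose(delta_x, 6.0)     # 6 m at 160 km, as stated in the text
```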
Similar principles apply to a stationary ground radar and a moving target. If one
needs to obtain a high angular resolution, one can make use of the so-called inverse
aperture synthesis. We shall show in Chapter 2 that the resolution on the target is then
independent of the distance to it but is determined only by the radar wavelength and
the synthesis angle. As a result, one can obtain a very high angular resolution and
reconstruct the arrangement of the scatterers into a microwave image.
Thus, current approaches to microwave imaging, based on direct and inverse
synthesis of the aperture, provide 2D images which are structurally similar to optical
images. Besides, there are methods combining both approaches. They apply a real,
say, phased aperture and a synthetic aperture along the aircraft track. These techniques
also produce images similar in structure to optical images [2]; they will be discussed
in detail in Chapter 3. However, there are certain differences between the two types
of 2D images. We summarise the most important ones below.

1. The wavelengths in the microwave range are 10³–10⁶ times longer than in the
visible range, and this determines an essential difference in the scattering and
reflection by natural and man-made targets. In the visible range, the scattering by
man-made targets is basically diffusive, and it can be observed when the surface
roughness is of the order of a wavelength. This fact allows a target to be consid-
ered as a continuous body. In the microwave range, the picture is quite different
because there is no diffusion. The signals are reflected by scattering centres,
corner structures and specular surfaces. For this reason, a microwave image of
a man-made target is discrete and is made up of ‘dark’ pixels and those produced
by the strong reflectors we mentioned above. A good example is the microwave
image of an aircraft that was obtained in Reference 130. Reflection by natural
targets produces similar images. However, the reflection spectrum of the earth
surface contains an essential diffusion component.
2. For these reasons, the dynamic range of microwave images varies between 50
and 90 dB, while it rarely exceeds 20 dB in optical images, reaching the value
of 30 dB in bright sunlight.


3. The quality of an image does not depend on the natural luminosity of a target
and depends but slightly on weather conditions.
4. Image quality strongly depends on the geometry of the earth region to be
imaged, especially its slant angles, roughness and bulk features in the surface
layer. So microwave imaging is used for all-weather mapping, soil classifica-
tion, detection of boundaries of background surfaces, etc. There is no unified
optimal angle (in the vertical plane) for viewing geological structures, and
the best values should be adjusted to the local topography. For mountain-
ous and undulated reliefs, for example, a small radiation incidence relative
to the normal is preferable, while the imaging of plains requires the use of
large incidence angles, which increase the sensitivity to surface roughness. For this reason, images produced by airborne SARs may be radiometrically inadequate owing to a large variation of the incidence angle across the swath caused by a wide aperture pattern. Space SARs possess an approximately constant radiation incidence across the swath, so this radiometric distortion is absent.
5. The density of blackened regions on a negative depends significantly on the
dielectric behaviour of the surface being imaged, in particular, on the presence
of moisture, both frozen and liquid, in the soil.
6. The microwave range gives the opportunity to probe subsurface areas. For example, the microwave images of the Sahara desert obtained by the SIR-A SAR showed the presence of dried river beds buried under the sands, which were invisible on the desert surface. This opens up new opportunities for archaeological surveys. It has been demonstrated experimentally that the probing radiation
depth in dry sand may be as large as 5 m. Besides, a sand stratum possessing
a low attenuation is found to enhance images of subsurface roughness due to
refraction at the air–soil interface. This effect is particularly strong for horizontal
polarisation at large incidence angles.
7. The specific propagation pattern of the long wavelengths in the microwave range provides quality imagery of lands covered with vegetation.
8. The interaction of underwater phenomena, such as internal waves and subsurface currents, with the ocean surface makes it possible to image the bottom topography and various underwater effects.
9. The use of moving target selection allows one to make precise measurements
of the target’s radial velocity relative to the SAR.
10. An important factor in imagery is the proper choice of radiation polarisation.
11. Quite specific is imaging of urban areas and other anthropogenic targets. This is
due to a large number of objects with a high dielectric permittivity (e.g. metallic
objects), surface elements possessing specular reflection, resonance reflectors
and objects with horizontal and vertical planes that form corner reflectors. The
result of the latter is the following effect: streets parallel to the SAR carrier track
produce white lines on the image (the positive), while streets normal to the track
produce dark lines. Moreover, the presence or absence on the image of some
linear elements of the radar scene and an average density of blackening of the
whole image depend on the azimuthal angle, that is, the angle made by the SAR


beam in the plane tangential to the earth surface. This is a serious obstacle to
the analysis of images of urban areas.
12. An image contains speckle noise associated with the coherence of the imaging
process.

To conclude, a microwave radar image may be 3D if it is recorded by holographic or tomographic techniques (Chapters 5 and 6, respectively).

Chapter 2
Methods of radar imaging

2.1 Target models

All radar targets can be classified into point and complex targets [138]. A point target
is a convenient model object commonly used in radar science and practice to solve
certain types of problems. It is defined as a target located at distance R from a radar
at the viewing point ‘0’, which scatters the incident radar radiation isotropically. For
such a target, the equiphase surface is a sphere with the centre at ‘0’. Suppose a radar
generates a wave described as
f(t) = a(t) exp j[ωo t + Φ(t)],

where fo = ωo/2π is the carrier frequency, while a(t) and Φ(t) are the amplitude and phase modulation functions imposed on the carrier.
A point target located at distance R creates an echo signal

g(t) = σ f(t − 2R/c) = σ a(t − 2R/c) exp j[ωo(t − 2R/c) + Φ(t − 2R/c)],   (2.1)
where σ is a complex factor including the target reflectance and signal attenuation
along the track.
The Doppler frequency shift is implicitly present in the variable R. If we assume
that the radial velocity v1 is constant, we shall have
R = R1 + v1t,   (2.2)
where R1 is the distance to the target at the initial moment of time t = 0.
Equations (2.1) and (2.2) describe a simple model target to be further used for the
analysis of the aperture synthesis and imaging principles.
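As an illustration of Eqs (2.1) and (2.2), the sketch below (all parameter values are arbitrary, and the modulation is switched off: a(t) = 1, Φ(t) = 0) recovers the Doppler shift implicitly present in the variable R:

```python
import numpy as np

c = 3e8                          # speed of light (m/s)
fo = 10e9                        # carrier frequency (Hz), i.e. a 3 cm wavelength
R1, v1 = 100e3, 300.0            # initial range (m), radial velocity (m/s)

t = np.linspace(0.0, 1e-3, 10_001)
R = R1 + v1 * t                                  # Eq. (2.2)
phase = 2 * np.pi * fo * (t - 2 * R / c)         # phase of Eq. (2.1), a = 1, Phi = 0

# The instantaneous frequency is offset from the carrier by -2*v1*fo/c = -2*v1/lambda
f_inst = np.gradient(phase, t) / (2 * np.pi)
doppler = f_inst.mean() - fo
assert abs(doppler + 2 * v1 * fo / c) < 1.0      # about -20 kHz, within 1 Hz
```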
In practice, most radar targets refer to the class of complex targets. In spite
of a great variety of particular targets, we can offer a common criterion for their


classification. This criterion is based on the relationship between the maximum target
size and the radar resolving power in the coordinate space of the parameters R, α, β and
Ṙ, which are the range, the azimuth, the elevation angle and the radial velocity of the
target, respectively. An additional important parameter is the number of scattering
centres (scatterers). In accordance with this criterion, all complex targets can be
subdivided into extended compact targets and extended proper targets. A target is
referred to as extended compact if it has a small number of scatterers, its linear
and angular dimensions are much smaller than the radar resolution element, and
the difference between the radial velocities of the extremal scatterers is appreciably
smaller than the velocity resolution element. What is important is that this definition
also holds for targets located at large distances. On the other hand, a target which has
a size much larger than the radar resolution element and a large number of scatterers
should be referred to as extended proper. Earth and water surfaces are examples of
such targets.
We shall first discuss extended compact targets (airplanes, spacecraft, etc.). In
the high-frequency region, these targets should be represented as a set of scatterers,
or radiant points. The mathematical model of an extended compact target, based on
the concept of scatterers, has the form [138]:

U = Σ_{m=1}^{M} √σm exp(jΦm),   (2.3)

where M is the number of individual scatterers, σm is the radar cross-section (RCS) of the mth scatterer and Φm is the phase of the pulse reflected by the mth scatterer relative to that of the pulse reflected by the first scatterer. The value of σm is to be found for a particular polarisation.
Equation (2.3) is usually used for monostatic incidence in the optical region (high
frequency approximation). It can also be used to find the relation between monostatic
and bistatic scattering at the same target aspect α. For this, the phases of the scatterers, Φm, should be expressed as a sum of two terms [69]:

Φm = 2kZm(α) cos(β/2) + ξm,   (2.4)

where Zm(α) is the projection of the distance between the mth and the first scatterers onto the bisectrix of the bistatic angle, k = 2π/λ1 is the wave number of the incident wave, β is the bistatic angle and ξm is the residual phase contribution of the mth scatterer, including the contribution of the creeping wave.
For scatterers retaining their position with changing bistatic angle, the mathemat-
ical model is

U = Σ_{m=1}^{M} √σm exp{j[2kZm(α) cos(β/2) + ξm]}.   (2.5)


Equation (2.5) allows us to introduce the concept of equivalence of mono- and


bistatic scattering and to define conditions for this equivalence. The theorem of
R. E. Kell states that (1) if the total field can be written as a sum of the fields of all scatterers and (2) if the quantity √σm, the Zm-coordinate and the residual phase ξm are all independent of the bistatic angle β in a particular range of β values at any given aspect α, then the total bistatic field for the angles α and β is equal to the monostatic scattering field measured along the bisectrix of the β angle at a frequency reduced by a factor of cos(β/2).
the method of inverse aperture synthesis for recording and reconstruction of Fourier
holograms.
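Equation (2.5) makes Kell's statement easy to verify numerically. The check below is tautological — conditions (1) and (2) hold by construction in this model — but it shows the frequency scaling concretely; all scatterer parameters are illustrative:

```python
import numpy as np

lam1 = 0.03
k0 = 2 * np.pi / lam1
sigma = np.array([1.0, 4.0, 0.25])   # per-scatterer RCS (illustrative)
Z = np.array([0.0, 1.2, 2.7])        # Zm: projections onto the bisectrix (m)
xi = np.array([0.0, 0.3, -1.1])      # residual phases (rad)

def U(k, beta):
    """Scattering-centre model of Eq. (2.5)."""
    return np.sum(np.sqrt(sigma) * np.exp(1j * (2 * k * Z * np.cos(beta / 2) + xi)))

beta = np.deg2rad(40.0)
# The bistatic field at wavenumber k0 equals the monostatic field taken along
# the bisectrix at the reduced wavenumber k0*cos(beta/2), as the theorem states
assert np.isclose(U(k0, beta), U(k0 * np.cos(beta / 2), 0.0))
```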
The amplitude and polarisation characteristics of individual scatterers are of spe-
cial interest for the understanding of diffraction phenomena in extended compact
targets. A comparison of respective experimental and theoretical values should be
based on precise scattering models substantiated by the physical theory of diffrac-
tion, namely, by the edge wave method (EWM) [137] or by the geometrical theory of
diffraction (GTD) [70]. To illustrate, let us consider the field scattered by a perfectly
conducting cylinder of finite length l and radius a oriented towards the transmitting
antenna. According to the EWM [12], the horizontal and vertical field components
in the far range are

ia eikR  ia eikR 
Eϕ = Eox (ϑ), Eϑ = Hox (ϑ), (2.6)
2 R 2 R

where k is the wave number and ϑ is the angle between the viewing direction and the
cylinder symmetry axis, π/2 ≤ ϑ ≤ π:

Σ(ϑ) = Σ(1) + Σ(2) + Σ(3),   (2.7)

Σ̄(ϑ) = Σ̄(1) + Σ̄(2) + Σ̄(3),   (2.8)

Σ(1) = f(1)[J1(ζ) + iJ2(ζ)] e^{ikl cos ϑ},   (2.9)

Σ(2) = f(2)[−J1(ζ) + iJ2(ζ)] e^{ikl cos ϑ},   (2.10)

Σ(3) = f(3)[−J1(ζ) + iJ2(ζ)] e^{−ikl cos ϑ},   (2.11)

ζ = 2ka sin ϑ,

J1(ζ) and J2(ζ) are the first- and second-order Bessel functions, respectively. Indices 1, 2 and 3 correspond to the three scatterers on the cylinder (Fig. 2.1). Similar expressions can be obtained for the functions Σ̄(1), Σ̄(2) and Σ̄(3) by replacing f(1), f(2) and f(3) by g(1), g(2) and g(3), respectively. The latter are


Figure 2.1 Viewing geometry for a rotating cylinder: 1, 2, 3 – scattering centres (scatterers)

defined as

f(1), g(1) = (sin(π/n)/n){[cos(π/n) − 1]⁻¹ ± [cos(π/n) − cos((π − 2ϑ)/n)]⁻¹},   (2.12)

f(2), g(2) = (sin(π/n)/n){[cos(π/n) − 1]⁻¹ ∓ [cos(π/n) − cos(2ϑ/n)]⁻¹},   (2.13)

f(3), g(3) = (sin(π/n)/n){[cos(π/n) − 1]⁻¹ ∓ [cos(π/n) − cos((π + 2ϑ)/n)]⁻¹},   (2.14)

n = 3/2,

where the upper signs refer to f(i) and the lower signs to g(i).

The functions (2.7)–(2.14) can be used to calculate the scattering characteristics (the
RCS diagram, the amplitude and phase scattering diagrams, etc.) for an experi-
mental study of diffraction in an anechoic chamber (AEC). The last two diagrams,
for example, can be found as the modulus and argument of the functions (2.7) and
(2.8). However, the representation of the field as a sum of the fields re-transmitted
by scatterers provides information on individual scatterers. Such characteristics are
referred to as local responses [12]. The RCS diagrams for scatterers on a cylinder and


the E- and H-polarisations of the incident field can be written as

σnE(ϑ) = πa²|Σn(ϑ)|²,   σnH(ϑ) = πa²|Σ̄n(ϑ)|²,   n = 1, 2, 3,   (2.15)

where Σn(ϑ) and Σ̄n(ϑ) denote the corresponding terms of Eqs (2.9)–(2.11).
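As a numerical sketch of how Eqs (2.9), (2.12) and (2.15) yield a local RCS diagram (the reconstruction of these formulas above is best-effort from a garbled source, so treat this as illustrative; the cylinder dimensions are arbitrary, scipy.special.jv supplies the Bessel functions, and angles near ϑ = π, where the edge coefficients become singular, are excluded):

```python
import numpy as np
from scipy.special import jv

a, l, lam1 = 0.05, 0.5, 0.03       # cylinder radius, length, wavelength (m)
k = 2 * np.pi / lam1
n = 1.5                            # wedge parameter, n = 3/2
theta = np.linspace(np.pi / 2, 0.95 * np.pi, 200)
zeta = 2 * k * a * np.sin(theta)

# Edge coefficient f(1), Eq. (2.12) with the upper signs
f1 = (np.sin(np.pi / n) / n) * (
    1.0 / (np.cos(np.pi / n) - 1.0)
    + 1.0 / (np.cos(np.pi / n) - np.cos((np.pi - 2 * theta) / n)))

# Scatterer-1 term of Eq. (2.9) and its local RCS diagram, Eq. (2.15)
S1 = f1 * (jv(1, zeta) + 1j * jv(2, zeta)) * np.exp(1j * k * l * np.cos(theta))
rcs1 = np.pi * a**2 * np.abs(S1) ** 2
assert np.all(np.isfinite(rcs1)) and np.all(rcs1 >= 0)
```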
The phase responses of scatterers can be derived as the arguments of the complex-valued functions (2.9)–(2.11). A scattering model for a cylinder with bistatic
incidence was designed in Reference 12 in the EWM approximation. Besides, it is
shown in References 105 and 109 that the amplitude responses and the positions of
scatterers on a target can be studied experimentally using images reconstructed from
microwave holograms.
We now turn to models of extended proper targets. Such targets include
• land surface;
• sea surface;
• large anthropogenic objects like urban areas and settlements;
• special standard objects for radar calibration.
An analysis of models of all of these targets would go far beyond the scope of this book.
We give a brief survey of scattering models of sea surface in Chapter 4, including a
model of a partially coherent extended proper target, which is used in the analysis of
microwave radar imagery.
It should be noted that extended compact targets may also be partially coherent
(Chapter 7). In either case, these targets produce parasitic phase fluctuations which
perturb radar imaging coherence.
Target models are used for several purposes: to justify the principles of inverse
aperture synthesis, to interpret microwave images, to obtain local RCS of scatterers
on standard objects, and to calibrate measurements made in AECs.

2.2 Basic principles of aperture synthesis

We have mentioned in Chapter 1 that the use of a synthetic aperture is necessary if one
needs to obtain a high angular resolution of targets at large distances. It has been shown
by some researchers [73,109] that the aperture synthesis is, in principle, possible for
any form of relative motion of a target and a real antenna; what is important is that
the target aspect should change together with the relative displacement.
Today there are two basic methods of aperture synthesis – direct and inverse.
Direct synthesis can be made by scanning a relatively stationary target by a real
antenna (Fig. 2.2(a)). The target is on the earth surface and the antenna is located on
an aircraft. Radar systems with direct antenna synthesis are known as side-looking
synthetic aperture radars (SARs). The authors of Reference 85 have suggested for
them the term quasi-holographic radars (Chapter 3). Methods of aperture synthesis
using linear translational motion of a target or its natural rotation relative to a stationary
ground antenna are called inverse methods and radars based on such methods are


Figure 2.2 Schematic illustrations of aperture synthesis techniques: (a) direct synthesis implemented in SAR, (b) inverse synthesis for a target moving in a straight line and (c) inverse synthesis for a rotating target. In each panel: 1 – radar; 2 – target

known as inverse synthetic aperture radars (ISARs) (Fig. 2.2(b) and (c)). There are also
combined approaches to field recording. For example, a front-looking holographic
radar (Chapter 3) combines direct synthesis along the track and transversal synthesis
with a one-dimensional (1D) real antenna array (Fig. 3.4). A spot-light mode of
synthesis is also possible: it uses both the linear movement of an airborne antenna
and its constant axial orientation to a ground target (Fig. 3.11). Radars based on this
principle are known as spot-light SAR [100].
Finally, ground radars operating in the inverse synthesis mode and viewing a
linearly moving target can combine a real-phased antenna array and aperture synthesis.


This method was suggested by B. D. Steinberg [129] and employs adaptive beamforming (AB) together with aperture synthesis (ISAR + AB).
In any method of aperture synthesis, the radar azimuthal resolution is determined
by the aperture angle βo = Ls /R. The linear resolution along the angle coordinate is
δl = λ1 /βo . It should be emphasised [73] that rotation of a synthetic antenna pattern
(SAP) does not shift the target phase centre and, therefore, does not synthesise the
aperture. For this reason, one cannot increase the angular resolution by rotating a real
antenna, in contrast to the target rotation.

2.3 Methods of signal processing in imaging radar

Imaging radar signal processing can be considered from different points of view.
Since there is an essential difference between the direct and inverse modes of aperture
synthesis, the processing techniques should be described individually for each type
of radar.

2.3.1 SAR signal processing and holographic radar for earth surveys
The SAR aperture synthesis by coherent integration is treated in terms of
• the antenna approach [74];
• the range-Doppler approach [85,140];
• the cross-correlation approach [85];
• the holographic approach [85,143];
• the tomographic approach [100].
The use of a variety of analytical techniques in radar imaging leads to various
processing designs and physical interpretations of some of its details.
The first four approaches provide a fairly complete analysis of the effects of SAR
parameters on its performance characteristics and the results are generally consistent.
Each approach, however, enables one to see the image recording and reconstruction
in a new light, because each has its own merits and demerits. In this book, we
largely follow the holographic approach to the performance analysis of various SAR
systems, which involves the theories of optical and holographic systems. According
to one of the pioneers of optical and microwave holography, E. N. Leith, a holographic
treatment of SAR performance has proved most fruitful. The recording of a signal is
regarded as that of a reduced microwave hologram of the wave field along the azimuth,
that is, along the flight track. Illumination of such a hologram by coherent light
reconstructs the optical wave field, which is similar to the recorded microwave field
on a certain scale. A schematic diagram illustrating the holographic approach to SAR
signal recording and processing is presented in Fig. 2.3. For a point target, for instance,
an optical hologram is a Fresnel zone plate. When the plate is illuminated by coherent
light, the real and virtual images of the point target are reconstructed (Fig. 3.1).
Thus, a microwave image of a point target can be obtained directly owing to
the focusing properties of a Fresnel zone plate. The processing optics in that case is

34 Radar imaging and holography


Figure 2.3 The holographic approach to signal recording and processing in SAR:
1 – recording of a 1D Fraunhofer or Fresnel diffraction pattern of
target field in the form of a transparency (azimuthal recording of a
1D microwave hologram), 2 – 1D Fourier or Fresnel transformation,
3 – display

necessary only to compensate for various distortions inherent in SAR: anamorphism and the difference in the azimuth and range scale factors. Optical processing of SAR
signals was first analysed in terms of holography [86].
The holographic approach will be used in Chapter 3 to describe SAR as a sys-
tem for combined recording and reconstruction of microwave holograms. A general
scheme of this process is shown in Fig. 2.3. The reference signal here is a heterodyne
coherent pulse, whose role is actually much more important (see below).
Holographic SAR for surveying the earth surface (Chapter 3) uses the cross-
correlation [72] and holographic approaches. The scheme illustrating the holographic
principle is similar to that in Fig. 2.3 with the only difference that one deals here with
2D microwave holograms.
The tomographic approach is applied in descriptions of aperture synthesis by
spot-light SAR (Chapter 3).

2.3.2 ISAR signal processing


Methods of inverse aperture synthesis have been discussed in a number of publica-
tions. The treatments involved are:
• a range-Doppler algorithm [13,21,24];
• a circular convolution algorithm (CCA) [94];


• correlated processing [13];


• extended coherent processing (ECP) [13];
• polar format processing [13];
• holographic processing [109];
• tomographic processing [9,106].

A serious limitation of the range-Doppler algorithm is its applicability only to a synthesis made at relatively small angle steps, which is an obstacle in achieving
high resolutions. The restrictions on the time intervals of coherent processing were
formulated by D. A. Ausherman et al. [13]. Any attempt to overcome these restrictions
leads to displacement of individual scatterer images into adjacent resolution elements
and, hence, to the image degradation. The range-Doppler algorithm has been used in
ISAR for microwave imaging of aircraft [8]; the radial movement of the target is first compensated for in all range channels. The development of new processing
algorithms based on larger angle steps required the use of spherical coordinates (polar
coordinates in the 2D case) instead of the Cartesian coordinates of the range–cross
range type. One of these is the CCA permitting the limit angle step of 2π with a
precise aperture focusing over the whole target space [94]. Moreover, it is applicable
to the processing of both narrow- and wide-band radar signals. The conditions for
viewing real targets differ from the conditions, in which the CCA operates. First,
discrete records for the angle steps of the target aspect variation, recorded at a constant
repetition frequency, are not equidistant. Second, the angle between the radar line of
sight (RLOS) and the rotation axis changes during the viewing. The first obstacle can
be bypassed by interpolating the radar data. The second one inevitably leads to the
necessity to consider a 3D problem. Attempts at using this algorithm, like other 2D
algorithms, to process 3D data result in distorted images [8].
When applied to narrow-band signals, the CCA has another disadvantage: the
whole ensemble of radar data must be processed simultaneously. So this algorithm
should be employed only in measuring test areas and in AECs.
Correlated processing provides well-focused images of targets of any size, and
the time intervals of coherent processing may be of arbitrary duration. On the other
hand, its computational efficiency is quite low [8].
Both algorithms require special measures to compensate for the phase shift due
to the radial displacement of the target.
Extended coherent processing is based on coherent summation of microwave
images, each of which is formed by a range-Doppler algorithm at a small angle step.
The application of this technique increases the processing rate by approximately an
order of magnitude with a good image quality for a fairly long processing time.
Variable movement of a target relative to the RLOS necessitates the use of different
algorithms for the synthesis of the final image from partial ones. So algorithms for
ECP are subdivided into those for wide angle imaging and those for multiple target
rotations. Target aspects suitable for wide angle imaging are chosen when a ground
radar views a spacecraft stabilised along three axes or rotating around its centre of
mass. Imaging by multiple rotations has the following specificity: when a space target
is stabilized by rotation, the angle step remains the same in every consecutive rotation


of the target around its axis. In its latter modification, the ECP algorithm is used for
3D and stroboscopic microwave imaging [13].
Polar format processing is another effective way to overcome the scatterers’ move-
ment through the resolution elements. It is based on the representation of radar data
in a 3D frequency space.
In our opinion, a very promising way of inverse aperture synthesis is by holographic processing [109,146]. The possibility of using a holographic approach was
first suggested by E. N. Leith [85]. Not only does it provide a new insight into the
processes occurring in inverse synthesis but it also helps to find novel designs of
recording and reconstruction devices.
The schematic diagram of the holographic approach to ISAR signal recording
and reconstruction is similar to that shown in Fig. 2.3. The first step is to record a
1D quadrature or complex microwave Fourier hologram (the diffraction pattern of
the target field) (Section 2.3.1). The reference signal is a coherent heterodyne pulse.
The second step is the implementation of a 1D Fourier transform. The next step is
the image representation.
Tomographic processing can be performed using one of the three ways of image
reconstruction:
• reconstruction in the frequency region [9];
• reconstruction in the space region by using a convolution back-projection
algorithm [9];
• reconstruction by summation of partial images (Chapter 6).
The tomographic approach to ISAR analysis will be discussed in Section 2.4.2 and
in Chapter 6.

2.4 Coherent radar holographic and tomographic processing

2.4.1 The holographic approach


Direct hologram recording commonly used in the optical wavelength range finds
a limited application in the microwave range because of the absence of a suitable
substitution to a microwave photoplate. So the processing can be made by either of
the two methods – direct or inverse aperture synthesis (Section 2.2). These techniques
allow the recording of two types of hologram. One is similar to an optical hologram,
while the other has no optical counterpart.
Suppose a exp(iΦ) is a target wave and ao exp(iΦo) is a reference wave. In the first case a quadratic microwave hologram is formed, described by the following equation:

H1(x, y) = |a exp(iΦ) + ao exp(iΦo)|² = ao² + a² + 2a ao cos(Φ − Φo),  (2.16)
Such holograms can be recorded by a quadratic detector at high and intermediate frequencies (Fig. 2.4(a) and (b), respectively), using a high-frequency reference



Figure 2.4 Synthesis of a microwave hologram: (a) quadratic hologram recorded at
a high frequency, (b) quadratic hologram recorded at an intermediate
frequency, (c) multiplicative hologram recorded at a high frequency,
(d) multiplicative hologram recorded at an intermediate frequency,
(e) quadrature holograms, (f) phase-only hologram

wave. In the second case a multiplicative hologram is formed [109] which is defined as

H1(x, y) = Re[a exp(iΦ) · ao exp(−iΦo)] = a ao cos(Φ − Φo).  (2.17)

The latter can also be formed at high and intermediate frequencies (Fig. 2.4(c) and (d), respectively).
In either case it is possible to record a quadrature microwave hologram

H2(x, y) = ao² + a² + 2a ao sin(Φ − Φo),  (2.18)


H2(x, y) = a ao sin(Φ − Φo).  (2.19)


A pair of quadrature microwave holograms, (2.16), (2.18) or (2.17), (2.19), is recorded by using identical reference waves phase-shifted by π/2 relative to each other
(Fig. 2.4(e)).
Optical recording of the bipolar functions (2.17) and (2.19) for optical reconstruction requires the use of a reference level Hr found from the condition

Hr ≥ max{|H1(x, y)|, |H2(x, y)|}  (2.20)
and the linearity of the microwave recording. Then we arrive at the equations
H1(x, y) = Hr + a ao cos(Φ − Φo),  (2.21)
H2(x, y) = Hr + a ao sin(Φ − Φo).  (2.22)
Each pair of quadrature holograms makes up a complex microwave hologram:
H (x, y) = H1 (x, y) + iH2 (x, y). (2.23)
The imaginary unit i in Eq. (2.23) is introduced at the reconstruction stage, following the
recording of only two quadrature holograms, say, (2.21) and (2.22). But this form of a
complex hologram equation makes it possible to consider this pair as an entity, which
is especially convenient for an analytical description of the reconstruction process.
A complex microwave hologram is a means of registration of the total field scattered
by a target. It will be shown later that this allows the reconstruction of a single image.
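As a numerical check of Eqs (2.17), (2.19) and (2.23) (a sketch with arbitrary assumed amplitudes and phases, not values from the text), the quadrature pair assembles into a complex hologram equal to a ao exp(i(Φ − Φo)), i.e. the full complex target field is retained:

```python
import math
import cmath

# Assumed target and reference wave parameters (illustrative only)
a, phi = 2.0, 1.1      # target wave amplitude and phase
ao, phio = 1.0, 0.4    # reference wave amplitude and phase

H1 = a * ao * math.cos(phi - phio)   # multiplicative hologram, Eq. (2.17)
H2 = a * ao * math.sin(phi - phio)   # quadrature hologram, Eq. (2.19)

H = H1 + 1j * H2                     # complex hologram, Eq. (2.23)

# The complex hologram retains both amplitude and phase of the target wave
expected = a * ao * cmath.exp(1j * (phi - phio))
```

In radar practice this pair is produced by two amplitude-phase detectors fed with reference signals shifted by π/2 relative to each other, as in Fig. 2.4(e).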
The designs shown in Fig. 2.4(a)–(d) are largely used in laboratory and test set-ups,
while radar stations use the design in Fig. 2.4(e). A typical microwave holographic
receiver based on this design is shown in Fig. 2.5. In contrast to optical holography,
the reference wave is produced by a coherent generator and phase-shifter 1 in the
receiver.
This way of creating a reference wave, by electrical modulation, is radically new compared with optical holography; we call it an artificial reference wave. Its incidence angle can be simulated by varying the phase with phase-shifter 1
operating synchronously with the movement of the real radar antenna. The incidence
angle α to the carrier track (Fig. 2.6) can be simulated by changing its phase as
Φo = 2πx sin α / λ1,  (2.24)
where x is the position of the real antenna during the aperture synthesis.
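Equation (2.24) is easy to sketch numerically (all values below are assumed for the example): stepping phase-shifter 1 synchronously with the antenna position x yields a linear phase ramp, exactly what a plane reference wave incident at the angle α would produce along the track.

```python
import math

lam1 = 0.03                  # wavelength lambda_1, m (assumed)
alpha = math.radians(10.0)   # simulated incidence angle alpha (assumed)
dx = 0.05                    # antenna displacement between pulses, m (assumed)

# Phase of the artificial reference wave at successive antenna positions, Eq. (2.24)
phases = [2 * math.pi * (n * dx) * math.sin(alpha) / lam1 for n in range(8)]

# A plane wave at a fixed incidence angle gives equal phase increments per step
increments = [phases[n + 1] - phases[n] for n in range(len(phases) - 1)]
```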
Microwave holograms can be classified in terms of the volume of recorded data
on the target wave. If a hologram contains data on the wave amplitude and phase, it is
said to be an amplitude–phase hologram. If the amplitude factor a(x, y) is neglected
before the summation or multiplication of the target and reference waves, a hologram
is said to be a phase-only hologram [109] (Fig. 2.4(f)).
To describe the fields of reconstructed images, one can conveniently use the
Fresnel–Kirchhoff diffraction formula [121] employed in optical holography. So it
is reasonable to classify holograms in terms of the phase fronts of fields induced by
reference sources and diffracted by a target.


[Figure 2.5 block diagram: transmitter, frequency synthesiser, phase shifters 1 and 2 (90°), receiver, IF amplifier and amplitude-phase detectors with low-pass filters producing the quadrature outputs A cos Φ and A sin Φ]

Figure 2.5 A block diagram of a microwave holographic receiver: 1 – reference
field, 2 – reference signal cos(ω0 t + ϕ0 ), 3 – input signal
A cos(ω0 t + ϕ0 − ϕ), 4 – signal sin(ω0 t + ϕ0 ) and 5 – mixer


Figure 2.6 Illustration for the calculation of the phase variation of a reference wave

A Fresnel microwave hologram is synthesised by registration of the interference pattern of interaction between plane or spherical reference waves and waves diffracted
by a target, which have a spherical phase front in the hologram plane.
A Fraunhofer microwave hologram is formed by recording the interference pattern
of plane or spherical reference waves interacting with diffracted waves having a plane
phase front in the hologram plane.
A Fourier microwave hologram is formed by recording the interference pattern of
interaction between the diffracted waves having a spherical front in the hologram plane
and a spherical reference wave with a curvature radius equal to an average curvature
radius of the waves coming from the target and propagating in the same direction.


Fresnel and Fraunhofer holograms have found application in SAR theory, while
Fourier holograms are used in ISAR theory (Chapters 3 and 5).
Since the process of hologram synthesis implies that the radar is to be coherent,
the question arises as to what requirements must be imposed on the coherence. Let us
first define the concept of coherence in microwave radar theory. A signal is said to be
coherent if it shows no abrupt changes in the basic frequency, or if such changes are
small, of the order of 1–3◦ [14]. If the basic frequency changes are greater than these
values, the signal reflected from a target is called partially coherent. This happens
when the coherence is perturbed due to:

• an unstable frequency of the radar wave synthesiser or heterodyne;
• the effects of the target itself, say, of a sea surface (Chapter 4);
• a non-uniform motion of the aircraft, for example, yawing, pitching and banking
(Chapter 7);
• the effects of the troposphere and ionosphere, such as sporadic changes in the
wave propagation conditions (Chapter 8).

Within this definition, a continuous radiation is always coherent for a period of time
when various instabilities in the transmitter performance can be neglected. When a
radar operates in a pulse mode, coherence is determined by an unambiguous relation
between the initial phase values of the carrier frequency of a train of pulses. The
above definition of coherence also applies to radar signals with known phase jumps
that can be avoided using coherent sensing. Since the first of the factors responsible
for coherence instability is the most serious one, there was a suggestion to introduce
in imaging theory the concept of frequency stability, rather than coherence [87].
A comprehensive analysis of requirements on the frequency stability was made in
SAR theory by R.O. Harger [55]. A simplified approach is considered in Reference 87.
The latter will be discussed here in more detail in order to explain the physical
mechanism of SAR instability. The treatment of this problem has yielded the following
expression:

παT² ≤ (π/4)(cT/2R),  (2.25)

where α is the rate of linear frequency variation due to the instability of the radar
generator, T is the time for a pulse to reach the target at distance R and to come back.
It is clear from Eq. (2.25) that the permissible phase error παT² is π/4 for the time
T = 2R/c.
Therefore, Eq. (2.25) is the criterion for the coherence length in the holographing
of reflecting targets; it should provide the frequency stability of the signal propagation
for a time consistent with the scene depth (a full analogy with optical holography).
Similar stability requirements can be imposed on coherent ISAR, in which coherence
is preserved if the signal phase deviation due to the frequency instability is less than
π/2. Then we have the expression

2πδfc T ≤ π/2, (2.26)


where δfc is the deviation of the probing signal frequency for the time T . Neglecting
the signal delay in the antenna-feeder waveguide, we get

δfc ≤ c/8R, (2.27)

or, using the concept of short-term instability,

εf = δfc / fc,  (2.28)

where fc is the radar carrier frequency. Then we have

εf = c / (8 fc R).  (2.29)
The condition for a long-term frequency instability can be found from a coherent
processing in the whole time interval of the synthesis, Ts , which varies with the type
of hologram processing.
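As a numerical illustration of Eqs (2.27)–(2.29) (the carrier frequency and range below are assumed example values, not figures from the text), the tolerable frequency deviation and the required short-term instability can be computed directly:

```python
c = 3.0e8       # speed of light, m/s
fc = 10.0e9     # radar carrier frequency, Hz (assumed X-band example)
R = 1.0e6       # target range, m (assumed: a space object at 1000 km)

delta_fc = c / (8 * R)        # Eq. (2.27): maximum frequency deviation, Hz
eps_f = c / (8 * fc * R)      # Eq. (2.29): required short-term instability
print(delta_fc, eps_f)        # about 37.5 Hz and 3.75e-9
```

Note how the requirement tightens linearly with both range and carrier frequency, which is why space-object radars need atomic frequency standards.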
The frequency stability in modern radars is achieved with highly stable, mainly
caesium atomic beam standards of time and frequency. The frequency standards pro-
vide a long-term instability over 1 h with a possible adjustment of about 10⁻¹²–10⁻¹⁴
and a 1 ns random component of the 24 h behaviour of the timescale [25]. To maintain
the stability, modern radars use a phase loop control [44]. The long-term instability
requirements on coherent ISAR are very high. For example, in the Goldstone Solar System Radar (GSSR) for planet surveys [44] this parameter is about 10⁻¹⁵ for
1000 s and the pulse-to-pulse instability is less than 1◦ . The GSSR Project is designed
for the observation of Mercury, Venus and Mars. In a LRIR (Long-Range Imaging
Radar), the pulse-to-pulse instability is about 2–3◦ [20]. This radar is designed for
observation of space objects.
To summarise the discussion of factors causing coherence instabilities in radars,
the frequency stability maintained by frequency standards and loop frequency control
can solve the problem of operation instabilities of a pulse generator or heterodyne.
The other three causes of instability can be removed by special signal processing in
the radar (Chapters 4 and 7).

2.4.2 Tomographic processing in 2D viewing geometry


It has been shown above that the signal processing in ISAR can be described in
terms of Doppler frequencies, correlated processing, CCAs, etc. We believe that the
most appropriate approach is tomographic processing which allows focusing of a
synthesised aperture over the whole target space and provides an image resolution
restricted only by the diffraction limit [7,9,10]. Another advantage of this technique
is great possibilities for optimisation of processing algorithms and devices. Consider
a target being probed by a stationary coherent radar (Fig. 2.7), which radiates pulses
with the carrier frequency fc and the modulation function w(t)

s(t) = w(t) exp(j2π fc t) (2.30)



Figure 2.7 The coordinates used in target viewing

and measures the amplitude and phase of the complex envelope of an echo signal.
The target is assumed to consist of a small number of independent scatterers, whose
position relative to the centre of mass of the target O and the radar is defined by the
respective vectors (Fig. 2.7). The target moves along an arbitrary trajectory, rotating
around its centre of mass. The conditions for the far zone and a uniform field amplitude
of the wave incident on the target surface facing the radar are fulfilled. The algorithm
for the processing of the complex envelope of an echo signal, synthesised by the radar
receiver, is
 
sv(t) = ∫_V g(r̄o) w(t − 2|r̄|/c) exp(−j2kc|r̄|) d r̄o,  (2.31)

where g(r̄o ) is the function of the target reflectivity and kc = 2π/λc is a wave number
corresponding to the wavelength of the radar carrier oscillation. Equation (2.31)
allows the estimation of the reflectivity ĝ(r̄o) of every scatterer.
The integration of Eq. (2.31) is made over the target space. With the condition
for the far zone, the vector r̄ describing the position of an arbitrary scatterer relative
to the radar can be substituted by its projection on the line of sight:

|r̄| = |r̄a | + r̂, (2.32)

where

r̂ = r̄o ū, ū = r̄a /|r̄a | (2.33)

and ū is a unit vector coinciding with the line of sight and directed away from the
target rotation centre towards the radar.
Generally, both terms of Eq. (2.32) vary during the viewing. However, the con-
tribution to the imaging is made only by the variation in the relative range r̂. On the
contrary, the range variation of the target’s centre of mass |r̄a | produces distortions
in the image. By substituting Eq. (2.32) into Eq. (2.31) and regrouping the terms for


the complex envelope distortion, we obtain


 
sv(t) = ∫_V g(r̄o) w(t − 2|r̄a|/c − 2r̂/c) exp(−j2kc|r̄a|) exp(−j2kc r̂) d r̄o.  (2.34)
It follows from the analysis of Eq. (2.34) that the correction of the received signal
is to maintain a constant delay τ = 2|r̄a |/c and to multiply the signal by the phase
factor exp(j2kc |r̄a |). After making the correction, the signal can be written (assuming
τ = 0) as
 
sv(t) = ∫_V g(r̄o) w(t − 2r̂/c) exp(−j2kc r̂) d r̄o.  (2.35)
The exponential phase factor in Eq. (2.35) defines the coherence degree of the whole
imaging system (the radar and the processing system) over the whole band-limited
frequency spectrum. The coherence instability due to, say, an inaccurate compensa-
tion for the target radial movement leads to a poorer resolution. The possibility of
imaging by a tomographic algorithm is, in principle, preserved. Let us process a sig-
nal in the frequency domain. The Fourier transform of a video signal corresponding
to the change in the target aspect relative to the radar is

S(f) = F{sv(t)} = W(f) ∫_V g(r̄o) exp[−j2(kc + k)r̂] d r̄o,  (2.36)
where W (f ) = F{w(t)} is the modulation function spectrum, k = 2π/λ is the wave
number to be defined in the frequency spectrum, and F{·} is a 1D Fourier transform
operator. Next, we perform a standard range processing to obtain the resolution along
the line of sight in a filter with the transmission characteristic K(f ) [18]:

S(f) = H(f) ∫_V g(r̄o) exp[−j2(kc + k)r̂] d r̄o  (2.37)
with H (f ) = W (f )K(f ).
The range processing can also be made in the time domain of the receiver using
a filter with the impulse response h(t) = F −1 {K(f )}, where F −1 {·} is the inverse
Fourier transform (IFT) operator.
Note that the compensation for the target radial displacement can also be made by
the processor (after the transformation of Eq. (2.37)) by multiplying the video signal
spectrum by the phase factor exp[j2(kc + k)|r̄a |]. A particular method of processing
requires a proper design of the receiver and processor.
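The compensation step can be sketched numerically for a single point scatterer (all ranges and the wavelength below are assumed values): the echo phase contains the term 2kc|r̄a| from the motion of the centre of mass, and multiplying by exp(j2kc|r̄a|) leaves only the informative term 2kc r̂ of Eq. (2.35).

```python
import math
import cmath

lam_c = 0.03                 # carrier wavelength, m (assumed)
k_c = 2 * math.pi / lam_c    # carrier wave number k_c = 2*pi/lambda_c
r_a = 12_345.678             # range of the target centre of mass |r_a|, m (assumed)
r_hat = 1.25                 # scatterer range relative to the centre, m (assumed)

# Echo phase factor of a unit point scatterer (cf. Eq. (2.34))
echo = cmath.exp(-1j * 2 * k_c * (r_a + r_hat))

# Compensation for the radial displacement of the centre of mass
compensated = echo * cmath.exp(1j * 2 * k_c * r_a)

# Only the relative-range phase term of Eq. (2.35) remains
expected = cmath.exp(-1j * 2 * k_c * r_hat)
```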
With Eq. (2.33), expression (2.37) can be presented as a 3D Fourier transform of
the target reflectivity:

S(f) = H(f) ∫_V g(r̄o) exp[−j2(kc + k) ū·r̄o] d r̄o,  (2.38)
where (kc + k) is the 3D frequency vector modulus.


To calculate the target reflectivity, it is necessary to make an inverse transformation of the Fourier function over the respective volume:

ĝ(r̄o ) = F −1 {S(f )} = g(r̄o ) ∗ h(r̄o ), (2.39)

where ∗ denotes convolution, h(r̄o ) is the processing system response from a single
point target in the space frequency domain, h(r̄o ) = F −1 {H (f )}, and H (f ) is a 3D
aperture function.
It is clear from Eq. (2.39) that the value of ĝ(r̄o) is a distorted representation of the target reflectivity g(r̄o). The distortion is largely due to the limited frequency
spectrum and the small angle step of the aspect variation.
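The convolution form of Eq. (2.39) can be verified in one dimension with a pure-Python sketch (the band-limited aperture function H and the two-scatterer reflectivity below are assumed, not from the text): filtering in the frequency domain equals circular convolution with the system response h = F⁻¹{H}.

```python
import cmath

N = 16

def dft(x):
    # Discrete Fourier transform of a length-N sequence
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / N) for m in range(N))
            for k in range(N)]

def idft(X):
    # Inverse discrete Fourier transform
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * m / N) for k in range(N)) / N
            for m in range(N)]

# Assumed 1D reflectivity with two point scatterers
grefl = [0.0] * N
grefl[3], grefl[10] = 1.0, 0.5

# Band-limited aperture function H: keep only the 7 lowest frequencies
H = [1.0 if min(k, N - k) <= 3 else 0.0 for k in range(N)]

# Estimate via the frequency domain: g_hat = IDFT{ DFT{g} * H }
g_hat = idft([Gk * Hk for Gk, Hk in zip(dft(grefl), H)])

# The same estimate as a circular convolution with h = IDFT{H}, cf. Eq. (2.39)
h = idft(H)
g_conv = [sum(grefl[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]
```

The point scatterers in g_hat are visibly broadened by h, illustrating the resolution loss caused by the limited frequency support.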
Equation (2.39) can be transformed in the 3D frequency domain. More often,
however, one needs 2D images, which can be obtained using an appropriate 2D data
acquisition design (Fig. 2.8). Equation (2.38) then has the form:

S(f) = H(f) ∫∫_{−∞}^{∞} g(u, v) exp[−j2(kc + k)v] du dv.  (2.40)

Keeping in mind that the function

Pθ(v) = ∫_{−∞}^{∞} g(u, v) du  (2.41)

represents the projection of the target reflectivity on the v-axis at the target aspect defined by the angle θ (Fig. 2.8), Eq. (2.40) can be written as

Sθ(f) = H(f) ∫_{−∞}^{∞} Pθ(v) exp[−j2(kc + k)v] dv.  (2.42)

Using the notation fp = 2(fc + f)/c, we get

Sθ(f) = H(f)Pθ(fp) = H(f) ∫_{−∞}^{∞} Pθ(v) exp(−j2πfp v) dv,  (2.43)

where Pθ (fp ) is the Fourier transform of the projection Pθ (v) with the space
frequency fp .
The substitution of Eq. (2.41) into Eq. (2.43) yields

Pθ(fp) = ∫∫_{−∞}^{∞} g(u, v) exp[−j2π(0·u + fp v)] du dv  (2.44)

or

Pθ (fp ) = Pθ (0, fp ) = Pθ (fp sin θ , fp cos θ), (2.45)



Figure 2.8 2D data acquisition design in the tomographic approach

where P(·) is the Fourier transform of the target reflectivity in the (x, y) coordinates.
Then using Eq. (2.45), we have

Sθ (f ) = H (f )Pθ (fp sin θ , fp cos θ ). (2.46)

Equation (2.45) represents the formulation of the projection theorem underlying the
tomographic imaging algorithms [34,57].
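The projection theorem is easy to verify numerically. The sketch below (pure-Python DFT on a small assumed reflectivity grid) checks the θ = 0 case of Eq. (2.45): the 1D spectrum of the projection onto the v-axis equals the zero-u-frequency slice of the 2D spectrum.

```python
import cmath

N = 8
# An assumed 8x8 reflectivity grid g[u][v] with a few point scatterers
g = [[0.0] * N for _ in range(N)]
g[1][2], g[4][5], g[6][1] = 1.0, 0.7, 0.3

def dft1(x):
    # 1D discrete Fourier transform
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

# Projection of g onto the v-axis (sum over u), cf. Eq. (2.41) for theta = 0
P = [sum(g[u][v] for u in range(N)) for v in range(N)]

# 1D spectrum of the projection
P_hat = dft1(P)

# Slice of the 2D DFT at zero u-frequency: G[0][kv]
G0 = [sum(g[u][v] * cmath.exp(-2j * cmath.pi * kv * v / N)
          for u in range(N) for v in range(N)) for kv in range(N)]
```

The two sequences agree sample by sample, which is exactly the projection-slice statement.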
Bearing in mind that v = y cos θ −x sin θ , we go from Eq. (2.43) to the 2D Fourier
transform in the (x, y) coordinates related to the target:


S(fx, fy) = H(f) ∫∫_{−∞}^{∞} g(x, y) exp[−j2π(fx x + fy y)] dx dy,  (2.47)

where fx and fy are the respective space frequencies, fx = −(fpo + fp) sin θ, fy = (fpo + fp) cos θ; fpo = 2fc/c is the space frequency corresponding to the carrier frequency; fp is the space frequency defined over the whole frequency band of the probing signal, 2fl/c < fp < 2fu/c, where fl and fu are the lower and upper bounds of the signal spectrum. The solution to Eq. (2.47) yields the target reflectivity:


ĝ(x, y) = ∫∫_{−∞}^{∞} S(fx, fy) exp[j2π(fx x + fy y)] dfx dfy = g(x, y) ∗ h(x, y).  (2.48)

This approach to imaging can be implemented in the frequency and space domains (see
Chapter 6). Note that the radar data on a signal are recorded in polar coordinates [8],
while the imaging devices are represented as a dot matrix. This inconvenience neces-
sitates the use of a cumbersome procedure of data interpolation and then finding a
compromise between the degree of interpolation complexity (the greater the com-
plexity, the better the image quality) and the computation resources. It will be shown



Figure 2.9 The space frequency spectrum recorded by a coherent (microwave holo-
graphic) system. The projection slices are shifted by the value fpo from
the coordinate origin

in Chapter 6 that there is a procedure of processing in the space domain, which successfully overcomes this difficulty.
The space spectrum of each echo signal is represented in the frequency (fx, fy) plane (Fig. 2.9) as a straight line coinciding with a radial beam. The beam angular coordinate θ is equal to the angle ϑ which defines the target position at the moment of probing signal reflection. The space spectra of echo signals are centred relative to an arc of radius fpo. In the frequency plane, their multiplicity forms a microwave hologram whose angular size Δθ is equal to the angle step of the synthesis, Δϑ. The inner and outer radii of the hologram are defined by the space frequencies fpl and fpt corresponding to the lower and upper bounds of the probing signal frequency spectrum.
It follows from Eq. (2.47) that the ensemble of radar data recorded under the
above conditions is a 2D Fourier microwave hologram. The image reconstruc-
tion from such a hologram reduces to IFT. Although the inversion of a hologram
described by Eq. (2.47) is a simple mathematical procedure, the methods of its digital
implementation are not as obvious.
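The polar data layout just described can be sketched directly (the carrier, bandwidth and angle grid below are assumed example values): each pulse at aspect θ samples the total space frequencies fsp = fpo + fp along the radial line fx = −fsp sin θ, fy = fsp cos θ, so the samples fill an annular sector between radii fpl and fpt.

```python
import math

c = 3.0e8        # speed of light, m/s
fc = 10.0e9      # carrier frequency, Hz (assumed)
B = 1.0e9        # probing signal bandwidth, Hz (assumed)
fl, fu = fc - B / 2, fc + B / 2       # lower and upper spectrum bounds

f_pl, f_pt = 2 * fl / c, 2 * fu / c   # inner and outer hologram radii

# Aspect angles and total space frequencies sampled by the radar (assumed grid)
thetas = [math.radians(t) for t in range(-5, 6)]
f_sp = [f_pl + (f_pt - f_pl) * m / 15 for m in range(16)]

# Polar samples of the 2D Fourier hologram: fx = -f sin(theta), fy = f cos(theta)
samples = [(-f * math.sin(th), f * math.cos(th)) for th in thetas for f in f_sp]

# Every sample lies in the annular sector between radii f_pl and f_pt
radii = [math.hypot(fx, fy) for fx, fy in samples]
```

Since these samples do not fall on a rectangular grid, an interpolation step (or a space-domain algorithm, Chapter 6) is needed before a standard 2D IFT can be applied.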
With the above assumptions of the far zone and the high-frequency spectrum, we
can suggest that at every moment of time t = 2v/c the contribution to the echo signal
will be made only by the local scatterers with the range coordinate v. Then the integral

Pϑ(v) = ∫_V g(u, v) du  (2.49)

taken along the transverse range represents the projection of the target reflectivity on
the RLOS.



Figure 2.10 The space frequency spectrum recorded by an incoherent (tomographic) system

With Eq. (2.49), expression (2.40) will have the form:



Sθ(fpo + fp) = H(fp) ∫_V Pϑ(v) exp[−j2π(fpo + fp)v] dv = H(fp)Pθ(fpo + fp),  (2.50)

where

Pθ(fpo + fp) = F{Pϑ(v) exp(−j2πfpo v)}.  (2.51)
If the right-hand side of Eq. (2.50) is expressed in the Cartesian coordinates, we
shall have
Sθ (fp ) = H (fp )Pθ [−(fpo + fp ) sin θ , (fpo + fp ) cos θ ]. (2.52)
Hence, a 1D spectrum of the product of the reflectivity function projection at the angle ϑ and the phase factor exp(−j2πfpo v) is the cross section of the microwave hologram function along a straight line passing through the frequency plane origin at the angle θ (θ = ϑ).
If the data acquisition system is incoherent and records only the complex envelope shape of the echo signal, the phase factor exp(−j2πfpo v) vanishes from Eqs (2.50)–(2.52).
Equation (2.52) then reduces to the projection slice theorem, one of the fundamental
theorems in computerised tomography [57].
Let us discuss the physical differences between coherent (holographic) and inco-
herent (tomographic) systems of microwave radar imaging by comparing Figs 2.9
and 2.10. The angle step of the target aspect variation and the frequency band width
of the probing signal are taken to be identical in both cases.


A specific feature of coherent systems is that the projection slices of a hologram are shifted radially by the value fpo away from the coordinate origin. Other things
being equal, their resolution, defined in the first approximation by the data domain
size in the frequency space, is therefore high [8].
A more important difference is that the projection Pϑ(v) recorded by an incoherent system is a real-valued function of time. So the phase of any of the projection slices in the data domain of the frequency space is zero at the intersection with the coordinate origin. The projection slices are independent of one another. In contrast, a coherent system
records not only the changes in the complex envelope amplitude along the projection
but also those of the phase of the echo signal carrier oscillation. As a result, in
consecutive projection slices the phases of average records with the space frequency
fpo carry information about the ranges of all unscreened scatterers of the target relative
to its rotation centre. Other records of the projection slice have additional shifts with
their space frequency differences with respect to the centre record. In this way, all
hologram records become interrelated providing a resolution along any direction,
including the transverse range.
Thus, the mathematical theory of computerised tomography for designing digital
processing algorithms should be modified to adjust it to the requirements of coher-
ent imaging. The above mathematical expressions (2.50)–(2.52) can be regarded as
generalised projection slice theorems for coherent radar imaging. This enables one
to employ analytical methods of computerised tomography [57] as a basis for further
development of the theory of coherent imaging. Advantages of this kind of treatment
are physical clarity and computation efficiency (Chapter 6).
The holographic approach to the description of inverse synthesis by coherent
radars accounts for arbitrary changes in the target aspect and the frequency band width
of the probing signal. Most of the available algorithms for microwave imaging have
been designed for 2D viewing geometry, so digital processing for real target sizes and
angle steps becomes a time-consuming endeavour. Well-elaborated mathematics of
computerised tomography could considerably facilitate the development of effective
computation algorithms for digital processing of 3D microwave holograms.
Chapter 3
Quasi-holographic and holographic radar imaging of point targets on the earth surface
3.1 Side-looking SAR as a quasi-holographic radar
We have shown in Chapter 2 that the aperture synthesis can be described in different
ways, including a holographic approach. It was first applied by E. N. Leith to a
side-looking synthetic aperture radar (SAR) [85,86]. He analysed the optical cross
correlator, which processes the received and the reference signals, and concluded
that ‘if the reference function is a lens, the record of a point object’s signal can
also be considered as a lens, because the reference function has the same functional
dependence as the signal itself’ [85]. The signal from a point object is a Fresnel lens,
and its illumination by a collimated coherent light beam creates two basic images – a
real image and a virtual image (Fig. 3.1). The author also pointed out that the images
formed by a Fresnel lens were identical to those created by correlation processing. He
drew the conclusion that ‘by reducing the optical system to only three lenses, we are, it
appears, led to abolishing even these, as well as the correlation theory upon which all
had been based’ [85]. This was a radically new concept of SAR. The radar theory and
the principles of optical processing were revised in terms of the holographic approach.
Its key idea is that signal recording is not just a process of data storage, like in antenna
or correlation theories, but it is rather the recording of a miniature hologram of the
wave field along the carrier's trajectory. For this, the recording is made on a two-dimensional (2D) optical transparency (in 'azimuth-range' coordinates), or the complex reflected signal is recorded in two dimensions. The first procedure uses a photographic film, recording the range across the film and the azimuth (the along-track coordinate) along its length. In optical
recording, the image is reconstructed in the same way as in conventional off-axial
holography, that is, along the carrier’s pathway line. If a microwave hologram is
recorded optically, its illumination by coherent light reproduces a miniature optical
representation of the radar wave field. Therefore, the object’s resolution is determined
by the size of the hologram recorded along the pathway line, rather than by the aperture
Figure 3.1 A scheme illustrating the focusing properties of a Fresnel zone plate:
1 – collimated coherent light, 2 – Fresnel zone plate, 3 – virtual image,
4 – real image and 5 – zeroth-order diffraction
of a real radar antenna. The range resolution is provided by the pulse modulation of
radiated signals. Since the holographic approach to SAR is applicable only to its
azimuthal channel, the authors of the work [85] termed it quasi-holographic. In his
later publications on this subject, E. N. Leith pointed out that aperture synthesis should
be described as a microwave analogue of holography to which holographic methods
could be applied, rather than as holography proper.
Thus, a combination of SAR and a coherent optical processor represents a
‘quasi-holographic’ system, whose azimuthal resolution is achieved by holographic
processing of the recorded wave field. Both E. N. Leith and F. L. Ingalls believe [86]
that this representation is most flexible and physically clear. The use of the holo-
graphic approach for SAR analysis has so far been restricted to optical processors
[87]. There is a suggestion to represent the entire SAR azimuthal channel as a holo-
graphic system [143]. In that case the initial stage of the holographic process in this
channel (the formation of a microwave hologram) is the recording of the field scat-
tered by an artificial reference source. The second stage (the image reconstruction)
is described in terms of physical optics.
3.1.1 The principles of hologram recording
Let us consider a SAR borne by a carrier moving with velocity v along the x′-axis
(Fig 3.2). The SAR antenna has the length LR (the real aperture) and the beam width
ϑR along the pathway line. The SAR irradiates the view stripe by short pulses and
makes consecutive time recordings of the probing signal reflected by the object.
The scattered field amplitude and phase are registered by a coherent (synchronous)
detector due to the interference of the reference and received signals. This produces
a multiplicative microwave hologram (Chapter 2). The role of the reference wave is
played by a signal directly supplied to the synchronous detector; this is the so-called
‘artificial’ reference wave.
We shall describe now the receiving device of the synthetic aperture which records
a hologram on a cathode tube display. Usually, a hologram is recorded by modulating
the tube radiation intensity, with the photofilm moving with velocity vf relative to
the screen. For objects with different ranges Ro from the pathway line, one can
use a pulse mode and vertical display scanning. As a result, the device records a
series of one-dimensional (1D) holograms having different positions along the film
width, depending on the distance to the respective objects. Suppose all the objects
are located at a distance Ro to the pathway line. For simplicity, the radiated signal can
then be taken to be continuous because the pulsed nature of the radiation is important
only for the analysis of range resolution. Figure 3.3 shows an equivalent scheme
of 1D microwave hologram recording. A synthetic aperture is located at point Q
with the coordinates (x′, 0) (x′ = vt, where t is the current moment of time), and a
hypothetical source of the reference wave is at point R(xr , zr ). The source functions in
a way similar to that of the reference wave during the hologram recording (Fig. 1.2).
The point P(xo , zo = −Ro ) belongs to the object being viewed along the xo -axis. If
Figure 3.2 The basic geometrical relations in SAR
Figure 3.3 An equivalent scheme of 1D microwave hologram recording by SAR
the object’s scattering characteristics are described by the function F(xo ) and its size
is small as compared with Ro , one can use the well-known Fresnel approximation to
define the diffraction field along the  -axis [103]:
∞
 eik1 Ro 
Uo (x ) = Co √ F(xo )eik1 ((xo −x )/(Ro )) dxo , (3.1)
λ1 R o
−∞
where k1 = 2π/λ1 is the wave number and Co is a complex-valued constant. The complex amplitude of the reference wave is
Ur(x′) = Ar e^{iϕr}.
Normally, this is a plane wave, i.e. ϕr = k1 x′ sin ϑ, where ϑ is the angle of wave 'incidence' on the hologram. The inclination of the reference wave is equivalent to a reference signal with a linear phase shift, providing the introduction of the carrier frequency ωx = k1 sin ϑ. A coherent registration gives a hologram described as
h(x′) = Re(Ur*(x′) Uo(x′)) (3.2)
or
h(x′) = Im(Ur*(x′) Uo(x′)).
It follows from Eq. (3.1) that a synthetic aperture generally forms 1D Fresnel holo-
grams. The following three types of hologram are possible, depending on the relation
between the object’s size, the synthetic aperture length Ls = vT (T is the recording
time or the time of the aperture synthesis) and the range Ro .
1. If the condition
Ro ≫ k1 (xo²)max/2
holds true (here (xo)max defines the maximum size of the object), we get Fraunhofer's approximation instead of Eq. (3.1):
uo(x′) = Co [e^{ik1 Ro}/√(λ1 Ro)] e^{ik1 (x′)²/Ro} ∫_{−∞}^{∞} F(xo) e^{−2ik1 x′ xo/Ro} dxo. (3.3)
The hologram we obtain is of the Fraunhofer type.
2. If the condition
Ro ≫ k1 (x′)²max/2 = k1 LS²/8
is valid, we can eliminate the term exp[ik1 (x′)²/Ro] from Eq. (3.3) to obtain a Fourier hologram. This condition can be re-written as
LS ≤ 2√(λ1 Ro/π). (3.4)
3. For a point object, we have
F(x) ∼ δ(x − xo)
and Fraunhofer's condition for diffraction becomes immediately fulfilled.
Using the filtering properties of the δ-function and Eq. (3.3), we arrive at the following equation for the hologram (with the constant phase terms ignored):
h(x′) = Ar Ao cos[ωx x′ − k1 (x′)²/Ro + 2k1 x′ xo/Ro], (3.5)
where Ao is the scattered wave amplitude at the receiver input.
If Eq. (3.4) holds, expression (3.5) yields
h(x′) = Ar Ao cos[ωx x′ + 2k1 x′ xo/Ro]. (3.6)
Thus, a synthetic aperture forms either a Fraunhofer or a Fourier hologram of a point
object. The former looks like a 1D Fresnel zone plate, in accordance with Eq. (3.5), and
the latter is a 1D diffraction grating with a constant step, in accordance with Eq. (3.6).
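Condition (3.4) separating the two regimes is easy to evaluate. The sketch below uses illustrative X-band numbers; the wavelength and range are assumptions, not values from the book:

```python
import math

def max_fourier_aperture(lam1, Ro):
    """Eq. (3.4): the largest synthetic aperture L_S for which the quadratic
    term exp[i*k1*(x')^2/Ro] may be dropped, so the record is a Fourier
    hologram (constant-step grating) rather than a Fresnel zone plate."""
    return 2.0 * math.sqrt(lam1 * Ro / math.pi)

lam1 = 0.03   # wavelength, m (assumed)
Ro = 10e3     # range to the object, m (assumed)
Ls_max = max_fourier_aperture(lam1, Ro)
print(f"Fourier-hologram regime holds for L_S <= {Ls_max:.1f} m")
```

Longer apertures retain the quadratic phase and record a Fresnel-type (Fraunhofer) hologram.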
During the photographic recording, the holograms are scaled by substituting the x′-coordinate by the x-coordinate, where x = x′/nx and nx = v/vf. A constant term ho ('displacement') is added to Eqs (3.5) and (3.6) for the photographic registration of the bipolar function h(x′).
3.1.2 Image reconstruction from a microwave hologram
It is reasonable to discuss the next step in the holographic process in terms of physical
optics. Illumination of a photographic transparency by a plane coherent wave with
the wave number k2 produces a diffraction field, whose distribution at distance ρ
from the hologram is described by the Huygens–Fresnel integral:
V(ξ) = [e^{i(k2 ρ − π/4)}/√(λ2 ρ)] ∫_{−vf T/2}^{vf T/2} h(x) e^{i(k2/2ρ)(x − ξ)²} dx. (3.7)
The substitution of Eq. (3.5) into Eq. (3.7) gives
V (ξ ) = Vo (ξ ) + V1 (ξ ) + V2 (ξ ),
where Vo (ξ ) is the zeroth order corresponding to the displacement ho , V1 (ξ ) and V2 (ξ )
are the functions of the reconstructed images of a point object. These functions are
equal to
V1,2(ξ) = Co ∫_{−vf T/2}^{vf T/2} e^{i[(k2/2ρ) ± (k1 nx²/Ro)] x²} e^{−i[(k2 ξ/ρ) ± (ωx nx + 2k1 nx xo/Ro)] x} dx. (3.8)
The positions of the images along the z-axis can be found from the condition of zero power of the first exponential in Eq. (3.8):
ρ = ±λ1 Ro/(2λ2 nx²). (3.9)
Obviously, one image is virtual and the other real.
By integrating Eq. (3.8) with the condition of Eq. (3.9), we obtain
V1,2(ξ) = Co sin{[ωx nx + (2k1 nx²/Ro)((xo/nx) − ξ)] vf T/2} / {[ωx nx + (2k1 nx²/Ro)((xo/nx) − ξ)] vf T/2}. (3.10)
Therefore, the image of a point object is described by the sin ν/ν-type of function.
It follows from Eq. (3.10) that the image position along the x-axis is defined by the
zeroth value of the argument ν, or
ξ = xo/nx + ωx Ro/(2k1 nx). (3.11)
The first term in Eq. (3.11) corresponds to the real coordinate of the object and
the second one describes the carrier frequency. Images of two point objects having the same coordinates xo (xo1 = xo2) but different ranges R1 and R2 (R1 ≠ R2) are characterised by different coordinates ξ1 and ξ2 (ξ1 ≠ ξ2). Therefore, the use of
the carrier frequency leads to geometrical distortions of the coordinates of point
objects. We should recall that the use of the carrier frequency in the first generation
of SARs (with an optical processor) was necessitated by the application of Leith’s
off-axial holography in order to separate images from the zeroth order. The carrier fre-
quency becomes, however, unnecessary in digital image reconstruction from complex
holograms (Chapter 2).
Let us now discuss the SAR resolving power. According to Rayleigh's criterion, two points are considered separated if the major maximum of one of the sin x/x functions coincides with the first zero of the other function. This gives us the resolving power
Δx = x1 − x2 = πRo/(k1 LS). (3.12)
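A quick numeric reading of Eq. (3.12) (the parameters below are illustrative assumptions): since k1 = 2π/λ1, the expression reduces to Δx = λ1 Ro/(2LS).

```python
import math

def focused_resolution(lam1, Ro, Ls):
    """Eq. (3.12): dx = pi*Ro/(k1*Ls), equivalent to lam1*Ro/(2*Ls)."""
    k1 = 2.0 * math.pi / lam1
    return math.pi * Ro / (k1 * Ls)

dx = focused_resolution(lam1=0.03, Ro=10e3, Ls=100.0)  # assumed values
print(f"focused azimuthal resolution: {dx:.2f} m")
```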
The aperture creating a Fraunhofer hologram with Eq. (3.5) was termed a ‘focused
aperture’ in classical SAR theory. The focusing here is treated as a compensation for
the quadratic phase shift in Eq. (3.5) during image reconstruction, the compensation
being made with the transform in Eq. (3.7).
The case of an ‘unfocused aperture’ is described by Eq. (3.6) for the Fourier
hologram, and the processing is performed with the Fourier transform of the hologram
function:
V(ξ) = Co ∫_{−vf T/2}^{vf T/2} h(x) e^{−ik2 xξ/ρ} dx. (3.13)
Eqs (3.13) and (3.6) yield
V1,2(ξ) = Co sin{[(k2/ρ)ξ ± (ωx nx + (2k1/Ro)nx xo)] vf T/2} / {[(k2/ρ)ξ ± (ωx nx + (2k1/Ro)nx xo)] vf T/2}. (3.14)
Here, ρ can be taken to be the focal length of the Fourier lens.
The image position for a point object is defined as
ξ = ±(ρ/k2)[ωx nx + (2k1/Ro) nx xo]. (3.15)
Images of two point objects with the same coordinates xo but different ranges R1 and R2 will also be distorted due to the dependence of ξ on Ro. The resolving power from Rayleigh's criterion is
Δx = x1 − x2 = πRo/(k1 nx vf T). (3.16)
With Eq. (3.4), the permissible limit of this parameter in SAR with unfocused processing is
Δx = √(πλ1 Ro)/4 ≈ 0.44√(λ1 Ro). (3.17)
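Equation (3.17) can be cross-checked against its approximate form in a few lines (the sample values are assumptions):

```python
import math

def unfocused_resolution_limit(lam1, Ro):
    """Eq. (3.17): dx = sqrt(pi*lam1*Ro)/4, the best resolution attainable
    with unfocused processing (aperture limited by Eq. (3.4))."""
    return math.sqrt(math.pi * lam1 * Ro) / 4.0

lam1, Ro = 0.03, 10e3                         # assumed illustrative values
dx_lim = unfocused_resolution_limit(lam1, Ro)
approx = 0.44 * math.sqrt(lam1 * Ro)          # the 0.44*sqrt(lam1*Ro) form
print(f"exact {dx_lim:.3f} m vs approximate {approx:.3f} m")
```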
Note that a hologram is written on a photofilm (in the case of an optical processor)
or in a memory device (in the case of digital recording) continuously during the flight.
For this reason, the focused or unfocused aperture regime is prescribed only at the
reconstruction stage.
Synthetic aperture radar can also be considered in terms of geometrical optics,
which implies phase structure analysis of a hologram. One of the expressions in (3.2)
can be re-written as
h(x ) = Ar Ao cos(ϕr − ϕo ),
where ϕo is the phase of the field scattered by the object. For a point object located
at point P (Fig. 3.3), we can write two expressions taking into account the SAR wave
propagation to the object and back:
ϕo = −2k1 (PQ − PO),
ϕr = −2k1 (RQ − RO),
where RO = Rr is the distance between a hypothetical reference wave source and
the coordinates origin. By expanding ϕr and ϕo into series, we get for the first-order
terms
ϕr − ϕo ≅ (4π/λ1){[1/(2Rr) − 1/(2Ro)](x′)² − [xr/Rr − xo/Ro] x′}. (3.18)
In a simple case of xo = 0, xr = 0 and Rr = ∞ (a plane reference wave without linear phase shift), we have
ϕr − ϕo = 4π(x′)²/(2λ1 Ro).
The space frequency in the interference pattern is
ν(x′) = (1/2π) ∂(ϕr − ϕo)/∂x′ = 2x′/(λ1 Ro). (3.19)
At a certain value x′cr = (LS)max/2, the frequency ν may exceed the resolving power of the field recorder, which is defined in this case by the real aperture angle and is equal to νcr = 1/LR. From this we have the condition
(LS)max ≤ λ1 Ro/LR = ϑR Ro. (3.20)
The substitution of (LS)max into Eq. (3.12) gives a classical relation for the attainable limit of SAR resolution:
Δxlim = LR/2.

The pulsed nature of the signal allows determination of such an important radar parameter as the minimum repetition frequency of the probing pulses, χmin. Obviously, the pulse mode is similar to hologram discretisation. The distance between two adjacent records, Δx = vf/χ, must meet the condition
Δx ≤ [2ν(x′cr)]⁻¹.
This condition and Eq. (3.19) give
χmin = 2v/LR.
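The sampling condition behind this result can be sketched in code (the carrier velocity and real-aperture length below are assumptions for illustration):

```python
def min_prf(v, LR):
    """chi_min = 2*v/L_R: at least two hologram records per real-aperture
    length travelled, so the recorded fringe pattern is not undersampled."""
    return 2.0 * v / LR

chi_min = min_prf(v=200.0, LR=2.0)   # m/s and m, assumed
print(f"minimum pulse repetition frequency: {chi_min:.0f} Hz")
```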

By following the method suggested in Reference 92 we can obtain relations for the phase deviation of the reconstructed wave from the spherical shape (third-order wave aberrations):
ϕ(3) = −(k2/2)(Do x⁴/4 − D1 x³ + D2 x²), (3.21)
where
Dk = xc^k/Rc³ − xI^k/RI³ ± (2µ/m^(4−k))(xo^k/Ro³ − xr^k/Rr³),
xc and Rc are the coordinates of the reconstructing wave source, µ = λ2/λ1, m = nx⁻¹.
The image coordinates for a point object are
1/RI = 1/Rc ± (2µ/m²)(1/Ro − 1/Rr),
xI/RI = xc/Rc ± (2µ/m)(xo/Ro − xr/Rr).
The value k = 0 corresponds to the spherical aberration, k = 1 to the coma and k = 2 to the astigmatism. These relations can be used to find the maximum size of the synthetic aperture, (LS)max, from Rayleigh's criterion (wave aberrations at the hologram edges should not be larger than λ2/4). Since the spherical aberration is the largest in order of magnitude, we obtain
(LS)max = 2[λ1 Ro³/(1 − 4µ²/m²)]^(1/4). (3.22)

For typical conditions of SAR performance, the value of (Ls )max calculated from
Eq. (3.20) is smaller than (Ls )max found in Eq. (3.22), that is, the effect of wave
aberrations is inessential.
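This claim can be illustrated numerically. The sketch below compares the recorder-limited aperture of Eq. (3.20) with the aberration-limited aperture of Eq. (3.22); the parameters are assumed, and the bracket 1 − 4µ²/m² is taken as unity for simplicity:

```python
lam1 = 0.03   # wavelength, m (assumed)
Ro = 10e3     # range, m (assumed)
LR = 2.0      # real antenna length, m (assumed)

Ls_recorder = lam1 * Ro / LR                   # Eq. (3.20)
Ls_aberration = 2.0 * (lam1 * Ro**3) ** 0.25   # Eq. (3.22), bracket ~ 1

print(f"recorder-limited aperture:   {Ls_recorder:.0f} m")
print(f"aberration-limited aperture: {Ls_aberration:.0f} m")
```

The aberration-limited aperture comes out far larger, so the recorder limit of Eq. (3.20) is the binding constraint.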
3.1.3 Effects of carrier track instabilities and object's motion on image quality
The carrier’s trajectory instabilities are a major factor that can distort SAR images.
The use of geometrical optics in the holographic approach provides a fairly simple
estimation of permissible trajectory deviations from a straight line. The object’s wave
phase ϕo (x ) can be written as
ϕo(x′) = −2k1{[(zo − g)² + (x′ − xo)²]^(1/2) − Ro},
where g = g(x′) is the trajectory deviation from the x′-axis. At Ro ≫ xo, x′ and g, the binomial expansion, ignoring all terms of the order g², gives an approximate expression for ϕo(x′):
ϕo(x′) ≅ −(4π/λ1)[((x′)² − 2xo x′)/(2Ro) − ((x′)⁴ − 4xo(x′)³ + 4xo²(x′)²)/(8Ro³) − zo g/Ro + (zo g(x′)² − 2zo g xo x′)/(2Ro³)].
The phase equation for a wave reconstructing one of the images has a standard form:
ϕI = ϕc ± (ϕo − ϕr), (3.23)
where ϕc is the phase of the reconstructing wave.
On the other hand, ϕI can be written as
ϕI = −(2π/λ2)[(x² − 2xI x)/(2RI) − (x⁴ − 4xI x³ + 4xI² x²)/(8RI³)]. (3.24)

The phases ϕc and ϕr are described by expressions similar to (3.24). The phase
differences between the respective third-order terms relative to 1/RI in Eqs (3.23)
and (3.24) represent aberrations described as
Δϕ(3) = ϕ(3) + ϕn(3).
The aberrations ϕ(3) are defined by Eq. (3.21), and ϕn(3) has the form:

ϕn(3) = −k2(D3 g + D4 g x − D5 g x²), (3.25)

where
D3 = ∓2µzo/Ro, D4 = ∓2µzo xo/(mRo³), D5 = ∓µzo/(m²Ro³), m = 1/nx, µ = λ2/λ1,
and g is the trajectory deviation. Here the quantities D3, D4 and D5 are the aberration coefficients arising from the trajectory instabilities.
Equation (3.25) describing distortions in the hologram phase structure can be used
to calculate the compensating phase shift directly during the synthesis. For this, SAR
should be equipped with a digital signal processor.
By applying Reighley’s criterion to each term in Eq. (3.25), one can get the
following conditions for maximum permissible deviations of the carrier’s trajectory:
g3 ≤ λ2 /4/D3 = λ1 Ro /8Zo = λ1 /8 cos ϑo , (3.26)
g4 ≤ λ2 /4/D4 /xmax = λ1 R3o /4LS Zo xo , (3.27)
g5 ≤ λ2 /4/D5 /xmax
2
= λ1 R3o /Zo L2S . (3.28)
Besides, if one knows the flight conditions and the carrier's characteristics, Eqs (3.26)–(3.28) can be used to find the constraints imposed on the parameter cos ϑo and on the maximum size of the synthetic aperture:
cos ϑo ≤ λ1/(8g),
LSmax ≤ λ1 Ro²/(4g xo),
LSmax ≤ Ro√(λ1/g).
Normally, SAR meets the conditions LS ≪ Ro and xo ≪ Ro, so D4 and D5 can be neglected, leaving only the factor D3, which severely restricts the trajectory stability (see Eq. (3.26)).
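The dominant constraint (3.26) is strikingly tight; a quick evaluation (the wavelength and aspect angle are assumed for illustration):

```python
import math

def max_track_deviation(lam1, theta_o_deg):
    """Eq. (3.26): g <= lam1/(8*cos(theta_o)), the permissible deviation
    of the carrier track from a straight line."""
    return lam1 / (8.0 * math.cos(math.radians(theta_o_deg)))

g_max = max_track_deviation(lam1=0.03, theta_o_deg=45.0)  # assumed values
print(f"permissible track deviation: {g_max * 1e3:.1f} mm")
```

At a 3 cm wavelength the tolerance is of the order of millimetres, which is why motion compensation is indispensable in practice.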
Effects arising in a synthetic aperture during the viewing of moving targets can be
estimated in terms of physical optics. Suppose a point object moves radially (along the
z-axis) at velocity vo , such that its displacement is smaller than the range resolution
for the synthesis time T . Then, the equation for the hologram, ignoring constant phase
terms, is
h(x) ∼ cos[ωx nx x + 2k1(vo/v)nx x − k1 nx² x²/Ro + 2k1 nx xo x/Ro − (k1/Ro)(vo/v)² nx² x²]. (3.29)
The substitution of Eq. (3.29) into (3.7) gives a condition for viewing the focused image:
ρ = ±k2 Ro/{2k1 nx²[1 + (vo/v)²]}.
Since vo/v ≪ 1, the image can be viewed practically in the same plane as that
for an immobile object. Keeping this in mind, we can obtain, after the integration,
a function describing one of the reconstructed images:
V(ξ) = Co sin{[ωx nx + 2k1(vo/v)nx + (2k1 nx/Ro)(xo − nx ξ)] vf T/2} / {[ωx nx + 2k1(vo/v)nx + (2k1 nx/Ro)(xo − nx ξ)] vf T/2}.
The image position is defined as
ξ = xo/nx + ωx Ro/(2k1 nx) + Ro vo/(nx v).
Clearly, the object’s motion is equivalent to the use of additional carrier frequency
at the recording stage, which causes the image shift. The optical processor deals with
a real image recorded on a photofilm. The recording field on the film is limited by a
diaphragm cutting off the background. The value of vo may become so large that no
image will be recorded because of the shift.
The object’s motion in the azimuthal direction (along the x -axis) at velocity vo is
equivalent to the change in the SAR’s flight velocity. Then Eq. (3.9) describing the
position of the focused image along the z-axis can be re-written as
ρ′ = ±λ1 Ro/(2λ2 n′x²) = ±λ1 Ro v²/[2λ2 nx²(v − vo)²], where n′x = (v − vo)/vf.
Therefore, the object’s motion along the x -axis changes the focusing conditions
by the value
δρ = ρ′ − ρ = 2ρ(vo/v)(1 − vo/2v)(1 − vo/v)⁻², (3.30)
where ρ is found from Eq. (3.9). If the condition vo ≪ v is fulfilled, we have
δρ ≈ 2ρvo /v. (3.31)
Equation (3.30) yields
vo = v[1 − √(1 − δρ/(ρ + δρ))].
On the other hand, a simple geometrical consideration can give the following
relations for the resolving power of SAR along the z-axis (longitudinal resolution):
Δρ = 2(Δx vf)²/(λ2 v²) = 2(Δx)²/(λ2 nx²). (3.32)
The focusing depth Δρ is defined as the focal-plane shift along the z-axis by a distance at which the azimuthal resolution Δx becomes twice as poor as the diffraction limit in Eq. (3.12).
The viewing of a focused image of an object moving at velocity vo requires an additional focusing of the optical processor. The object's velocities that require re-focusing can be found from the condition Δρ < δρ, where δρ is given by Eq. (3.31). Using Eqs (3.9) and (3.32), we get
vo > 2v(Δx)²/(λ1 Ro).
At lower velocities, there is no need to re-focus the processor, and the slightly poorer image quality may be assumed to be inessential.
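A numeric sketch of the refocusing threshold (illustrative values; the carrier velocity v enters the right-hand side when Eqs (3.9), (3.31) and (3.32) are combined):

```python
def refocus_velocity_threshold(v, dx, lam1, Ro):
    """Azimuthal target velocity above which the processor must be
    re-focused: v_o > 2*v*(dx)^2/(lam1*Ro)."""
    return 2.0 * v * dx**2 / (lam1 * Ro)

vo_min = refocus_velocity_threshold(v=200.0, dx=1.5, lam1=0.03, Ro=10e3)
print(f"re-focusing needed for target velocities above {vo_min:.1f} m/s")
```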
To conclude Section 3.1, we should like to emphasise the following. The SAR
operation principles can be described by conventional methods (Chapter 2) that are
still widely used [73] or with a holographic approach representing the side-looking
synthetic aperture and the processor as an integral system for recording and recon-
structing the wave field. The analysis of the aperture synthesis can be based on
the well-elaborated principles of holography as well as on physical and geometrical
optics. The examples we have discussed support the physical clarity of the holo-
graphic approach and its value for SAR analysis. We can get a better insight into the
mechanisms of image formation by SAR without relying on Doppler frequencies of reflected signals or on correlation theory.
3.2 Front-looking holographic radar
The operation principle of a front-looking holographic radar was discussed in Chapter 2. A high resolution across the pathway line (Fig. 3.4) is provided in it
by a multibeam antenna pattern of a large receiving antenna array located, say, along
the aircraft wings [72]. The resolution along the pathway line is achieved by the
aperture synthesis. There is another radar design, in which the desired transversal
resolution is provided by a phased antenna array mounted under the fuselage and the
longitudinal resolution by a synthetic aperture [81,82].
3.2.1 The principles of hologram recording
A coherent transmitter (Fig. 3.5) generates a continuous or pulsed signal (to decouple
the transmitter and the receiver) and illuminates the desired survey zone under the air-
craft. The receiving antenna represents a linear or phased array of numerous receivers.
The amplitude and phase of the reflected signal are recorded by each array element
for the time Ts, synthesising a 2D aperture of size Xs × Y along the trajectory segment
Xs = vTs . Signals at the receiver output are saved by a memory unit, for example,
on a photofilm [81]. The film record can be regarded as a 2D plane optical hologram
equivalent to a microwave hologram of size Xs × Y (Fig. 3.4). If the radar has an
optical processor, it reconstructs the wave front recorded on the optical hologram to
produce an optical image of the earth surface within the view zone. Thus, the oper-
ation principle of this type of radar is totally holographic and it is reasonable to call
Figure 3.4 The viewing field of a holographic radar
Figure 3.5 A schematic diagram of a front-looking holographic radar
Figure 3.6 The resolution of a front-looking holographic radar along the x-axis as
a function of the angle ϕ

it a front-looking holographic radar [72,81]. Since it is an analogue of a 2D optical holographic system, it produces a 3D image. The resolution of a holographic radar can be examined by analysing its uncertainty function [72]. Taking a section of this function at the 0.7 power level gives an approximate radar resolution:
δy = 0.88λ1 H/(Y sin ϕ), (3.33)
δx = 0.45λ1 H/(Xs sin³ϕ), (3.34)
δz = 7λ1 H²/(2Xs² sin³ϕ + Y² sin³ϕ), (3.35)
where Xs is the synthetic aperture length and ϕ = 90° − α.
Figures 3.6 and 3.7 show the dependence of δx and δz on the angle ϕ, plotted
from the following initial parameters: λ1 = 1.78 cm, H = 300 m, Y = 1 m and
Xs = 30 m. One can see that a holographic radar possesses a fairly large resolving
power.
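Equations (3.33) and (3.34) can be evaluated directly with the parameters quoted above (λ1 = 1.78 cm, H = 300 m, Y = 1 m, Xs = 30 m); a short sketch reproducing representative points of Fig. 3.6:

```python
import math

LAM1, H, Y, XS = 0.0178, 300.0, 1.0, 30.0   # parameters used for Figs 3.6 and 3.7

def delta_y(phi_deg):
    """Across-track resolution, Eq. (3.33)."""
    return 0.88 * LAM1 * H / (Y * math.sin(math.radians(phi_deg)))

def delta_x(phi_deg):
    """Along-track resolution, Eq. (3.34); degrades as 1/sin^3(phi)
    at shallow angles."""
    return 0.45 * LAM1 * H / (XS * math.sin(math.radians(phi_deg)) ** 3)

print(f"dy(90) = {delta_y(90.0):.2f} m")
print(f"dx(90) = {delta_x(90.0):.3f} m")
print(f"dx(20) = {delta_x(20.0):.2f} m")
```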
It follows from Eqs (3.33), (3.34) and (3.35) that in addition to the ‘conventional’
resolution along and across the pathway line, a holographic radar has a longitudinal
Figure 3.7 The resolution of a front-looking holographic radar along the z-axis as
a function of the angle ϕ
resolution δz even when its signal is continuous. This is due to the fact that a holo-
gram contains information about the three dimensions of the object, including the
longitudinal range (Chapter 2).
3.2.2 Image reconstruction and scaling relations
Consider now the processes of wave front recording and processing in this type of
radar. As the radar is an analogue of a 2D holographic system, it would be natural
to analyse it in terms of the holographic approach developed in Section 3.1, which
treats the radar and the processing unit as an integral system. For this, we shall
examine a generalised hologram geometry [50,51]. Suppose a wave comes from a
microwave point source with the coordinates (xo , yo , zo ), and a reference wave is
generated by a point source with the coordinates (xr , yr , zr ), as shown in Fig. 3.8(a).
The wave field being recorded has the wavelength λ1 .
At the second stage, the recorded hologram is illuminated by a spherical wave
with the wavelength λ2 , coming from a point source with the coordinates (xp , yp , zp ),
as shown in Fig. 3.8(b). A paraxial approximation will then give the coordinates of
the two reconstructed images:
xi = ±(λ2 zi/λ1 zo) xo ∓ (λ2 zi/λ1 zr) xr − (zi/zp) xp,
yi = ±(λ2 zi/λ1 zo) yo ∓ (λ2 zi/λ1 zr) yr − (zi/zp) yp,     (3.36)
zi = [1/zp ± λ2/(λ1 zr) ∓ λ2/(λ1 zo)]⁻¹.
The upper arithmetic signs in the equalities of (3.36) are for the virtual image and
the lower ones are for the real image. When zi is positive, the image is virtual and is
on the left of the hologram; when zi is negative, the image is real and is located on
Figure 3.8 Generalised schemes of hologram recording (a) and reconstruction (b)
the right of the hologram. At λ1 = λ2, zr = zo and zp > 0 both images are virtual, whereas at λ1 = λ2, zr = zo and zp < 0 they are real.
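The sign behaviour just stated can be checked against the zi-expression of Eq. (3.36). The sketch below hard-codes the sign convention reconstructed above (upper signs for the virtual image), which should be treated as an assumption. With λ1 = λ2 and zr = zo the object and reference terms cancel, so the hologram has no focusing power of its own and both images appear at the distance of the reconstructing source:

```python
def image_distance(lam1, lam2, zo, zr, zp, virtual=True):
    """z-coordinate of a reconstructed image, from Eq. (3.36):
    1/zi = 1/zp + s*lam2/(lam1*zr) - s*lam2/(lam1*zo), with s = +1 for
    the virtual image and s = -1 for the real one. Positive zi means
    the image is virtual (to the left of the hologram)."""
    s = 1.0 if virtual else -1.0
    inv_zi = 1.0 / zp + s * lam2 / (lam1 * zr) - s * lam2 / (lam1 * zo)
    return 1.0 / inv_zi

# lam1 = lam2 and zr = zo: both images sit at z = zp (here zp > 0,
# so both come out virtual, as the text states).
zi_virtual = image_distance(1.0, 1.0, zo=2.0, zr=2.0, zp=5.0, virtual=True)
zi_real = image_distance(1.0, 1.0, zo=2.0, zr=2.0, zp=5.0, virtual=False)
print(zi_virtual, zi_real)
```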
One can show with Eqs (3.36) that holographic images of objects more complex
than just a point, for example, consisting of two point sources, can be magnified or
diminished relative to the respective object [50,51].
As the reconstructed wave front is 3D, the transverse (along the x- and y-axes) and
the longitudinal (along the z-axis) magnifications obtained during the reconstruction
can be analysed separately.
From Eq. (3.36), the transverse magnifications are:
for the real image (superscript ‘r’)
Mtr = ∂xi/∂xo = ∂yi/∂yo = λ2 zi/(λ1 zo), (3.37)
for the virtual image (superscript 'v')
Mtv = ∂xi/∂xo = ∂yi/∂yo = −λ2 zi/(λ1 zo), (3.38)
or
Mtr = Mtv = [1 − zo/zr ∓ (λ1 zo)/(λ2 zp)]⁻¹. (3.39)
Here the upper sign is for the real image and the lower one is for the virtual image.
The transverse magnification describes the ratio of the width and height of the image
to the appropriate parameters of the real object.
The longitudinal magnification can be found by differentiating Eq. (3.36) for zi :
for the real image
Mlr = ∂zi/∂zo = λ2 zi²/(λ1 zo²) ≅ (λ1/λ2)(Mtr)², (3.40)
for the virtual image
Mlv = ∂zi/∂zo = −λ2 zi²/(λ1 zo²) ≅ −(λ1/λ2)(Mtv)². (3.41)
The longitudinal magnification of a virtual image is always negative. This means that the image always has a relief inverse to that of the object: it is pseudoscopic.
Equations (3.37), (3.38) and (3.40), (3.41) show that the longitudinal and transverse magnifications are not identical, so the image of a 3D object is distorted. The reason is that the object's relief cannot be reproduced exactly in the image. The condition for obtaining an undistorted image can be derived from the equality of the transverse and longitudinal magnifications:
Mtr = Mlr, or λ2 zi/(λ1 zo) = λ2 zi²/(λ1 zo²).
Therefore, a geometrical similarity is possible only if the image is reconstructed
at the site the object occupied during the recording.
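The equality condition is easy to probe numerically with the real-image magnifications of Eqs (3.37) and (3.40) (a sketch; the unit wavelengths are an arbitrary choice):

```python
def transverse_mag(lam1, lam2, zo, zi):
    """Transverse magnification of the real image, Eq. (3.37)."""
    return lam2 * zi / (lam1 * zo)

def longitudinal_mag(lam1, lam2, zo, zi):
    """Longitudinal magnification of the real image, Eq. (3.40)."""
    return lam2 * zi**2 / (lam1 * zo**2)

# Image reconstructed at the object's original position (zi = zo):
# the two magnifications coincide and the image is undistorted.
assert transverse_mag(1.0, 1.0, 2.0, 2.0) == longitudinal_mag(1.0, 1.0, 2.0, 2.0)

# Image at a different distance: the two scales diverge, so the
# relief of a 3D object is distorted.
print(transverse_mag(1.0, 1.0, 2.0, 4.0), longitudinal_mag(1.0, 1.0, 2.0, 4.0))
```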
By substituting the coordinate zi = zo into Eq. (3.36), we can get an expression
for the coordinates of the reconstructing source:
1/zp = (1/zo)(1 ∓ λ2/λ1) ∓ λ2/(λ1 zr). (3.42)
Another way of obtaining an undistorted image is to change the scale of the linear
hologram size by a factor of m at the transition from the recording to the reconstruction
[50]. At m < 1, the hologram becomes smaller while at m > 1 it becomes larger.
The coordinates of an image reconstructed from a hologram diminished m times can
be found from

xi = ±m λ2 zi/(λ1 zo) xo ∓ m λ2 zi/(λ1 zr) xr − (zi/zp) xp;

yi = ±m λ2 zi/(λ1 zo) yo ∓ m λ2 zi/(λ1 zr) yr − (zi/zp) yp; (3.43)

zi = [1/zp ± m² λ2/(λ1 zr) ∓ m² λ2/(λ1 zo)]^(−1).
The transverse magnifications are:
for the real image (superscript ‘r’)

Mt^r = ∂xi/∂xo = ∂yi/∂yo = m λ2 zi/(λ1 zo) (3.44)

for the virtual image (superscript ‘v’)

Mt^v = ∂xi/∂xo = ∂yi/∂yo = −m λ2 zi/(λ1 zo). (3.45)
The longitudinal magnifications are:
for the real image

Ml^r = ∂zi/∂zo = m² (λ1/λ2)(Mt^r)², (3.46)

for the virtual image

Ml^v = ∂zi/∂zo = −m² (λ1/λ2)(Mt^v)². (3.47)
The condition for obtaining an undistorted image also follows from the equality
of transverse and longitudinal magnifications

zo = m zi. (3.48)
The substitution of Eq. (3.48) into Eq. (3.36) yields the coordinates of the
reconstructing source; in particular, for zp we have
1/zp = (m/zo)(1 ± m² λ2/λ1) ∓ m λ2/(λ1 zr). (3.49)
If the recorded hologram is magnified m times, the reconstructed image is at a
distance

zi = [1/zp ± λ2/(λ1 zr m²) ∓ λ2/(λ1 zo m²)]^(−1), (3.50)
from this hologram, and the transverse magnifications are:
for the real image

Mt^r = λ2 zi/(λ1 zo m) (3.51)

for the virtual image

Mt^v = −λ2 zi/(λ1 zo m). (3.52)
In the case of imaging a 3D object, the distortions due to the difference in
the transversal and longitudinal directions will be minimal at Mt = λ2 /λ1 . Then
Eqs (3.51) and (3.52) give Mt = Ml . The distortions of the real and virtual images
due to the shift are also eliminated at a = b = 0 (Fig. 3.9), but the images and the
zeroth-order overlap, a situation unacceptable for optical holography. In a holographic
radar capable of recording a complex hologram (Chapter 2), there is no problem with
decoupling a single image and the zeroth order.
We shall now turn to the limiting longitudinal resolution in a holographic radar
and consider the recording and reconstruction schemes (Fig. 3.9 (a) and (b), respec-
tively) in order to define longitudinal magnifications. Using a paraxial approximation,
the authors of the work [142] have shown that the minimal resolvable longitudinal
distance for a reconstructed real image is written as

(dr)min ≅ Δlr at d ≪ R1, (3.53)

where Δlr = l′r − l″r,

l′r = λ1 R1 L1 L2/(λ′ L1 L2 − λ′ R1 L2 − λ1 R1 L1), (3.54)

l″r = λ1 R1 L1 L2/(λ″ L1 L2 − λ″ R1 L2 − λ1 R1 L1), (3.55)

and λ′ and λ″ are the minimal and maximal wavelengths of the reconstructing beam.
If the distance d is small compared to R1, the longitudinal magnification is

Ml^r = dr/d at d ≪ R1.

Hence, we have

(dr)min ≥ Δlr (Ml^r)^(−1), (3.56)

where

Ml^r = λ1 λ2 (L1 L2)²/[λ2 L1 L2 − λ2 R1 L2 − λ1 R1 L1]²

and λ2 = (λ′λ″)^(1/2) is the average wavelength of the reconstructing source. Similar
expressions can be derived for the reconstruction of a virtual image.
The analysis we have made allows one to choose suitable recording and recon-
struction procedures when one uses a holographic radar. Clearly, the parameters of
these procedures are closely interrelated, so the radar and its processor should be
regarded as an integral system.
Figure 3.9 Recording (a) and reconstruction (b) of a two-point object for finding
longitudinal magnifications: 1, 2 – point objects, 3 – reference wave
source and 4 – reconstructing wave source
3.2.3 The focal depth
Consider now the focal depth of an image produced by a holographic radar. Chapter 2
discussed the problem of recording a 3D image in a 2D medium using a classical
holographic approach. The image quality then depends on the focal depth of the
image. The process of reconstruction gives the opportunity to obtain a 3D image of
a scene. Following the reconstruction, the image is again recorded in a 2D medium,
so the problem of focal depth arises once more. This parameter can be defined by
analogy with the recommendations suggested in Reference 96.
Figure 3.10 The focal depth of a microwave image: 1 – reconstructing wave source,
2 – real image of a point object and 3 – microwave hologram
The focal depth of a microwave holographic image is the longitudinal distance Δzi
along which the cross section of the beam reconstructing the virtual or real image
of a point object is smaller than the resolution element Δxi, so that it is perceived
as a point image (Fig. 3.10). A formula for the focal depth of a virtual image can be
derived from Eqs (3.40) and (3.41) for the longitudinal magnification:
Δzi = Ml^v Δzo (3.57)

or

Δzi = −(λ1/λ2)(Mt^v)² Δzo. (3.58)
With the relation for the transverse magnification (3.52), one can write

Δzi = −(λ1/λ2) Mt^v (Δzo/Δxo) Δxi. (3.59)
The last factor in Eq. (3.59) can be written as

Δzo/Δxo = 1/tg αo, (3.60)

where αo is the aperture angle in the object space. Then Eqs (3.59) and (3.60) yield

Δzi = −(λ1/λ2) Mt^v Δxi/tg αo. (3.61)
If the scale of the initial hologram is diminished m times, we have

Δzi = −m² (λ1/λ2) Mt^v Δxi/tg αo. (3.62)
Let us now define the quantity Δxi. Although the resolution along the x- and y-axes
is determined by different physical conditions, the resolution elements Δx and Δy
must have the same values. Therefore, instead of Δxi one can use δx describing the
resolution along the pathway line provided by the aperture synthesis. Then Eqs (3.62)
and (3.34) give

Δzi = −0.45 λ1² Mt^v H/(λ2 Xs tg αo sin³ϕ). (3.63)
A characteristic feature of this expression is that Δzi is inversely proportional to
the synthetic aperture length Xs.
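A direct numerical reading of Eq. (3.63) makes this inverse proportionality explicit. The helper below is an illustrative sketch (the symbol meanings follow our reading of Eq. (3.63): H is the carrier altitude, Xs the synthetic-aperture length, ϕ the view angle; the numbers are arbitrary):

```python
import math

def focal_depth(lam1, lam2, mt_v, H, Xs, alpha_o, phi):
    """Focal depth of a virtual image, Eq. (3.63); angles in radians.
    mt_v is the (negative) transverse magnification of the virtual image."""
    return -0.45 * lam1**2 * mt_v * H / (lam2 * Xs * math.tan(alpha_o) * math.sin(phi)**3)

# Doubling the synthetic-aperture length Xs halves |dz_i|:
d1 = focal_depth(0.03, 6.328e-7, -1e-4, 5e3, 100.0, 0.1, math.radians(45))
d2 = focal_depth(0.03, 6.328e-7, -1e-4, 5e3, 200.0, 0.1, math.radians(45))
# d1 / d2 == 2
```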
It is also worth discussing some practical aspects of scaling in a holographic
radar. Unlike SAR, this type of radar has no anamorphism, that is, the image planes
coincide in azimuth and range. So there is no need to use special optics to eliminate
anamorphism. However, the image proportions along the x- and y-axes do not coin-
cide because the scaling coefficient in azimuth, Px , differs from that in range, Py .
According to Reference 81, Px is defined as
Px = v/V, (3.64)
where v is the velocity of the transparency on which the hologram is recorded and V
is the velocity of the antenna array.
Along the y-axis, the scaling coefficient Py is

Py = W/(2a), (3.65)

where W is the transparency width and 2a is the double length of the antenna array.
As a result, the holographic image appears to be defocused along the x- and y-axes.
The image scale along these axes can be equalised by special optics – spherical or
cylindrical telescopes. The optics suggested in Reference 81 can change the image
scale from 4 to 25 times. Transversal and longitudinal scales of an image can be
equalised by choosing a proper coefficient m. Therefore, the final values of longitu-
dinal magnification and focal depth can be found only after one has selected all the
scaling coefficients Py , Px and m.
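Eqs (3.64) and (3.65) are trivial to evaluate; the short sketch below does so and also forms the ratio Py/Px, which we read (our interpretation, not stated explicitly in the text) as the magnification the correcting spherical or cylindrical telescope must supply to equalise the two scales. The numbers are illustrative only:

```python
def scale_coefficients(v, V, W, a):
    """Px = v/V (Eq. 3.64) and Py = W/(2a) (Eq. 3.65).
    Returns (Px, Py, Py/Px); the last value is the anamorphic correction a
    telescope would have to supply -- our reading, not from the text."""
    Px = v / V
    Py = W / (2.0 * a)
    return Px, Py, Py / Px

# Illustrative values: transparency speed 5 cm/s, carrier speed 200 m/s,
# transparency width 10 cm, antenna array length 2 m
Px, Py, ratio = scale_coefficients(v=0.05, V=200.0, W=0.1, a=2.0)
# Px = 2.5e-4, Py = 0.025, so the scales differ by a factor of 100
```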
To conclude, we summarise specific features of front-looking SAR systems.
1. It has been shown in Reference 74 that SAR systems have a serious limitation.
When the view zone approaches the pathway line, the resolution in azimuth
becomes much poorer. This makes it impossible to obtain quality images in
the front view zone. In contrast, a holographic radar provides a high resolution
directly under the aircraft.
2. Another essential advantage of a holographic radar is a high longitudinal
resolution along the z-axis even in a continuous pulse mode, providing 3D relief
images.
3. The 3D character of a holographic radar image is a basis for obtaining range
contour lines which can then be recalculated to get surface contours [81]. This
operation mode is ‘purely’ holographic. In fact, it implements the principle of
two-frequency holographic interferometry.
4. A high 3D quality of the image requires the use of a new parameter – the image
focal depth, by analogy with optical systems.
5. The view field geometry in a holographic radar is equivalent to that of airborne
infrared and optical devices, so it is possible to combine microwave images
with infrared and optical images. This kind of fusion considerably increases
the radar capability to detect and identify targets.
3.3 A tomographic approach to spotlight SAR
3.3.1 Tomographic registration of the earth area projection
Today there are two practically valuable cases when tomographic algorithms can be
used for reconstruction of radar images: inverse aperture synthesis by rotating an
object round its centre of mass (see Chapter 6) and aperture synthesis in a spotlight
or telescopic mode [100]. We shall analyse the latter case.
A microwave radar with a synthetic aperture borne by a carrier and operating in
a spotlight mode has a real antenna oriented onto an earth area to be surveyed. The
area is illuminated for a longer time than is normally done in stripe surface map-
ping [100], so this type of SAR has a greater resolving power than a conventional
side-looking SAR. Figure 3.11 shows the basic geometrical relations illustrating the
spotlight mode. For simplicity, we shall consider a 2D case. Suppose that the coordi-
nate origin is related to a certain point on the earth’s surface; the x-axis is the range and
the y-axis is the azimuth. During the carrier flight, a real antenna ray is incident onto
this area at an angle ϑ to the x-axis. The SAR scans the target with wideband pulses,
for example, linear frequency modulation (LFM) pulses of the Re S(t) type, where

S(t) = exp[j(ωo t + αt²)] for |t| ≤ τ/2, and S(t) = 0 otherwise, (3.66)
Figure 3.11 The basic geometrical relations for a spot-light SAR
where ωo is the SAR carrier frequency, 2α is the LFM slope and τ is the pulse dura-
tion. Note that the latter condition is not obligatory because the signal may have a
narrow band. It is assumed that the target is in the far zone and the microwave phase
front in the target vicinity is planar. The signal reflected by a unit area of the surface
at the point (xo, yo) is

ro(t) = A Re{g(xo, yo) S(t − 2R/c)} dx dy, (3.67)
where A is the amplitude coefficient accounting for the signal attenuation during the
propagation; 2R/c is the time delay of the signal while it covers the distance R in both
directions; g(x, y) = |g(x, y)| exp[ jϕo (x, y)] is the density function, whose physical
sense here is just the distribution of the earth surface reflectivity; and ϕo (x, y) is
the signal phase shift due to the reflection. We also assume that the function g(x, y)
remains constant within the given ranges of radiation frequencies and view angles ϑ.
Normally, when the distance to the target is much larger than the target’s size,
elements of the ellipses in Fig. 3.11 may be regarded as segments of straight lines.
Therefore, with Eqs (1.17) and (3.67) we can write down the total echo signal from
all reflectors located within a narrow band normal to the u-axis and having the width
du at u = uo :
r1(t) = A Re{pϑ(uo) S(t − 2(Ro + uo)/c)} du,
where Ro is the distance between the SAR and the target centre.
The total signal from the area being surveyed is

rϑ(t) = A Re ∫_{−L}^{L} pϑ(u) S(t − 2(Ro + u)/c) du, (3.68)
where L is the area length along the u-axis and A = const, which is valid at Ro ≫ L.
In contrast to the classical situation presented in Fig. 1.6, the linear integral used for
the projection is taken along the line normal to the microwave propagation direction.
Now we substitute Eq. (3.66) for the LFM pulse into Eq. (3.68), simultaneously
detecting the received signal with a couple of quadrature multipliers, and then we
pass the output signals through low-frequency filters. What we eventually get is the
signal

cϑ(t) = (A/2) ∫_{−L}^{L} pϑ(u) exp(j4αu²/c²) exp{−j(2/c)[ωo + 2α(t − τo)]u} du,

where

τo = 2Ro/c and −τ/2 + 2(Ro + L)/c ≤ t ≤ τ/2 + 2(Ro − L)/c. (3.69)
The latter expression is the Fourier transform of the function pϑ(u) exp(j4αu²/c²),
whose exponential factor can be easily eliminated if we find the inverse Fourier
transform of cϑ(t) by multiplying the result by exp(−j4αu²/c²) and making the
Fourier transform again. This quadratic phase factor can quite often be neglected.
Eventually, we have
cϑ(t) = (A/2) Pϑ{(2/c)[ωo + 2α(t − τo)]}, (3.70)
where the time t satisfies Eq. (3.69).
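The correspondence in Eq. (3.70) can be verified numerically. The sketch below (all radar parameters are illustrative assumptions, not values from the text) compares the demodulated output cϑ(t), computed by direct quadrature including the small quadratic phase factor, with (A/2)Pϑ(X(t)); the projection is given a phase ramp so that its spectrum lies inside the sampled band, much as a diffuse scene would in a less controlled way:

```python
import numpy as np

c = 3e8
w0 = 2 * np.pi * 10e9            # carrier, rad/s (assumed)
alpha = np.pi * 1e12             # half the LFM rate 2*alpha of Eq. (3.66)
tau = 1e-4                       # pulse duration, s
R0 = 30e3                        # range to the scene centre, m
L = 10.0                         # half-size of the scene along u, m
tau0 = 2 * R0 / c
A = 1.0

u = np.linspace(-L, L, 4001)
du = u[1] - u[0]
Xc = (2 / c) * w0                # centre of the sampled spatial-frequency band
p = np.exp(-(u / 3.0) ** 2) * np.exp(1j * Xc * u)   # toy projection p_theta(u)

def c_theta(t):
    """Demodulated output: the integral preceding Eq. (3.70)."""
    quad = np.exp(1j * 4 * alpha * u ** 2 / c ** 2)   # nearly 1 for these values
    X = (2 / c) * (w0 + 2 * alpha * (t - tau0))
    return 0.5 * A * np.sum(p * quad * np.exp(-1j * X * u)) * du

def P_theta(X):
    """1D Fourier transform of the projection (same quadrature rule)."""
    return np.sum(p * np.exp(-1j * X * u)) * du

errs = []
for t in tau0 + np.array([-0.3, 0.0, 0.4]) * tau / 2:
    X = (2 / c) * (w0 + 2 * alpha * (t - tau0))
    lhs = c_theta(t)
    rhs = 0.5 * A * P_theta(X)
    errs.append(abs(lhs - rhs) / abs(rhs))
# errs stay at the fraction-of-a-per-cent level: c(t) ~ (A/2) P(X(t)), Eq. (3.70)
```

The residual error is exactly the neglected quadratic phase term exp(j4αu²/c²), which is small here because 4αL²/c² ≪ 1.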
Therefore, if one uses LFM pulses, demodulated signals received from every
illuminated direction are part of the 1D Fourier transform of the central projection of
the earth area at the respective view angle. In other words, the processor output
signal represents a Fourier image of the projection function (within the time interval
considered) and the data are registered in Fourier space. In accordance with the
projection theorem, the function (3.70) is a cross section, taken at the angle ϑ, of the
2D Fourier transform G(X , Y ) of the desired density function g(x, y). It follows from
Eq. (3.69) that the function Pϑ (X ) is defined in the range X1 ≤ X ≤ X2 , where
X1 = (2/c)(ωo − ατ + 4αL/c) ≅ (2/c)(ωo − ατ),

X2 = (2/c)(ωo + ατ − 4αL/c) ≅ (2/c)(ωo + ατ). (3.71)
Since measurements are usually limited to a certain range of angles ϑmin ≤ ϑ ≤
ϑmax , it is clear that the counts of G(X , Y ) can be obtained at the polar grid points
within a limited circular segment (the shaded region in Fig. 1.7). The inner and outer
radius of the circle, X1 and X2 , are proportional to the smallest (ωo − ατ ) and the
largest (ωo + ατ ) frequency values of the LFM pulse.
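Eq. (3.71) shows how thin this annular data region is in practice. For assumed (illustrative) values of the carrier and swept band:

```python
import math

c = 3e8
w0 = 2 * math.pi * 10e9          # carrier, rad/s (assumed)
alpha_tau = 2 * math.pi * 50e6   # alpha*tau, half the swept band, rad/s (assumed)

X1 = (2 / c) * (w0 - alpha_tau)  # inner radius of the segment, Eq. (3.71)
X2 = (2 / c) * (w0 + alpha_tau)  # outer radius
rel_band = (X2 - X1) / ((X1 + X2) / 2)
# rel_band = 2*alpha_tau/w0 = 0.01: the samples occupy a thin annular segment
# far from the origin of the (X, Y) plane
```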
Further, one can employ classical algorithms based on interpolations and inverse
Fourier transforms to reconstruct g(x, y). Before performing the latter procedure, it
is useful to multiply the G function by the weight or ‘window’ function, to reduce
parasitic side lobes in the image.
3.3.2 Tomographic algorithms for image reconstruction
The next step in developing this algorithm is to perform a 2D inverse Fourier transform
in the polar coordinates. This can be done as follows. First, introduce the function
F(x, y) = Σ_{(u,v)∈P} G(u, v) δ(x − u, y − v),
where P is a polar grid; δ(·) is the Dirac delta function; G = S · W, where S is a
complex-valued function prescribed on P (experimental data) and W is a real-valued
weight function. We should also prescribe the real parameters a > 0, b > 0 and the
integer parameters M > 0 and N > 0 such that the rectangular grid

R = {(ma, nb) | −M/2 ≤ m < M/2, −N/2 ≤ n < N/2}

should satisfy a discrete representation of the object sought for.
The quantity M × N is equal to the number of pixels on the image, each pixel having
the size a × b. According to the sampling theorem, 1/a and 1/b are approximately
equal to the size of the P region along the x- and y-axes, while 1/(Ma) and 1/(Nb)
should equal the spacing between the grid nodes along the same axes. Thus, the P grid
consists approximately of M radial lines and N pixels along each line. Note that in
classical tomography, we have M ≅ N and the grid P includes about πM/2 radial
lines and N pixels along each line.
We can now estimate the inverse Fourier transform f of the function F across the
region R:

f(ma, nb) = ∫∫ F(x, y) E(xma + ynb) dx dy = Σ_{(u,v)∈P} G(u, v) E(uma + vnb)

with E(z) = exp(j2πz).
A straightforward calculation of exact values of f in the region R using the last
formula will require about M²N² elementary arithmetic operations. For f estimations
in this region, however, one can employ conventional methods with a smaller number
of operations, using the interpolation algorithm mentioned above and the convolution
algorithm to be discussed below. There is also a fairly simple algorithm for a rigorous
calculation of the functions f(ma, nb) with the so-called homogeneous concentrically
square polar grid, which requires about MN log2(MN) operations.
The polar grid P is described as

P = {(u(i, k) = A(k) + iB(k), v(i, k) = C + kD) at 0 ≤ i < M, 0 ≤ k < N},

where A(k) = −(C + kD) tg(ϑo/2), B(k) = −2A(k)/(M − 1), ϑo is the size of the
R region, and C and D are some selected real positive numbers.
The values of the f function are found in two steps. First, for −M/2 ≤ m < M/2
and 0 ≤ k < N we find the function

H(m, k) = E(m²aB(k)/2) Σ_{i=0}^{M−1} {G(i, k) E(i²aB(k)/2)} E(−(m − i)²aB(k)/2).
Second, for −M/2 ≤ m < M/2 and −N/2 ≤ n < N/2 we calculate the desired
function

f(ma, nb) = E(nbC) Σ_{k=0}^{N−1} {H(m, k) E(maA(k))} E(nk/N).
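As a reference point for the operation counts above, the brute-force O(M²N²) evaluation can be written directly; the sketch below (names and sample format are ours, not from the text) sums the polar samples straight onto the rectangular grid R:

```python
import numpy as np

def direct_inverse(samples, M, N, a, b):
    """Brute-force f(ma, nb) = sum over (u,v) in P of G(u,v)*E(u*m*a + v*n*b),
    with E(z) = exp(j*2*pi*z). Each of the M*N output pixels touches every
    polar sample, hence the O(M^2 N^2) cost the faster two-step scheme avoids."""
    m = np.arange(-M // 2, M // 2)
    n = np.arange(-N // 2, N // 2)
    f = np.zeros((M, N), dtype=complex)
    for (u, v), g in samples:
        f += g * np.exp(2j * np.pi * (u * m[:, None] * a + v * n[None, :] * b))
    return f

# A single unit sample produces a pure complex exponential: |f| = 1 everywhere
f = direct_inverse([((0.3, -0.7), 1.0)], M=8, N=8, a=0.1, b=0.1)
```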
Consider now a tomographic algorithm for reconstruction of SAR images, based
on the convolution back projection (CBP) method. It employs the relation between
the functions g(x, y) and G(X, Y) written in the polar coordinates [34]:

g(ρ cos Θ, ρ sin Θ) = (1/4π²) ∫_{−π/2}^{π/2} ∫_{−∞}^{∞} G(r cos ϑ, r sin ϑ)|r| exp[jrρ cos(Θ − ϑ)] dr dϑ.
With the projection theorem, the last expression can be re-written as

g(ρ cos Θ, ρ sin Θ) = (1/4π²) ∫_{−π/2}^{π/2} ∫_{−∞}^{∞} Pϑ(r)|r| exp[jrρ cos(Θ − ϑ)] dr dϑ. (3.72)
The integral around the variable r can be interpreted as an inverse Fourier transform
with the argument ρ cos(Θ − ϑ); from the convolution theorem, Eq. (3.72) reduces to

g(ρ cos Θ, ρ sin Θ) = (1/4π²) ∫_{−π/2}^{π/2} (Pϑ ∗ kr)[ρ cos(Θ − ϑ)] dϑ, (3.73)

where kr is the Fourier transform of the function |r|.
The algorithm used in computer-aided tomography (CAT) involves the calculation
of the Pϑ ∗kr convolution for each value of ϑ, followed by an approximate integration
around the variable ϑ by summing up the results obtained. Since one measures the
function Pϑ (r), the reconstruction algorithm should be based on Eq. (3.72) rather than
Eq. (3.73). It follows from Eq. (3.72) that Pϑ (r) must be known for all r values, but
it is clear from the foregoing (see Eq. (3.71)) that Pϑ (r) is known only for a limited
range of r values with the centre at r = 2ωo /c. Besides, the circular segment of the
Pϑ function (Fig. 3.12) should be shifted towards the origin. With these remarks in
mind, Eq. (3.72) can be reduced to

g(ρ cos Θ, ρ sin Θ) = (1/4π²) ∫_{ϑmin}^{ϑmax} { ∫_0^{X2−X1} Pϑ(r + X1)|r + X1| W1(r) exp[jrρ cos(ϑ − Θ)] dr } W2(ϑ) exp[jX1 ρ cos(Θ − ϑ)] dϑ, (3.74)

where W1(r) and W2(ϑ) are additional weight functions [33].
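To see the CBP idea of Eq. (3.73) in its classical CAT form, here is a self-contained numerical sketch (ramp |r| filtering of each projection, then back projection) for a single point scatterer; the SAR variant of Eq. (3.74) would add the W1, W2 windows and the X1 phase term. All sizes and positions are arbitrary test values:

```python
import numpy as np

n_u, n_th = 256, 180
us = np.linspace(-1.0, 1.0, n_u)
thetas = np.linspace(0.0, np.pi, n_th, endpoint=False)
x0, y0 = 0.3, -0.2                       # true scatterer position

# Projections p_theta(u): a narrow Gaussian at u = x0*cos(th) + y0*sin(th)
centres = x0 * np.cos(thetas)[:, None] + y0 * np.sin(thetas)[:, None]
proj = np.exp(-((us[None, :] - centres) / 0.02) ** 2)

# The P * k_r convolution, done as |r| filtering in the Fourier domain
ramp = np.abs(np.fft.fftfreq(n_u, d=us[1] - us[0]))
proj_f = np.real(np.fft.ifft(np.fft.fft(proj, axis=1) * ramp, axis=1))

# Back projection: accumulate each filtered projection along its own rays
xs = np.linspace(-1.0, 1.0, 101)
img = np.zeros((xs.size, xs.size))
for th, row in zip(thetas, proj_f):
    u_grid = xs[:, None] * np.cos(th) + xs[None, :] * np.sin(th)
    img += np.interp(u_grid.ravel(), us, row).reshape(u_grid.shape)

ix, iy = np.unravel_index(np.argmax(img), img.shape)
# The image peak lands at the scatterer position, within the pixel size
```

Because each angle is filtered and accumulated independently, the loop body is exactly the per-projection work that, as noted below, can be pipelined in real time as the data arrive.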
The interpolation and convolution algorithms have been compared quantitatively.
The comparison is based on two criteria: (1) the level of multiplicative noise (side
lobes)

RMN = 10 lg(Noml²/Niml²),
where Noml is the number of pixels outside the major lobe on a point scatterer’s
image and Niml is the number of pixels inside the major lobe; and (2) the computation
time and complexity, or the number of elementary arithmetic operations to be made.
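The RMN criterion is a one-line computation; the sketch below is an illustrative helper (the pixel counts are made-up examples):

```python
import math

def multiplicative_noise_level(n_outside_ml, n_inside_ml):
    """Side-lobe criterion R_MN = 10*lg(N_oml**2 / N_iml**2), in dB.
    n_outside_ml / n_inside_ml: pixel counts outside/inside the major lobe
    of a point scatterer's image."""
    return 10.0 * math.log10(n_outside_ml ** 2 / n_inside_ml ** 2)

# Ten times fewer pixels outside the major lobe than inside gives -20 dB
level = multiplicative_noise_level(1, 10)
# Equal counts give 0 dB
```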
The value of RMN for the convolution algorithm has been found to be about −30
to −40 dB. A similar result is obtained with the interpolation algorithm of a high
interpolation order (8–16). The computation complexity of the first algorithm is about N³
(N × N is the number of pixels on the image) and that of the second algorithm is
about kN² (k is a constant varying in proportion with the interpolation order). The
computation time with the convolution algorithm is 3–5 times longer than with the
interpolation algorithm. Its application is, however, preferred because it allows
processing primary data as they arrive (e.g. the internal integral in Eq. (3.74)) in
real time for each projection individually. The convolution algorithm can be used
for simultaneous (systolic) computations by a set of elementary processors such as
a multiplier, a summator and a saving register, which are not tightly coupled to one
another.
There have been some attempts to design ‘faster’ tomographic algorithms, using,
for example, the Hankel transform. The principle of this algorithm is as follows.
Because the functions g(ρ, Θ) and G(r, ϑ) are periodic with the period 2π, they can
be expanded into a Fourier series:

g(ρ, Θ) = Σ_{n=−∞}^{∞} gn(ρ) exp(jnΘ),

G(r, ϑ) = Σ_{m=−∞}^{∞} Gm(r) exp(jmϑ),
where

gn(ρ) = (1/2π) ∫_{−π}^{π} g(ρ, Θ) exp(−jnΘ) dΘ,

Gm(r) = (1/2π) ∫_{−π}^{π} G(r, ϑ) exp(−jmϑ) dϑ.
In addition, we can show that

gn(ρ) = 2π ∫_0^∞ r Gn(r) Jn(rρ) dr, (3.75)

where Jn(·) is the Bessel function of the first kind of order n. This relation is known
as the nth-order Hankel transform [103].
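The Hankel transform can be checked numerically without any special-function library, using the integral representation of Jn. The sketch below verifies the transform on the standard order-0 pair, the Gaussian exp(−r²/2), which is its own Hankel transform; note that Eq. (3.75) carries an extra 2π factor relative to the convention used here (conventions differ by constants):

```python
import numpy as np

def bessel_j(n, x, n_steps=2000):
    """J_n(x) from the integral representation
    J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt, midpoint rule."""
    t = (np.arange(n_steps) + 0.5) * np.pi / n_steps
    return np.mean(np.cos(n * t - x * np.sin(t)))

def hankel(n, f, rho, r_max=12.0, n_r=3000):
    """Order-n Hankel transform int_0^inf r*f(r)*J_n(r*rho) dr, midpoint rule."""
    r = (np.arange(n_r) + 0.5) * r_max / n_r
    jn = np.array([bessel_j(n, ri * rho) for ri in r])
    return np.sum(r * f(r) * jn) * (r_max / n_r)

# Known pair: the Gaussian exp(-r**2/2) transforms into exp(-rho**2/2)
val = hankel(0, lambda r: np.exp(-r ** 2 / 2), rho=1.5)
# val is close to exp(-1.125)
```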
Apparently, these relations can be applied to the reconstruction of g from the
known values of G. An important advantage of this algorithm is the use of data in a
polar format without interpolation. The Hankel transform takes the largest
computational time. The available procedures for accelerating the computation are based
on the representation of Eq. (3.75) as a convolution and the use of an asymptotic
representation of the Bessel function.
The available tomographic algorithms for image reconstruction in spotlight SAR
also include signal processing designs accounting for the wave front curvature.
These employ more complex transformations than just finding Fourier images. The
‘efficiency’ of such algorithms should be evaluated taking into account the inade-
quacy of the problem formulation. We should recall that a problem is considered to
be ill-posed if it has no solution, or the solution is ambiguous or unsteady, that is,
it does not change continuously with the input data. It is the second circumstance
that usually takes place in the case being discussed, because experimental data fit
only a small region in the transformation space. Even if we assume that the G(X , Y )
values are known over the whole polar grid, there is generally no sampling theorem
for g(ρ, ) in the polar format.
The tomographic approach allows estimation of all major parameters of the
spotlight SAR. In particular, the resolution was estimated as

δx ≅ πc/(2αT),

δy ≅ πc/[2ωo sin(|ϑmin| + |ϑmax|)],
a value coinciding with a conventional radar estimate [100]. The conditions for the
input data discretisation were defined. Besides, requirements for the synthesis were
formulated, providing that one could ignore the deviation of projections from a straight
line and their incoherence due to the wave front curvature in the target vicinity.
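Plugging assumed (illustrative) LFM parameters into the two resolution estimates above shows the asymmetry a small angular aperture can still deliver:

```python
import math

c = 3e8
w0 = 2 * math.pi * 10e9             # carrier, rad/s (assumed)
alpha = math.pi * 1e12              # half the LFM rate, rad/s^2 (assumed)
T = 1e-4                            # pulse duration, s
span = math.radians(6.0)            # |theta_min| + |theta_max|, a 6-degree view

dx = math.pi * c / (2 * alpha * T)            # range resolution
dy = math.pi * c / (2 * w0 * math.sin(span))  # azimuthal resolution
# dx = 1.5 m, dy ~ 7 cm: even a 6-degree aperture gives fine azimuthal
# resolution because the spectral band is shifted by 2*w0/c
```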
We have made the above analysis for a 2D case, neglecting the SAR’s altitude. This
circumstance does not, however, violate the generality of our treatment. A correction
for the altitude can be easily made by ‘extending’ the linear range by a factor of Ro /R,
where Ro is the slant range to the target’s centre and R is the slant range projection
onto the earth plane.
We should like to emphasise the following important difference between CAT
systems and SAR operating in a spot-light (telescopic) mode. In order to provide a
high resolution, a CAT radar must cover a much larger range of angles than a SAR,
say, 360◦ against 6◦ . This can be understood in terms of image reconstruction from
data obtained within a limited region of a 2D space–time spectrum. In this sense,
the spectral region utilised by the SAR is shifted relative to the origin by 2ωo /c
(Eq. (3.71)), while the spectral region of a CAT radar is not. We shall try to show why
a high resolution can be achieved by a small aperture in SAR.
We should first recall that resolution corresponds to the width of the major lobe
of the pulse response, normally at 3 dB. The resolving power of both CAT and SAR
systems depends only on the frequency band used in a 2D spectrum and it should
be independent of the carrier frequency ωo , which is the frequency of the band shift.
To illustrate, the range resolution for the shaded region in Fig. 1.7 is inversely propor-
tional to the frequency band width along the X -axis (or the u-axis) and the azimuthal
resolution to that along the Y -axis (or the v-axis).
If the number of point objects is large, the image quality becomes poor due
to signal interference. This effect arises because the pulse response of the system,
usually expressed as a 2D function sin x/x, contains a constant phase factor varying
with the carrier frequency ωo and the position of the point object. As is easy to
see, the quality of a reconstructed image is independent of the ωo variation provided
that the function describing the object depends on a complex-valued variable with
an occasional uncorrelated phase. This means that the phases of signals reflected by
different scattering centres are not correlated. The authors evaluated the image quality
from a formula similar to that for finding a root-mean-square error. One can suggest
that the process of SAR imaging meets this condition. As a result, the spectrum of
the ‘initial image’ occupies a wide frequency band in Fourier transform space and the
object’s reflectivity can be reconstructed from a limited shifted spectral region. This
circumstance is similar to a fact well known in holography: the image of a diffusely
scattering object can be reconstructed from any fragment of the hologram (Chapter 2).
These aspects of image quality can be treated in a different way. The band width
of space frequencies, Δv, which defines the azimuthal resolution, ‘grows’ with the
shift frequency (Fig. 1.7) as

Δv = (|ϑmin| + |ϑmax|)(2ωo/c).

For a CAT radar, ωo = 0 and Δv is

(|ϑmin| + |ϑmax|) Δu,

where Δu ≅ 4αT/c ≪ 2ωo/c. Therefore, in order to obtain a high azimuthal
resolution, one must have information about the whole range of view angles, 360°.
One can eventually say that the principal difference between the CAT and SAR
systems is that the latter is coherent and can process complex signals.
To conclude, the tomographic principle of synthetic aperture operation does not
rely on the analysis of Doppler frequencies of reflected signals. We shall turn to this
factor again in Chapters 5 and 6 when we describe imaging of a rotating object by
an inverse synthetic aperture. It will be shown that the holographic and tomographic
approaches do not need an analysis of Doppler frequencies.
Chapter 4
Imaging radars and partially coherent targets
Remote sensing of the earth surface in the microwave frequency range is a rapidly
developing field of fundamental and applied radio electronics [31,77]. It has already
become a powerful method in many earth sciences such as geophysics, oceanology,
meteorology, resources survey, etc. Among microwave sensors, side-looking
synthetic aperture radars (SAR) stand out as capable of providing high-resolution
images of a background area at any time, irrespective of weather conditions. Extensive
information has been obtained by airborne radars and radars carried by satellites and
spacecraft: SEASAT-A and SIR (USA), RADARSAT (Canada), Almaz-1 (Russia),
ERS and ENVISAT (European Space Agency), Okean (Russia, Ukraine). A challenge
to the radar scientist is the analysis of synthetic aperture imaging of extended targets.
The various tasks of remote SAR sensing of the earth include the study of the
ocean surface, sea currents, shelf zones, ice fields, and many other problems [62].
Objects to be imaged are wind slicks, oil spills, internal waves, current boundaries,
etc. Some of these targets are characterised by motions with unknown parameters,
so they are considered to be partially coherent. This chapter focuses on theoretical
problems of SAR imaging of such targets while their practical aspects are discussed
in Chapter 9.
In contrast to a conventional radar which measures instantaneous amplitudes of
a signal reflected by a target, the SAR registers the signal phase and amplitude for a
finite synthesis time Ts . The conversion of these data to a radar image requires the
knowledge of the time variation of these characteristics, which can be found if one
knows a priori the time variation of the reflected signal. When the view zone includes
only stationary targets, the prescribed data have the form of the time dependence of
distances between the SAR and the objects being viewed. If the time variation of
the signal phase is unknown, the coherence is violated. This may happen not only
in SAR viewing of the sea surface but also because of sporadic fluctuations of the
carrier trajectory (see Chapter 7). So partial coherence may be associated with the
viewing conditions or with the target itself. The analytical method discussed below
preserves its generality in this case.
4.1 Imaging of extended targets
Viewing of background surfaces by SAR involves two kinds of difficulty: one is
associated with evaluation and improvement of image quality and the other with image
interpretation [59]. The first difficulty is due to the fact that one has to control the
SAR performance (i.e. the operation of transmitters/receivers and imaging devices), to
evaluate the capabilities of test systems, and to compare the data from the synthetic
aperture and other sensors. The other difficulty arises from the diversity of image
applications. The point is that one resolution element contains a large number of
elementary scatterers reflecting coherent signals which interfere with one another.
This produces speckle noise on the radar image. The situation becomes especially
complicated, for example, in sea surface viewing when elementary scatterers move,
making the image intensity a random quantity. For this reason, one has to employ
statistical methods to describe the imaging of extended proper targets. It is clear that
both problems are closely interrelated. For instance, the statistical characteristics of
speckle noise can be used to obtain information about the surface and to evaluate the
image quality.
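The speckle mechanism described above is easy to reproduce numerically: summing many unit scatterers with independent uniform phases in each resolution cell gives a Rayleigh amplitude and hence an exponential intensity, whose tell-tale signature is a contrast (standard deviation over mean) close to one. The sketch below uses arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_scat = 20000, 50
# Each resolution cell: coherent sum of n_scat unit scatterers with
# independent phases uniform on [0, 2*pi)
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_cells, n_scat))
field = np.exp(1j * phases).sum(axis=1)     # complex return of each cell
intensity = np.abs(field) ** 2              # squared signal modulus

# For an exponential law the standard deviation equals the mean, so the
# speckle contrast is ~1; the mean intensity equals the scatterer count
contrast = intensity.std() / intensity.mean()
```

This is exactly why single-look intensity is a poor estimator of the specific cross-section and why the radiometric resolution discussed below must be treated statistically.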
Image quality is affected by numerous independent parameters of target imaging.
Therefore, image evaluation requires the use of quantitative factors which can objec-
tively describe the image characteristics and relate this information to the parameters
of the viewing system. The quality of any image, including a radar one, can be
described by four parameters: geometrical accuracy, spatial resolution, radiometric
precision and radiometric resolution.
Geometric accuracy defines the longitudinal and latitudinal precision of the
image as an integral entity, which is particularly important for images of poorly recog-
nisable surface areas. It also determines the mapping accuracy of different points on
the image relative to one another.
Since a SAR is a coherent system, its ability to resolve neighbouring point scat-
terers depends on various factors, such as the relative phases of the scatterers, their
relative effective cross sections, the system noise, etc. So it is reasonable to describe
spatial resolution either with the half width of the major impulse response peak
(usually, at 3 dB) or with the envelope of this response. The latter way enables one to
find the extent to which the image is affected by the side lobes of the impulse response,
which are comparable with the major peaks of responses from neighbouring, less
intensive scatterers that can be erroneously taken for images of independent point tar-
gets. Spatial resolution can be evaluated by a photometric study of the image of a bright
point object, say, of a corner reflector, or by determining the amplitude image profile of
an object with a sharp reflectivity variation, followed by the calculation of the impulse
response from this profile gradient. The second approach is more accurate because
the resolution evaluation is less affected by the limited dynamic range of the aperture.
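The second, edge-based approach can be sketched numerically: differentiate the image profile across a sharp reflectivity step to estimate the impulse response, then take its width at the −3 dB (in power) level. This is a minimal illustration with a synthetic Gaussian-blurred edge; the function name, sampling and blur scale are assumptions of the example, not quantities from the text.

```python
import numpy as np
from math import erf

def resolution_from_edge(profile, dx):
    """Estimate the impulse-response width from an edge profile: the
    derivative of the edge response approximates the impulse response."""
    h = np.abs(np.gradient(profile, dx))
    h = h / h.max()
    # count samples where the response stays above -3 dB in power (1/sqrt 2)
    return np.count_nonzero(h >= 1.0 / np.sqrt(2.0)) * dx

# Synthetic edge blurred by a Gaussian impulse response of scale sigma = 5 m
dx, sigma = 0.5, 5.0
x = np.arange(-100.0, 100.0, dx)
edge = np.array([0.5 * (1.0 + erf(xi / (sigma * np.sqrt(2.0)))) for xi in x])
width = resolution_from_edge(edge, dx)   # ~ 2*sigma*sqrt(ln 2) ~ 8.3 m
```

For the Gaussian blur assumed here, the analytic −3 dB width is 2σ√(ln 2), so the estimate can be checked against a known answer before applying the method to real imagery.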
Radiometric precision indicates to what extent the various brightness levels
of the image reproduce the reflectivity variation of the radar target at particular
wavelengths, polarisations and radiation incidences. To measure the radiometric pre-
cision, one can use calibrated extended targets with different values of the specific
cross-section (SCS).

zino: “chap04” — 2005/11/7 — 15:37 — page 80 — #2


Imaging radars and partially coherent targets 81

Radiometric (contrast) resolution characterises the ability to discern the SCS values of neighbouring elements and is largely determined by random signal fluctuations registered on the image. Such fluctuations may arise along the signal pathway
from aperture or speckle noise. The radiometric resolution for homogeneous areas can
be calculated from the density distribution function of the image intensity probability.
The reflectivity distribution across the area of interest is often assumed to be normal. Then the amplitude distribution of the reflected signal is described by the Rayleigh law, while the phase is taken to be uniform in the range from 0 to 2π. The radar
image intensity, which is equal to the squared signal modulus, has an exponential
distribution:
p(χ) = (1 + S)^{−1} exp[−χ/(1 + S)], (4.1)
where χ is the intensity normalised to unit noise power and S is the signal-to-
noise ratio on the image. The average intensity and the distribution dispersion are,
respectively, described as
χm = 1 + S, (4.2)

Dχ = (1 + S)². (4.3)
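A quick numerical check of Eqs (4.1)–(4.3): the intensity of a complex circular-Gaussian signal of power S embedded in unit-power noise is exponentially distributed with mean 1 + S and dispersion (1 + S)². The sample size and seed below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 4.0            # signal-to-noise ratio on the image
n = 1_000_000

# Complex circular-Gaussian signal (power S) plus unit-power noise: the
# total field is circular Gaussian with power 1 + S, so the intensity
# chi = |field|^2 is exponential, as in Eq. (4.1).
field = (rng.normal(scale=np.sqrt(S / 2), size=n)
         + 1j * rng.normal(scale=np.sqrt(S / 2), size=n)
         + rng.normal(scale=np.sqrt(0.5), size=n)
         + 1j * rng.normal(scale=np.sqrt(0.5), size=n))
chi = np.abs(field) ** 2

mean_chi = chi.mean()    # ~ 1 + S       (Eq. 4.2)
var_chi = chi.var()      # ~ (1 + S)^2   (Eq. 4.3)
```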
SCS measurements involve a large ambiguity. From Eqs (4.2) and (4.3) it follows that the standard deviation of the intensity is equal to its mean value. To estimate the SCS, it is necessary to find the mean noise intensity and subtract it from the image intensity; then χm = S, while the estimate dispersion remains unchanged.
If the radiometric resolution γ is found to be on the level of one standard deviation
(the ratio of the mean value plus one standard deviation to the mean value), then for
the distribution described by Eq. (4.1) at zero noise we have
γ = 10 lg(2 + 1/S). (4.4)
Obviously, γ cannot be smaller than 3 dB even at S → ∞. The simplest way to
improve radiometric resolution is to average the viewing results on several neigh-
bouring resolution elements of an extended target (incoherent signal integration).
Then we shall have
γ = 10 lg[1 + (1 + S)/(N^{1/2}S)], (4.5)
where N is the number of uncorrelated integrated versions of the image.
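Equations (4.4) and (4.5) are easy to evaluate directly; at N = 1, Eq. (4.5) reduces to Eq. (4.4), and even a noise-free single-look image is limited to about 3 dB. The particular S and N values below are illustrative.

```python
import math

def gamma_db(S, N=1):
    """Radiometric resolution (dB), Eq. (4.5); N = 1 recovers Eq. (4.4)."""
    return 10.0 * math.log10(1.0 + (1.0 + S) / (math.sqrt(N) * S))

g_inf = gamma_db(1e9)         # single look, S -> infinity: ~3 dB floor
g_one = gamma_db(10.0)        # single look at S = 10
g_four = gamma_db(10.0, N=4)  # four uncorrelated looks at S = 10
```

Increasing N improves (lowers) γ, which is exactly the spatial-vs-radiometric trade discussed next: the extra looks are bought with resolution.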
Incoherent signal integration by SAR can be provided only at the expense of spatial
resolution because this is normally done by multi-ray processing or by averaging the
intensities of elements of a highly resolved image. For example, the SEASAT-A
aperture used a four-ray processing which, nevertheless, could not totally remove the
speckle noise [99].
Thus, there is a certain contradiction between spatial and radiometric resolu-
tions [61]. A possible compromise is to choose a proper criterion for image quality.
However, this is not very easy to do for two reasons. First, such a criterion must
account for specific features of the object being viewed, which may happen to be
diverse. Second, one must adapt this criterion to the subsequent processing of the
82 Radar imaging and holography

image – visual, automated, etc. Moore [99], for example, suggested using visual
expertise of the image as a criterion for evaluation of its quality. For a quantitative
analysis he used the spatial grey-level (SGL) volume V = Va VR Vg (N ), where Va and
VR are the azimuth and range resolutions, respectively, and Vg (N ) is the grey-level
resolution defined by the number of uncorrelated integrated realisations, N .
Before proceeding with the discussion of criteria that can optimise the coherent-
to-incoherent signal ratio in the synthetic aperture, we think it is necessary to consider
briefly the available methods to describe SAR mapping of a typical fluctuating
extended target – a rough sea surface.

4.2 Mapping of rough sea surface

At present we have much information on rough sea surface viewing by SAR sys-
tems [36,62], both airborne and carried by spacecraft. Most of the publications
describe wave movements and their effect on radar image quality. However, this
issue still remains controversial and is a subject of much debate [56].
When the sea surface is viewed by an airborne or space SAR, the probing radiation
incidence varies from 20◦ to 70◦ . Bragg scattering by small-scale and capillary waves
has the greatest effect on the reflection of electromagnetic radiation. The effect of
large-scale (gravitational) waves on the radar image reveals itself in the modulation
of scattering by small-scale waves. These phenomena are usually described by a
2D model which considers the sea surface as a superposition of Bragg scatterers –
capillary and longer gravitational waves. They can also be described by a facet model,
in which facets represent small-scale scatterers with superimposed capillary waves;
the scatterers move with orbital velocities defined by large-scale waves [59]. The
imaging of large-scale waves is affected by the following factors:

• the energy modulation of capillary waves due to hydrodynamic interaction between capillary and gravitational waves;
• the modulation of the facet inclination, which changes the effective incidence of the probing signal with respect to the normal facet surface, which, in turn, changes the Bragg scattering coefficient;
• the variations in the facet parameters (the position and the normal direction) and the Bragg scattering coefficient due to the facet movement during the synthesis.

The first two processes are important for sea viewing by any radar, whereas the third
process affects only SAR imaging. The effect of moving waves on the image quality
can be found analytically if one bears in mind that the synthesis time (0.1–3 s) is much
shorter than the period of a large-scale wave (8–16 s). Then the functions that describe
the time variation of the facet parameters and scattering coefficients can be expanded
into a Taylor series. The major expansion terms are related to the radial components
(along the slant range) of the orbital velocity and acceleration of the facets. These
components are responsible for two effects: the velocity bunching and the image
defocusing along the azimuth. The velocity bunching is associated with the azimuthal
shift of each facet image because of the radial velocity effect, which represents a
periodic rarefaction and thickening of virtual positions of elementary scatterers along
the large-scale wave pattern. The bunching degree varies with the number of images
of individual facets per unit azimuthal length, which is proportional to

ξ = (R/v)(dur/dx), (4.6)

where R is the slant range, v is the SAR carrier velocity, ur is the radial velocity component and x is the azimuthal coordinate on the sea surface. For small values of |ξ|, this effect is linear and is characterised by a linear transfer function; for large |ξ| values (>π/2), it becomes nonlinear, leading to image distortions. It is greatest for waves running along the azimuthal coordinate but practically vanishes for radial waves.
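A small numerical illustration of Eq. (4.6); the parameter symbol ξ and all numerical values below are assumptions of the example. For an azimuth-travelling swell, the orbital-velocity gradient easily pushes |ξ| past the π/2 linearity limit at spaceborne slant ranges.

```python
import math

def bunching_parameter(R, v, du_r_dx):
    """Velocity-bunching parameter of Eq. (4.6): xi = (R/v) du_r/dx."""
    return (R / v) * du_r_dx

# Assumed spaceborne geometry and an azimuth-travelling swell of
# orbital-velocity amplitude u0 and wavelength L_w (illustrative values):
R, v = 850e3, 7.5e3                  # slant range (m), carrier velocity (m/s)
u0, L_w = 1.0, 200.0                 # m/s, m
du_r_dx = 2.0 * math.pi * u0 / L_w   # peak orbital-velocity gradient
xi = bunching_parameter(R, v, du_r_dx)
linear = abs(xi) < math.pi / 2.0     # linear-mapping condition from the text
```

For a radial (range-travelling) wave the azimuthal gradient du_r/dx vanishes, so ξ = 0 and the bunching effect disappears, matching the statement above.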
Image defocusing of large-scale waves is interpreted as being either due to the
radial acceleration of the facets or due to the change in the relative aperture velocity
because of the effect of the azimuthal phase velocity of sea waves [61]. Investigations
have shown that the latter explanation is better substantiated. The major contribution
to the image is made by the amplitude modulation of the reflected signal due to
the surface roughness and facet inclination, whereas the velocity bunching plays
a minor role. As for the image defocusing, it can be removed by correcting the
signal processing conditions, for example, by an additional adjustment of the optical
processor or by refining the base function during digital image reconstruction.
Generally, the sea wave behaviour appears to be quite complex. For this reason,
available models of a probing signal reflected by the sea surface depend on the
particular problem to be solved. Models accounting for the orbital motion of liquid
droplets are too sophisticated to be extended to a large class of objects defined as
partially coherent. Besides, they do not readily apply to the analysis of the influence
of aperture parameters on image quality, because imaging is then determined only
by the sea wave characteristics and viewing geometry. Probably, the only factor that
affects the sea imaging by SAR and related to the choice of radar parameters is the
image defocusing. But even here, we deal with the mapping of sea waves, which
is a particular problem that does not represent the whole class of partially coherent
targets.
On the other hand, of academic interest and practical importance are the problems
of background dynamics, various anomalies in the extended target reflectivity (for
the sea, these are slicks, spills of surface-active substances, etc.), as well as the proper
choice of the SAR design for viewing this class of targets. The analysis shows that
the results obtained can be extended to a large number of partially coherent extended
targets.
In principle, the basic characteristics of extended target images, including images
of sea surface, could be found by solving the problem of electromagnetic wave scat-
tering by a moving plane. The methods of dealing with these problems are well known
but they involve cumbersome calculations.
Another way of describing a radar signal reflected by an extended target is to
introduce the autocorrelation function for the object being viewed, as is done in optical
systems theory [29]. In this approach, a complex signal reflected by the sea surface
can be written as U (x, t) = u(x, t)ur (x, t), where u(x, t) is a co-factor accounting for
the effect of large-scale sea waves and ur (x, t) is a random complex component to
describe the signal reflected by a capillary wave. The autocorrelation function of this
signal is
⟨U(x1, t1)U*(x2, t2)⟩ = ⟨u(x1, t1)u*(x2, t2)⟩⟨ur(x1, t1)ur*(x2, t2)⟩,

where the asterisk denotes the complex conjugate and ⟨·⟩ denotes the ensemble average.
The complex component ur (x, t) can be written as
ur (x, t) = f (x)α(t | x) (4.7)
where f (x) is a complex random amplitude of the reflected signal, defined by the sur-
face roughness, and α(t | x) is a complex reflectivity describing the time fluctuations
of the reflected signal with the x-coordinate.
Normally, f (x) describes the Gaussian random process with a zero average, which
happens in the case of Bragg scattering of an electromagnetic wave on a rough surface.
The spatial correlation function of this process can be approximated by the Dirac delta-
function when the spacing between the features is sufficiently small, a condition often
fulfilled in practice:
⟨f(x1)f*(x2)⟩ = pδ(x1 − x2), (4.8)
where p is a factor proportional to the object’s SCS and is defined by the governing
radar equation.
The autocorrelation function of the time fluctuations of the surface is, in turn,
equal to
⟨α(t1 | x1)α*(t2 | x2)⟩ = Γ[(t1 − t2) | x1, x2]. (4.9)
It has been termed partial or autocorrelation coherence [103]. The possibility to
employ this formalism is a fundamental feature of partially coherent objects which
can then be treated as a special class of targets.
Thus, the autocorrelation function of the signal reflected by the sea surface can
be written as
⟨U(x1, t1)U*(x2, t2)⟩ = p u(x1, t1)u*(x2, t2)δ(x1 − x2) Γ[(t1 − t2) | x1, x2]. (4.10)
Taking the time fluctuations of the signal to be statistically stationary, we can approximate the autocorrelation function with the expression

Γ[(t1 − t2) | x1, x2] = exp[−π(t1 − t2)²/τc²], (4.11)
where τc is the time interval of the correlation.
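Complex reflectivity samples obeying the Gaussian coherence law of Eq. (4.11) can be synthesised by smoothing complex white noise with a Gaussian filter whose normalised self-correlation equals Γ. The filter derivation and the sample values of τc are assumptions of this sketch, not part of the text.

```python
import numpy as np

def partially_coherent(n, dt, tau_c, rng):
    """Complex reflectivity samples whose coherence follows Eq. (4.11),
    Gamma(t) = exp(-pi t^2 / tau_c^2).  Smoothing white noise with the
    filter g(t) = exp(-2 pi t^2 / tau_c^2) gives exactly this law, because
    the normalised self-correlation of g is exp(-pi tau^2 / tau_c^2)."""
    t = np.arange(-4 * tau_c, 4 * tau_c, dt)
    g = np.exp(-2.0 * np.pi * t**2 / tau_c**2)
    w = rng.normal(size=n) + 1j * rng.normal(size=n)
    a = np.convolve(w, g, mode="same")
    return a / np.sqrt(np.mean(np.abs(a)**2))    # unit mean power

rng = np.random.default_rng(1)
tau_c, dt = 0.01, 0.001              # tau_c ~ 10 ms, the scale quoted later
a = partially_coherent(200_000, dt, tau_c, rng)

def coherence(a, lag):
    """Empirical coherence at a given sample lag."""
    return np.abs(np.mean(a[:-lag] * np.conj(a[lag:])))

g_half = coherence(a, int(0.5 * tau_c / dt))   # theory: exp(-pi/4) ~ 0.46
g_full = coherence(a, int(tau_c / dt))         # theory: exp(-pi)   ~ 0.04
```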
The radar signal model discussed above agrees well with experimental data [112].
Equation (4.11) has a general form allowing the solution of a large range of problems
involved in the analysis of extended target imaging by SAR systems. We shall fur-
ther omit partially coherent background modulation by large-scale waves, assuming
u(x, t) = 1 in order to be able to extend the results to a sufficiently large class of
objects.
The model we have described can provide the basic statistical characteristics of
partially coherent surface images, but we should first outline the imaging model
itself.

4.3 A mathematical model of imaging of partially coherent extended targets

Suppose a SAR is borne by a carrier moving uniformly along a straight line with a
velocity v. The carrier position is described by the coordinate y = vt and inclined
range R, while the position of an arbitrary element of the viewed surface is described
by the x-coordinate (Fig. 4.1). The imaging process is subdivided into two stages – the
registration of the reflected signal (hologram recording) and the image reconstruction.
This approach allows one to represent a general block diagram of the synthetic aperture
(Fig. 4.2) with the complex amplitude of the reconstructed image written as a sum of
convolutions:

s = f ∗ w ∗ h + n ∗ h, (4.12)

Figure 4.1 The geometrical relations in a SAR

(Block diagram: surface model → SAR receiver, with additive noise → SAR processor → radar image)

Figure 4.2 A generalised block diagram of a SAR

responses of the radar and the processor, respectively; n is the complex amplitude of
additive noise; and ∗ denotes convolution.
The optimal quality of images of point objects is achieved by matching the impulse
responses of the radar and the aperture processor:
h(y) = w∗ (y). (4.13)
This condition cannot, however, provide an optimal image of an extended proper
object [99], since it is impossible to integrate an incoherent signal and to reduce the
speckle noise on the image. On the other hand, the fact that the image intensity g(u) =
s(u)s∗ (u) is usually registered at the aperture processor output allows introducing the
concept of a partially coherent processor in quadratic filtration theory [58]. One can then
account simultaneously for the effects of coherent and incoherent signal integration
by the aperture and eventually obtain the major statistical characteristics of images
of partially coherent extended targets. This type of processor will have the following
impulse response:
Q(y1 , y2 ) = γ (y1 − y2 )h(y1 )h∗ (y2 ), (4.14)
where γ (y1 −y2 ) is a factor characterising the degree of incoherent signal integration.
Then Eq. (4.13) will be valid for any class of targets.
To avoid cumbersome calculations, we shall introduce Gaussian approximations
of the functions
w(y) = exp(−ay²/2) exp(jby²/2), (4.15)

h(y) = exp(−Ly²/2) exp(−jby²/2), (4.16)

γ(y1 − y2) = exp[−A(y1 − y2)²/2], (4.17)

where ∫ exp(−ay²/2) dy = (2π/a)^{1/2} is the width of the real antenna pattern projection onto the surface, which limits the synthesis range; b = 2π/(λR), where λ is the radar wavelength; and ∫ exp(−Ly²/2) dy = (2π/L)^{1/2} = Ls is the synthesised aperture length. The A/a ratio describes the number of independent integrations of an incoherent signal, N = (1 + A/a)^{1/2}.
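A minimal numerical sketch of these Gaussian-model parameters; the footprint width and the value of A are assumed for the example, while the relations between b, L, Ls and N follow Eqs (4.15)–(4.17).

```python
import math

lam, R = 0.03, 15e3                 # wavelength (m) and slant range (m)
b = 2.0 * math.pi / (lam * R)       # quadratic-phase rate of w(y), h(y)
Ls_opt = math.sqrt(lam * R / 2.0)   # synthesis-length scale quoted in the text
L = 2.0 * math.pi / Ls_opt**2       # from (2*pi/L)^(1/2) = Ls

a = 2.0 * math.pi / 300.0**2        # assumed 300 m antenna footprint
A = 8.0 * a                         # assumed incoherent-integration parameter
N = math.sqrt(1.0 + A / a)          # independent looks, N = (1 + A/a)^(1/2)
```

With these (assumed) numbers, Ls_opt = 15 m and N = 3 looks; raising A relative to a buys more looks at the cost of azimuth resolution, which is the trade-off analysed below.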
Within this SAR model, the image intensity is
g(u) = ∫∫ Q(u − y1, u − y2)[sr(y1) + n(y1)][sr*(y2) + n*(y2)] dy1 dy2, (4.18)
where sr (y) describes the complex hologram function and n(y) is a function describing
the intrinsic noise of the aperture.
The model of a synthetic aperture with a partially coherent processor can be used to
analyse statistical characteristics of images of partially coherent targets and to reveal
the effects of coherent and incoherent signal integration on the image parameters.
4.4 Statistical characteristics of partially coherent target images

Let us turn back to the synthetic aperture shown in Fig. 4.1. In one of the range
channels, the reflected signal can be represented as a random complex field. For
many real surfaces, the function f (y) in the centimetre wavelength range is a Gaussian
random process with the zero average and the correlation function in the form of the
Dirac delta-function obeying Eq. (4.8).
The time relations for the surface changes can be described by the autocorrelation
function of Eq. (4.9) and that of the reflected signal, assuming u(y1 , t1 ) ≡ s0 (y1 , t1 ):

⟨s0(y1, t1)s0*(y2, t2)⟩ = pδ(y1 − y2) Γ[(t1 − t2) | y1, y2], (4.19)

where the function Γ[(t1 − t2) | y1, y2] is defined by Eq. (4.11).


The process of imaging can be analysed in terms of the holographic approach, as
applied to SAR. At the first stage, the hologram is recorded: sh = s0 ∗ w + n, where
w is the impulse response of the aperture receiver, n is additive noise, and ∗ denotes
convolution. At the second stage, the image is reconstructed with the intensity

g = ss∗ = (sh ∗ h) ∗ (sh ∗ h)∗ , (4.20)

where s is the complex amplitude of the image and h is the impulse response of the
aperture processor.
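The two-stage process of Eq. (4.20) can be sketched for a single stationary point scatterer: record the hologram sh = s0 ∗ w, then convolve with the matched response h = w* of Eq. (4.13); the reconstructed intensity peaks at the target position. All numerical values here are assumptions of the illustration.

```python
import numpy as np

lam, R = 0.03, 10e3
y = np.arange(-200.0, 200.25, 0.5)             # along-track coordinate, m
b = 2.0 * np.pi / (lam * R)                    # chirp rate 2*pi/(lam*R)
Ls = 50.0                                      # synthesised aperture length
w = np.exp(-np.pi * (y / Ls)**2) * np.exp(1j * b * y**2 / 2.0)
h = np.conj(w)                                 # matched processing, Eq. (4.13)

y0 = 40.0                                      # target position
s0 = np.zeros_like(y, dtype=complex)
s0[np.argmin(np.abs(y - y0))] = 1.0            # delta-like reflectivity

sh = np.convolve(s0, w, mode="same")           # hologram recording
g = np.abs(np.convolve(sh, h, mode="same"))**2 # reconstructed intensity
y_peak = y[np.argmax(g)]                       # -> close to y0
```

At the peak the quadratic phases of the hologram and processor cancel, which is the focusing mechanism; away from it the chirped phases average the response down.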
To smooth out the image fluctuations, one usually uses incoherently integrated
signals. We can now evaluate the effects of two smoothing procedures: multi-ray
processing and averaging of neighbouring resolution elements on the image [2].
Additionally, we shall consider the potentiality of incoherent signal integration on the
hologram. In the first case, when the image is reconstructed by an optical processor,
its intensity is [60]

g1 (u) = g(u, τ )Da (τ ) dτ . (4.21)

Here Da (τ ) describes the light distribution across the aperture stop located in front of
the secondary film which records the image, τ is the current exposure of the secondary
film, and u is the reconstructed image coordinate.
In the second case, the image intensity is

g2(u) = ∫ g(u′)Ga(u − u′) du′, (4.22)

where Ga(u − u′) is the weighting function of the averaging.


In the third case, the image intensity is given by Eq. (4.20), in which the hologram
function is

sha(y′) = ∫ sh(y″)Ca(y′ − y″) dy″, (4.23)

where Ca(y′ − y″) is the weighting function of the averaging and y′ = vt is the spatial coordinate on the hologram.
To simplify the calculations, let us approximate the above functions with the
expressions

Da(τ) = exp[−D(τv)²/2],
Ga(u) = exp[−Gu²/2], (4.24)
Ca(y) = exp[−Cy²/2],

where ∫ exp(−Dv²τ²/2) d(vτ) = (2π/D)^{1/2} = De is the equivalent width of the aperture stop, (2π/G)^{1/2} = Ge and (2π/C)^{1/2} = Ce are the equivalent widths of the respective weighting functions; Eqs (4.15) and (4.16) have been used as approximations of the functions w(y) and h(y).
functions w(y) and h(y).
We can now find the following parameters characterising the statistical properties of the image: the average intensity ⟨ga⟩, the intensity dispersion σa², the smoothing degree ⟨ga⟩²/σa², the autocorrelation range ⟨uc⟩ and the signal-to-noise ratio Wg = ⟨ga⟩/⟨gan⟩, where ⟨gan⟩ is the average noise on the image.

4.4.1 Statistical image characteristics for zero incoherent signal integration
The parameters of interest can be found using the power spectrum of the image
intensity [60]:

Sg(ω) = (2π)^{−1} ∫ |H(η, ω − η)|² Sh(η)Sh(ω − η) dη, (4.25)

where H (η, ω) is a 2D transfer function of the aperture processor and Sh (ω) is the
hologram power spectrum. In turn, H (η, ω) = H (η)H ∗ (ω), where H = F{h} is the
Fourier image of the function h(y). With Eq. (4.16), we get
 
H(η, ω) = (L² + b²)^{−1/2} exp[−(η² + ω²)L/(2(L² + b²))] exp[j(ω² − η²)b/(2(L² + b²))]. (4.26)

The function Sh(ω) represents the Fourier transform of the hologram spatial correlation function, Rh(y′), which can be described, for low intrinsic aperture noise, as

Rh(y′) = ⟨sh(y1)sh*(y2)⟩ = p(π/a)^{1/2} exp[−(y′)²(a² + b² + 2aB)/(4a)] (4.27)

with
y′ = y1 − y2, B = 2π/(vτc)².
Hence, we have

Sh(ω) = p[2π/(a² + b² + 2aB)]^{1/2} exp[−aω²/(a² + b² + 2aB)]. (4.28)

By substituting Eqs (4.26) and (4.28) into Eq. (4.25) and using the expression
covg (u) = F{Sg (ω)} for the background, we obtain

⟨ga⟩ = ∫ Sg(ω) dω = 2^{1/2}πp[aL(a + L + 2B) + b²(a + L)]^{−1/2},
σg² = covg(0) = 2(πp)²[aL(a + L + 2B) + b²(a + L)]^{−1},
⟨uc⟩ = ∫ covg(u)/covg(0) du = π[2a/(a² + b² + 2aB) + 2L/(L² + b²)]^{1/2},
⟨ga⟩²/σg² = 1. (4.29)

Assuming that the spectrum of the intrinsic aperture noise recorded on the hologram
is uniform and has spectral density Shn (ω) = n, we find the respective parameters of
the image noise:

⟨gan⟩ = n(π/L)^{1/2},
σn² = n²(π/L),
⟨ucn⟩ = π[2L/(L² + b²)]^{1/2},
⟨gan⟩²/σn² = 1. (4.30)

The signal-to-noise ratio Wg = ⟨ga⟩/⟨gan⟩ can be reduced to Wg = W0Q, where W0 is a classical quantity and

Q = [(a/b² + 1/L)(a + L) + 2aB/b²]^{−1/2} (4.31)

is a factor largely determined by the real antenna pattern.


A quantitative analysis of Eqs (4.29) and (4.30) shows that the statistical parameters of the image are practically independent of the surface fluctuations at the typical values of λ ≈ 3 cm, R ≈ 10–20 km, θ ≈ 0.02 and τc ≈ 0.01 s, and that the correlation ranges ⟨uc⟩ and ⟨ucn⟩ differ only slightly, with a maximum at Ls = (λR/2)^{1/2} (b = L). The latter circumstance can be attributed to the fact that the function h(y) essentially represents a linearly frequency-modulated signal, whose spectral width is proportional to its range at Ls > (λR/2)^{1/2} and inversely proportional at Ls < (λR/2)^{1/2}. So the spectral width of the image fluctuations is minimal at Ls = (λR/2)^{1/2}.
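The maximum of the correlation range can be checked numerically from the ⟨uc⟩ expression in Eq. (4.29). With the Gaussian-width conventions used in this sketch the peak falls at L = b, i.e. at Ls of order (λR)^{1/2}, the same scale as the (λR/2)^{1/2} quoted in the text; the footprint, coherence time and carrier speed are assumed values.

```python
import numpy as np

lam, R, v, tau_c = 0.03, 15e3, 100.0, 0.01
b = 2 * np.pi / (lam * R)
a = 2 * np.pi / 300.0**2              # assumed 300 m footprint
B = 2 * np.pi / (v * tau_c)**2        # decorrelation parameter of Eq. (4.27)

Ls = np.linspace(1.0, 100.0, 5000)    # candidate synthesis lengths, m
L = 2 * np.pi / Ls**2                 # from (2*pi/L)^(1/2) = Ls
uc = np.pi * np.sqrt(2 * a / (a**2 + b**2 + 2 * a * B)
                     + 2 * L / (L**2 + b**2))
Ls_max = Ls[np.argmax(uc)]            # peak where L = b, Ls = (2*pi/b)^(1/2)
```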
Figure 4.3 The variation of the parameter Q with the synthesis range Ls at λ = 3 cm, θ = 0.02 and various values of R (10 and 20 km)

At the minimal width of the |H(ω)| function, the difference between ⟨ga⟩ and ⟨gan⟩ is also insignificant. This accounts for the maximum of the Q function at Ls = (λR/2)^{1/2} (Fig. 4.3). A quantitative analysis of Q shows that the influence of the real aperture pattern on the signal-to-noise ratio is slight and reveals itself only at large synthesis ranges, Ls ≫ (λR/2)^{1/2}.

4.4.2 Statistical image characteristics for incoherent signal integration


According to Eq. (4.21), the image intensity in multi-ray processing is [60]
 
g(u) = ∫∫∫ Da(vτ)sh(y1)sh*(y2) exp[−L(y1 − u − vτ)²/2] exp[−L(y2 − u − vτ)²/2]
× exp[jb(y1 − u)²/2] exp[−jb(y2 − u)²/2] dy1 dy2 d(vτ). (4.32)

This relation describes the impulse response of the aperture processor and enables
one to find its transfer function:

H(η, ω) = r(l² + b² + 2Al)^{−1/2} exp{−[A(η + ω)² + l(η² + ω²)]/[2(l² + b² + 2Al)]}
× exp{−jb(η² − ω²)/[2(l² + b² + 2Al)]}, (4.33)

with
r = 2π/(D + 2L)^{1/2}, l = LD/(D + 2L) and A = L²/(D + 2L).
Following the same procedure and using the last two relations, we can find the
characteristics of the background and noise on the image:

⟨ga⟩ = 2^{1/2}πp{D[aL(a + L + 2B) + b²(a + L)] + 2aLb²}^{−1/2}, (4.34)
σg² = 2(πp)²{[L(D + L)(a² + b² + 2aB) + aD(L² + b²) + 2aLb²]² − L⁴(a² + b² + 2aB)²}^{−1/2}, (4.35)
⟨uc⟩ = 2^{1/2}π{L(1 + 2L/D)/[L² + b²(1 + 2L/D)] + a/(a² + b² + 2aB)}^{1/2}, (4.36)
⟨ga⟩²/σg² = {1 + 2L²b²/[aD(L² + b²) + LDb² + 2aLb²]}^{1/2}, (4.37)
⟨gan⟩ = πn[2/(LD)]^{1/2}, (4.38)
σn² = π²n²(1 + 2L/D)^{−1/2}/(LD), (4.39)
⟨ucn⟩ = 2^{1/2}π{L(1 + 2L/D)/[L² + b²(1 + 2L/D)]}^{1/2}, (4.40)
⟨gan⟩²/σn² = (1 + 2L/D)^{1/2}. (4.41)

The analysis of these relations shows that the image smoothing is improved, as was
expected, while the correlation functions of the clutter and radar noise images are
practically the same, uc ≈ ucn . Figure 4.4 demonstrates the correlation range ver-
sus the normalised quantity Ls for various degrees of incoherent integration De , or for

Figure 4.4 The dependence of the spatial correlation range of the image on normalised Ls for multi-ray processing (solid lines) at various degrees of incoherent integration De and for averaging of the resolution elements (dashed lines) at various Ge; λ = 3 cm, R = 10 km; curves 1, 5: 0 (curves overlap); 2, 6: 0.25(λR/2)^{1/2}; 3, 7: (λR/2)^{1/2}; 4, 8: 2.25(λR/2)^{1/2}
different aperture stop sizes. It is clear that the image correlation at Ls > (λR/2)1/2
(the focused processing region) will only slightly vary with De but the correlation
range in incoherent integration will become larger (the defocused processing region).
The parameter Q then takes the form

Q = [(a + L)(a/b² + 1/L) + 2aB/b² + 2a/D]^{−1/2}.

Its quantitative analysis indicates that incoherent integration does not much affect the signal-to-noise ratio.
When the resolutions of neighbouring elements are averaged according to
Eq. (4.22), the processor transfer function is expressed as

H(η, ω) = r1(l1² + b1² + 2A1l1)^{−1/2} exp{−[A1(η + ω)² + l1(η² + ω²)]/[2(l1² + b1² + 2A1l1)]}
× exp{−jb1(η² − ω²)/[2(l1² + b1² + 2A1l1)]} (4.42)

with
r1 = [2π/(G + 2L)]^{1/2}, l1 = LG/(G + 2L),
A1 = (L² + b²)/(G + 2L) and b1 = bG/(G + 2L). (4.43)

Hence, we have

⟨ga⟩ = 2^{1/2}πp{G[a(L² + b²) + L(a² + b² + 2aB)]}^{−1/2}, (4.44)
σg² = 2(πp)²{G²[Lb² + a(L² + b²)]² + 2Gb²(L² + b²)[Lb² + a(L² + b²)]}^{−1/2}, (4.45)
⟨ga⟩²/σg² = {1 + 2b²(L² + b²)/[aG(L² + b²) + LGb²]}^{1/2}, (4.46)
⟨gan⟩ = πn[2/(LG)]^{1/2}, (4.47)
σn² = π²n²{LG[LG + 2(L² + b²)]}^{−1/2}, (4.48)
⟨gan⟩²/σn² = [1 + 2(L² + b²)/(LG)]^{1/2}, (4.49)
⟨uc⟩ = 2^{1/2}π{[Lb² + a(L² + b²)]/[b²(L² + b²)] + 2/G}^{1/2}, (4.50)
⟨ucn⟩ = 2^{1/2}π[L/(L² + b²) + 2/G]^{1/2}. (4.51)

In this case, we also have uc ≈ ucn . Figure 4.4 illustrates this dependence at
various widths of the integrating function Ge . Obviously, the image correlation
range increases in proportion with the integrating window width. The expression
for the coefficient Q coincides with Eq. (4.31), since the statistical properties of the
background and noise images are similar and cannot contribute to the power.
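Taking the ⟨uc⟩ expression of Eq. (4.50) at face value, the growth of the correlation range with the averaging-window width Ge = (2π/G)^{1/2} can be verified numerically; all parameter values below are assumed for illustration.

```python
import numpy as np

lam, R = 0.03, 10e3
b = 2 * np.pi / (lam * R)
a = 2 * np.pi / 300.0**2            # assumed footprint width
Ls = np.sqrt(lam * R / 2.0)
L = 2 * np.pi / Ls**2

def uc(Ge):
    """Image correlation range of Eq. (4.50) for a window of width Ge."""
    G = 2 * np.pi / Ge**2
    base = (L * b**2 + a * (L**2 + b**2)) / (b**2 * (L**2 + b**2))
    return np.sqrt(2.0) * np.pi * np.sqrt(base + 2.0 / G)

uc5, uc10, uc20 = uc(5.0), uc(10.0), uc(20.0)   # metres
```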
In incoherent signal integration on the hologram, the H(η, ω) function is described by Eq. (4.26) and the Rh function, after averaging by Eq. (4.23), takes the form

Rh(y′) = p[π/C(aC + a² + b² + 2aB)]^{1/2} exp[−(y′)²C(a² + b² + 2aB)/4(aC + a² + b² + 2aB)].

Therefore, the noise correlation function on the hologram can be written as

Rhn(y′) = n(π/C)^{1/2} exp[−(y′)²C/4]. (4.52)
This expression yields the hologram signal-to-noise ratio Wh = Rh(0)/Rhn(0) = Wh0Qh, where Wh0 is the single-pulse ratio defined by the governing radar equation and Qh = Ni(1 + Ni²/Ka²)^{−1/2}, Ka = da/(2vTir); da is the horizontal dimension of the real antenna, Tir is the pulse repetition period and Ni is the number of pulses integrated on the hologram with incoherent averaging. The variation of Qh with Ni is shown in Fig. 4.5. One can see that incoherent integration is profitable only at Ni ≤ Ka, which agrees well with the condition Ce ≤ ⟨yhc⟩ = 2(πa)^{1/2}/b, where ⟨yhc⟩ is the correlation range of the hologram. If the latter condition is fulfilled, the basic statistical characteristics of the image can be described by expressions similar to Eqs (4.29)–(4.31), which means that there is no image smoothing.

Figure 4.5 The variation of the parameter Qh with the number of integrated signals
Ni at various values of Ka
The results obtained allow the following conclusions to be made:
1. For typical conditions of SAR viewing of background surfaces and for real times
τc ≈ 0.01 s of reflected signal correlation, all the image parameters discussed
above are actually independent of the degree of coherence of the objects being
viewed, in contrast to the radar resolving power.
2. The statistical properties of images of background surfaces and aperture noises
are practically identical. This fact can be successfully used to calibrate radar
apertures designed for measurement of background SCS. The maximum period
of spatial image fluctuations is observed in the synthesis range Ls = (λR/2)1/2 .
3. The analytical expressions we have derived can be used for the calculation of the
image smoothing degree in the case of incoherent signal integration.
4. The signal-to-noise ratio of the image is nearly independent of the synthesised aperture length and of the incoherent integration range.
5. Incoherent integration on a hologram does not change the statistical character-
istics of the image, that is, it does not lead to image smoothing, provided that
the integrating function width is smaller than the hologram correlation range.
Otherwise, there is no noticeable improvement of the signal-to-noise ratio on the
hologram; therefore, the signal integration procedure becomes meaningless.
6. The methods of incoherent signal integration we have discussed (multi-ray
processing and averaging of resolution elements) give similar results on the
smoothing of image fluctuations. Multi-ray processing is performed automat-
ically if the image is reconstructed by exposing the secondary film in an optical
processor. In the case of digital reconstruction usually based on fast Fourier algo-
rithms, the averaging of resolution elements is preferable because the algorithm
performance is very effective when one has to process vast data. Application of
special-purpose digital processors may improve the situation.

4.5 Viewing of low contrast partially coherent targets

The major SAR characteristics for viewing low contrast targets such as sea currents,
wind slicks, oil spills, etc., are the spatial resolution and radiometric (contrast) res-
olution determined by the number of incoherent signal integrations [58]. It is clear
that a proper choice of the proportion between spatial and radiometric resolutions
(coherent and incoherent integration) will depend not only on the radar parameters
but on the properties of the target to be viewed. So it is reasonable to consider opti-
misation of SAR performance in the context of partial coherence of signals reflected
by an extended target.
Recall that the process of imaging includes two stages. First, the received signal is recorded on a radar hologram as u(y′) = ∫ w(y′ − y)f(y) dy, where w(y′ − y) is the impulse response of the aperture receiver, f(y) is a function describing the spatial distribution of the target reflectivity, y is the coordinate in the viewed surface plane, y′ = vt is the SAR carrier coordinate, and t is the current viewing time. Second, the image field is recorded: g(y″) = ∫ u(y′)h(y″ − y′) dy′, where h(y″ − y′) is the impulse response of the aperture processor and y″ is the image coordinate.
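Since both stages are linear filtering operations, they can be sketched as discrete convolutions. In the toy model below, the Gaussian impulse responses and all numbers are assumptions for illustration only, not radar parameters from the text.

```python
import numpy as np

# Toy 1D model of the two imaging stages (real-valued for simplicity):
rng = np.random.default_rng(0)
f = rng.standard_normal(400)                # target reflectivity samples f(y)
w = np.exp(-np.linspace(-3, 3, 61)**2)      # assumed Gaussian aperture-receiver response
h = np.exp(-np.linspace(-3, 3, 61)**2 / 2)  # assumed Gaussian aperture-processor response

u = np.convolve(f, w, mode="same")          # stage 1: radar hologram, u = w * f
g = np.convolve(u, h, mode="same")          # stage 2: image field,    g = h * u
```

Because both stages are convolutions, the overall mapping from f to g is linear, which is the premise that makes the filtration treatment of the following paragraphs possible.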

zino: “chap04” — 2005/11/7 — 15:37 — page 94 — #16


Imaging radars and partially coherent targets 95

Imaging can be described in terms of linear filtration theory. The concepts of a quadratic filter and a frequency-contrast characteristic (FCC), well known from optics, can be used to present the image intensity:

SI (ω) = So (ω)KR (ω) (4.53)

where So (ω) is the space frequency spectrum of the SCS of the object and KR (ω) is
the FCC of the aperture.
For instance, if the average SCS of the background is σ0 , the distribution of a low
contrast target is described by the function

σ (y) = σ0 [1 − m exp(−y2 A/2)], (4.54)

where m < 1 is a factor defining the target’s initial contrast Kin = (1 − m)/(1 + m)
with respect to the background, A = 2π/l 2 is a parameter related to the target’s size l,
and the aperture FCC is given by the expression

KR (ω) = exp[−ω2 /(2z)], (4.55)

where z denotes its width. Then using Eq. (4.53), we can write the spatial distribution
of the image intensity:

g(y′) = σ0{1 − m[z/(A + z)]1/2 exp[−(y′)2Az/2(A + z)]}. (4.56)

Hence, the object’s contrast on the image is

Kout = {1 − m[z/(A + z)]1/2 }/{1 + m[z/(A + z)]1/2 } (4.57)

and its observable size is

l  = [2π(1/A + 1/z)]1/2 . (4.58)

It is clear that the contrast and the target size become distorted on the image, but knowledge of the explicit form of KR(ω) allows the real object parameters to be recovered.
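A minimal numerical sketch of Eqs (4.54)–(4.58); all numbers (target size, contrast factor, FCC width) are assumed for illustration. It shows the distortion of contrast and size and, with the FCC width z known, the exact recovery of the true parameters.

```python
import math

m, l = 0.5, 100.0                 # assumed initial contrast factor and target size, m
A = 2 * math.pi / l**2            # target size parameter, Eq. (4.54)
z = 5 * A                         # assumed FCC width parameter

m_eff = m * math.sqrt(z / (A + z))
K_in = (1 - m) / (1 + m)                       # initial contrast
K_out = (1 - m_eff) / (1 + m_eff)              # observed contrast, Eq. (4.57)
l_obs = math.sqrt(2 * math.pi * (1/A + 1/z))   # observed size, Eq. (4.58)

# Inverting the distortion, with z known:
A_rec = 1 / (l_obs**2 / (2 * math.pi) - 1/z)
m_rec = m_eff / math.sqrt(z / (A_rec + z))
```

The observed contrast is closer to unity (weaker) and the observed size larger than the true ones; both inversions reproduce the assumed inputs.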
For targets whose reflectivity varies with time randomly, the signal received by
the aperture possesses a partial coherence and the hologram function u(y ) is no longer
a convolution integral. In that case it would be unreasonable to use linear filtration
theory. We shall show, however, that statistical methods and physical assumptions
concerning the time fluctuations of objects’ reflectivities can make this convenient
formalism work successfully.
For this, we shall find the aperture response for a low contrast target (m ≪ 1),
whose reflectivity distribution is described by the function

f(y, t) = [1 + m cos(Ωy)]f(y)α(t | y), (4.59)

where Ω is a certain space frequency and α(t | y) is a random complex function describing the time fluctuations of the reflected signal.
The aperture FCC can be written as KR(Ω) = Kout/Kin, where Kin = (⟨σ⟩ − ⟨σ⟩m=0)/⟨σ⟩m=0 ≈ 2m; Kout = (⟨g⟩ − ⟨g⟩m=0)/⟨g⟩m=0; σ = f(0, 0)f∗(0, 0); ⟨σ⟩




and ⟨g⟩ are the average values of the target's SCS and image intensity, respectively.
The correlation function of the field in Eq. (4.59) is defined as

Rf = ⟨f(y1, t1)f∗(y2, t2)⟩ = [1 + m cos(Ωy1)][1 + m cos(Ωy2)]⟨f(y1)f∗(y2)⟩⟨α(t1 | y1)α∗(t2 | y2)⟩. (4.60)
For many real surfaces, f(y) in the centimetre wavelength range is a Gaussian process with a zero average and a correlation function in the form of the Dirac delta-function of Eq. (4.8). Assuming the time fluctuations of the signal to be a steady-state random process, we can use the approximation of Eq. (4.11). Together with Eq. (4.60) and m ≪ 1, y′ = vt, we shall have

Rf = [1 + m cos(Ωy1) + m cos(Ωy2)]δ(y1 − y2) exp[−(y1′ − y2′)2B/2]

with B = 2π/(vτ)2.
The average image intensity is

⟨g⟩ = ∫∫ h(y1′)h∗(y2′)Ru(y1′, y2′) dy1′ dy2′, (4.61)

where Ru(y1′, y2′) is the correlation function of the hologram:

Ru(y1′, y2′) = ∫∫ Rf w(y1′ − y1)w∗(y2′ − y2) dy1 dy2.

Using the Gaussian approximations of the impulse responses in Eqs (4.15) and (4.16), we obtain, instead of Eq. (4.61), ⟨g⟩ = ⟨g0⟩ + 2⟨Δg⟩, where ⟨g0⟩ = (2)1/2π[aL(a + L + 2B) + b2(a + L)]−1/2 is the average intensity of the fluctuating background image and

⟨Δg⟩ = m⟨g0⟩ exp{−(Ω2/4)(a + L)(a + L + 2B)/[aL(a + L + 2B) + b2(a + L)]}. (4.62)

For real viewing, we have b2 ≫ aL and b2 ≫ LB, which reduces Eq. (4.62) to

Kout = 2m exp[−Ω2(a + L + 2B)/4b2],
KR(Ω) = exp[−Ω2(a + L + 2B)/4b2]. (4.63)
There is a certain relationship between the FCC and the azimuthal resolution of the
aperture. The latter can be found from the width of the averaged impulse response to
a fluctuating point target:

δa = ∫ ⟨g(y″)⟩/⟨g(0)⟩ dy″. (4.64)

The signal reflected by this target can be prescribed as f(y, y′) = δ(y)α(y′), where α(y′) describes the time fluctuations of the signal, whose correlation properties are defined by Eq. (4.11). With Eqs (4.61) and (4.64), we get

δa = [π(a + L + 2B)/b2]1/2.
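The widening of the azimuthal response with decreasing correlation time can be seen numerically. In the sketch below, a, L and b² are placeholder values for the Gaussian-approximation parameters of Eqs (4.15) and (4.16), not values from the text; only the platform speed follows the airborne example of this section.

```python
import math

def delta_a(a, L, B, b2):
    # width of the averaged impulse response to a fluctuating point target
    return math.sqrt(math.pi * (a + L + 2.0 * B) / b2)

a, L, b2 = 0.05, 0.05, 0.01        # assumed illustrative parameters
v = 250.0                          # SAR carrier speed, m/s

res_frozen = delta_a(a, L, 0.0, b2)                       # tau -> infinity: B = 0
res_slow = delta_a(a, L, 2*math.pi/(v*0.4)**2, b2)        # tau = 0.4 s
res_fast = delta_a(a, L, 2*math.pi/(v*0.1)**2, b2)        # tau = 0.1 s
```

The faster the target decorrelates, the larger B and the wider (worse) the azimuthal response.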




[Axes: Ωe (rad/m) versus Ls (m); curves for τc = 0.1, 0.2 and 0.4 s and for τc → ∞.]

Figure 4.6 The variation of the parameter Ωe with the synthesis range Ls at various signal correlation times τc

Of course, the aperture FCC can be presented as KR(Ω) = exp[−Ω2δa2/(4π)] and its equivalent width as Ωe = ∫ KR(Ω) dΩ = 2π/δa.
The concept of FCC allows the consideration of a SAR as a linear filter of space frequencies. On the other hand, the filter description essentially depends on the target's behaviour through the parameter B. Figure 4.6 illustrates the variation of Ωe with the synthesis range Ls for an airborne SAR. The basic radar parameters are λ = 3 cm, R = 10 km, an antenna beamwidth θa ≈ 0.02 rad and v = 250 m/s. For zero signal fluctuations (τ → ∞), the width Ωe increases in proportion with Ls, but at Ls ≈ θaR the linear dependence is violated because of the antenna pattern effect through the parameter a. The signal fluctuations make the resolution independent of Ls at Ls > vτ; it is then defined by the correlation time τ instead. Equation (4.63) can be re-written in the form:

KR(Ω) = K0(Ω)Kτ(Ω), (4.65)

where K0(Ω) = exp[−Ω2(a + L)/(4b2)] is the aperture FCC in the absence of signal fluctuations and Kτ(Ω) = exp[−Ω2B/(2b2)] is multiplicative noise arising from fluctuations in the radar channel.
Therefore, a SAR can be described as a set of two filters – a filter of space frequencies K0(Ω) and a narrow band space–time filter Kτ(Ω), whose bandwidth is determined by the correlation time of the surface fluctuations. The image has a spatial intensity spectrum SI(Ω) = S0(Ω)KR(Ω). On the other hand, one can consider that the aperture measures the space–time spectrum S0τ(Ω) = S0(Ω)Kτ(Ω) if one assumes its FCC to be independent of the target's properties and describes the radar with the function K0(Ω).
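A short numerical check (with placeholder a, L, B, b² values, assumed for illustration) that the factorisation of Eq. (4.65) is exact, and that the equivalent width Ωe = 2π/δa follows from KR(Ω) = exp[−Ω²δa²/(4π)]:

```python
import math

a, L, B, b2 = 0.05, 0.05, 0.01, 0.01     # assumed illustrative parameters

def K_R(Om):   return math.exp(-Om**2 * (a + L + 2*B) / (4*b2))   # Eq. (4.63)
def K_0(Om):   return math.exp(-Om**2 * (a + L) / (4*b2))
def K_tau(Om): return math.exp(-Om**2 * B / (2*b2))

d_a = math.sqrt(math.pi * (a + L + 2*B) / b2)   # azimuthal resolution
# Equivalent width by direct numerical integration of K_R over all Omega:
Om_e = sum(K_R(i * 1e-3) for i in range(-20000, 20000)) * 1e-3
```

Both identities hold to machine precision (the integral to the accuracy of the Riemann sum), confirming the two-filter picture.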





Figure 4.7 The parameter Q as a function of the synthesis range Ls at various signal correlation times τc

To conclude, the parameters of radar apertures for viewing fluctuating targets can be optimised by matching the characteristics K0(Ω) ≈ Kτ(Ω). The latter equality provides the imaging of a surface with nearly as much detail as is potentially possible for a particular type of object. It can be obtained by choosing the value of Ls equal to Ls = vτ, which means that the synthesis time should not be longer than the signal correlation time. As a result, the aperture resolution appears to be limited to δa = λR/2Ls, but this choice of Ls provides N = θaR/Ls image realisations. The aperture contrast resolution, defined by the number of incoherent integrations N, is in turn independent of the signal coherence time τ. So the choice of Ls > vτ does not provide the desired spatial resolution but decreases N, making the contrast resolution poorer.
The potentiality of the SAR in viewing low contrast targets can be conveniently
described by the parameter Q = Ndh /(2δa ) equal to unity at zero fluctuations. If
the fluctuations are present, Q essentially depends on the chosen synthesis range Ls
(Fig. 4.7). For example, the signal fluctuations at Ls < vτ do not noticeably affect
the image quality and Q = 1. At Ls > vτ , the aperture performance proves to be
inferior to its potentiality (Q < 1), since the real aperture resolution does not fit the
chosen value of Ls but is rather defined by the signal correlation time τ .
We can draw the following conclusions from these results:

• To describe the imaging of fluctuating targets, one can make use of linear filtration
theory, representing the radar as a filter with a certain FCC. The aperture can be
considered as a device measuring the space–time spectrum of the object being
viewed.
• One can suggest that the time fluctuations of the signal in the viewing channel
create multiplicative noise decreasing the azimuthal resolution of the aperture.
• This approach provides a reasonable compromise between the potential azimuthal
resolution and the aperture contrast resolution. This compromise can be achieved
by choosing the synthesis time equal to the signal correlation time.




The overall analysis of the results presented in this chapter shows that the available
methods for describing the properties of sea surface images can be supplemented
by a more general approach to SAR viewing of partially coherent objects. The con-
cept of partial coherence allows one to cover a much larger class of targets and to
describe the basic principles of their imaging. The advantages of this approach are
as follows: first, it is based on a fairly general model of the radar signal. Expres-
sion (4.10) accounts for general and specific features of the viewing of fluctuating
targets. We shall show in the following chapters that the correlation function of time
fluctuations in Eq. (4.13) can be used, for example, to describe trajectory instabilities
of the SAR carrier. Second, this approach provides an analytical description of the
major statistical characteristics of images of partially coherent targets; these, in turn,
enable one to evaluate image quality. Finally, the relative simplicity of mathematical
calculations and the clear physical sense of the results obtained make this approach
advantageous and convenient as a tool for solving practical tasks associated with SAR
designing and for remote sensing of partially coherent targets.



Chapter 5
Radar systems for rotating target imaging
(a holographic approach)

The possibility of using the rotation of an object to resolve its scattering centres was,
probably, first shown by W. M. Brown and R. J. Fredericks [21]. Independently,
microwave video imaging of rotating objects was demonstrated theoretically and
experimentally by other researchers [109].
An analysis of three approaches (in terms of the antenna, range-Doppler and
cross-correlation theories) was made in References 104 and 146 for the imaging of
rotating targets. Here we discuss this problem in terms of a holographic approach.

5.1 Inverse synthesis of 1D microwave Fourier holograms

We shall start with the basic principles of inverse synthesis of microwave holograms
of an object rotating around the centre of mass. The analysis will be based on the
holographic approach discussed in Sections 1.2 and 2.4.
Lens-free optical Fourier holography [131] implies that an optical hologram is
recorded when the amplitude and phase of the field scattered by the object are fixed in
a certain range of bistatic angles 0 < β < β0 (Fig. 5.1). In the microwave range, this
is equivalent to the displacement of the radar receiver along arc L of radius R0 from
point A to point B, while the transmitter remains immobile. A coherent background
must be created by a reference source located in the object plane. Since such a source
is unfeasible, the coherent background is created by an artificial reference wave in
the radar receiver (Chapter 2). In further analysis, we shall use a model object made
up of scattering centres described by Eq. (2.3). Then a direct synthesis along arc L
of radius R0 by a bistatic radar system (Fig. 5.1) can produce a classical microwave
Fourier hologram [109], with a subsequent image reconstruction as a 1D distribution
of the scattering centres and their effective scattering surfaces.
To discuss the principles of inverse synthesis and formation of a 1D microwave
Fourier hologram, we shall make use of the well-known relation for uni- and bistatic





Figure 5.1 A schematic diagram of direct bistatic radar synthesis of a microwave hologram along arc L of a circle of radius R0: 1 – transmitter, 2 – receiver

radars [69]. According to Kell’s theorem, at small bistatic angles β the bistatic radar
cross-section (RCS) for the angle α (Eq. 2.5) and the bistatic angle β is equal to the
unistatic RCS measured along the bisectrix of the angle β at a frequency reduced by
a factor of cos(β/2) (Chapter 2).
Kell’s theorem and the fact that the rotation of a transmitter–receiver unit around
the object can be replaced by the rotation of the object round its axis passing through
the centre of mass normal to the radar viewing line lead one to the conclusion that
such a unit, fixed at the point C (Fig. 5.2), can synthesise a 1D microwave Fourier
hologram identical to a lens-free optical Fourier hologram. This approach was first
discussed by S. A. Popov et al. [109].
In order to find analytical relations for the classical and synthesised Fourier holo-
grams, let us consider the schematic diagram in Fig. 5.3. To simplify the calculations,
we shall deal only with one kth scattering centre with the coordinates

rkx = rk sin θk cos(ϕ + ϕk ),

rky = rk sin θk sin(ϕ + ϕk ), (5.1)

rkz = rk cos θk ,

where ϕ = Ωt is the object rotation angle, Ω is the modulus of the angular velocity vector of the rotating object, ϕk is the initial angle between the projection of the rk vector on the xOy plane and the positive x-axis, and θk is the angle between the rk vector and the positive z-axis. In our further analysis, we shall follow References 109 and 145.






Figure 5.2 A schematic diagram of inverse synthesis of a microwave hologram by a unistatic radar located at point C


Figure 5.3 The geometry of data acquisition for the synthesis of a 1D microwave
Fourier hologram of a rotating object

With Eq. (5.1), the input receiver signal can be described as a function of the object rotation angle:

u̇r(ϕ) = u0 Σ_{k=1}^{N} σk exp[−j(4π/λ1)d(rk, R0)] exp( jω0ϕ/Ω), (5.2)




where

d(rk, R0) ≅ R0{1 − (rk/R0)[sin γ sin θk cos(ϕ + ϕk) + cos γ cos θk]}, (5.3)

λ1 = 2πc/ω0 is the radar wavelength; σk is the amplitude coefficient accounting for the reflection characteristics of the kth scattering centre; γ = arctan(xo/zo) is the angle between the vector R0 and the positive z-axis; xo, yo, zo are the observation point coordinates; O1 is the observation point; and R0 = |R0| = (xo2 + zo2)1/2 is the distance between the observation point and the centre of mass of the object.
In order to derive the hologram function in a way shown in Chapter 2, it is reasonable to use the multiplication procedure performed by an amplitude-phase detector, followed by an averaging. The artificial reference signal is

u̇ref(ϕ) = u0 exp[−j(ω0ϕ/Ω + ψ)], (5.4)

where t = ϕ/Ω is the current moment of time and ψ is an arbitrary initial phase.
Using Eqs (5.2) and (5.4), we can write down the hologram function in the form:

H(ϕ) = ⟨Re u̇r(ϕ) Re u̇ref(ϕ)⟩
= (u02/2) Σ_{k=1}^{N} σk ⟨cos[(4π/λ1)rk(cos γ cos θk + sin γ sin θk cos(ϕ + ϕk))]⟩, (5.5)

where the sign ⟨· · ·⟩ stands for the averaging.


To derive Eq. (5.5), the arbitrary initial phase of the reference signal has been chosen such that 4πR0/λ1 − ψ = 0.
By expanding the function cos(ϕ + ϕk) into a power series of ϕ and keeping only the lowest-order terms of the series, we get

H(ϕ) = (u02/2) Σ_{k=1}^{N} σk cos 2[βk − (2π/λ1)rk lk(ϕ)], (5.6)

with

βk = (2π/λ1)rk(cos γ cos θk + sin γ sin θk cos ϕk) (5.7)

and

lk(ϕ) = sin γ sin θk[ϕ sin ϕk + (ϕ2/2) cos ϕk − (ϕ3/6) sin ϕk]. (5.8)
Consider now the microwave hologram function of the same object (Fig. 5.1), obtained by a classical method. In this method, the radar receiver scans, with an angular velocity Ω, the surface of a cylinder of radius R0 sin γ, having the generatrix parallel to the z-axis. The transmitter is at the point A with the coordinates




xA = R0 sin γ, yA = 0, zA = R0 cos γ, while the angle β0 is equal to the rotation angle ϕ. Then the function Hcl(ϕ) for the classical microwave Fourier hologram is

Hcl(ϕ) = (u02/2) Σ_{k=1}^{N} σk cos 2[βk − (π/λ1)rk lk(ϕ)], (5.9)

where the functions βk and lk(ϕ) are similar to those of Eqs (5.7) and (5.8).
A comparison of Eqs (5.6) and (5.9) shows that the function Hcl(ϕ) differs from the function H(ϕ) for the synthesised hologram of the same object in having the factor (1/2) in the second term of the argument cos 2[· · ·]. It is clear that the synthesised hologram possesses a double capacity to change the argument and, hence, twice as high a resolution: it looks like a classical hologram recorded in a field with a wavelength half the real one. This effect is due to the simultaneous scanning by several elements of the transmitter–object–receiver system. It is easy to see that a microwave hologram recorded by a simultaneous receiver–transmitter scanning of a fixed object along the arc L (Fig. 5.1) is totally identical to the H(ϕ) hologram. In the case of inverse scanning, however, the rotation of the object alone is equivalent to the movement of two devices – the transmitter and the receiver.
We shall show below that the constant initial phase βk does not affect the structure
of microwave radar imagery. We shall use a simplified expression for the synthesised
Fourier hologram:


H1(ϕ) ≅ Σ_{k=1}^{N} σk cos[(4π/λ1)rk sin θk cos(ϕk + ϕ)], (5.10)

where rk , θk , ϕk are the spherical coordinates of the kth centre. Equation (5.10) was
derived from Eq. (5.5) on the assumption of γ = 90◦ and is valid for the far-zone
approximation.
Since the H1 (ϕ) function basically coincides with Hcl (ϕ), the image reconstruc-
tion from a synthesised Fourier hologram can be made in visible light, using the same
techniques as those of optical Fourier holography [131].
Sometimes, a microwave hologram recorded on a flat transparency is placed in
the front focal plane of the lens L (Fig. 5.4(a)). When the transparency is illuminated
by a plane coherent light wave, two real conjugate images of the object, M and M  ,
are formed near the rear focal plane of the lens. An alternative is to use a spherical
transparency of radius F0 , illuminated by a coherent light beam converging at the
sphere centre (Fig. 5.4(b)). The two variants are identical in the sense that the opera-
tions to be performed are the same. Practically, it is convenient to use the first variant
but to analyse the second one.
If a microwave hologram is recorded on an optical transparency uniformly moving with velocity vt, the angular coordinate α = vtτ/F0 on the transparency in the reconstruction space will be related to the angular coordinate ϕ = Ωτ on the hologram in the recording space:

α = ϕvt/(ΩF0) = ϕ/µ, µ = ΩF0/vt. (5.11)





Figure 5.4 Optical reconstruction of 1D microwave images from a quadrature Fourier hologram: (a) flat transparency, (b) spherical transparency

For a hologram of a point object, the distribution of complex-valued light amplitudes in the image space u, v at the point M(uM, vM) in the vicinity of the point O can be represented by an integral (at θk = 90°):

E(u, v) = A ∫_{−α0}^{α0} {1 + cos[(4π/λ1)rk cos(µα + ϕk)]} exp[−j(2π/λ2)d(u, v, α)] dα = I0 + I+1 + I−1, (5.12)

I0 = A ∫_{−α0}^{α0} exp[−j(2π/λ2)d(u, v, α)] dα,

I±1 = (A/2) ∫_{−α0}^{α0} exp[ jψ±1(u, v, α)] dα,

ψ±1(u, v, α) = ±(4π/λ1)rk cos(µα + ϕk) − (2π/λ2)d(u, v, α),

d(u, v, α) = [F02 + 2F0(v cos α − u sin α) + u2 + v2]1/2,




where λ2 is the wavelength in the optical range; A is a complex-valued proportionality factor, A = (u02/4)σ1; σ1 is the amplitude coefficient accounting for the reflection characteristics of the scattering centre k = 1; d(u, v, α) is the distance between an arbitrary point in the arc L and the point M near the arc centre on the image; 2α0 is the angular size of the hologram in the image space; and F0 is the lens focal length. The integrals I0, I+1 and I−1 describe the distribution of the complex-valued light amplitudes in the zeroth and first diffraction orders, both positive and negative. If the angular dimensions of the hologram are not too large, the functions cos(µα + ϕk) and d(u, v, α) can be represented by the first terms of the respective expansion series to write down the function ψ±1(u, v, α):

ψ±1(u, v, α) = (2π/λ2){α[2(λ2/λ1)µrk sin ϕk ± u] + (α2/2)[2(λ2/λ1)µ2rk cos ϕk ± v] − (α3/6)[2(λ2/λ1)µ3rk sin ϕk ± u]}. (5.13)
Here we have omitted the constant expansion terms independent of the argument α. The coordinates of the points M(uM, vM) and M′(u′M, v′M), at which two conjugate images of the point object are formed, can be found from the expressions

∂ψ(u, v, α)/∂α = 0, ∂2ψ(u, v, α)/∂α2 = 0. (5.14)

With xM = rk sin ϕk and yM = rk cos ϕk, using Eq. (5.14), we get

uM,M′ = ±2µ(λ2/λ1)xM, vM,M′ = ±2µ2(λ2/λ1)yM. (5.15)
λ1 λ1
Equation (5.15), in turn, gives the transverse and longitudinal scales of the image being reconstructed:

my = |vM/yM| = 2µ2(λ2/λ1), mx = |uM/xM| = 2µ(λ2/λ1). (5.16)
An undistorted image of an object can be reconstructed only if all the derivatives of ψ(u, v, α) with respect to the argument α are simultaneously equal to zero. It is easy to show that this condition is met at one point (M and M′) at µ = ΩF0/vt = 1, that is, ϕ ≡ α. The latter identity defines the criterion for optical processing of synthesised Fourier holograms: the aperture angles in the recording and reconstruction spaces must be the same. If the reconstruction procedure has been designed in this optimal way, we have mx = my = m, and the object is reproduced without distortions along the longitudinal and transverse directions.
A specific feature of a synthesised Fourier hologram is that the resolution obtained
is independent of the distance to the object. Indeed, let us take the following expression
to be the measure of the resolving power:

Δ = |I(uM)|−2 ∫_{−∞}^{∞} |I(u)|2 du, (5.17)
−∞




where |I (u)|2 is the light intensity distribution across the scattering centre image and
uM is the coordinate of the maximum intensity of the image focusing.
Equation (5.17) describes the receiver pulse response to the point object. Then,
neglecting all the terms in Eq. (5.13) except for the first one and using the scale relations of Eq. (5.16), we can define the resolving power in the object space as

Δx(λ1, ψS) = Δu/mx = λ1/(4ϕ0) = λ1/(2ψS), (5.18)

where ψS is the object angle variation during the recording. Therefore, when the hologram angles are small, the resolving power varies with the wavelength and the synthesised aperture angle, rather than with the distance to the object or the reconstruction parameters.
With the scale relations from Eq. (5.16), we find for µ = 1

my = mx = 2λ2/λ1.

Then the criterion described by Eq. (5.18) can yield the resolution of a video microwave image:

Δu(α0) = Δx(λ1, ψS)mx = λ2/(2ϕ0). (5.19)
It follows from Eq. (5.19) that the resolution of a microwave image obtained by
inverse synthesis and optimal processing is fully consistent with the Abbe criterion
for optical devices (Chapter 1).
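Numerically, Eq. (5.18) gives range-independent resolutions; the 3 cm wavelength and the 0.1 rad rotation below are assumed illustration values, not parameters from the text.

```python
lam1 = 0.03      # radar wavelength, m (assumed: 3 cm)
psi_s = 0.1      # object angle variation during recording, rad (assumed)

dx = lam1 / (2 * psi_s)   # Eq. (5.18): cross-range resolution in the object space
# dx = 0.15 m: a rotation of only ~5.7 degrees already gives 15 cm resolution,
# at any distance to the target
```

This range independence is the practical attraction of inverse synthesis over real-aperture imaging.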
Consider now distortions arising in the reconstruction of a microwave image. These are defined by the high-order terms of Eq. (5.13) for the following reason. When an image is viewed in one plane, some of the scattering centres are shifted relative to this plane, that is, they are defocused. With the quadratic term of Eq. (5.13), the field distribution in a defocused point image is defined as

I+1(p, t0) = A(α0/t0) exp[πj(4rx/λ1 − p2/2)] × {C(t0 + p) + C(t0 − p) + j[S(t0 + p) + S(t0 − p)]}, (5.20)

where t = [2(vM − v)/λ2]1/2 describes the viewing plane shift relative to the focusing plane, p = uM/(λ2t), t0 = α0t, and S(z), C(z) are the Fresnel integrals.
The resolution of a defocused microwave image is described by the function

Δ(t0) = ∫_{−∞}^{∞} |I+1(p, t0)|2/|I+1(0, t0)|2 dp (5.21)

shown in Fig. 5.5. Obviously, the best resolution Δ̂ = 1.2 is achieved at a certain optimal value of t0 = t̂0 = 1 and an optimal aperture size

α̂0 = [2(vM − v)/λ2]−1/2. (5.22)





Figure 5.5 The dependence of microwave image resolution on the normalised aperture angle of the hologram

At v = 0, when the viewing plane is superimposed on the focal plane of the lens, we can use Eq. (5.15) to get

α̂0 = [2µ(yM/λ1)1/2]−1 = (µ√τmax)−1, (5.23)

where τmax = 2Lmax/λ1 is the maximum longitudinal dimension of the object, expressed in half-wavelengths.
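Consistency of Eqs (5.22) and (5.23) can be verified directly. The values below (µ = 1, equal recording and reconstruction wavelengths, yM = 50λ1) are assumptions for the check, and the τmax form additionally assumes that the object spans ±yM, so that Lmax = 2yM.

```python
import math

mu = 1.0
lam1 = lam2 = 1.0            # equal wavelengths (assumption for the check)
yM = 50.0 * lam1             # assumed longitudinal coordinate of the point image

vM = 2 * mu**2 * (lam2 / lam1) * yM                 # Eq. (5.15)
a0_from_522 = (2 * vM / lam2) ** -0.5               # Eq. (5.22) at v = 0
a0_from_523 = 1 / (2 * mu * math.sqrt(yM / lam1))   # Eq. (5.23), first form

tau_max = 2 * (2 * yM) / lam1                       # tau_max = 2*Lmax/lam1, Lmax = 2*yM
a0_from_tau = 1 / (mu * math.sqrt(tau_max))         # Eq. (5.23), second form
```

All three expressions give the same optimal aperture angle, as the substitution of Eq. (5.15) into Eq. (5.22) requires.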
As the size of the object or the aperture increases, the influence of the high-order
terms of Eq. (5.13) becomes more pronounced resulting in distortions and a lower
resolution. These factors impose constraints on the synthesised aperture size.
The image reconstruction of microwave Fourier holograms has some specificity
associated with the way the artificial reference wave is created. If the reference
signal phase is not modulated, the phase of the coherent reference background along
the hologram is constant, a situation equivalent to the position of a point object
at the rotation centre. So during the reconstruction, the three images – that of the
reference source and the two conjugate images of the object – overlap. To separate
these images, one should introduce a space carrier frequency (SCF) by changing the
phase of the reference signal at a constant rate, as in the expression

dψ/dτ ≥ 4πΩrmax/λ1, (5.24)

where rmax is the radius vector modulus of the scattering centre located at the maximum distance from the object rotation centre.
The reference wave phase can be modulated by a phase shifter or by introducing translational motion along the viewing line, in addition to the rotational motion. In the latter case, the translational velocity v must satisfy the inequality v > Ωrmax.




5.2 Complex 1D microwave Fourier holograms

We have shown in Section 5.1 that a 1D quadrature microwave Fourier hologram H1(ϕ) can be described by Eq. (5.10). A conjugate quadrature Fourier hologram with a π/2 phase shift has the form:

H2(ϕ) ≅ Σ_{k=1}^{N} σk sin[(4π/λ1)rk sin θk cos(ϕk + ϕ)]. (5.25)

According to Eq. (2.23), the holograms H1(ϕ) and H2(ϕ) can form a complex Fourier hologram:

H(ϕ) = H1(ϕ) + jH2(ϕ) = Σ_{k=1}^{N} σk exp[ j(4π/λ1)rk sin θk cos(ϕk + ϕ)]. (5.26)

This expression can be re-written in a simpler form:

H(x) = u exp( jΦ), (5.27)

where u and Φ are the amplitude and phase (in the recording plane) of the total
field scattered by the object. The argument ϕ of the H function has been replaced
by the linear x-coordinate, since a 1D microwave hologram is recorded on a flat
transparency.
The image reconstruction by a plane wave in a paraxial approximation is reduced
to the Fourier transformation of the hologram function, assuming for simplicity that
the recording and the reconstruction are performed at the same wavelength:


V(ωx) = ∫_{−∞}^{∞} H(x) exp(−jωx x) dx, (5.28)

where ωx is the space frequency corresponding to the coordinate in the image plane.
The substitution into Eq. (5.28) of the expressions for the quadrature holograms in Eqs (5.10) and (5.25), re-written in the form of Eq. (5.27), gives

V1(ωx) = (1/2)[∫_{−∞}^{∞} u exp( jΦ) exp(−jωx x) dx + ∫_{−∞}^{∞} u exp(−jΦ) exp(−jωx x) dx], (5.29)

V2(ωx) = (1/2j)[∫_{−∞}^{∞} u exp( jΦ) exp(−jωx x) dx − ∫_{−∞}^{∞} u exp(−jΦ) exp(−jωx x) dx]. (5.30)

It is seen that each quadrature hologram gives two conjugate images described by the appropriate terms in Eqs (5.29) and (5.30).
In a complex hologram, the first quadrature component gives the two conjugate images of Eq. (5.29), while the second component reconstructs the images

jV2(ωx) = (1/2)[∫_{−∞}^{∞} u exp( jΦ) exp(−jωx x) dx − ∫_{−∞}^{∞} u exp(−jΦ) exp(−jωx x) dx]. (5.31)

The first terms in Eqs (5.29) and (5.31) are identical, while the second terms differ in phase by π. A combined reconstruction after summing up the fields in Eqs (5.29) and (5.31) yields one pair of conjugate images that enhance each other and another pair of images that annihilate each other; so we eventually have

V(ωx) = ∫_{−∞}^{∞} u exp( jΦ) exp(−jωx x) dx. (5.32)

The complex-valued function V (ωx ) describes the only image reconstructed from a
complex hologram [145]. The image intensity can be defined as

W (ωx ) = |V (ωx )|2 . (5.33)

To illustrate, consider the case when the object is a point and the parameters θ1 and ϕ1 are equal to π/2. For small values of ϕ (ϕ < 1 rad) and ϕ = Ωx/vt, where vt is the velocity of the recording transparency, Eq. (5.26) reduces to

H(x) ≅ u exp[ j(4π/λ1)r(Ω/vt)x]. (5.34)
Since the hologram is recorded in a finite time interval, τ ∈ [−T/2, T/2], Eq. (5.28) yields

V(ωx) = ∫_{−vtT/2}^{vtT/2} H(x) exp(−jωx x) dx. (5.35)




The substitution of Eq. (5.34) into Eq. (5.35) and the integration give

V(ωx) = 2σ sin{[(4π/λ1)r(Ω/vt) − ωx](vtT/2)}/[(4π/λ1)r(Ω/vt) − ωx]. (5.36)

Clearly, this function is of the sin z/z type and has a maximum at ωx = (4π/λ1)r(Ω/vt), which corresponds to the image of the point.
Digital reconstruction reduces to the calculation of the integral in Eq. (5.28) and
has no zeroth order. So a complex hologram can be formed without introducing
the carrier frequency, which decreases the amount of data to be processed: a single
quadrature hologram requires, at least, twice as many discrete counts because of the
high carrier frequency.
Optical reconstruction produces the zeroth order, in addition to a single image,
because of the presence of the reference level of Hr (Eq. (2.20)). During the process-
ing of a complex hologram recorded without the carrier frequency, the zeroth order
overlaps the image. Their spatial separation can be made by just introducing the car-
rier frequency. Then the use of a complex hologram has no sense, since one does not
have to remove the conjugate image. Besides, the optical reconstruction of a complex
hologram is hard to make due to the strict requirements on the adjustment of the two-
channel processing suggested in Reference 35. Thus, complex microwave holograms
should be recorded without introducing the carrier frequency and reconstructed only
digitally.
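A minimal digital sketch of this comparison (numpy; a single assumed scattering centre at r = 5λ1, ϕk = π/2, θk = π/2, unit amplitude): reconstructing by the Fourier transform of Eq. (5.28) from the complex hologram of Eq. (5.26) gives a single image, while the real (quadrature) part alone gives two conjugate images.

```python
import numpy as np

lam = 1.0                      # radar wavelength (arbitrary units)
N = 4096
phi = np.linspace(-0.1, 0.1, N, endpoint=False)   # rotation angles, rad

# Complex hologram of one centre, Eq. (5.26) with sigma = 1:
H = np.exp(1j * (4*np.pi/lam) * 5.0*lam * np.cos(np.pi/2 + phi))

def reconstruct(h):
    """Digital reconstruction, Eq. (5.28), via FFT: (cross-range axis, intensity)."""
    V = np.fft.fftshift(np.fft.fft(h))
    f = np.fft.fftshift(np.fft.fftfreq(N, d=phi[1] - phi[0]))  # cycles per radian
    return -f * lam / 2, np.abs(V)**2                          # x_k = r_k sin(phi_k)

x, W = reconstruct(H)          # complex hologram: one image near x = +5*lam
xq, Wq = reconstruct(H.real)   # quadrature hologram H1: images near x = +/-5*lam
```

No carrier frequency is needed in the complex case, in line with the remark above that complex holograms should be recorded without the SCF and reconstructed digitally.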

5.3 Simulation of microwave Fourier holograms

A comparison of various techniques applied in microwave Fourier holography can be made using a special algorithm for digital simulation of 1D quadrature and complex
hologram recording and reconstruction for simple objects. The algorithm consists
of two units, one of which records a hologram following Eq. (5.26) and the other
reconstructs the image, that is, calculates the integral of Eq. (5.28). The image recon-
struction from individual quadrature holograms is performed using an additional
procedure for the calculation of the Fourier integrals of the real functions H1 and H2
from the Fourier transform of the complex function H = H1 + jH2 .
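That additional procedure can exploit the Hermitian symmetry of the transforms of the real functions H1 and H2: both are recoverable from the single FFT of H = H1 + jH2. A minimal sketch (NumPy; the function name is ours):

```python
import numpy as np

def split_quadrature_spectra(H):
    """Given the complex hologram H = H1 + j*H2 (H1, H2 real), recover the
    Fourier transforms of H1 and H2 from one FFT of H, using the Hermitian
    symmetry of real-signal spectra: V(k) = V1(k) + j*V2(k)."""
    V = np.fft.fft(H)
    Vflip = np.conj(np.roll(V[::-1], 1))   # conj(V(-k)) with DFT index wrap-around
    V1 = 0.5 * (V + Vflip)                 # transform of the real part H1
    V2 = (V - Vflip) / 2j                  # transform of the real part H2
    return V1, V2
```

This is why a single complex FFT suffices even when the two quadrature holograms must be examined individually.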
Figure 5.6(a–c) illustrates some of the results of the digital simulation. The ordi-
nate shows the image intensity in relative units and the abscissa the image size. In
digital reconstruction, a microwave image is represented by a series of discrete samples
spaced at a distance λ1/(2ψS). The model object consisted of two scattering centres arranged
to form a dumb-bell structure of 10λ1 in length, which rotated at a constant angular
velocity round the centre of mass. The quantity θk (Fig. 5.3) was taken to be equal to
π/2. The image illustrated in Fig. 5.6(a) was reconstructed from a single quadrature
hologram. Peaks 1 and 2 correspond to one conjugate image of the two scattering
centres and peaks 3 and 4 to the other. The image separation was made using the SCF,
whose introduction was simulated by the radial displacement (with the velocity vl ) of
the object rotation centre relative to the receiver. One of the conjugate images vanished
during the processing of the complex hologram (Fig. 5.6(b)), so the carrier frequency
was not needed. This is clearly seen in Fig. 5.6(c), showing the image reconstructed
from a complex hologram recorded without the carrier frequency.

[Figure 5.6 shows six panels plotting image intensity W (relative units): panels
(a)–(c) against r/λ1 over the interval −20 to 20, with peaks 1–4 marked in panel (a);
panels (d)–(f) against r/λ1 for hologram angles ψs = π/120, π/6 and π/2.]

Figure 5.6 Microwave images reconstructed from Fourier holograms: (a) quadrature
hologram, (b) complex hologram with carrier frequency, (c) complex hologram
without carrier frequency and (d,e,f) the variation of the reconstructed image
with the hologram angle ψs (complex hologram without carrier frequency)
Figure 5.6(d–f) presents the variation of the reconstructed image with the holo-
gram angle. The comparison of these results supports the above conclusion that
there is an optimal size of the synthesised aperture. As the angle ψS becomes larger,
the resolution increases to a certain limit, beyond which distortions arise in the
image structure. The resolving power of this technique estimated from the results
of the digital simulation is ∼λ1 .
Currently, there are two methods used in microwave Fourier holography. One is
based on the recording of a single quadrature phase-amplitude hologram of the type
described by Eq. (5.10) with the carrier frequency and optical image reconstruction.
The other method records a complex hologram of the type described by Eq. (5.26)
without introducing the carrier frequency but using a digital image reconstruction.
The application of the first method involves some problems associated with the
use of an anechoic chamber (AEC), because the linear displacement of the object
needed to introduce the carrier frequency decompensates the chamber. So we
recommend the second technique when one uses an anechoic chamber. We shall discuss
some of the results obtained by the second method.

[Figure 5.7 is a flowchart: Input H1 and Input H2 → Normalisation of H1, H2 →
Selection of synthesis interval → Interpolation → Fast Fourier transform →
Output Re(V) and Im(V) → Computation W = |V|² → Output W]

Figure 5.7 The algorithm of digital processing of 1D microwave complex Fourier
holograms
Figure 5.7 illustrates the algorithm of digital image reconstruction, which operates
as follows. The setting of discrete data is followed by their normalisation, that is, the
data are reduced to the variation range [−1, 1]. The hologram is usually recorded over
a full 2π rad rotation; for the subsequent processing one therefore selects a series of
samples such that their number matches the optimal aperture and their position in
the array corresponds to the required aspect.
to reduce the number of signal records to 2^m, where m is a natural number. The image
reconstruction is performed by a Fourier transform unit using the FFT algorithm for
the complex-valued function H(x). Arrays of Re(V) and Im(V) numbers that define
the image, whose intensity is found as W = Re²(V) + Im²(V), are produced at the
unit output.
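The pipeline of Fig. 5.7 can be sketched roughly as follows (NumPy; the function name, the linear interpolation and the [0, 1] parametrisation of the synthesis interval are our illustrative choices, not the authors' implementation):

```python
import numpy as np

def reconstruct_1d(H1, H2, start, count):
    """Sketch of the Fig. 5.7 pipeline: normalise each quadrature record to
    [-1, 1], select the synthesis interval, resample to a power-of-two length,
    FFT the complex hologram and output the intensity W = |V|^2."""
    def normalise(h):
        h = np.asarray(h, dtype=float)
        return h / np.max(np.abs(h))           # reduce to the range [-1, 1]
    H = normalise(H1)[start:start + count] + 1j * normalise(H2)[start:start + count]
    m = int(np.ceil(np.log2(len(H))))          # interpolate to 2**m samples for the FFT
    t_old = np.linspace(0.0, 1.0, len(H))
    t_new = np.linspace(0.0, 1.0, 2 ** m)
    H = np.interp(t_new, t_old, H.real) + 1j * np.interp(t_new, t_old, H.imag)
    V = np.fft.fftshift(np.fft.fft(H))         # image reconstruction by FFT
    return np.abs(V) ** 2                      # image intensity W = Re^2(V) + Im^2(V)
```

For a record of 100 samples the interpolation unit pads the data to 128 points before the FFT.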
Figure 5.8 presents the results of digital processing of 1D complex Fourier holo-
grams recorded experimentally with an anechoic camera. The image intensity is
plotted in relative units along the y-axis and its linear dimension along the x-axis.
The object is a metallic sphere of radius 0.3λ1 , rotating along a circumference of
radius 3λ1 . The positions of the point image in Fig. 5.8(a–c) are different and vary
with the object aspect ψ0 as shown schematically in each figure.


[Figure 5.8 shows three panels plotting image intensity W (relative units) against
r (cm) over the interval −12.8 to 12.8 cm, for (a) ψ0 = π/12, (b) ψ0 = 5π/2 and
(c) ψ0 = 3π/4, all with ψs = π/6.]

Figure 5.8 A microwave image of a point object, reconstructed digitally from
a complex Fourier hologram as a function of the object's aspect
ψ0 (ψs = π/6): (a) ψ0 = π/12, (b) ψ0 = 5π/2 and (c) ψ0 = 3π/4

The methods we have discussed have some advantages and limitations. The
recording of single quadrature holograms is made in one channel but requires that
the carrier frequency be introduced in one way or another. The recording of
complex holograms does not require the carrier frequency but it is more complicated
because the channels must have a strict quadrature character, their parameters must
be identical, and the measurements must be well synchronised. However, the record-
ing errors associated with these characteristics of a two-channel system can be easily
eliminated by the processing. (We have mentioned above that complex microwave
Fourier holograms should be processed only digitally.) The image reconstruction
from quadrature holograms can be made both digitally and optically. The possibil-
ity of recording a hologram in a form suitable for digital processing increases the
dynamic range of the system. It does not then need the use of sophisticated units,


such as high-resolution cathode-ray tubes or high-precision focusing and deflecting


devices. In optical processing, the aperture size is normally limited by the characteris-
tics of the reconstruction unit, so it cannot be made optimal. On the other hand, optical
processing allows re-focusing of the observation plane without difficulty, providing
a 2D image (in longitudinal and transversal directions).
The investigation and analysis of methods for microwave Fourier holography have
shown that they can be successfully used for imaging objects which can be repre-
sented as an array of scattering centres. These methods are of interest to those studying
diffraction with anechoic cameras (Chapter 9), in particular, for the experimental
verification of the applicability of the physical theory of diffraction developed by
P. Ya. Ufimtzev [137] and of the geometrical theory of diffraction by J. B. Keller [70].
These methods can also be useful in designing radar systems with an inversely
synthesised aperture (Chapter 9).



Chapter 6
Radar systems for rotating target imaging
(a tomographic approach)

6.1 Processing in frequency and space domains

Section 2.4.2 discussed the tomographic approach to target imaging in


two-dimensional (2D) viewing geometry. We suggested an algorithm for processing
in the frequency domain, which finds the reflectivity function ĝ(x, y) from Eq. (2.48).
The first procedure to be performed is to reconstruct an image in the frequency
domain by calculating the N number of discrete Fourier transform (DFT) records of
the echo complex envelope
    Pθ(l, m) = Σ_{n=0}^{N−1} sv(nΔt, mδθ) exp(−j2πln/N)    (6.1)
for each of the M target angular positions mδθ, m = 0, . . . , M − 1. The pixels found
in this way are located at the nodes of a polar grid formed by the intersections of
concentric circles, separated by the frequency step 1/(NΔt), with radial beams rotated
by the angle δθ from one another.
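With the sign convention of Eq. (6.1), this first procedure is just a row-wise DFT of the echo records; a minimal sketch (NumPy; the function name and array layout are our choices):

```python
import numpy as np

def polar_spectrum(echoes):
    """First procedure (Eq. (6.1)): an N-point DFT of the echo complex envelope
    for each of the M target angular positions. `echoes` is an (M, N) array,
    one row per aspect m*dtheta; P[m, l] then sits on a polar frequency grid
    (concentric circle l, radial beam m)."""
    return np.fft.fft(echoes, axis=1)  # np.fft uses the same exp(-j2*pi*ln/N) kernel
```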
Since an inverse DFT can be made only on a rectangular grid, the second procedure
should include the finding of pixels at the equidistant nodes of a rectangular grid, using
the Pθ (l, m) values obtained by the first procedure. This is followed by a 2D inverse
DFT computation of the target reflectivity ĝ(xi , yj ) at the rectangular grid nodes.
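A crude sketch of this second procedure, resampling the polar-grid spectrum onto a rectangular frequency grid by nearest-neighbour lookup before the 2D inverse DFT (NumPy; a real implementation would interpolate more carefully and handle angle wrap-around — all names here are ours):

```python
import numpy as np

def polar_to_image(P, freqs, thetas, grid):
    """Resample polar spectrum samples P[m, l] (radius freqs[l], angle
    thetas[m], both ndarrays) onto a rectangular frequency grid by
    nearest-neighbour lookup, then apply a 2D inverse DFT. `grid` is a 1D
    array of Cartesian frequency coordinates used for both fx and fy."""
    fx, fy = np.meshgrid(grid, grid)
    rho = np.hypot(fx, fy)                       # radius of each rectangular node
    phi = np.mod(np.arctan2(fy, fx), 2 * np.pi)  # angle of each rectangular node
    li = np.clip(np.searchsorted(freqs, rho), 0, len(freqs) - 1)
    mi = np.argmin(np.abs(thetas[:, None, None] - phi[None]), axis=0)
    S_rect = P[mi, li]                           # pixels at the rectangular nodes
    S_rect[rho > freqs[-1]] = 0.0                # outside the measured annulus
    return np.fft.ifft2(S_rect)                  # reflectivity estimate g_hat
```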
This algorithm has two important features that deserve attention. First, since the
complex envelope of an echo signal is finite, there are distortions near the ±1/(2Δt)
boundaries of the major period of the Pθ (l, m) spectrum. The distortions arise from the
superposition of high-frequency components of the adjacent spectral periods. Besides,
the high-frequency spectrum may contain noise that dominates over the signal data.
To reduce the noise, one has to resort to weighting by multiplying the Pθ (l, m) DFT
data by a ‘window’ function. The choice of such a function should be based on the
consideration of how strongly the noise degrades the radar data and what kind of target is
being probed [57].
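A simple illustration of such weighting, here with a Hamming window as one common (but by no means prescribed) choice; the function name is ours:

```python
import numpy as np

def apply_window(P, window=np.hamming):
    """Weight each DFT record P[m, l] along the frequency axis with a 'window'
    function so that the noisy high-frequency edges of the major spectral
    period are suppressed. The window peak is shifted to the DC bin so its
    taper falls on the highest-frequency bins of an unshifted DFT."""
    n = P.shape[1]
    w = np.fft.ifftshift(window(n))
    return P * w[np.newaxis, :]
```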


Second, since the radar is a coherent system, it seems important to define the
discretisation step δθ of the θ angle as the target aspect changes. The criterion for
choosing a δθ value can be formulated as follows: the phase shift of the echo signal
from the point scatterer most remote from the target centre of mass should not be
larger than π when the target aspect changes by δθ . This criterion is written as

    δθ ≤ λc/(4|r̄o|max).    (6.2)

This expression is valid for relatively narrowband signals, whose spectral width is
much less than the carrier frequency. Otherwise, one should substitute λc in Eq. (6.2)
by the wavelength of the highest frequency component in the signal spectrum.
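The criterion of Eq. (6.2) translates directly into code (the function name is ours):

```python
def max_aspect_step(wavelength, r_max):
    """Criterion (6.2): the largest usable aspect increment (radians) keeping
    the phase shift of the most remote scatterer below pi per step. For
    wideband signals pass the wavelength of the highest spectral component
    instead of the carrier wavelength lambda_c."""
    return wavelength / (4.0 * r_max)
```

For example, a 1 cm carrier and a 5 m target radius permit aspect steps of only half a milliradian.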
It is worth noting that the method of synthesising the so-called unfocused aperture
is a particular case of the above processing algorithm for the frequency domain. The
movement of a point scatterer along an arc is approximated by the movement along a
tangent to it. By substituting v = y cos θ − x sin θ into Eq. (2.40) and using sin θ ≈ θ
and cos θ ≈ 1 − θ²/2, we get

    S(f) = H(f) ∫∫_{−∞}^{∞} g(x, y) exp[j(kc + k)θ²y] exp[−j2(kc + k)y + j2(kc + k)θx] dx dy.

If we eliminate the squared phase term, it will be clear that the ĝ(x, y) function can be
reconstructed by an inverse Fourier transform (IFT) over the rectangular raster which
has replaced the respective region of the polar raster. This approximation works well
only if the aspect variation during the data acquisition was small.
Let us discuss now the processing algorithm for the space domain, or the convo-
lution algorithm. For this, Eq. (2.48) will be transformed from the Cartesian to polar
coordinates:
    ĝ(x, y) = ∫_0^π dθ ∫_{−∞}^{∞} Sθ(fp)|fp| exp[j2πfp r cos(θ − ϕ)] dfp.    (6.3)

The inner integral in Eq. (6.3) represents the IFT of the product of |fp| and the function
defined by expression (2.43). The result is the convolution of the quantity F⁻¹{Sθ(fp)}
with the so-called kernel function q(v) = F −1 {|fp |}. If one uses the window function
F( fp ) to reduce the effect of high-frequency spectral noise, one gets

    q(v) = F⁻¹{|fp|F(fp)}.    (6.4)

The result of the integration with respect to the variable fp in Eq. (6.3) using Eq. (6.4)
is known as a convolutional projection. It can be used for making a back projection
procedure:

    ĝ(x, y) = ∫_0^π ξθ[r cos(θ − ϕ)] dθ.    (6.5)


This procedure implies the integration of the contribution of each convolutional
projection ξθ(·) to the resulting image. The substitution of the integral in Eq. (6.5) by
a Riemann sum gives

    ĝ(xi, yj) = Σ_{m=0}^{M−1} ξθ[r(xi, yj, mδθ)] δθ,    (6.6)

where

    r(xi, yj, mδθ) = √(xi² + yj²) cos[mδθ − arctan(xi/yj)].    (6.7)
The latter expression is used to find (by interpolation) the contribution of the convo-
lutional projection obtained at the mth target aspect to each of the (xi , yj ) pixels of
the rectangular image grid.
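The convolution (filtered back-projection) algorithm of Eqs (6.4)–(6.7) can be sketched as follows (NumPy; the coordinate bookkeeping along the range profile is simplified, no window F(fp) is applied, and all names are ours):

```python
import numpy as np

def backproject(profiles, thetas, xs, ys):
    """Sketch of the space-domain (convolution) algorithm: ramp-filter each
    complex range profile in the frequency domain to form a convolutional
    projection (Eq. (6.4)), then accumulate the projections over the (x, y)
    image grid by linear interpolation, as a Riemann sum over aspect
    (Eqs (6.6)-(6.7)). `profiles` has shape (M, n): one profile per aspect."""
    M, n = profiles.shape
    ramp = np.abs(np.fft.fftfreq(n))            # |fp| kernel, Eq. (6.4) without a window
    v = np.arange(n) - n // 2                   # spatial coordinate along a profile
    X, Y = np.meshgrid(xs, ys)
    r = np.hypot(X, Y)
    phi = np.arctan2(X, Y)                      # matches the arctan(x/y) of Eq. (6.7)
    dtheta = thetas[1] - thetas[0]
    img = np.zeros_like(r, dtype=complex)
    for m, theta in enumerate(thetas):
        xi = np.fft.fftshift(np.fft.ifft(np.fft.fft(profiles[m]) * ramp))
        u = r * np.cos(theta - phi)             # where each pixel falls on the profile
        re = np.interp(u.ravel(), v, xi.real).reshape(u.shape)
        im = np.interp(u.ravel(), v, xi.imag).reshape(u.shape)
        img += (re + 1j * im) * dtheta          # back projection, Eq. (6.6)
    return img
```

Because each projection is filtered and accumulated independently, the loop body can start as soon as the corresponding echo arrives.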
An important advantage of the convolution algorithm is the possibility of pro-
cessing data as they arrive, because the contribution of every projection to
the final image is computed individually.
If the transmitter signal contains a finite number L of discrete frequencies, Eq. (6.3)
will take the form:
    ĝ(x, y) = Σ_{l=1}^{L} (4πfpl/c) ∫_0^π Sθ(fpl) exp[j2πfpl r cos(θ − ϕ)] dθ    (6.8)
and the processing algorithm reduces to summing up 1D integrals with respect to
the variable θ . We can make computations with formula (6.8) in two ways. One is to
calculate the integral for every value of (xi , yj ) and the other is to solve the subintegral
expression for the M number of aspects for every frequency value, followed by
interpolation, as in the common convolution algorithm.
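A direct, unoptimised sketch of the first way of computing Eq. (6.8), evaluating the Riemann sum over aspect for every image pixel (NumPy; `C` is the speed of light and all names are ours):

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def discrete_freq_image(S, fpl, thetas, xs, ys):
    """Eq. (6.8): with L discrete transmitter frequencies no kernel convolution
    is needed; the image is a frequency-weighted sum of 1D integrals over the
    aspect angle theta. S has shape (L, M): one spectral sample per frequency
    fpl[l] and aspect thetas[m]."""
    X, Y = np.meshgrid(xs, ys)
    r = np.hypot(X, Y)
    phi = np.arctan2(X, Y)
    dtheta = thetas[1] - thetas[0]
    img = np.zeros_like(r, dtype=complex)
    for l, f in enumerate(fpl):
        for m, theta in enumerate(thetas):     # Riemann sum over the aspect angle
            img += (4 * np.pi * f / C) * S[l, m] * np.exp(
                2j * np.pi * f * r * np.cos(theta - phi)) * dtheta
    return img
```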
Thus, radar imaging of extended compact targets by inverse aperture synthesis
can be made by using a number of algorithms well known in computerised tomog-
raphy. The application of the convolution algorithm of the back projection method
allows a reduction in the imaging time, as compared with the time of reconstruction
in the frequency domain, due to the processing of individual echo signals. The inter-
polation can be omitted in the case of discrete-frequency transmitter signals, giving
an additional reduction in the processing time.
Another important feature of an imaging radar is its coherence, so it provides more
information than conventional systems using computerised tomography. On the other
hand, coherence must be maintained in all of the radar units during the operation. This
circumstance also imposes restrictions on the minimum repetition rate of transmitter
pulses.

6.2 Processing in 3D viewing geometry: 2D and 3D imaging

It has been shown in Chapter 5 that inverse aperture synthesis is the most promising
technique for imaging extended proper and extended compact targets with a high


angular resolution. The fact that such targets can be imaged during their arbitrary
motion makes it possible to use this technique in available radar systems (Chapter 9).
The conditions for microwave hologram recording are primarily determined by the
application of the images to be obtained. For example, if radar responses are studied
in an anechoic chamber (AEC) (Chapter 9), it is sufficient to use a 2D geometry
with an equidistant arrangement of the aspect angles. The target rotates uniformly
around the axis normal to the line of sight. By deviating the rotation axis from this
normal after every measurement run, one can, in principle, obtain 2D images even
with monochromatic radar pulses.

6.2.1 The conditions for hologram recording


There are a number of applied tasks when the target aspect variation must reflect
natural viewing conditions. Let us consider the aspect variation relative to the line
of sight of a ground radar viewing a hypothetical satellite moving at an altitude
H = 400 km along a circular orbit with the inclination i = 97◦ (Fig. 6.1). The target
is assumed to be perfectly stabilised in the orbital coordinates, and its aspect in the
orbital plane is defined by the angle α between the longitudinal construction line and
the projection of the line of sight onto the orbital plane. The angle β between the line
of sight and the orbital plane describes the aspect variation in the plane normal to
orbital plane. The analysis of the plots presented shows that the aspect variation of
this class of targets during hologram recording in real viewing conditions should be
characterised by (1) a 3D viewing geometry and (2) a non-equidistant arrangement
of samples within the view zone.
To derive analytical relations for the description of a microwave hologram for 3D
viewing geometry, we shall consider the following conditions for viewing an orbiting
satellite. The target is scanned by a ground coherent radar transmitting a probing
signal with the carrier frequency fo and the modulation function w(t) from Eq. (2.30).
The radar measures the amplitude and phase of the echo signal (for a narrowband
signal ẇ(t) = A, where A is the complex envelope amplitude).
The target is large relative to the wavelength λ of the radar carrier oscillation,
such that the target can be represented as an ensemble of individual and independent
scatterers. Every scatterer is rigidly bound to the target’s centre of mass or moves
across its surface as its aspect changes with respect to the radar. The position of the
nth scatterer at any moment of time is defined by the radius vector rno with the origin
at point O rigidly bound to the target’s centre of mass.
The positions of the arbitrary nth scatterer and the rotation centre of the satellite
will be described by the radius vectors rno and R  o , respectively (Fig. 2.8). In the
general case of 3D viewing geometry, an echo signal is defined, within the accuracy
of a constant factor, as

Sv (t) = g(rno ){w(t − 2|R  o |/c − 2r̂n /c) exp(−j2π f0 2|R  o |/c)}
V

× exp(−j2πf0 2r̂n /c) drno . (6.9)


[Figure 6.1 shows two plots of aspect angle (degrees) versus observation time
(0–500 s), with curves for culmination altitudes of 31°, 66° and 88°: (a) aspect
angle α, spanning roughly 200°–350°, and (b) aspect angle β, spanning roughly
−10° to 60°.]

Figure 6.1 The aspect variation relative to the line of sight of a ground radar as a
function of the viewing time for a satellite at the culmination altitudes
of 31°, 66° and 88°: (a) aspect α and (b) aspect β

It follows from Eq. (2.34) that signal noise due to the presence of coordinate infor-
mation can be corrected by the receiver. The correction consists in selecting the time
strobe position in accordance with the delay 2|R  o |/c and in introducing the phase
factor exp[j2πf0(2|R̄o|/c)] in the reference signal during the coherent sensing.


As a result of the compensation for the radial displacement of the satellite, the
family of spectra of video pulses must be represented as a microwave hologram. For
this, we go from time frequencies to space frequencies to get


    S(fpo + fp) = F{Sv(ct/2)} = H(fp) ∫_V g(rno) exp[−j2π(fpo + fp)r̂no(t)] drno,    (6.10)
where F{·} is the Fourier transform operator, W ( fp ) = F{w(v)} is the space frequency
spectrum of the transmitter pulse, fpo = 2fo /c is the space frequency corresponding to
the spectral carrier frequency, 2fl /c < fp < 2fu /c is the space frequency determined
over the whole frequency bandwidth of the transmitter pulse, H ( fp ) = W ( fp )K( fp )
is the aperture function, and K( fp ) is the transfer function of the filter for the range
processing of video pulses.
The above analytical description of video pulse spectra in terms of space fre-
quencies has not changed the r̂no (t) function, which is still considered to be a time
function at the synthesis step. Each moment t of the synthesis step is now associated
with a pair of angular coordinates θ, B in the 3D frequency space (Fig. 6.2(b)).
The microwave hologram function can be presented as a 3D Fourier transform
in the spherical coordinates fp , θ , B:

    S(f̄p) = H(fp) ∫_V g(rno) exp(−j2π f̄p · rno) drno,    (6.11)

where f̄p = (fpo + fp)e(θ, B) is the radius vector of the space frequency in the frequency
domain.
The geometrical relations for the recording of such a hologram will be derived for
two typical cases of ground radar viewing of orbiting satellites. Fig. 6.2(a) shows the
viewing geometry and Fig. 6.2(b) illustrates fragments of the holograms obtained.
The angular position of the radar line of sight (RLOS) is described by the azimuthal
angle θ = α − 3π/2 and the polar angle β with respect to the whole body-related
coordinate system xyz. The line of sight is represented in space as a line across a
unit sphere with the centre at the coordinate origin. The arrangement of the hologram
pixels in the frequency domain is defined relative to the fx fy fz coordinates by the
angles θ, B and the radial fp coordinate (Fig. 6.2(b)). The hologram recording should
meet the conditions θ = θ ∗ and B = β ∗ , where θ ∗ and β ∗ are the estimates of the θ
and β angles.
In the first of the above cases, a narrowband radar tracks a satellite, stabilised
by the body-related coordinates along the three axes, during its translational motion
along the orbit. The line of sight turns relative to the satellite to describe a curve
on the unit sphere (the left side of Fig. 6.2(a)), which represents an arc in the xy plane
if the radar is located in the orbital plane, or a 3D curve in all other cases. If the radar
transmits a continuous wave, the hologram reproduces the shape of this line on the
sphere fpo in the frequency domain (Fig. 6.2(b)).
If a radar transmits a pulsed signal with the repetition rate Fr or if a continuous echo
signal is appropriately discretised, a hologram will represent a series of individual


[Figure 6.2 sketches (a) the viewing geometry, with the line of sight defined by the
angles θ and β (between β1 and β2) in the body-related coordinates, and (b) the
recording geometry in the fx fy fz frequency space: hologram samples lie on a spiral
band on the spherical surface of radius fpo, bounded by the angular extents Δθ and
ΔB and the radial extent Δfp, with slice spacing δfB.]

Figure 6.2 Geometrical relations for 3D microwave hologram recording: (a) data
acquisition geometry; a–b, trajectory projection onto a unit surface
relative to the radar motion and (b) hologram recording geometry

samples separated by δfψ = fpo θ̇* cos β*/Fr, where θ̇* = dθ*(t)/dt is the angular
velocity of the satellite rotation in the orbital plane.
In the second case, one gets a wideband hologram of a satellite stabilised by
rotation of the body-related coordinates around the z-axis (the right side of Fig. 6.2(b)).
During the tracking, the angle between the line of sight and the rotation axis changes
slowly by the value β = β2 − β1 with β̇  θ̇ . The interception of the unit sphere
surface by the line of sight forms a spiral confined between two conic surfaces with the
half angles π/2 − β1 and π/2 − β2 at the vertex. The resulting hologram represents a
multiplicity of real beams that form a spiral band (Fig. 6.2(b)). The band is transversely
bounded by two spherical surfaces and is ‘fitted’ between two conical surfaces, with
B1 = β1 and B2 = β2 . The radii of the spheres are equal to the lower fpl and upper fpu
space frequencies of the hologram. Figure 6.2(b) shows a fragment of such a hologram
bounded by the azimuthal step Δθ, while the satellite makes the θ̇Δt/2π number of


rotations during the synthesis time step Δt. The adjacent hologram slices synthesised
during consecutive rotations are spaced by the frequency step δfu = 2πfpo β̇*/θ̇*.
Under the condition
    δfu⁻¹ ≥ D,
where D is the maximum linear size of a satellite, the resolution can be achieved
by the synthesis in the plane intercepting the z-axis. The resulting 3D wideband
hologram containing, at least, several slices will be referred to as a surface hologram.
A surface hologram is usually synthesised by a wideband radar, when tracking a
satellite stabilised along the three axes, or when dealing with a model target in an
AEC. In the latter case, a hologram lies entirely in the fx –fy plane.
Every beam of a wideband microwave hologram corresponds to a single echo
signal and is made up of a certain number of discrete pixels, L, since digital hologram
processing implies discretisation of the echo pulse spectrum.
It is clear from the foregoing that the conditions for recording a hologram of a target
performing a complex movement relative to an imaging radar are the compensation
for its radial displacement and the recording of the video signal spectrum in a form
adequate for the respective aspect variation, that is, in a spherical or polar geometry.

6.2.2 Preprocessing of radar data


The preliminary processing of radar data integrated in the form of a microwave
hologram to be further used for image reconstruction can be described in terms of a
linear filtering model as a processing by an inverse filter in a limited frequency band.
The transfer function of the filter is
Hf ( fp ) = H −1 ( fp )Ho ( fp )Hr ( fp ), (6.12)

where Ho(fp) is a non-zero aperture function within the chosen boundaries fph of the
hologram (Fig. 6.2(b)):

    Ho(fp) = rect(fp/fph) = { 1, fp ∈ Vf,
                              0, fp ∉ Vf;    (6.13)
and Hr ( fp ) = exp[j2π( fpo + fp )|ra |] is the transfer function of the compensation step
of the target radial displacement.
The process of image reconstruction from a hologram described by Eq. (6.11) can
be represented as

    ĝ(rno) = F⁻¹{S(f̄p)Hf(f̄p)} = ∫_{Vf} S(f̄p)Hf(f̄p) exp(j2π f̄p · rno) df̄p = g(rno) ∗ ho(rno),    (6.14)

where ho (rno ) = F −1 {Ho ( fp )} is a perfect impulse response which only describes the
image noise due to the finite diffraction limit, or to the limited size of the aperture
function Ho ( fp ).
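Equations (6.12)–(6.14) amount to inverse filtering restricted to the hologram support; a minimal sketch (NumPy; the function name is ours, and the radial-displacement factor Hr is assumed to have been applied already):

```python
import numpy as np

def inverse_filter_image(S, H, support):
    """Eqs (6.12)-(6.14): divide the hologram S(fp) by the aperture function
    H(fp) inside the chosen support Ho (a boolean mask, Eq. (6.13)), zero it
    outside, and invert by an n-dimensional inverse FFT. The result is the
    reflectivity convolved with the diffraction-limited response ho."""
    filtered = np.zeros_like(S)
    filtered[support] = S[support] / H[support]
    return np.fft.ifftn(filtered)
```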


Thus, the processing of an echo signal during the imaging includes two stages
(Fig. 6.3). The signal preprocessing is aimed at synthesising a Fourier hologram,
whose size and shape are determined by the transmitter pulse parameters and the
target aspect variation. The structure and composition of processing operations 1–5
are conventional radar operations and can be varied with the type of transmitter
signal, the processing techniques used, and the tracking conditions.

[Figure 6.3 is a flowchart of the two processing steps. Step 1, preprocessing of
the echo signal: (1) coherent detection; (2) range processing; (3) analogue-to-digital
conversion; (4) DFT; (5) compensation of phase distortions due to the turbulent
troposphere; (6) compensation of the target radial displacement, using the range
estimate; (7) spherical (polar) recording, using the aspect estimate; the output is the
microwave hologram. Step 2, image reconstruction: (8) subdivision into partial
holograms; (9) partial image reconstruction by inverse DFT; (10) computation of
partial image contributions to the total image; the output is the radar image.]

Figure 6.3 The sequence of operations in radar data processing during imaging

For example,
a monochromatic pulse does not require operations 2 and 4. When a signal with a
LFM is subjected to correlated processing, operations 1 and 2 coincide, and oper-
ation 4 becomes unnecessary. The compensation for the radial displacement of a
satellite during hologram recording in field conditions is a fairly complex problem
[8,10]. In an AEC, the latter operation reduces to the introduction of the phase factor
exp[j2πfpl(2R0/c)], where fpl is the space frequency of the first spectral component
of the hologram and R0 is the distance between the antenna phase centre and the target
rotation centre [8,10]. Obviously, the phase factor is constant for a particular AEC.
A necessary operation specific to ISAR systems at the preprocessing stage is the
recording of the target aspect variation. It is assumed that each pixel on the hologram
is associated by a digital recorder with the family of coordinates defining its position
in the frequency domain fx fy fz (in the frequency plane fx –fy ) (see Fig. 6.2).
It is worth discussing a possible application of available processing algorithms
for image reconstruction from a microwave hologram.
The experience gained from the application of inverse aperture synthesis for imag-
ing aircraft and spacecraft as well as from the study of local radar characteristics has
stimulated the development of algorithms for processing echo signals by coherent
radars. A fairly detailed analysis of the algorithms can be found in Reference 8 and
in Chapter 2 of this book, so we shall discuss only the possibility of applying them
to the aspect variation of real targets.
It has been shown in Section 2.3.2 and in the References 9 and 10 that the condi-
tions for tracking real targets differ from the conditions in which available algorithms
operate. First, discrete aspect pixels are not equidistant because of a constant rep-
etition rate of the transmitter pulses. Second, the angle between the RLOS and the
target rotation axis changes during the viewing. The latter inevitably makes the
problem three-dimensional. Attempts at applying the 2D
algorithms discussed above to the processing of 3D data lead to essential errors in the
images [8]. The level of errors rises with increasing relative size of a target (the ratio
of the maximum target size to the carrier radiation wavelength) and with increasing
deviation from 90◦ of the angle formed by the line of sight and the target rotation axis.
To conclude, radar imaging should consider the viewing geometry, which requires
the use of a radically new approach to data processing. The approach should provide
3D microwave holograms and be able to overcome a non-equidistant arrangement of
echo pixels representing the aspect variation of space targets.

6.3 Hologram processing by coherent summation of


partial components

It has been shown earlier that image reconstruction from a microwave hologram
should generally include a 3D IFT of the hologram function. The obtained estimate
of ĝ(rno ) is a distorted representation of the target reflectivity function.
If there is no processing noise and the radial displacement has been perfectly
compensated, an error may be due to a limited bandwidth of the transmitter pulse


or a limited aspect variation. The resolving power of image-synthesising devices is


then restricted only by the diffraction limit, and the image produced is known as a
diffraction-limited image. Recording and processing noise additionally deteriorate
image quality. So when designing algorithms and techniques for image processing,
one should bear the following things in mind: (1) the dimensionality of an image is
not to be higher than that of a microwave hologram and (2) the image resolution in any
direction is to be inversely proportional to the hologram length. Hence, processing
of 3D holograms can yield 1D, 2D and 3D images. An advantage of a 3D image
is that it fully represents the information recorded on the hologram, but it is to be
computer-processed and analysed. For visualisation, an image must be displayed on
2D media, such as paper or photosensitive films, or on computer screens. Moreover,
the ‘third’ dimension of a hologram is sometimes insufficient to get a good resolution.
Nonetheless, neglecting the 3D structure of a hologram leads to serious image
errors during its processing. Therefore, the problem of producing undistorted 2D
images from 3D holograms seems quite important. We can suggest two ways of
solving this problem.
One way is to obtain a 3D image and then intercept it with a plane of pre-
scribed orientation. However, the computations with cumbersome 3D coordinates
and data arrays of lower dimensionality require special processing algorithms and
large computation resources.
A simpler and more cost-effective approach is to compute directly the contributions
of single 3D hologram components to a 2D image, if their dimensionality is not higher
than that of the image. The computations become less complex and all highlighted
components of a hologram can be processed simultaneously, provided that the number
of processors is sufficient. The applicability of this technique can be easily extended
to 2D holograms.
This method of image reconstruction can be termed coherent summation of partial
components of a hologram. It includes the following procedures:

Stage 1. A microwave hologram is subdivided into regions of limited size called


partial holograms (PH). Since discrete pixels making up the hologram are formed
by the intersections of radial lines (corresponding to single echo signals) and
confocal spherical surfaces (corresponding to discrete values of the space frequency),
PHs can be separated from the initial hologram in different ways.
The PH dimensionality is chosen from the initial hologram geometry and
from considerations of processing convenience. In the case of a 2D hologram, the
PHs may be one- or two-dimensional, while for a 3D hologram they may be, in
addition, three-dimensional. Figures 6.4 and 6.5 depict 1D PHs with lines having
points at their ends, which represent the initial and final pixels. The points on the
surfaces of 2D and 3D PHs correspond to single pixels.
One-dimensional PHs are composed either of pixels coinciding with the radial
rays which correspond to single pulses (radial PHs) or of pixels located on the
confocal spherical surfaces with fpo = const. (transverse PHs). Radial 2D PHs are
made up of ensembles of 1D radial PHs and represent regions of planar conic
(Fig. 6.5(b)) or more complex curved (Fig. 6.4(b)) surfaces. Transverse 2D PHs



128 Radar imaging and holography

[Figure 6.4 image: the angular steps are marked as ∆ψ ≈ ∆θ cos B (radial) and ∆ψ ≈ ∆B (transversal); radial and transversal variants are labelled in panels (a) and (b).]

Figure 6.4 Subdivision of a 3D microwave hologram into partial holograms:


(a) 1D partial (radial and transversal), (b) 2D partial (radial and
transversal) and (c) 3D partial holograms

can be separated only from volume holograms. They are regions of spherical
surfaces with fpo = const.
If the angular discretisation of a hologram is uniform, the maximum angles
of 1D transverse, 2D and 3D PHs are chosen from the following considerations.
When a spherical coordinate grid (or a polar grid for plane holograms) is replaced
by a rectangular grid, the phase noise at the PH edges should not exceed π/2.
This criterion leads to the following restrictions:

∆ψ ≤ (λ/D)^1/2, (6.15)
∆ψ ≤ c/(D ∆f). (6.16)

If the intersample spacing on a hologram varies slowly because of a non-uniform


rotation of a target, the choice of the PH angle should meet the condition:

δν′ ≤ f arccos[1 − λ/(4rn cos β)], (6.17)

where δν′ is the difference between the maximum (or minimum) discretisation step and its average value. Condition (6.17) is based on the limited phase noise due to the non-equidistant arrangement of the hologram samples.
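The choice of the PH angle from the bounds of Eqs (6.15) and (6.16) is easy to check numerically. A sketch (in Python, with variable names of our own choosing) that picks the stricter of the two limits, as the text prescribes:

```python
import math

def max_ph_angle(wavelength, target_size, bandwidth=0.0, c=3.0e8):
    """Maximum partial-hologram angle (rad) keeping the phase noise at the
    PH edges below pi/2 when the spherical grid is replaced by a
    rectangular one.

    Narrowband bound, Eq. (6.15): dpsi <= sqrt(lambda / D).
    Wideband bound,  Eq. (6.16): dpsi <= c / (D * dF).
    The stricter (smaller) bound wins; dF = 0 means a monochromatic pulse.
    """
    dpsi = math.sqrt(wavelength / target_size)           # Eq. (6.15)
    if bandwidth > 0.0:
        dpsi = min(dpsi, c / (target_size * bandwidth))  # Eq. (6.16)
    return dpsi
```

For example, at λ = 0.04 m and D = 2 m the narrowband bound gives about 0.14 rad, while a 2 GHz bandwidth tightens the limit to c/(D ∆f) = 0.075 rad.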


[Figure 6.5 image: three panels; the partial-hologram angle ∆ψ is marked in panel (c).]

Figure 6.5 Subdivision of a 3D surface hologram into partial holograms: (a) radial,
(b) 1D partial transversal and (c) 2D partial

When choosing the PH angle, one should always follow the more rigid of
the above criteria. The restriction on the PH size is introduced in order to keep
the deviation of the hologram samples from the rectangular grid nodes within a
prescribed limit. The PH angles can be easily calculated analytically at a con-
stant or slightly varying value of one of the angles of the spherical coordinates
describing the PHs (Fig. 6.4(a)). In that case the PH boundaries will be close to
the coordinate surfaces. If both angles θ and B change markedly (Fig. 6.5), the
angular step ∆ψ should be found in the plane tangent to the PH.
Stage 2. Every PH should be subjected to a DFT providing a radar image with
the same dimensionality as that of the PH, while the resolution is determined by
its size.
Stage 3. The contributions of partial images to the integral image are computed.
When the dimensionalities of a PH and a partial image are the same, the pixels
of the latter are interpolated to those of the integral image. If the dimensionality
of the integral image is higher, the major procedure for the computation is that of
back projection [127].
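The three stages can be condensed into a processing skeleton (a sketch only; the two callables `split_into_phs` and `back_project` stand for the subdivision and back-projection procedures, which depend on the hologram geometry):

```python
import numpy as np

def reconstruct_by_partial_summation(hologram, split_into_phs, back_project,
                                     image_shape):
    """Skeleton of the three-stage method.

    Stage 1: split_into_phs(hologram) yields the partial holograms.
    Stage 2: each PH is Fourier-transformed into a partial image
             (a 1D inverse FFT here, for the 1D-PH case).
    Stage 3: back_project() spreads each partial image onto the integral
             image grid, where it is accumulated coherently.
    """
    image = np.zeros(image_shape, dtype=complex)
    for ph in split_into_phs(hologram):            # Stage 1
        partial_image = np.fft.ifft(ph)            # Stage 2
        image += back_project(partial_image, ph)   # Stage 3
    return image
```

Because the partial contributions are independent until the final summation, the loop body can be distributed over as many processors as there are PHs, as noted above.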
Consider algorithms for the reconstruction of 2D images by processing narrow
and wideband surface holograms (Fig. 6.5) produced by a three-axially stabilised
ground radar. With such algorithms we shall try to justify the specific features
of coherent summation of partial components: (1) the possibility of highlighting
partial regions of various shapes on a PH and their independent processing and
(2) the possibility to increase the resolution of the integral image as the individual
contributions of the partial components are accumulated and the diffraction limit
corresponding to the initial hologram size is achieved.

The above analysis allows the following conclusions to be drawn. The most general
approach to radar imaging of a satellite by inverse aperture synthesis, no matter
how it moves and what probing radiation is used, includes two stages of echo
signal processing. The preprocessing involves some conventional operations, the


compensation for the phase noise specific to coherent radars, and data recording over
the aspect variation to produce a microwave hologram. The second stage is to
reconstruct the image by a special digital processing of PHs.
A procedure specific to preprocessing is the compensation for the phase shift due
to the radial displacement of a space target. In the case of an AEC, this operation is
replaced by the introduction of constant phase factors in the wideband echo signal. The
use of monochromatic transmitter pulses does not require this operation (Chapter 5).
The complex pattern of aspect variation of low orbit satellites requires a 3D
hologram with a non-equidistant arrangement of the aspect samples. Since there are
no adequate methods for processing such holograms, we have designed a way of image
reconstruction by coherent summation of PHs. This reduces the digital processing of
a hologram of complex geometry to a number of simple operations. A hologram
is subdivided into PHs, from which partial images are reconstructed using a fast
Fourier transform (FFT). The contributions of the partial images to the integral image
are computed.

6.4 Processing algorithms for holograms of complex geometry

We should first change Eq. (2.38) generally relating the hologram and image functions
to the Cartesian coordinates necessary for a DFT:

(fp · rno) = fx rx + fy ry + fz rz, (6.18)


where fx = |fp | sin θ cos B, (6.19)
fy = −|fp | cos θ cos B, (6.20)
fz = |fp | sin B; (6.21)
rx = |rno | sin ν cos β, (6.22)
ry = −|rno | cos ν cos β, (6.23)
rz = −|rno | sin β. (6.24)
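Eqs (6.19)–(6.24) amount to a spherical-to-Cartesian conversion that is easy to verify numerically. A sketch (our function names; the fy component is taken with cos B so that the three components preserve the vector magnitude, in agreement with the y cos θ cos B term appearing later in Eq. (6.41)):

```python
import numpy as np

def freq_to_cartesian(f_mag, theta, B):
    """Space-frequency components, Eqs (6.19)-(6.21)."""
    fx = f_mag * np.sin(theta) * np.cos(B)
    fy = -f_mag * np.cos(theta) * np.cos(B)   # cos B keeps |f| invariant
    fz = f_mag * np.sin(B)
    return fx, fy, fz

def image_to_cartesian(r_mag, nu, beta):
    """Image-space components, Eqs (6.22)-(6.24)."""
    rx = r_mag * np.sin(nu) * np.cos(beta)
    ry = -r_mag * np.cos(nu) * np.cos(beta)
    rz = -r_mag * np.sin(beta)
    return rx, ry, rz
```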

The substitution of Eq. (6.18) into Eq. (2.38) reduces it to the conventional 3D Fourier
transform. However, it is impossible to apply it directly to a microwave hologram
recorded in spherical coordinates (Fig. 6.2(b)). The transition to pixels located at rect-
angular grid nodes is considered as an interpolation problem. Even a first-order inter-
polation for a 2D case would require large computational resources. Besides, any noise
arising from the interpolation would lead to large errors in the reconstructed image.
The procedure of coherent summation of partial components will simplify this
problem if we use the reverse order of computational operations: a number of DFT
operations and the interpolation of their results (partial images) to the rectangular
grid nodes of the integral image. Of special practical importance is the case when a
PH and its partial image have a lower dimensionality than the integral image. This
is due to a higher computation efficiency of the algorithms used. The interpolation


then represents a transition from a rectangular grid of lower dimensionality to that of


a higher dimensionality, a procedure known as back projection [127].
As previously mentioned, we shall focus on designing algorithms for producing
2D images by coherent summation of 1D PHs and individual initial hologram samples.
The algorithm for coherent summation of 2D partial images will largely be discussed
for a theoretical completeness of the treatment. The analysis will start with algorithms
for processing 2D holograms recorded in an AEC and during the imaging of low orbit
satellites by a SAR located in the orbit plane.

6.4.1 2D viewing geometry


Equation (6.14) will be transformed to polar coordinates by substituting Eq. (6.18)
into it and using Eqs (6.19)–(6.24). Assuming B = β = 0 and denoting |fp| = fpo + ∆fp
and |rno | = r, we get:

ĝ(r, ν) = ∫_{θi}^{θf} ∫_{fpl}^{fpu} S(fpo + ∆fp, θ) |fp| exp[j2π(fpo + ∆fp) r cos(ν − θ)] d∆fp dθ, (6.25)

where θi and θf are the initial and final values of the angle θ of the hologram (Figs 6.6
and 6.7), fpl = fpo − ∆fp/2 and fpu = fpo + ∆fp/2 are the lower and upper boundaries
of the space frequency band along the hologram radius.
It is easier to start the analysis of processing algorithms with a simple case of
narrowband microwave holograms. The limit of expression (6.25) at ∆fp → 0 is
ĝ(r, ν) = fpo ∫_{θi}^{θf} S(fpo, θ) exp[j2π fpo r cos(ν − θ)] dθ. (6.26)

This expression coincides with the formula for the CCA for a narrowband signal
[94]. When an image is reconstructed by this algorithm, circular convolution is per-
formed for every sample of the polar coordinate r in the image space with respect
to the parameter θ of the hologram function and the phase factor. The contribution
of all hologram samples to every (r, ν) node of the image polar grid is computed. If
the satellite aspect changes non-uniformly, the samples are arranged along the holo-
gram circumference with a variable step, so a discrete circular convolution becomes
impossible.
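A discrete form of Eq. (6.26) can be sketched as follows (a toy implementation with our own names; it sums the contribution of every hologram sample into every (r, ν) node of the image grid and assumes exactly the uniform θ sampling that a non-uniform aspect variation destroys):

```python
import numpy as np

def cca_narrowband(S, thetas, f_po, r_grid, nu_grid):
    """Discrete Eq. (6.26): for every (r, nu) node of the polar image
    grid, sum the hologram samples S(theta_k) weighted by the phase
    factor exp(j*2*pi*f_po*r*cos(nu - theta_k))."""
    dtheta = thetas[1] - thetas[0]          # uniform aspect step assumed
    g = np.zeros((len(r_grid), len(nu_grid)), dtype=complex)
    for i, r in enumerate(r_grid):
        for j, nu in enumerate(nu_grid):
            phase = 2j * np.pi * f_po * r * np.cos(nu - thetas)
            g[i, j] = f_po * np.sum(S * np.exp(phase)) * dtheta
    return g
```

A point scatterer recorded as S(θ) = exp[−j2πfpo r0 cos(ν0 − θ)] then reconstructs to a response peaked at the (r0, ν0) node of the grid.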
Let us single out a series of adjacent regions on a hologram, or PHs shown in
Fig. 6.6(a), with an angle satisfying the condition of Eq. (6.15). The convolution step
of Eq. (6.26) over the whole hologram angle can be represented as a sum of integrals,
each taken over a limited angle step ∆θ:
ĝ(r, ν) = fpo Σ_{m=1}^{M} ∫_{∆θm} Sm(fpo, θ) exp[j2π fpo r cos(ν − θ)] dθ, (6.27)

where Sm(fpo, θ) is the mth PH and M is the total number of such holograms.


[Figure 6.6 image: panel (a) shows the hologram arc of radius fpo in the fx–fy plane with partial angles θ1 … θM of step ∆θ; panel (b) shows the partial xm–ym frame and a point (xn, yn) at range rno in the image x–y plane.]
Figure 6.6 Coherent summation of partial hologram. A 2D narrowband microwave


hologram: (a) highlighting of partial holograms and (b) formation of
an integral image

We now introduce the Cartesian xm ym coordinates (Fig. 6.6(b)) for each mth
PH with the origin O coinciding with that of the rectangular x–y coordinates of the
integral image. The xm -axis is parallel to the tangent to the arc connecting the mth
PH pixels at its centre. Since the microwave hologram in question is 2D, let us
introduce the azimuthal coordinate fpθ = fpo θ to describe it in the frequency fx –fy
plane (Fig. 6.6(a)), in addition to the radial polar coordinate fp . With xm = r sin θm and
ym = r cos θm , the transformation of the phase factor under the integral of Eq. (6.27)


[Figure 6.7 image: panel (a) shows the annulus between fpl and fpu (width ∆fp) in the fx–fy plane, subdivided into partial angles ∆θ with sample steps δfθ and δfp; panel (b) shows the partial xm–ym frame and the vector rno in the image x–y plane.]

Figure 6.7 Coherent summation of partial hologram. A 2D wideband microwave


hologram: (a) highlighting of partial holograms, (b) formation of an
integral image

will give
ĝ(x, y) = Σ_{m=1}^{M} ∫_{fθm−∆fθ/2}^{fθm+∆fθ/2} S(fpo, θ) exp(j2π fθ xm) dfθ · Φm, (6.28)

where dfθ = fpo dθ is the differential of the space frequency fθ, while fθm is the space
frequency corresponding to the mth PH centre.


Expression (6.28) describes the algorithm for coherent summation of partial


images obtained from 1D transverse (azimuthal) PHs. Each partial image results from
a Fourier transformation of the appropriate PH and is resolved along the azimuthal
xm -coordinate. The synthesis of a PH is made simultaneously with its summation with
the radar image by moving the partial image along the ym -coordinate (back projec-
tion), accompanied by the multiplication of all of its samples by a coherent processing
phasor Φm.
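Under these assumptions the processing described by Eq. (6.28) might be sketched as follows (our names throughout; each PH is transformed into a 1D partial image resolved along its xm-axis, back-projected along ym, and multiplied by the phasor, here read as exp(j2πfpo ym)):

```python
import numpy as np

def summate_transverse_partials(phs, f_po, x, y):
    """Coherent summation of 1D transverse partial images (a sketch of
    Eq. (6.28)).  phs is a list of (thetas, samples) pairs, one PH each:
    the aspect angles and the complex pixels S(f_po, theta)."""
    X, Y = np.meshgrid(x, y)
    img = np.zeros(X.shape, dtype=complex)
    for thetas, samples in phs:
        theta_m = thetas.mean()                      # PH centre
        # image-grid coordinates in the partial frame rotated by theta_m
        x_m = X * np.cos(theta_m) - Y * np.sin(theta_m)
        y_m = X * np.sin(theta_m) + Y * np.cos(theta_m)
        dftheta = f_po * (thetas[1] - thetas[0])     # azimuthal freq. step
        partial = np.zeros(X.shape, dtype=complex)
        for th, s in zip(thetas, samples):
            f_theta = f_po * (th - theta_m)          # DFT over the PH
            partial += s * np.exp(2j * np.pi * f_theta * x_m) * dftheta
        img += partial * np.exp(2j * np.pi * f_po * y_m)  # back projection
    return img
```

The inner loop is the DFT of one PH evaluated directly on the (rotated) image grid; in a practical implementation it would be an FFT followed by interpolation.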
The process of image summation by the algorithm of Eq. (6.28) will be discussed
with reference to a point scatterer with the xn , yn coordinates in the x–y coordinate
system (Fig. 6.6(b)). This scatterer will be assumed to possess an isotropic local radar
target characteristic g(rno) = σn^1/2 exp(jϕn).
A narrowband microwave hologram is defined as
S(fpo, θ) = σn^1/2 exp[−j2πfpo r̂n(ϑ)] exp(jϕn). (6.29)
The relative range of the point scatterer is expressed by the rectangular xm , ym coor-
dinates. The expansion of r̂n (ϑ) into a Taylor series with respect to the centre ϑm of
the mth partial angle step with the linear terms only gives
r̂n(ϑ) = r̂n(ϑm) + r̂˙n(ϑm)ϑ − r̂˙n(ϑm)ϑm, (6.30)

where r̂˙n(ϑ) = d r̂n(ϑ)/dϑ and r̂˙n(ϑm) = r̂˙n(ϑ)|ϑ=ϑm.
By substituting Eq. (6.30) into Eq. (6.29) and denoting r̂n(ϑm) = ymn, r̂˙n(ϑm) = xmn,
we transform the expression for the mth PH to
Sm(fpo, θ) = σn^1/2 exp[−j2π fpo(ymn − xmn ϑm)] exp(−j2π fpo xmn ϑ) exp(jϕn). (6.31)
It is further assumed that the estimate of the target rotation rate obtained during the
hologram recording contains no error: θ = ϑ. It should also be taken into account
that a rectangular window of width ∆fθ = fpo ∆θ framing the PH (6.31) is shifted
relative to the centre of the space frequency axis by its half width: fpo θm = ∆fθ/2.
Then the expression for the partial image can be written as
ĝ(xm) = ∫_{fθm−∆fθ/2}^{fθm+∆fθ/2} S(fpo, θ) exp(j2π fθ xm) dfθ
= σn^1/2 ∆fθ {sin[π(xm − xmn)∆fθ]/[π(xm − xmn)∆fθ]} exp[jπ(xm − xmn)∆fθ]
× Φmn exp(jϕn). (6.32)
The integral image will be described as
ĝ(x, y) = σn^1/2 exp(jϕn) Σ_{m=1}^{M} ∆fθ {sin[π(xm − xmn)∆fθ]/[π(xm − xmn)∆fθ]}
× exp[jπ(xm − xmn)∆fθ] Φmn. (6.33)


It is clear from Eq. (6.33) that the complex phase factors varying with the xm , ym
coordinates and located at the integral image point corresponding to the position of
the scatterer response have the maximum values equal to unity. The contribution
of the PH to the integral image is defined by the product of the local radar target
characteristic of the scatterer and the sin(x)/x-type of function. Therefore, the PHs
are summed equiphasically at the point xn = rno sin ϑn, yn = rno cos ϑn, while at other
points of the image they mutually cancel.
The width of the major lobe of the scatterer in the partial image (a function of the
sin(x)/x-type) is determined by the PH length ∆fθ or by its angle ∆θ (Fig. 6.6(a)).
The limiting value of the response width in the partial image derived from Eq. (6.14)
is expressed by the inequality δx ≥ 0.5(λD)^1/2. Since D ≫ λ, the major lobe width
is much greater than the transmitter pulse wavelength.
It follows from this treatment that the mth partial component of the integral image
may be regarded as a 2D plane wave superimposed on the image plane. The wave front
is normal to the ym -axis and its period is equal to the half wavelength of the trans-
mitter pulse. The initial wave phase (along the xm -axis) is determined by the phasor
exp[ jπ(xm − xmn )fθ ] in such a way that a positive half-wave always arrives at the
scatterer’s xmn , ymn position. The wave amplitude along the xm -axis is described by a
sin(x)/x function with a maximum at the point xmn. For this reason, the partial
component has a 'comb' structure, elongated by the back projection of the partial
image parallel to the ym-axis.
Note that the resolution of the integral image is defined by the scatterer wavelength
rather than by the response width in the partial image. The reduction in the PH size
from the maximum value prescribed by Eq. (6.15) to a single sample should not affect
the result of summation in a PH. Therefore the synthesised aperture can be focused
accurately over the whole image field. Keeping in mind
lim_{∆fθ→0} ∫_{fθm−∆fθ/2}^{fθm+∆fθ/2} S(fpo, θ) exp(j2π fθ xm) dfθ = fpo S(fpo, θ) dθ, (6.34)

we obtain from Eq. (2.4) the algorithm for coherent summation of a PH made up of
individual samples of the initial hologram:
ĝ(x, y) = fpo Σ_{m=1}^{M} Sm(fpo, θ) Φm. (6.35)
m=1
The coherent summation algorithm for hologram samples essentially represents a
particular case for 1D transverse (azimuthal) partial images described by Eq. (6.28).
However, each has its own specificity.
The major advantage of the algorithm for hologram samples is the absence of
phase errors due to either the PH approximation or the non-equidistant distribu-
tion of samples. As a consequence, this algorithm is applicable to the processing
of microwave holograms with any known sample arrangement. On the other hand,
the coherent summation algorithm for partial images does not require excessive com-
puter resources because the exhaustive search of the raster pixels in the integral image


Table 6.1 The number of spectral components of a PH (radial/azimuthal)

D (m)   Dλ = D/λ   µ = 0.02   µ = 0.04   µ = 0.06   µ = 0.08   µ = 0.1

0.5        15        2/12       2/12       3/12       3/12       4/12
1.0        25        2/15       3/15       5/15       6/15       8/15
2.0        50        3/22       6/22       9/22      12/22      15/22
4.0       100        6/30      12/30      18/30      24/30      30/30
6.0       150        9/37      18/37      27/37      36/37      45/30
8.0       200       12/43      24/43      36/43      48/38      60/30
10.0      250       15/48      30/48      45/48      60/38      75/30
15.0      325       23/54      45/54      68/50      90/38     113/30

during the computation of the partial contribution is made for a group of PH samples
rather than for every single hologram sample.
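Eq. (6.35) reduces to a direct complex sum over the individual hologram samples, so arbitrarily (but known) spaced aspect angles are handled naturally. A sketch with our own names, reading the phasor Φm as the back projection of one sample onto the image grid, exp[j2πfpo(x sin θm + y cos θm)]:

```python
import numpy as np

def summate_hologram_samples(thetas, samples, f_po, x, y):
    """Coherent summation of individual hologram samples (a sketch of
    Eq. (6.35)).  Works for any known, even non-equidistant, aspect
    angles, since no uniform-grid DFT is involved."""
    X, Y = np.meshgrid(x, y)
    img = np.zeros(X.shape, dtype=complex)
    for th, s in zip(thetas, samples):
        # phasor: back projection of one sample onto the whole image grid
        phasor = np.exp(2j * np.pi * f_po * (X * np.sin(th) + Y * np.cos(th)))
        img += f_po * s * phasor
    return img
```

The price of this flexibility is the exhaustive raster scan per sample mentioned above, which makes this the costliest of the algorithms considered.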
Figures 6.8 and 6.9 compare the computational complexity of the two algorithms as a
function of the target size for a narrowband microwave hologram. The criterion for the
degree of complexity is taken to be the algorithmic time of the programme realisation.
The unit of measure of the algorithmic time is, in turn, taken to be 1 flop, that is,
the time for one elementary summation/multiplication of two floating-point operands.
So we have 1 Mflop = 10^6 flops. The estimations
of the computational complexity and the programme realisation time have been made
for a 2D image of 512 × 512 raster pixels in size and 2D microwave holograms with
a 120◦ angle.
When going from a narrowband hologram to a wideband one, we can just suggest
that the number of spectral components increases from 1 to L. As the size of a one-digit
image and the hologram discretisation step are inversely proportional to each other,
the minimal number of spectral components at a given pulse frequency bandwidth
must be proportional to the target size. Table 6.1 presents the L values for various
PHs as a function of the maximum target size. The computations have been made for
a 0.04 m wavelength at the carrier (centre) frequency of the transmitter pulse spectrum and the ratio of
the image field size to the maximum target length k = 1.5. One can easily see that
the number of azimuthal PH samples rises with the target size as long as the limiting
PH angle obeys the inequality (6.15).
When a target is rather large and the relative frequency bandwidth is µ = ∆f/f0
(the lower right-hand side of Table 6.1), the inequality (6.16) imposes a more rigid
restriction on the PH size. Then both the PH size and its discretisation step decrease
inversely with respect to the target size. Therefore, the number of PH azimuthal
samples at a given transmitter pulse bandwidth ∆f remains constant with increasing target
size D.
We shall start the discussion of digital processing of 2D wideband holograms with
the algorithm for coherent summation of 1D azimuthal partial images, which is the


[Figure 6.8 image: two plots of (a) Kpar im and (b) Khol sam, in Mflop·10³, versus target dimension (0–2.5 m).]

Figure 6.8 The computational complexity of the coherent summation algorithms


as a function of the target dimension for a narrowband microwave
hologram: (a) transverse partial images, (b) hologram samples

extension of a similar algorithm for narrowband microwave holograms. Let us relate


Eq. (6.28) to the lth (l = 1, …, L) spectral component:

ĝ(x, y) = Σ_{m=1}^{M} ∫_{fθm−∆fθ/2}^{fθm+∆fθ/2} Sm(fpl, θ) exp(j2π fθl xm) dfθl · Φml, (6.36)


where fpl, fθl are the radial and azimuthal space frequencies and Φml is the coherent
processing phasor. By summing up the L number of PHs in each of the M number of
partial angle steps, we get
 
ĝ(x, y) = Σ_{m=1}^{M} { Σ_{l=1}^{L} fpl ∫_{fθm−∆fθ/2}^{fθm+∆fθ/2} Sm(fpl, θ) exp(j2π fθl xm) dfθ · Φml }. (6.37)

Equation (6.37) describes the following processing operations:

• the L number of azimuthal PHs are selected in each mth partial angle step;
• the DFT is applied to each PH to get the L number of 1D partial images;
• the L number of partial images in every mth group are back projected and the
obtained contributions are multiplied by the coherent processing phasor Φml.
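The operations above amount to L narrowband passes summed coherently. Schematically (with a hypothetical helper `process_narrowband_layer` standing for the transverse-PH DFT and back projection of Eq. (6.28) applied to one spectral component):

```python
import numpy as np

def summate_wideband_azimuthal(hologram, f_pl_list, process_narrowband_layer):
    """Sketch of Eq. (6.37): the wideband hologram is processed as L
    narrowband layers.  hologram[l] holds the samples recorded at the
    radial space frequency f_pl_list[l]."""
    img = None
    for f_pl, layer_samples in zip(f_pl_list, hologram):
        layer = f_pl * process_narrowband_layer(layer_samples, f_pl)
        img = layer if img is None else img + layer   # coherent summation
    return img
```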

The analysis of Eq. (6.37) shows that the consecutive multiplication by the phasor Φml
of the contributions of partial images can be supplemented with a DFT. The result is a
new processing algorithm – the coherent summation algorithm for 2D partial images:
ĝ(x, y) = Σ_{m=1}^{M} ∫_{−∆fp/2}^{∆fp/2} |fp| [ ∫_{fθm−∆fθ/2}^{fθm+∆fθ/2} Sm(fp, θ) exp(j2π fθ xm) dfθ ]
× exp(j2π fp ym) dfp · Φm. (6.38)

Algorithm (6.38) implies the following series of operations:

• the M number of 2D PHs with an angle defined by the conditions of Eqs (6.15)
and (6.16) are selected in the initial microwave hologram;
• each PH is subjected to a 2D DFT to produce the M number of 2D partial images.
All of these have a common centre which coincides with the integral image centre
and are rotated by the angle θ relative to one another;
• the contribution of each partial image to the integral image is calculated using
a 2D interpolation and the result is multiplied by the coherent processing
phasor.

The last operation generally requires large computer resources. So we shall further
refer to the coherent summation algorithm for 2D partial images only to preserve a
theoretical completeness.
The advantages of coherent summation of individual samples discussed above
for narrowband holograms are fully valid for wideband holograms as well.
Equations (6.37) and (6.34) yield
ĝ(x, y) = Σ_{m=1}^{M} { Σ_{l=1}^{L} fpl Sm(fpl, θ) Φml } dθ. (6.39)


Among the wideband processing algorithms, the one described by Eq. (6.39) is
the most simple but it requires a large number of arithmetic operations to be made
because the processing is made online. The computational efficiency of this algorithm
can be raised by using a 1D DFT along the mth hologram beam:
 
ĝ(x, y) = Σ_{m=1}^{M} { ∫_{−∆fp/2}^{∆fp/2} S(fp, θ) |fp| exp(j2π fp ym) dfp · Φm } dθ. (6.40)

In accordance with the accepted classification, expression (6.40) is the algorithm of


coherent summation of 1D radial (range) partial images. Its implementation involves
the following processing operations:
• the hologram samples making up the mth radial PH are multiplied by the linear
frequency function and are then subjected to DFT;
• the resulting 1D range partial image is back projected and the result is multiplied
by the coherent processing phasor.
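These two operations might be sketched as follows (our names; the range DFT is evaluated directly on the image grid for clarity rather than as an FFT, and the phasor is read as exp(j2πfpo ym)):

```python
import numpy as np

def summate_radial_partials(beams, f_po, x, y):
    """Coherent summation of 1D radial (range) partial images (a sketch
    of Eq. (6.40)).  beams is a list of (theta_m, dfp, samples): the beam
    angle, the frequency offsets, and the complex samples of one radial
    PH recorded at frequencies f_po + dfp."""
    X, Y = np.meshgrid(x, y)
    img = np.zeros(X.shape, dtype=complex)
    for theta_m, dfp, samples in beams:
        # range coordinate of every image pixel in this beam's frame
        y_m = X * np.sin(theta_m) + Y * np.cos(theta_m)
        step = dfp[1] - dfp[0]
        profile = np.zeros(X.shape, dtype=complex)
        for f, s in zip(dfp, samples):
            # |fp| weighting and range DFT evaluated on the grid
            profile += abs(f_po + f) * s * np.exp(2j * np.pi * f * y_m) * step
        # back projection along x_m plus the coherent phasor at f_po
        img += profile * np.exp(2j * np.pi * f_po * y_m)
    return img
```

Because the range profile is constant along xm, adding it to the image grid spreads it across the whole xm extent — exactly the back projection step noted in the text.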
The algorithm of Eq. (6.40) has much in common with the narrowband algorithm
for partial images in Eq. (6.28) but it also has some specific features. One is that
the 1D image module of a single point scatterer is described by the so-called kernel
function of computerised tomography [57], rather than by a function of the sin(x)/x
type. Depending on the chosen approximation of the linear frequency function in
Eq. (6.40), the kernel function is its Fourier image and may be described analytically
in various ways. It always has the form of an infinite periodic function with one major
lobe and side lobes decreasing with amplitude. Another specificity of this algorithm
is that the back projection operation is performed along the xm -axis. Still another
characteristic of the algorithm of Eq. (6.40) is that the PH samples are arranged
equidistantly along radial straight lines, so no restriction is imposed on the maximum
size of a PH.
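The kernel function can be visualised numerically as the inverse Fourier transform of the band-limited ramp |fp| (a sketch; the band limit and grid sizes are arbitrary):

```python
import numpy as np

def ramp_kernel(n_freq=64, n_pad=1024):
    """Point response of the band-limited ramp filter |f| -- the 'kernel
    function' of computerised tomography."""
    f = np.fft.fftfreq(n_pad)                       # padded frequency axis
    ramp = np.abs(f)
    ramp[np.abs(f) > n_freq / (2.0 * n_pad)] = 0.0  # band limit
    return np.fft.fftshift(np.fft.ifft(ramp).real)  # kernel, centred
```

The resulting sequence shows exactly the behaviour described: a single positive major lobe at the centre flanked by negative side lobes of decreasing amplitude.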
The relative computational complexities of wideband processing algorithms are
compared in Figs 6.10–6.11. It is seen that the number of arithmetic operations always
increases with the relative frequency bandwidth of the transmitter pulse µ and the
relative target size Dλ , whereas the computational complexity of 1D partial image
algorithms changes differently with these parameters. At given values of µ and Dλ ,
more profitable is the algorithm for a PH with a larger number of samples. This is
because the efficiency of an FFT increases with the number of samples, as compared with
an ordinary DFT. For example, at small values of µ and Dλ , it is more reasonable to
use the algorithm for azimuthal partial images (Fig. 6.11). As the relative frequency
bandwidth and the target size become larger, the number of samples in a radial PH
exceeds, at a certain moment, that of an azimuthal PH (see Table 6.1). This happens
because the restriction on the azimuthal PH size in Eq. (6.15) begins to dominate
over that of Eq. (6.16), such that the use of the coherent summation algorithm for
radial partial images becomes more profitable. In spite of its structural simplicity, the
coherent summation algorithm for hologram samples has the greatest computational
complexity (Fig. 6.10).


[Figure 6.9 image: two plots of (a) Kpar im/KCCA and (b) Khol sam/KCCA versus target dimension (0–2.5 m).]

Figure 6.9 The relative computational complexity of coherent summation algo-


rithms as a function of the target dimension for a narrowband microwave
hologram: (a) transverse partial images/CCA, (b) hologram sam-
ples/CCA

It is clear that the time for a wideband hologram processing by the above algo-
rithms, estimated from the product of the computational complexity and the time
for an elementary multiplication/summation operation, is excessively long, so one
should consider the possibility of separate, independent processing of PHs in order
to considerably reduce this parameter.


[Figure 6.10 image: Khol sam/Kpar im versus target dimension (0–2.5 m) for µ = 0.02, 0.04, 0.06 and 0.08.]

Figure 6.10 The relative computational complexity of coherent summation algo-


rithms of hologram samples and transverse partial images versus the
coefficient µ in the case of a wideband hologram

6.4.2 3D viewing geometry


We now express Eq. (6.14) in spherical coordinates and use Eqs (6.19)–(6.21) to get
the relation
 
ĝ(x, y, z) = ∫∫∫_{Vf} S(fp, θ, B) exp[j2π(fpo + ∆fp)(y cos θ cos B + x sin θ cos B + z sin B)] dfp dθ dB. (6.41)


[Figure 6.11 image: Kpar im/Kzad par im versus target dimension (0–2.5 m) for µ = 0.02, 0.04, 0.06 and 0.08.]

Figure 6.11 The relative computational complexity of coherent summation algo-


rithms for radial and transverse partial images versus the coefficient
µ in the case of a wideband hologram

To make the coherent summation of PHs more convenient, it is reasonable to separate


the integration variables in Eq. (6.41). This task could be simplified if one of the
variables remained constant through a synthesis step. For example, at B = const., the
image in the (z = 0) plane will be described as
 
ĝ(x, y) = ∫∫_{Vf} S(fp, θ, B) exp[j2π fpe(y cos θ + x sin θ)] dfp dθ, (6.42)


[Figure 6.12 image: the fx, fy, fz axes with origin Of, the angles θm, Bm, ξm, and the partial axes fxm, fym, fzm with their rotated versions f′ and f″.]

Figure 6.12 The transformation of the partial coordinate frame in the processing
of a 3D hologram by coherent summation of transverse partial images

where fpe = (fpo + ∆fp) cos B is an 'equivalent' space frequency introduced just to


reduce Eq. (6.42) to a conventional form.
Clearly, the algorithms to be derived from Eq. (6.42) may differ from those for 2D
holograms only in the space frequency value. In reality, this may happen in viewing
geostationary objects stabilised by rotation. In hologram recording of a low-orbit
satellite, both angles describing its geometry change simultaneously. For this reason,
the polar angle can be considered to be ‘fixed’ only at certain moments of time.
This hologram geometry is best satisfied by the algorithms for coherent summation
of individual hologram samples and 1D radial partial images. Expressions for such
algorithms can be derived from Eq. (6.42) or directly from Eqs (6.35), (6.39) and
(6.40) by substituting fpe for (fpo + ∆fp).
To design algorithms for transverse PHs, we need to introduce in the frequency
domain the partial coordinates fxm , fym , fzm with the origin at the point Of , which is
also the origin of the fx , fy , fz coordinates (Fig. 6.12). The fxm –fym plane of the partial
coordinates is tangential to the PH at a point with the angular coordinates θm , Bm (the
fxm - and fym -axes are not shown in Fig. 6.12).
Let us expand the polar angle B as a function of the azimuth θ into a Taylor series
in the vicinity of θm :
B(θ) = Bm + [dB(θ)/dθ]|θ=θm · (θ − θm) + ∆B, (6.43)
where Bm = B(θ) at θ = θm and ∆B denotes the residual terms of the series. Obviously,
in the ideal case of ∆B = 0, the PH lies totally in the fxm–fym plane, and the difference
between the PH and a straight line is only determined by the curvature of the sphere fpo .
The non-zero nature of the polar angle derivatives with an order above the first one
may generally lead to additional phase errors in the PH approximation by a straight


line. However, a digital simulation of the aspect variation of a real target has shown
that the phase error is negligible. Therefore, we shall assume the PH angle to be
defined by the conditions of Eqs (6.15) and (6.16).
To describe the positions of PH samples in the fxm –fym plane, we introduce the
angle ψ and write down the partial Cartesian coordinates of the pixels as fxm =
fp sin ψ, fym = −fp cos ψ. An acceptable processing algorithm can be obtained if
the fxm –fym plane is superimposed with the fx –fy plane which corresponds to the x–y
plane in the image space containing the integral image. The superposition operation
will be made by two consecutive rotations of the partial coordinates (Fig. 6.12):
• the rotation by the angle ξm = arctg[(dB/dθ)|θ=θm] round the fym-axis gives the
polar f′xm f′ym f′zm coordinates, whose f′xm-axis lies in the fx–fy plane;
• the rotation of the polar f′xm f′ym f′zm coordinates by the angle Bm round the f′xm-axis
gives the sought-for polar f″xm f″ym f″zm coordinates.
These transformations of the polar coordinates result in the following expression for
the scalar product at the mth partial angle step:
⟨fp, rno⟩ = −fp bm x′m sin ψ + fpe y′m cos ψ (6.44)

with

x′m = xm cos ζm + ym sin ζm,
y′m = −xm sin ζm + ym cos ζm,
bm = (1 − sin²ξm cos²Bm)^(1/2),
fpe = fp cos Bm.

In turn,

sin ζm = sin ξm sin Bm/bm,
cos ζm = cos ξm/bm.
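Our reading of the primed coordinates above can be checked numerically: if ζm is a genuine rotation angle, then sin²ζm + cos²ζm = 1 must hold, and that identity is exactly the bm expression. A minimal sketch (the function name is ours):

```python
import math

def partial_rotation_params(xi_m, B_m):
    """Scale factor b_m and rotation angle zeta_m implied by the two successive
    rotations (by xi_m round the f_ym-axis, then by B_m round the f'_xm-axis)."""
    b_m = math.sqrt(1.0 - math.sin(xi_m) ** 2 * math.cos(B_m) ** 2)
    sin_z = math.sin(xi_m) * math.sin(B_m) / b_m
    cos_z = math.cos(xi_m) / b_m
    return b_m, sin_z, cos_z

# consistency check: cos^2(xi) + sin^2(xi) sin^2(B) = b_m^2, so the two
# expressions for zeta_m describe a proper rotation for any angle pair
for xi in (0.1, 0.4, 1.0):
    for B in (0.0, 0.3, 1.2):
        b, s, c = partial_rotation_params(xi, B)
        assert abs(s * s + c * c - 1.0) < 1e-12
```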
Thus, the variation of the polar angle B during hologram recording introduces two
specific features in the coherent summation algorithm for transverse partial images.
One is the necessity to make an additional rotation of the partial xm , ym coordinates
by the ζm angle round the zm -axis. The other is a change in the partial image scale
along the xm - and ym -axes by a factor of bm and cos Bm , respectively.
Let us now derive an expression for the coherent summation algorithm for trans-
verse partial images in the case of wideband pulses. This can be done by substituting
Eq. (6.44) into Eq. (6.14) and reducing the result to the form:
 
ĝ(x, y) = Σ_{m=1}^{M} Σ_{l=1}^{L} fple [ ∫_{fψm−Δfψ/2}^{fψm+Δfψ/2} S(fp, θ, B) exp(j2π f′ψo x′m) dfψo ] Φmlθ, (6.45)

where fψo = fpo ψ is the transverse space frequency; f′ψo = fψo bm; fple = fpl cos Bm is
the equivalent space frequency for the lth spectral feature; and Φmlθ is the coherent
processing phasor.

Radar systems for rotating target imaging (tomographic approach) 145
The processing with the algorithm of Eq. (6.45) includes the following operations:
• the L number of transverse PHs (equal to the number of spectral features) are
selected in every mth partial angle step;
• a DFT is performed with every PH at the space frequency fψo to produce the L
number of 1D partial images;
• every partial image is back projected, and the contribution is multiplied by the
phasor Φmlθ. The back projection is made along the y′m-axis rotated by the ζm
angle relative to the ym-axis.
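For a single point scatterer, the operations above collapse into a brute-force matched-phasor summation over all frequencies and partial angles. The sketch below is a toy illustration of the coherent summation principle, not the book's optimised algorithm; the geometry, frequencies and image grid are our own illustrative choices:

```python
import numpy as np

c = 3e8
freqs = np.linspace(9.9e9, 10.1e9, 32)        # wideband frequency samples, Hz
angles = np.linspace(0.0, 0.1, 32)            # partial angle steps, rad
x0, y0 = 1.0, -0.5                            # true scatterer position, m

# recorded hologram samples S(fp, theta) for a single point scatterer
proj = x0 * np.cos(angles)[:, None] + y0 * np.sin(angles)[:, None]
S = np.exp(-1j * 4 * np.pi * freqs[None, :] * proj / c)

# coherent summation: multiply by the matched phasor and sum over all
# frequencies and partial angles for every image pixel
xs = np.linspace(-1.6, 1.6, 33)
ys = np.linspace(-1.6, 1.6, 33)
img = np.zeros((ys.size, xs.size))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        p = x * np.cos(angles)[:, None] + y * np.sin(angles)[:, None]
        img[i, j] = abs(np.sum(S * np.exp(1j * 4 * np.pi * freqs[None, :] * p / c)))

iy, jx = np.unravel_index(np.argmax(img), img.shape)   # peak at (x0, y0)
```

All contributions add in phase only at the true scatterer position, so the coherent sum peaks there with magnitude equal to the number of samples.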
Equation (6.45) can be easily solved to give expressions for coherent summation
algorithms for 2D partial and 1D transverse images of narrowband pulses, by analogy
with the case discussed in Section 6.4.1.
As compared with the respective algorithms for 2D holograms, the computational
complexity of coherent summation of individual hologram samples and 1D partial
radial images increases only because of the necessity to compute the sine of the polar
angle B. However, it increases by no more than 2–3 per cent even in the most
unfavourable case of a narrowband signal and a relatively small target. The complexity
rises considerably when a 2D geometry is replaced by a 3D geometry of transverse
partial images. This is due both to the polar angle variation during the viewing and
to the appearance of a variable hologram discretisation step. As a result, new
operations come into play, but the increase in the computational complexity still lies
within 10 per cent.
The above treatment allows the following conclusions to be made:
1. The algorithms for digital processing of microwave holograms designed in terms
of the theory of coherent summation of partial components provide imaging in
a wide range of viewing conditions, in particular, the probing geometry and the
frequency bandwidth of transmitter radiation.
2. The wider applicability of digital processing by coherent summation of partial
components implies a greater complexity of computations than that required by
available techniques. However, one can choose the least time-consuming algo-
rithm for particular values of the relative frequency bandwidth of the transmitter
pulse and the size of the space target. A radical reduction in the processing time
can be achieved by using separate processing of individual PHs.
Chapter 7
Imaging of targets moving in a straight line
When a target moves in a straight line normal to the radar line of sight, the inverse
synthesis of a tracking aperture can be regarded in terms of Doppler information
processing, in a way similar to the processing aimed at a high azimuthal resolution
by a side-looking radar. Clearly, an inverse aperture can then be considered as a
linear antenna array performing a periodic time discretisation of the radiation wave
front. This is the so-called antenna approach, and its capabilities are discussed in
Reference 139. The author analysed an equivalent array made up of (2N + 1) records
of target movement across a real ground antenna beam of sufficient width. It was
shown that the azimuthal resolution Δl at the range R0 along the ϕ direction could
be defined as

Δl = λR0 / [2VTr(2N + 1) cos ϕ], (7.1)

where λ is the transmitter pulse wavelength, V is the target velocity, ϕ is the angle
between the line directed to the target and the normal to the synthesising aperture, and
Tr is the repetition period of the transmitter pulses.
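Eq. (7.1) is simple enough to evaluate directly; the following sketch uses our own illustrative numbers, not values from the text:

```python
import math

def azimuth_resolution(lam, R0, V, Tr, N, phi):
    """Cross-range resolution of the equivalent (2N+1)-element array, Eq. (7.1)."""
    return lam * R0 / (2.0 * V * Tr * (2 * N + 1) * math.cos(phi))

# illustrative numbers: 3 cm wavelength, 10 km range, 300 m/s target,
# 1 ms pulse repetition period, 511 recorded pulses, broadside geometry
d = azimuth_resolution(lam=0.03, R0=10e3, V=300.0, Tr=1e-3, N=255, phi=0.0)
```

As the formula shows, resolution improves with aperture length (2N + 1)VTr and degrades linearly with range R0.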
Inverse aperture synthesis for a linearly moving target can also be examined
in terms of a holographic approach. This was first done by H. Rogers to study
the ionosphere [85], making use of D. Gabor’s ideas of holography. Rogers described
a method for hologram recording of microwaves reflected by ionospheric inho-
mogeneities. The principle of this method is as follows. When an ionospheric
inhomogeneity moves, the resulting diffraction pattern on the earth surface also moves
across the receiver aperture. A signal that has been sensed is recorded on a photofilm
as a hologram. What is actually recorded is the wave front, and one can reconstruct
the inhomogeneity image from the hologram. For these reasons, E. Leith considered
Rogers’ device to be truly holographic rather than quasi-holographic.
Holographic concepts were successfully introduced in radar imaging by
W. E. Kock [71] who showed that echo signals from a linearly moving target, recorded
by the receiver of a coherent continuous pulse radar, were structurally equivalent
to 1D holograms. He pointed out a similarity among an airborne SAR, a ground
coherent radar and a holographic system.
The holographic approach treats inverse aperture synthesis of signals from a
linearly moving target as a particular case of hologram recording by the scanning tech-
nique (Chapter 3). Here we shall analyse the process of radar imaging in the range–
cross range coordinates, using inverse synthesis under real target flight conditions,
that is, imaging of partially coherent signals.
Radar images obtained in the range–cross range coordinates allow estimates of
the target size and shape, as well as the reflectivity of its individual scatterers. Such
images can be further used for target identification. The imaging should be performed
by ISARs transmitting complex pulses [85,104].
Apart from the prescribed movement, an aerodynamic target makes acciden-
tal motions with unknown parameters induced by destabilising factors, such as the
constant component of wind velocity, the operation of the internal control system,
turbulent flows, elastic fuselage oscillations and vibrations due to the engine oper-
ation and the target aerodynamics. Some of these can be estimated in advance by
comparing the synthesis time Ts and the correlation time of perturbing effects Tc and
by calculating the phase noise they introduce in the echo signal.
Among the above factors responsible for phase fluctuations of an echo signal
ψ(ϕ), of special importance are turbulent flows. This is because the constant wind
velocity factor can be eliminated during the compensation for the radial displacement
of a target. The second factor becomes important when a target is manoeuvring.
For a typical synthesis time (Ts ∼ 1 s), the value of Tc is smaller than that of Ts .
The effect of the fourth factor can be avoided by choosing the wavelength λ such
that the condition λ/2 ≫ ε (where ε is the maximum displacement due to fuselage
oscillations) is fulfilled [17].
An echo signal from this kind of target is partially coherent. In the case of direct
aperture synthesis, the effect of turbulent flows on the carrier pathway is accounted
for by introducing a phase correction in the echo signal, which is found from random
radial velocity and acceleration measurements [136]. In inverse synthesis, it is very
hard to correct phase fluctuations ψ(ϕ) of an echo signal. Below, we shall try to
define the imaging conditions, primarily along the cross range coordinate, for a
partially coherent signal [89]. The numerical simulation we have made shows that
the destabilising factors of interest do not affect the range resolution.

7.1 The effect of partial signal coherence on the cross range resolution
Assuming that f (x) is the distribution of the complex scattering amplitude (the target
reflectivity) along the cross range x-coordinate, ϕ is an angle characterising the aspect
variation, and z(ϕ) is an echo signal, we have

z(ϕ) = ∫ f(x) exp[−j(4π/λ)ϕx] dx. (7.2)
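Eq. (7.2) can be exercised numerically: simulate the echo of two point scatterers and reconstruct the reflectivity by the matched (inverse Fourier-type) transform. All numbers below are illustrative assumptions:

```python
import numpy as np

lam = 0.1                                  # wavelength, m
phis = np.linspace(-0.05, 0.05, 256)       # aspect angle samples, rad
xs_true = np.array([-3.0, 2.0])            # scatterer cross range positions, m
amps = np.array([1.0, 0.7])                # scattering amplitudes

# echo of Eq. (7.2) for a discrete set of point scatterers
z = (amps * np.exp(-1j * 4 * np.pi / lam * np.outer(phis, xs_true))).sum(axis=1)

# reconstruction: matched transform over the aspect angle for each candidate x
x_grid = np.linspace(-5.0, 5.0, 201)
nu = np.array([np.sum(z * np.exp(1j * 4 * np.pi / lam * phis * x)) for x in x_grid])

i1 = np.argmax(np.abs(nu))                                  # strongest scatterer
i2 = np.argmax(np.abs(nu) * (np.abs(x_grid - x_grid[i1]) > 1.0))  # second peak
```

With a 0.1 rad aspect variation the cross-range resolution step is λ/(2·0.1) = 0.5 m, so the two scatterers 5 m apart are well resolved.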
After the reconstruction of the radar image, which reduces to a Fourier transform of
the echo signal (7.2) with the weight function w(ϕ), we obtain for the intensity

|ν(s)|² = ∬ f(x1) f*(x2) U(s − x1, s − x2) dx1 dx2 + η(x), (7.3)

U(s1, s2) = ∬ w(ϕ1) w*(ϕ2)
× exp{j[ψ(ϕ1) − ψ(ϕ2)] + j(4π/λ)(s1ϕ1 − s2ϕ2)} dϕ1 dϕ2, (7.4)

where s is the cross range coordinate in the image plane, the sign * indicates complex
conjugation, η(x) is complex noise on the image, and U(s1, s2) is the cross correlation
function of the hologram.
The statistical characteristics |ν(s)|² and U(s1, s2) will be analysed on the
assumption that f(x) is a sum of the δ-functions of point scatterers and ψ(ϕ) is
defined by the normal distribution law. Consider the average U(s1, s2) value over
the phase fluctuations ψ(ϕ), taking them to be Gaussian. With the formula for the
characteristic function and the expansion of ρ(ϕ1 − ϕ2) into a Taylor series at σ² ≪ 1,
we get

⟨exp{j[ψ(ϕ1) − ψ(ϕ2)]}⟩ = exp{−σ²[1 − ρ(ϕ1 − ϕ2)]}
= exp{−(σ²/2Δ²)(ϕ1 − ϕ2)²},

where σ² is the phase noise dispersion, ρ(ϕ1 − ϕ2) is the correlation factor and Δ² is
a quantity inverse to the second derivative of the correlation factor at zero, which
describes the angle correlation step of the target aspect variation.
Assuming w(ϕ) = exp[−ϕ²/(2θ²)], where θ describes the angle step of the
synthesis, we find

U(s1, s2) = [λ²C / (64π √((ds² + dc²)² − dc⁴))]
× exp{−[(ds² + dc²) / (2[(ds² + dc²)² − dc⁴])] · [(s1 − s2)² + (2ds²/(ds² + dc²)) s1 s2]},
(7.5)

where C = exp(−4π²); ds = λ/(2θ) is a resolution step corresponding to the
synthesis time Ts (or the aspect variation θ = VTs sin α/ro); V is the linear target
velocity; α is the angle between the antenna pattern axis and the vector V; dc =
λσ/(2Δ) is a space correlation step of target path instabilities; and ro is the target
range at the moment of time Ts/2.
The average intensity of a point target image (the impulse response of the system),
derived for a partially coherent echo signal, is
 
⟨|ν(s)|²⟩ = U(s − x, s − x) = C1|A|² exp[−(s − x)²/(ds² + 2dc²)], (7.6)
where C1 is the same pre-exponential factor as in Eq. (7.5), A is the signal amplitude,
and x is the scatterer coordinate.
For a target composed of a multiplicity of scatterers, each scatterer will be
represented by a peak in the image described by Eq. (7.6). The position of each
scatterer’s image along the s-coordinate corresponds to its real position along the
x-coordinate in the target plane. Moreover, every pair of scatterers will be represented
in the image function by an interference term
 
U(s − x1, s − x2) = C1 Re{A1A2* exp[−[s − (x1 + x2)/2]²/(ds² + 2dc²) − (x1 − x2)²/(4ds²)]}. (7.7)
The additional term in Eq. (7.7) defines a peak located half way between the images
of the respective scatterers; it has the same width as the peak of any other scatterer,
and its magnitude is governed by the ratio of the interscatterer distance to the
resolution step value at zero phase noise. If this ratio is large, the interference term
due to the superposition of side lobes in individual pixel images is negligible as
compared with the average image intensity.
Under the conditions of partial signal coherence, the real resolution can be found
from the 0.5 level of the maximum intensity |ν(s)|2 :


d′s = 2s|_{|ν(s)|²=0.5} = C2 √(ds² + 2dc²) = C2 ds √(1 + 2(σTs/Tc)²), (7.8)
where C2 is a constant defined by the function w(ϕ): in the exponential and uniform
approximations, C2 ≅ 1.66 and C2 = 1, respectively. Obviously, if ds decreases
by the value Δs, the real resolution d′s will improve only by Δ′s (Fig. 7.1(a)):

Δ′s = C2 √(ds² + 2dc²) − C2 √((ds − Δs)² + 2dc²), (7.9)
and with increasing Ts the gain in the real resolution will become still smaller.
Equation (7.9) can be reduced to the quadratic a ds² + b ds + c = 0, where

a = 4(p² − Δs²); b = 4Δs(Δs² − p²);
c = p²(2Δs² + 4dc²) − p⁴ − Δs⁴; p = Δ′s/C2.

We can now calculate the ds and Ts values that may be considered most suitable for
the synthesis at given Δs and Δ′s:

ds,opt = Δs/2 + √(Δs²/4 − c/a), (7.10)
Ts,opt = λro/(2V sin α · ds,opt). (7.11)
At the values of λ = 0.1 m, ro = 50 km, V = 600 m/s, α = 90°, Δs = 0.1 m,
Δ′s = 0.05 m and C2 = 1, we find Ts,opt = 1.83 s for Tc = 1.5 s and Tc = 3 s
(dc = 6.98 m and dc = 3.49 m, respectively).
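Eq. (7.8) lends itself to direct computation. The sketch below implements it for the viewing geometry of this example (the function names are ours, and the trend it shows is the saturation of the resolution gain once ds becomes comparable with dc):

```python
import math

def real_resolution(ds, dc, C2=1.0):
    """Real resolution step of Eq. (7.8): d's = C2*sqrt(ds**2 + 2*dc**2)."""
    return C2 * math.sqrt(ds ** 2 + 2.0 * dc ** 2)

def ds_of_Ts(lam, ro, V, alpha, Ts):
    """Coherent resolution step ds = lam/(2*theta), theta = V*Ts*sin(alpha)/ro."""
    return lam * ro / (2.0 * V * Ts * math.sin(alpha))

# geometry of the worked example: lam = 0.1 m, ro = 50 km, V = 600 m/s
d1 = real_resolution(ds_of_Ts(0.1, 50e3, 600.0, math.pi / 2, 1.0), dc=3.49)
d2 = real_resolution(ds_of_Ts(0.1, 50e3, 600.0, math.pi / 2, 2.0), dc=3.49)
# doubling the synthesis time still helps here, but the gain saturates
```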
Formula (7.11) defines the synthesis time of a partially coherent signal, which is
optimal in the sense that a longer time will require greater computer resources but will
not essentially improve the image quality determined by the real resolution d′s or the Δ′s/Δs
Figure 7.1 Characteristics of an imaging device in the case of partially coherent
echo signals: (a) potential resolving power at C2 = 1, (b) performance
criterion (1 – dc = 6.98 m, 2 – dc = 3.49 m and 3 – dc = 0)
ratio (Fig. 7.1(b)). This ratio quantitatively describes the gain in the angular radar
resolution owing to the synthesis of partially coherent signals, as compared with that
for perfect viewing conditions (dc → 0).
In the next section, we shall estimate the synthesis conditions by numerical
simulation. The key factor in the imaging model to be described is target path
fluctuations.
7.2 Modelling of path instabilities of an aerodynamic target
Path instabilities will be considered as random range displacements of a target
(model I) or as independent fluctuations of the target velocity along the x- and
y-axes (model II). The appropriate random processes will be expressed by recurrent
difference equations [26]
Yi[n] = Σ_{l=0}^{L} al X[n − l] + Σ_{k=1}^{K} bk Yi[n − k], (7.12)

where the coefficients a0, a1, ..., aL and b1, b2, ..., bK, as well as L and K, vary with
the cross correlation function; the subscript i denotes the number of the model for
a random deviation of the target motion parameter. The coefficients al and bk in
Eq. (7.12) that are necessary for obtaining the values of Yi[n] with a prescribed
correlation coefficient ρ(τ) were presented in the work [26].
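A minimal first-order instance of the recurrence (7.12) is the Gauss–Markov model below. The coefficients matching the book's prescribed ρ(τ) are those of [26]; here b1 and a0 are chosen, as an illustrative assumption, to give a simple exponential correlation with time constant Tc:

```python
import math
import random

def gauss_markov(sigma, Tc, dt, n, seed=1):
    """First-order instance of Eq. (7.12): Y[n] = a0*X[n] + b1*Y[n-1], with
    b1 = exp(-dt/Tc) and a0 = sigma*sqrt(1 - b1**2), which gives an
    exponentially correlated sequence with rms value sigma."""
    rng = random.Random(seed)
    b1 = math.exp(-dt / Tc)
    a0 = sigma * math.sqrt(1.0 - b1 * b1)
    y, out = 0.0, []
    for _ in range(n):
        y = a0 * rng.gauss(0.0, 1.0) + b1 * y   # the recurrence itself
        out.append(y)
    return out

# model-I-style range displacements: sigma_p = 0.05 m, Tc = 1.5 s, 100 Hz sampling
track = gauss_markov(sigma=0.05, Tc=1.5, dt=0.01, n=20000)
```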
In model I of path instabilities, the current range rT[n] to a target is described as a
sum of the predetermined range variation r[n] and the random component Y1[n]; σp is
the mean square deviation of the range. The quantities σp and Tc are: σp = 0.04
or 0.05 m, Tc = 1.5 and 3 s. The values of σp and Tc were found heuristically from a
preliminary simulation.
In model II, the modulus of the real target velocity vector is

Vr[n] = √[(V + Y2x[n])² + Y²2y[n]], (7.13)

where Y2x[n] and Y2y[n] are the current values of random velocity deviations along
the x- and y-axes, respectively; for comparison, the mean square deviation of the
velocity is σx,y = 0.1 or 0.2 m/s at Tc = 1.5 or 3 s. The values of σx,y and Tc are
presented here courtesy of A. Bogdanov, O. Vasiliev, A. Savelyev and M. Chernykh
who measured them in real flight conditions. Their experimental data on coherent
radar signals in the centimetre wave range are also described in Reference 28.
The current angle between the antenna pattern axis and the vector Vr[n] in this
model is

α[n] = α + arctg[Y2y[n]/(V + Y2x[n])]. (7.14)
With Eqs (7.13) and (7.14) combined with the viewing conditions of model II, we
have computed the real current range rT [n] to the target.
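Eqs (7.13) and (7.14) can be sketched directly (atan2 is used in place of arctg, which is equivalent for V + Y2x > 0):

```python
import math

def model2_kinematics(V, y2x, y2y, alpha):
    """Eqs (7.13)-(7.14): actual speed and heading of the target when the
    nominal velocity V is perturbed by the deviations y2x (along-track)
    and y2y (cross-track); alpha is the nominal antenna-pattern angle."""
    Vr = math.sqrt((V + y2x) ** 2 + y2y ** 2)       # Eq. (7.13)
    alpha_n = alpha + math.atan2(y2y, V + y2x)      # Eq. (7.14)
    return Vr, alpha_n

# a small cross-track deviation barely changes the speed but shifts the heading
Vr, a = model2_kinematics(600.0, 0.0, 0.2, math.pi / 2)
```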
7.3 Modelling of radar imaging for partially coherent signals
To make the next step in the modelling of a radar image, we suggest that the predeter-
mined path component of a point target is normal to the antenna pattern axis, that is,
α = 90◦ , the transmitter pulses have a spectral width fc = 75 MHz, and their other
parameters are chosen with the account of well-known restrictions for the removal of
image inhomogeneities [104].
The range image of a target was formed by coherent correlation processing of
every echo signal. For every pixel on the range image, the nth (n = 1, . . . , 256)
value of a complex echo signal was recorded to form a microwave hologram [138].
The reference function was formed ignoring the errors in the estimated parameters of
target motion. The reconstructed image |ν(r, s)|2 was 2D in the r- and s-coordinates
(range and cross range). The simulation showed that the phase noise due to path
instabilities did not affect the range image of a target. Therefore, we shall further
treat only its cross range section along the range axis.
A visual analysis of impulse responses during the imaging of partially coherent
echo signals (Tc = 3 s, Ts = 1.5 s) indicates that phase fluctuations largely pro-
duce the following types of noise (Fig. 7.2). First, there is a shift of the impulse
response along the s-axis in the image field (Fig. 7.2(a)). Second, the peak of the
Figure 7.2 Typical errors in the impulse response of an imaging device along the
s-axis: (a) response shift, (b) response broadening, (c) increased ampli-
tude of the response side lobes and (d) combined effect of the above
factors
major impulse response becomes broader (Fig. 7.2(b)). Third, the side lobes of the
impulse response become larger to form some additional features commensurable in
their intensities with the major peak (Fig. 7.2(c)).
Combinations of the three effects on the final image are also possible (Fig. 7.2(d)).
It is worth noting that the first effect can be eliminated during the image processing
by relating the window centre to the nth pixel with maximum intensity.
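The distortion types discussed above can be reproduced qualitatively with a toy point-target simulation; the correlated Gaussian phase noise below is a generic assumption for illustration, not the flight-measured statistics used in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.1
phis = np.linspace(-0.05, 0.05, 256)       # aspect angle samples, rad
x0 = 0.0                                   # point target cross range position, m

# correlated phase noise psi(phi): white noise smoothed over part of the aperture
psi = np.convolve(rng.normal(0.0, 1.0, phis.size), np.ones(64) / 64, mode="same")
psi *= 1.0 / psi.std()                     # scale to ~1 rad rms

z_clean = np.exp(-1j * 4 * np.pi / lam * phis * x0)
z_noisy = z_clean * np.exp(1j * psi)       # partially coherent echo

x_grid = np.linspace(-10.0, 10.0, 801)
def response(z):
    return np.array([abs(np.sum(z * np.exp(1j * 4 * np.pi / lam * phis * x)))
                     for x in x_grid])

r_clean, r_noisy = response(z_clean), response(z_noisy)
# the clean response peaks exactly at x0; the noisy one loses peak amplitude
# and can show the shift/broadening/side-lobe effects of Fig. 7.2
```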
The presence of distorting effects necessitates finding ways to measure a real
resolution step. A conventional way of estimating resolution is by measuring the
impulse response of the processing device at the level 0.5 of the maximum intensity
|ν(s)|2 . In that case, analysis is made of all the images along the s-axis, independent
of phase noise.
Another way of measuring a resolution step is that all additional features on a
point target image at the 0.5 level are considered to be side lobes, irrespective of their
intensity, and can be removed in advance.
Figures 7.3 and 7.4 present the estimates of an average resolution step ds for
models I and II of path instabilities, respectively. The average value was calculated
from 100 records of path instability of a point target for every discrete time moment
Ts (Ts = 0.1, . . . , 2.9 s). The estimation of a resolution step within model I fails to
predict the degree of partial coherence effect on the radar image, since we know
nothing about a perfect image a priori. The analysis of Fig. 7.3 has shown that the
resolution step error is fairly large at σTs/Tc ≥ 1, where σ = 2πσp/λ. It is the
appearance of false features above the 0.5 level with increasing synthesis time that
leads to an overestimation of the resolution step computed from the impulse response

Figure 7.3 The resolving power of an imaging device in the presence of range
instabilities versus the synthesis time Ts and the method of resolution
step measurement: (a) σp = 0.04 m; 1 and 1′ (2 and 2′) – first (second)
way of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and
2′ – Tc = 3 s; (b) σp = 0.05 m, 1 and 1′ (2 and 2′) – first (second) way
of resolution step measurement; 1 and 2 – Tc = 1.5 s, 1′ and 2′ – Tc = 3 s
width and, hence, to a larger error in the target size measurement. Such an error is
inherent in this method of resolution evaluation.
In the model of range instabilities (model I), the d′s(Ts) curves in Fig. 7.3 show
a reasonable agreement with the theoretical curves in Fig. 7.1(a). The curve behaviour
in Fig. 7.4 differs from the calculated dependences and from the model computations
shown in Fig. 7.3 in that the d′s(Ts) curve has a minimum. The latter is due to an error
in the method of estimating a resolution step, although the calculated d′s(Ts) curve
does not indicate the presence of extrema.
The simulation results (curve 1 in Fig. 7.4(a)) can be used to find the synthesis
time intervals for a particular type of signal (or a particular imaging algorithm):
I – totally coherent, II – partially coherent and III – incoherent. One can choose
various imaging algorithms for available statistical characteristics of path instabilities
and for a particular time Ts . For instance, it is reasonable to use incoherent processing
algorithms at synthesis times for which a signal can be considered as incoherent [78].
For shorter intervals I and II, one should use coherent processing algorithms and
evaluate their performance in terms of the criterion Δ′s/Δs (Fig. 7.5).
Figure 7.4 The resolving power of an imaging system in the presence of velocity
instabilities versus the synthesis time Ts and the method of resolution step
measurement: (a) σx = σy = 0.01 m/s (other details as in Fig. 7.3),
(b) σx = σy = 0.2 m/s (other details as in Fig. 7.3)
Figure 7.5 Evaluation of the performance of a processing device in the case of
partially coherent signals versus the synthesis time Ts and the space
step of path instability correlation dc: 1 – dc = 6.98 m, 2 – dc = 3.49 m
The resolution estimate obtained by the second method is close to the theoretical
value. However, this approach has a serious limitation because a real target possesses
a large number of scatterers. The positions of respective intensity peaks on a radar
image are unknown a priori, so the application of this technique may lead to a loss of
information on adjacent scatterers on an image. This method proves to work well if
one knows in advance that the target being viewed is a point object or that a range pixel
corresponds to a single scatterer. In that case, the imaging device can be ‘calibrated’
by evaluating the phase noise effect on it.
The discrepancy between the simulation results presented in Figs 7.3 and 7.4
may be interpreted as follows. Model I of target path instabilities simulates random
phase noise associated only with the displacement of range aperture pixels. Model II
introduces greater phase errors in the echo signal, because the aperture is synthesised
by non-equidistant pixels, which are additionally range-displaced. This model seems
to better represent the real tracking conditions, since it accounts for random target
yawing in addition to random range displacements.
The analytical expressions given earlier and the simulation results on partially
coherent signals with zero compensation for the phase noise can provide the real
resolving power of an imaging device. Today, there are no generally accepted criteria
for evaluation of the performance of radar devices for imaging partially coherent
signals. The results discussed in this chapter allow estimation of the device perfor-
mance in the ideal case of dc → 0; on the other hand, they enable one to evaluate
the efficiency of computer resources to be used in terms of the possible gain in the
resolving power.
Track instabilities of real aerodynamic targets and other factors introducing phase
noise give rise to numerous defects on an image. So the application of conventional
ways of estimating the resolving power of imaging systems leads to errors. However,
there is an optimal synthesis time interval which provides the best angular resolution
with a minimal effect of phase fluctuations. Therefore, when phase noise cannot
be avoided, which is usually the case in practice, it is reasonable to make use of a
statistical database on fluctuations of motion parameters for various classes of targets
and viewing conditions. The processing model we have suggested can be helpful in the
evaluation of the optimal time of aperture synthesis in particular viewing conditions.
The viewing conditions also require a specific processing algorithm to be used,
so radar-imaging devices should also be classified into coherent, partially coherent or
incoherent. The simulation results presented in Fig. 7.4 do not question the validity of
analytical relations (7.4), (7.5) and (7.7) but rather define their applicability, because
a signal becomes incoherent when a fluctuating target is viewed for a long time.
Chapter 8
Phase errors and improvement of image quality
Possible sources of phase fluctuations of an echo signal, which negatively affect the
aperture synthesis, are turbulent flows in the troposphere and ionosphere. Fluctuations
of the refractive index due to tropospheric turbulence impose restrictions on
aperture synthesis at centimetre wavelengths. Ionospheric turbulence affects
far-decimetre wavelengths. Phase fluctuations decrease the resolving power of a synthetic aperture,
leading to a lower image quality.
8.1 Phase errors due to tropospheric and ionospheric turbulence

8.1.1 The refractive index distribution in the troposphere
Fluctuations in the troposphere may arise from changes in the meteorological
conditions and air whirls. As a result, there are non-uniform local distributions of
temperature and humidity, leading to a non-uniform distribution of refractivity N:

N = (n − 1) × 10⁶, (8.1)
where n is the refractive index.
At the centimetre wavelengths, a static air volume has refractivity N defined by
the Smith–Weintraub formula:

N = 77.6P/T + 3.73 × 10⁵ e/T², (8.2)

where P is the total atmospheric pressure measured in millibars, T is temperature in
Kelvin degrees and e is the partial water vapour pressure in millibars.
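Eqs (8.1) and (8.2) are straightforward to compute; the sketch below uses the standard Smith–Weintraub coefficients with typical sea-level conditions as our illustrative inputs:

```python
def refractivity(P_mbar, T_kelvin, e_mbar):
    """Refractivity of Eq. (8.2) with the standard Smith-Weintraub coefficients."""
    return 77.6 * P_mbar / T_kelvin + 3.73e5 * e_mbar / T_kelvin ** 2

# typical sea-level conditions: P ~ 1013 mbar, T ~ 288 K, e ~ 10 mbar
N = refractivity(1013.0, 288.0, 10.0)   # ~318 N-units
n = 1.0 + N * 1e-6                      # refractive index via Eq. (8.1)
```

The humidity term dominates the variability of N, in line with the remark that follows Eq. (8.2).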
It follows from Eq. (8.2) that the value of N at centimetre and longer wavelengths
strongly depends on the water vapour concentration, while its variation with the
wavelength λ is insignificant. The latter fact is quite important because it makes it
possible to obtain phase fluctuation spectra for various wavelengths in the microwave
range, using an experimental spectrum measured at any wavelength. The major type
of non-uniformity responsible for amplitude and phase fluctuations of an electromagnetic
wave are so-called globules. These represent spherical or ellipsoidal structures,
in which the refractive index differs, for some reason, from that in the environment.
Generally, globules have arbitrary and irregular shapes. They arise from the local
changes in the temperature, humidity or pressure accompanying turbulent phenom-
ena in the troposphere. Since these causative factors behave differently at different
points in space, the troposphere is generally non-uniform.
We shall first briefly describe the characteristics of a turbulent troposphere. The
refractive index of the troposphere is generally a function n(r, t) of the radius
vector r and time t, which can be written as

n(r, t) = ⟨n⟩ + δn(r, t), (8.3)

where ⟨n⟩ is the average value of the refractive index and δn(r, t) is its deviation
from the average. Since the problem of interest is the fluctuation of the refractive
index only, we shall further take ⟨n⟩ = 1. The autocorrelation function of these
fluctuations is

Bn(r1, r2, t1, t2) = ⟨δn(r1, t1) δn(r2, t2)⟩, (8.4)

where r1, r2 are the radius vectors of the selected points.
For a steady-state turbulence, the autocorrelation function is independent of t (the
steady state in time):

Bn(r1, r2) = ⟨δn(r1, t) δn(r2, t)⟩. (8.5)
For a statistically uniform turbulence (the stationarity in space), the correlation
function will not change if a pair of points r1 and r2 is displaced by the same distance and
in the same direction simultaneously, that is, B(r1, r2) varies only with r1 − r2 = r.
A spatially uniform distribution is called isotropic if Bn(r) depends only on r = |r|,
that is, on the distance between the observation points but not on the direction.
However, even in the case of a uniform and isotropic random distribution of
the refractive index, it appears to be quite difficult to choose an autocorrelation
function for its fluctuations such that it could describe the real troposphere accurately.
The only case when the fluctuation distribution can be described from theoretical
considerations is a locally uniform isotropic turbulence. The general theory of this
kind of turbulence was discussed in References 132 and 133. In real meteorological
conditions, the distributions of wind velocity, pressure, humidity, temperature and
the refractive index cannot be uniform or isotropic in large space regions. But in a
relatively small region, whose size Lo is known as the outer-scale size of turbulence,
the distributions may be taken to be both uniform and isotropic.
Theoretically, it is possible to describe fluctuations of the refractive index in terms
of physical considerations of turbulence origin and development. The theory treats
statistical fluctuations of velocity and related scalar quantities (such as temperature
and the refractive index), induced by disturbances in horizontal air currents because
of wind and by perturbations in laminar flow due to convection.
The physical mechanism of turbulence origin and development is as follows.
When the Reynolds number of the translational wind flow exceeds its critical value, huge
whirls (globules) arise and their size may exceed Lo . Such whirls are produced owing
to the energy of translational flow movement, for example, to the wind power. This
power is then given off to whirls of size Lo , and so on. Eventually, the energy is
dissipated because of viscous friction in the smallest whirls of size lo known as the
inner-scale size of turbulence. In this way, huge whirls gradually split into smaller
ones, and this process goes on until the power of rotational motion of the smallest
whirls transforms to heat in overcoming the viscous force. For this reason, a region
where huge whirls transform to small ones is called an inertia region. Within such
a region, the instantaneous distribution of the refractive index n(r ) is an unsteady
random function. However, the difference

n(r1 ) − n(r2 )

is steady under the condition

|r2 − r1 | < Lo .

In other words, n(r ) appears to be a random function with the first increments being
steady. Random processes, like those discussed in the books [132,133], can be conve-
niently described by structure functions. The one for the refractive index distribution
has the form:

Dn(r) = ⟨[n(r₁) − n(r₂)]²⟩. (8.6)

The structure function is a fundamental characteristic of a random process with steady first increments, and it replaces the concept of the autocorrelation function, which simply does not exist for such non-stationary processes.
The quantity Dn (r) describes the intensity of n(r) fluctuations, whose periods are
smaller or comparable with r. For a locally uniform and isotropic turbulence, it is
defined as

Dn(r) = ⟨[n(r₁ + r) − n(r₁)]²⟩, (8.7)

where r is an arbitrary increment of r1 .


Let us consider some statistical characteristics of the refractive index distribution
in the troposphere. The detailed analysis made in References 132 and 133 has shown
that the structure function of this parameter can be written as

Dn(r) = Cn² r^(2/3), (lo ≪ r ≪ Lo), (8.8)

where Cn2 is a structure constant of the refractive index. Equation (8.8) describes the
so-called 2/3 law by Obukhov and Kolmogorov for the refractive index distribution.
Numerous measurements made in the near-earth troposphere [132,133] showed a
good agreement between the fluctuation characteristics of n and the 2/3 law. The
value of lo in the troposphere is found to be ∼1 mm. The quantity Lo is a function
of direction and altitude. Therefore, one may assume that the horizontal extension of large whirls near the earth surface has the same order of magnitude as the altitude, provided that the maximum altitudes lie in the range from 100 to 1000 m [110].
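The 2/3 law of Eq. (8.8) is simple enough to exercise numerically. The sketch below (a minimal Python illustration; the Cn² value is an assumed placeholder, not a figure from the measurements discussed later) checks the scale property that defines the law: doubling the separation r multiplies Dn by 2^(2/3), independently of Cn².

```python
# A minimal numerical sketch of the Obukhov-Kolmogorov 2/3 law, Eq. (8.8):
#   D_n(r) = Cn2 * r**(2/3),  valid only for l_o << r << L_o.
# The value of Cn2 below is illustrative, not a measured figure.

def structure_function(r_m, cn2=1e-14):
    """Refractive-index structure function D_n(r) under the 2/3 law."""
    return cn2 * r_m ** (2.0 / 3.0)

# The defining property of the 2/3 law: D_n(2r)/D_n(r) = 2**(2/3) ~ 1.587
# for any r inside the inertia region, whatever the value of Cn2.
ratio = structure_function(2.0) / structure_function(1.0)
print(ratio)
```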



160 Radar imaging and holography

Figure 8.1 The normalised refractive index spectrum Φn(χ)/Cn² as a function of the wave number χ in various models: 1 – Tatarsky’s model-I, 2 – Tatarsky’s model-II, 3 – Carman’s model, 4 – modified Carman’s model. (The plot spans the whirl origin, inertia and energy dissipation regions, with χo = 2π/Lo (Lo ∼ 1 m) and χm = 2π/lo (lo ≈ 1 mm).)

The refractive index spectrum obeying the 2/3 law is

Φn(χ) = 0.033 Cn² χ^(−11/3), at (χo < χ < χm), (8.9)

where χo ∼ (2π/Lo ), χm ∼ (2π/lo ) and χ is the spatial wave number. It has been
found experimentally that the Φn(χ) spectrum has the form of χ^(−11/3) in the inertia region, where the wave numbers are larger than χo. Figure 8.1 shows the normalised spectra for three regions: the region of whirl origin (χ < 2π/Lo), the inertia region (2π/Lo ≤ χ ≤ 2π/lo) and the dissipation region (χ ≥ 2π/lo).
It is seen that the spectral density Φn(χ) in the region χ ≥ 2π/lo decreases much faster than might be expected from the χ^(−11/3) formula. But in what way Φn(χ) decreases in this region is still unclear theoretically. One usually deals with three kinds of spectra in the dissipation region. One obeys the χ^(−11/3) law, another drops abruptly at χ = χm, implying that Φn(χ) = 0 at χ > χm, and, finally, the spectrum changes on addition of the factor exp[−(χ²/χm²)].
The second case obeys Eq. (8.9) in practice. We have termed the respective model
spectrum Tatarsky’s model-I. It has been successfully employed in Reference 133 and
some other studies. In Reference 132, V. Tatarsky used the following expression for


the refractive index spectrum:


 
Φn(χ) = 0.033 Cn² χ^(−11/3) exp(−χ²/χm²) (8.10)
with χm = 5.92/lo rather than 2π/lo, as before. We have called the model for this case
Tatarsky’s model-II, which is fully valid in the inertia region but is approximate at
χ > χm .
It follows from the analysis of the two models that they can adequately describe
the statistical characteristics of the refractive index in the inertia region and are sat-
isfactory for the dissipation region. In the region of χ < (2π/Lo ), however, these
models do not undergo any modification, that is, the dependence Φn(χ) remains χ^(−11/3). On the other hand, it is known from References 132 and 133 that the spectral density curve Φn(χ) at χ < (2π/Lo) is not universal and may change with
the meteorological conditions. Therefore, the models of (8.9) and (8.10) are practi-
cally unable to evaluate the effects of this region on measurements. Besides, these
models describe well only small-scale turbulence, which is quite clear from Fig. 8.1.
In reality, however, most of the turbulence pulsation ‘power’ is accumulated in large
whirls, at χ ≤ (2π/Lo ). In such regions, the uniformity and isotropic character of
the random distribution of n(r , t) are also violated. Still, quantitative estimations can
be made from interpolation formulae describing approximately the structure function
behaviour at large Lo values, that is, in the range of small χ . One of these is Carman’s
function having the following spatial spectrum [133]:

Φn(χ) = 0.063 δn₁² Lo³ / (1 + χ²Lo²)^(11/6) at χ ≤ 2π/Lo, (8.11)

where δn₁² is the dispersion of the refractive index fluctuations.


The spectral model of (8.11) known as Carman’s model works well for large-scale
turbulence (Fig. 8.1). One can see from Eq. (8.11) that it does not include explicitly
the constant Cn², which is related to the dispersion δn₁² by the expression

Cn² = 1.9 δn₁² Lo^(−2/3). (8.12)
Using Eq. (8.12), one can derive expressions for Tatarsky’s models I and II:
Φn(χ) = 0.063 δn₁² Lo^(−2/3) χ^(−11/3) at 2π/Lo ≪ χ ≪ 2π/lo, (8.13)

Φn(χ) = 0.063 δn₁² Lo^(−2/3) χ^(−11/3) exp(−χ²/χm²) at 2π/Lo ≪ χ ≪ 2π/lo. (8.14)
This representation is convenient when the refractive index fluctuations are given as
δn₁² rather than through Cn².
The next point to discuss is the applicability of the spectra described by
Eqs (8.9), (8.10) and (8.11). When using this or that spectral model in problems of
parameter fluctuations of an electromagnetic wave in a turbulent medium, one should


bear in mind the following factors. First, the spectra are valid in the inertia region of
a locally uniform and isotropic turbulence. Sometimes, the turbulence spectrum may
strongly differ from the above models. Second, the spectrum at χ ≤ χo is, at best, an
approximation, even though one may use Carman’s spectra. At χ ≥ χm , the model
spectra are only good approximations. Note that the spectrum of the form (8.11)
transforms to that of (8.9) at χ²Lo² ≫ 1. In addition to the three types of spectra, there
is a spectrum of the form:
Φn(χ) = α exp(−χ²/χm²) / (1 + χ²Lo²)^(11/6),

α = [δn₁² Lo³ Γ(11/6) / (π^(3/2) Γ(1/3))] C(χm Lo),

C(χm Lo) ≈ [1 + (Γ(11/6) Γ(−1/3) / (Γ(1/3) Γ(3/2))) (χm Lo)^(−2/3)]^(−1). (8.15)
At χm Lo ≫ 1, the correction term C(χm Lo) ≈ 1. Since lo ∼ (1 ÷ 10) mm and Lo ≥ 1 m, we have

χm = 5.92/lo, χm = (5.92 ÷ 59.2) cm⁻¹,

χm Lo ≥ 5.92 × 10³.
Keeping in mind this fact and
Γ(11/6) / (π^(3/2) Γ(1/3)) ≈ 0.06,
we get
 
Φn(χ) = 0.06 δn₁² Lo³ [1 + χ²Lo²]^(−11/6) exp(−χ²/χm²) (8.16)

or

Φn(χ) = 0.033 Cn² Lo^(11/3) [1 + χ²Lo²]^(−11/6) exp(−χ²/χm²). (8.17)
It would be reasonable to call a spectrum of the type (8.16) or (8.17) Carman’s mod-
ified spectrum. If relation (8.12) is fulfilled, this spectrum will coincide with that
described by Eqs (8.10) and (8.14) at large values of χ. In the small-χ range, it coincides with the Carman spectrum shown in Fig. 8.1. The choice of a particular type
of spectrum varies with the problem to be solved. Fluctuations of some electromag-
netic wave parameters, such as phase and amplitude, are often sensitive to a certain
turbulence spectrum, or to large- or small-scale whirls. Keeping this important fact
in mind, one should analyse carefully the applicability of the chosen spectrum before
using it.
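The mutual consistency of the four model spectra can be checked numerically. The sketch below assumes the Lo³ normalisation of Carman's spectrum that makes Eq. (8.11) agree with Eqs (8.12) and (8.13); the scales Lo = 1 m and lo = 1 mm follow Fig. 8.1, while δn₁² is set to unity purely for illustration.

```python
import numpy as np

# The four model spectra of Fig. 8.1, with dn2 = 1 for illustration.
# Lo = 1 m and lo = 1 mm follow the figure; Cn2 comes from Eq. (8.12).
Lo, lo = 1.0, 1e-3
chi_m = 5.92 / lo                       # dissipation wave number (model-II)
dn2 = 1.0                               # refractive-index dispersion
Cn2 = 1.9 * dn2 * Lo ** (-2.0 / 3.0)    # Eq. (8.12)

def tatarsky_1(chi):
    """Eq. (8.9): pure power law, cut off abruptly at chi_m."""
    return np.where(chi < chi_m, 0.033 * Cn2 * chi ** (-11.0 / 3.0), 0.0)

def tatarsky_2(chi):
    """Eq. (8.10): power law with a Gaussian dissipation factor."""
    return 0.033 * Cn2 * chi ** (-11.0 / 3.0) * np.exp(-chi ** 2 / chi_m ** 2)

def carman(chi):
    """Eq. (8.11), assuming the Lo**3 normalisation: finite at small chi."""
    return 0.063 * dn2 * Lo ** 3 / (1 + chi ** 2 * Lo ** 2) ** (11.0 / 6.0)

def carman_modified(chi):
    """Eqs (8.16)/(8.17): Carman's spectrum with the dissipation factor."""
    return carman(chi) * np.exp(-chi ** 2 / chi_m ** 2)

# Inside the inertia region (1/Lo << chi << chi_m) all four models agree,
# while at chi -> 0 only Carman's spectra remain finite.
chi = 100.0
print(float(carman(chi) / tatarsky_1(chi)))   # close to 1
print(float(carman(1e-6)))                    # ~0.063, no small-chi divergence
```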


The best way of verifying a model is to compare the results obtained with available
experimental data. Although the models of (8.9) and (8.10) are rather approximate at
χ < (2π/Lo ), they still provide a good agreement with measurements (e.g. of phase
fluctuations). Moreover, they can give the results in an analytical form. On the other
hand, the models of (8.11) and (8.15) are more accurate for large whirls but they are
unable to give clear analytical results. These circumstances have predetermined the
applicability of the models of (8.9) and (8.10). In the study of phase fluctuations,
both models yield similar analytical expressions.
It is of importance to discuss in some detail a vertical profile model of the struc-
ture constant. This constant describes the degree of refractive index non-uniformity,
because it relates the quantities D(r) and r (see Eq. (8.8)). The structure constant Cn2
is related to the tropospheric parameters δn₁² and r. For radiation propagation along an
oblique path, the turbulence ‘intensity’ changes with the altitude, and the Cn2 values
will be different at different altitudes. The structure function of n(r) will then be

Dn(r) = Cn²(h) r^(2/3),

where Cn2 (h) is a structure constant varying with altitude. To obtain quantitative
results, one is first to find the Cn2 (h) variation. The theoretical treatment of the problem
of parameter fluctuations for a plane wave in a turbulent troposphere [132] included
the following Cn2 (h) models:
 
Cn² = Cn0² exp(−h/h0), (8.18)

Cn² = Cn0² · 1/(1 + (h/h0)²), (8.19)

where Cn0² is the structure constant of the refractive index near the earth surface, h is

the altitude of the point in question, and h0 is a constant.


However, the question whether Eqs (8.18) and (8.19) can really describe the
Cn2 (h) function in the microwave frequency band remains unanswered. In order to
find the exact form of this function, it is necessary to examine the microstructure of
the refractive index distribution in the microwave range and to design a Cn2 (h) model.
This became possible only after the publication of the work [134], which reported
measurements made in experimental flight conditions. The structure constant profile
of the refractive index was measured along an oblique microwave path. The results
of the Cn (h) measurement were summarised in table 1 of Reference 134. Yet, it
was impossible to plot the Cn (h) function from these data, because they were to be
statistically processed. This was accomplished by the authors of Reference 144.
Figures 8.2 and 8.3 show some Cn2 (h) plots for different seasons (for April and
November). The Cn2 values in these plots represent records averaged over several
runs of the squared structure constant measurement (the averaging was actually made
over the time of day). The confidence limit was taken to be 0.98. Some of the Cn
values presented in Reference 134 differ considerably from the average values and
do not seem to be due to a statistical spread. To reveal such data, the authors used a


Figure 8.2 The profile of the structure constant Cn² versus the altitude for April at the SAR wavelength of 3.12 cm (axes: Cn² in cm^(−2/3), from 10⁻¹⁷ to 10⁻¹³; h from 0.3 to 3.0 km)

criterion based on the assumption of a normal error distribution. The Cn records that
differed from the average by more than a possible maximum of the statistical spread
and were lying within the 0.98 error limit were eliminated from further analysis.
The plots thus obtained were approximated by exponential functions, using the least
square method. As a result, the following analytical dependencies were derived for
the structure constant profile at the wavelength of 3.12 cm:
(a) the Cn²(h) model for April:

Cn²(h) = Cn0² exp(−h/h0) (8.20)

with Cn0² = 3.69 × 10⁻¹⁵ cm^(−2/3) and h0 = 2.17 × 10⁵ cm;


Figure 8.3 The profile of the structure constant Cn² versus the altitude for November at the SAR wavelength of 3.12 cm (axes: Cn² in cm^(−2/3), from 10⁻¹⁷ to 10⁻¹³; h from 0.3 to 3.0 km)

(b) the Cn²(h) model for November:

Cn²(h) = Cn0² exp(−h/h0), (8.21)

with Cn0² = 1.27 × 10⁻¹⁴ cm^(−2/3) and h0 = 8.89 × 10⁴ cm.

We can see that the refractive index fluctuations decrease with altitude. The major
contribution to the fluctuations is made by a tropospheric stratum 3 km thick above the
earth. The contribution of the other 7 km thickness (the total thickness of the tropo-
sphere is taken to be 10 km) is five times smaller. It is known that the fluctuation
of n increases with rising humidity. The most intense fluctuations are observed at


the air–cloud interface and inside the clouds. This model, however, ignores these
effects because of the lack of experimental data. But some data are available on
the effect of humidity and clouds on the dispersion δn2 of the refractive index val-
ues. Therefore, the model of the vertical δn2 profile allows estimation, in a first
approximation, of the cloud effect on phase fluctuations.
To conclude, it seems reasonable to extend the results on λ = 3.12 cm waves to
other centimetre wavelengths, since the Smith–Weintraub formula (8.2) indicates only
a slight dependence of n on the wavelength λ within the centimetre frequency band.

8.1.2 The distribution of electron density fluctuations in the ionosphere


In contrast to the troposphere, the ionosphere is characterised by electron density
fluctuations. Let ΔNe(r) denote fluctuations of the equilibrium electron density No, which is the average electron concentration. The variable ξ defined by the equality ξ = ΔNe(r) represents a uniform random distribution with a zero average and a standard deviation σξ. By definition, the autocorrelation function of this distribution is

Bξ(r₁ − r₂) = ⟨ξ(r₁)ξ(r₂)⟩,
where the angular brackets stand for the averaging over an ensemble.
According to the Wiener–Khinchin theorem, the autocorrelation function and the
spectrum form a Fourier transform pair:
Φξ(χ) = (2π)⁻³ ∫ Bξ(r) e^(−jχ·r) d³r, (8.22)

Bξ(r) = ∫ Φξ(χ) e^(jχ·r) d³χ, (8.23)

where χ is the spatial wave number.


Experimental investigations have shown [141] that both the phase fluctuation
spectra of a wave that has passed through a turbulent ionosphere and the amplitude
fluctuation spectra have an asymptotic power dependence. Hence, the spectra of
ionospheric whirls must also have a power dependence. Assuming the whirls to be
isotropic within a space scale from 70 m to 7 km, a 3D whirl spectrum will have the
form [141]:
Φξ(χ) ≈ χ^(−P), (8.24)
where P is the power index of the spectrum varying between 2 ≤ P ≤ 3.
C. L. Rino and various co-workers [113–116] have suggested a spectrum of the
electron density fluctuation:
Φ_ΔNe(χ) = CS χ^(−(2ν+1)) at χo < χ < χi, (8.25)

where CS is the turbulence parameter, χo is the wave number of the outer-scale size of ionospheric whirls, χi is that of their inner-scale size, and 2ν = P is the spectral power index. We have mentioned


above that the power index varies between 2 < P < 3, whereas the power index for
the troposphere is P = 8/3 (Kolmogorov’s spectrum).
The turbulence parameter is described as
CS = 8π^(3/2) χo^(P−2) ⟨ΔNe²⟩ Γ((P + 1)/2) / Γ((P − 1)/2), (8.26)

where Γ(·) is the gamma-function and ⟨ΔNe²⟩ is the mean square value of the fluctuation component of the electron density. For a typical fluctuation distribution in the ionosphere, CS ∼ 10²¹ (in MKS units). The quantity ⟨ΔNe²⟩ varies remarkably with the ionospheric conditions, so CS fluctuates from 6.5 × 10¹⁹ at P = 2.9 to 1.3 × 10²³ at P = 1.5 [22]. The ionosphere has a thickness of about 200 km.
The maximum electron density lies in the Nm F2 stratum at an altitude between 250
and 350 km.
The outer-scale size of a turbulent whirl along the shortest distance (ionospheric
whirls are anisotropic) is about 10 km. The respective value for a turbulent troposphere
is about 1 km.
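Equation (8.26) is straightforward to evaluate with the standard gamma-function. In the sketch below the outer-scale wave number and the mean-square electron density fluctuation are illustrative placeholders, not values from the text; only the functional form follows Eq. (8.26).

```python
import math

# Turbulence parameter of Eq. (8.26):
#   C_S = 8 * pi**(3/2) * chi_o**(P-2) * <dNe**2>
#         * Gamma((P+1)/2) / Gamma((P-1)/2)

def turbulence_parameter(P, chi_o, mean_sq_dNe):
    """C_S for spectral power index P (2 < P < 3 in the ionosphere)."""
    return (8.0 * math.pi ** 1.5 * chi_o ** (P - 2.0) * mean_sq_dNe
            * math.gamma((P + 1.0) / 2.0) / math.gamma((P - 1.0) / 2.0))

# Kolmogorov index P = 8/3 with an assumed 10 km outer scale
# (chi_o = 2*pi/10**4 m^-1) and a unit mean-square fluctuation:
cs = turbulence_parameter(8.0 / 3.0, 2.0 * math.pi / 1.0e4, 1.0)
print(cs)
```

C_S scales linearly with ⟨ΔNe²⟩, which is why the quoted range of C_S values spans several orders of magnitude as ionospheric conditions change.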

8.2 A model of phase errors in a turbulent troposphere

When discussing a SAR in Chapter 3, we pointed out that a turbulent non-uniform


troposphere could be a source of spatial phase fluctuations. Let us consider a tur-
bulence model with reference to a particular type of SAR – a radar with a focused
aperture. Suppose a SAR is located along the carrier track (Fig. 8.4). For simplicity,
we shall assume that there is only one point scatterer A across the swath width. This
target is located at the point having an oblique range R and is scanned over the synthesis length Ls, i.e.

Ls ≈ βH ,

where β is the aperture pattern width.


The equiphase surface of an echo signal represents a sphere with the centre at the
target location point. The track line is shown by the A1 –A2 line, and the thickness of
a turbulent tropospheric stratum is denoted as ht . The structure function of the phase
fluctuation for a spherical wave of the point target A is

Dϕ(ρ) = ⟨[ϕ(r + ρ) − ϕ(r)]²⟩, (8.27)

where ρ is the distance between the points, at which the phase fluctuations are to be
measured, for example, ρ = de . To find an analytical expression for D(ρ), consider a
2D spectrum of wave phase fluctuations in a turbulent troposphere. Using a gradual
perturbation approach, the authors of Reference 133 derived a simple formula relat-
ing the phase fluctuation parameters to the spectral density of the refractive index
fluctuations Φn(χ). The 2D spectral density Fϕ(χ, 0) and Φn(χ) have the simplest
relation, because the former is a 2D Fourier transform of the respective phase structure
function in the plane x = const. normal to the wave propagation direction. For a plane


Figure 8.4 A geometrical construction for a spaceborne SAR tracking a point object A through a turbulent atmospheric stratum of thickness ht (the sketch shows the synthesis length Ls along the carrier track line A1–A2, the altitude H, the carrier velocity V, the track-line projection on the earth, the oblique range R, the equivalent base de, the angle θ and the swath width containing the point target A)

wave with the cross section x = L, we have


 
Fϕ(χ, 0) = πk²L [1 + (k/(χ²L)) sin(χ²L/k)] Φn(χ), (8.28)
where L is the distance covered by the wave passing through a non-uniform turbulent
medium.
Using Eq. (1.51) from Reference 133, we can now turn to the structure function
of phase fluctuations in the plane x = L:
D(ρ) = 4π ∫₀^∞ [1 − J0(χρ)] Fϕ(χ, 0) χ dχ, (8.29)

where ρ is the distance between the points, at which the structure function is to be
measured in the plane x = L. It follows from Eq. (8.28) that the 2D spectrum of
Fϕ(χ, 0) is similar to the spectrum of the refractive index fluctuations Φn(χ) multi-
plied by the filtering function (in square brackets). Therefore, the wave propagation
through a turbulent medium is similar to the linear filter effect in circuit theory.


The filtering function of phase fluctuations is only slightly sensitive to the parame-
ter variations. For example, at χ = 0, Fϕ (χ , 0) is equal to 2π k 2 L, changing smoothly
with increasing χ as far as π k 2 L. Therefore, the filtering occurs relatively uni-
formly. The maximum product of the filtering function and Φn(χ) for typical SARs
is observed at small values of χ , or in large whirls. For this reason, phase fluctuations
and phase correlation are most sensitive to the outer-scale size of turbulence, Lo .
With Eq. (8.29) and the turbulence models of (8.9) and (8.10), we can arrive at
an expression for a uniform turbulence and a plane wave:
Dϕ(ρ) = α k² Cn² L ρ^(5/3), (8.30)
where

α = 2.91 at ρ ≥ √(λL), α = 1.46 at lo ≪ ρ ≪ √(λL),

and L is the electromagnetic wave path in a turbulent medium.


In order to examine the effect of phase errors on the recording of 1D holograms
by a side-looking radar, it would be useful to try to extend the above result to the case
of a non-uniform turbulence and a spherical wave [144].
From Tatarsky’s non-uniform model-I, we have
Dϕ(ρ) = 1.46 k² ρ^(5/3) ∫₀^L Cn²(h) dh, (lo ≪ ρ ≪ √(λL)), (8.31)

Dϕ(ρ) = 2.91 k² ρ^(5/3) ∫₀^L Cn²(h) dh, (ρ ≫ √(λL)). (8.32)

The last two expressions show that phase fluctuations are equally affected by all
whirls, irrespective of their distance to the observation point. Moreover, when ρ passes through the value √(λL), which is usually somewhere at the beginning of the path, the factor in front of Dϕ(ρ) increases 2-fold. Therefore, the experimental structure function Dϕ(ρ) must have a positive rise at ρ = √(λL).
It is interesting to follow how Dϕ (ρ) changes when a plane wave is replaced by
a spherical one. The formula relating the mean square value of the phase difference
fluctuation to the base ‘ρ’ for a spherical and plane wave [132] is
⟨(ϕ₁ − ϕ₂)²⟩_sp = [Dϕ(ρ)]_sp = ∫₀¹ Dϕ(ρt) dt.

For the plane wave Dϕ(ρ) = α k² Cn² L ρ^(5/3), we have


1
3
[Dϕ (ρ)]pl = A0 D(tρ)5/3 dt = A0 ρ 5/3 ,
8
0


where A₀ = α k² Cn² L. Hence,

[Dϕ(ρ)]_sp = (3/8) [Dϕ(ρ)]_pl. (8.33)
We can conclude that phase fluctuations for a spherical wave are not as large as
for a plane wave and that the structure functions for the former differ from those
of the latter only in numerical coefficients. For a medium with slowly changing
characteristics, we have

[Dϕ(ρ)]_sp = (3/8) · 1.46 k² ρ^(5/3) ∫₀^L Cn²(h) dh, (lo ≪ ρ ≪ √(λL)), (8.34)

[Dϕ(ρ)]_sp = (3/8) · 2.91 k² ρ^(5/3) ∫₀^L Cn²(h) dh, (ρ > √(λL)). (8.35)
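The 3/8 factor that converts the plane-wave structure function into the spherical-wave one in Eqs (8.33)–(8.35) rests on the elementary integral ∫₀¹ t^(5/3) dt = 1/(5/3 + 1) = 3/8; a quick numerical check of that step:

```python
import numpy as np

# The spherical-wave factor of Eqs (8.33)-(8.35) comes from
#   int_0^1 t**(5/3) dt = 1 / (5/3 + 1) = 3/8.
t = np.linspace(0.0, 1.0, 200_001)
f = t ** (5.0 / 3.0)
# composite trapezoidal rule, written out so it runs on any NumPy version
integral = float(np.sum((t[1:] - t[:-1]) * (f[1:] + f[:-1]) / 2.0))
print(integral)   # ~0.375
```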

The initial expression for the structure function evaluation in a SAR is Eq. (8.35),
because there is the relation

ρ = de > √(λL).

The Cn2 (h) function was shown above to be given by


 
Cn²(h) = Cn0² exp(−h/h0).
As a result, we have the formula
   
∫₀^L Cn²(h) dh = Cn0² h0 [1 − exp(−L/h0)],
where L = ht cosec θ, θ is the angle between the wave propagation direction and the
skyline, and ht is the total altitude of the turbulent stratum.
A synthetic aperture is characterised by the equality ρ = de , where de is the
equivalent base at ht . It follows from Fig. 8.4 that
de = Ls ht cosec ϑ / Ro = Ls ht / H.
Thus, we eventually get the relation
Dϕ(ρ) = βo (2π/λ)² (Ls ht/H)^(5/3) Cn0² h0 [1 − exp(−ht cosec ϑ/h0)], (8.36)

where Ls = V Ts, Ts is the synthesis time, V is the track velocity of the radar carrier and βo = 1.09.


Equation (8.36) also allows finding the standard deviation of the phase difference
fluctuations at the synthetic aperture ends:

σϕ(ρ) = √(Dϕ(ρ)). (8.37)
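Equations (8.36) and (8.37) can be exercised numerically with the April profile constants of Eq. (8.20). The geometry below (a 100 m synthesis length, a 3 km turbulent stratum, a 10 km platform altitude and a 30° angle ϑ) is an assumed example, not a case from the book; all lengths are kept in centimetres so that the Cn0² units match.

```python
import math

# Phase-error structure function at the synthetic aperture ends, Eq. (8.36),
# and its standard deviation, Eq. (8.37). Profile constants follow Eq. (8.20);
# the SAR geometry is an illustrative assumption.

BETA_0 = 1.09              # (3/8) * 2.91, as in the text
CN0_SQ = 3.69e-15          # cm**(-2/3), April model, Eq. (8.20)
H0 = 2.17e5                # cm, April model, Eq. (8.20)

def d_phi(lam, Ls, ht, H, theta):
    """D_phi(rho) of Eq. (8.36); lengths in cm, theta in radians."""
    path_factor = 1.0 - math.exp(-ht / (math.sin(theta) * H0))
    return (BETA_0 * (2.0 * math.pi / lam) ** 2
            * (Ls * ht / H) ** (5.0 / 3.0)
            * CN0_SQ * H0 * path_factor)

lam = 3.12                          # cm, the wavelength of the measurements
Ls, ht, H = 1.0e4, 3.0e5, 1.0e6     # 100 m aperture, 3 km stratum, 10 km altitude
theta = math.radians(30.0)

D = d_phi(lam, Ls, ht, H, theta)
sigma = math.sqrt(D)                # Eq. (8.37)
print(sigma)                        # a few hundredths of a radian here
```

Note the (Ls ht/H)^(5/3) dependence: lengthening the synthetic aperture raises the phase-difference fluctuation at its ends by the 5/3 power.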
We shall now examine how phase errors due to tropospheric turbulence affect the res-
olution limit and optimal length of a synthetic aperture. W. Brown and Y. Riordan [23]
have calculated both parameters for the case of phase errors, with the structure func-
tion obeying a power law. It was stated that the phase difference [ϕ(r + ρ) − ϕ(r)]
has a Gaussian distribution, and this is supported experimentally. For the above type
of phase errors, the expression for the aperture resolution along the track is found
to be
ρx = λR/(4πρo) (8.38)
with ρo = 0.985/b. The quantity b is to be calculated from the equation for the structure
function of a phase error:
Dϕ(ρ) = bⁿ ρⁿ, n = 5/3. (8.39)
Then Eqs (8.38) and (8.39) yield
ρx = (λR/4π) · [Dϕ(ρ)]^(3/5)/ρ. (8.40)
Using the equation for the structure function of a phase error (8.36) and ρ = de ,
we get
ρx = C0 λ^(−1/5) R (Cn0²)^(3/5) (h0)^(3/5) [1 − exp(−ht cosec ϑ/h0)]^(3/5), (8.41)
where C0 = const. This equation shows that ρx varies but slightly with λ, decreasing slowly as λ increases.
The optimal synthetic aperture affected by a turbulent troposphere [23] can be
found as
Lopt = 13.4/b. (8.42)
Then Eqs (8.42) and (8.39) give
Lopt = d0 λ^(6/5) / {(Cn0²)^(3/5) (h0)^(3/5) [1 − exp(−ht cosec ϑ/h0)]^(3/5)} (8.43)

with d0 = const.
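Since d0 in Eq. (8.43) is an unspecified constant, absolute values of Lopt cannot be computed here, but the λ^(6/5) wavelength scaling can be exhibited directly. In the sketch below d0 is set to 1, the profile constants are those of Eq. (8.20), and the geometry is illustrative.

```python
import math

# Optimal synthetic-aperture length, Eq. (8.43), up to the unknown
# constant d0 (taken as 1 for illustration).
CN0_SQ, H0 = 3.69e-15, 2.17e5       # April profile, Eq. (8.20), cm units

def l_opt(lam, ht, theta, d0=1.0):
    bracket = (CN0_SQ ** 0.6 * H0 ** 0.6
               * (1.0 - math.exp(-ht / (math.sin(theta) * H0))) ** 0.6)
    return d0 * lam ** 1.2 / bracket

theta = math.radians(30.0)
ratio = l_opt(2 * 3.12, 3.0e5, theta) / l_opt(3.12, 3.0e5, theta)
print(ratio)   # 2**(6/5) ~ 2.297: doubling the wavelength lengthens
               # the turbulence-limited optimal aperture by ~2.3 times
```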
The mean square value of the phase error between the optimal aperture centre and
its extremal point is
σϕ = [Dϕ(Lopt/2)]^(1/2), (8.44)
where Dϕ and Lopt are to be calculated from Eqs (8.36) and (8.43).


Some other methods for reducing propagation-induced phase errors in coherent imaging systems were suggested in References 22 and 47.

8.3 A model of phase errors in a turbulent ionosphere

It was shown in the Appendix to Reference 114 that a good approximation for the
structure function of phase fluctuations is the expression:

D(y) ≅ Cδ² |y|^(2ν−1), 0.5 < ν < 1.5. (8.45)
The phase structure constant Cδ² is defined as

Cδ² = Cp Γ(1.5 − ν) / [2π Γ(ν + 0.5)(2ν − 1) 2^(2ν−1)], 0.5 < ν < 1.5, (8.46)

where Cp = re² λ² lp CS, lp is the path length of an electromagnetic wave in the ionosphere, re = 2.81 × 10⁻¹⁵ m is the classical electron radius, λ is the transmitter wavelength, and CS is the turbulence parameter in the ionosphere described by Eq. (8.26).
Using the phase screen model of Reference 116 and Eq. (8.46), one can show that
the mean square value of the phase fluctuations along the path lp is defined as

⟨δ²⟩ = 2√π re² λ² lp CS G χo^(−2ν+1) Γ(ν − 1/2) / [4π Γ(ν + 1/2)], (8.47)
where the factor G was borrowed from the Appendix to Reference 113. This factor
accounts for:

• the velocity of the scanning beam motion relative to electron density whirls (νo ),
• the geometrical parameter due to the electron density anisotropy (),
• the effective velocity of the scanning beam across the earth surface (Vef ),
• the synthesised aperture length Ls .

The factor G is defined as

G = (Vef Ls/νo)^(P−1). (8.48)

The equations for  and Vef can be found in Reference 113.


All the fundamental concepts of the model we have just discussed were developed
by Rino, so we think this model should bear his name. It has been successfully
employed to analyse the effects of ionospheric turbulence on communication and
navigation device performance. But we also believe that this model can be useful
for the estimation of aperture performance in whirls and their effect on the azimuth
ambiguity function. The latter is important because one can then evaluate the aperture
resolution errors.


8.4 Evaluation of image quality¹

Synthetic apertures were primarily designed for obtaining images to be used by a


human operator to solve research and applied problems. It is natural that the eval-
uation of aperture performance should largely be based on the analysis of image
characteristics. To do so, one needs to have at one’s disposal appropriate criteria for
a quantitative description of the performance characteristics of a particular type of
aperture to be able to compare them with those of other apertures and to suggest
appropriate improvements.
At present, there is no generally accepted criterion for evaluation of aperture
performance or image quality, though there have been some attempts made along
this line [99]. Difficulties involved in developing a reliable criterion are due not only
to the complex design and random behaviour of a synthetic aperture but also to the
diversity of their applications (e.g. a great variety of target aspect angles at which
imaging is made). Normally, potential characteristics or some individual parameters
are used as criteria for the evaluation of aperture performance.

8.4.1 Potential SAR characteristics


SAR designers and researchers often use the so-called potential characteristics, since
they describe the aperture response to an echo signal from a point scatterer and do
not contain micronavigation noise [53]. The following parameters may be referred
to as potential characteristics. We shall mostly list characteristics of apertures using
digital signal processing and digital image reconstruction.
1. The major lobe width of a synthetic antenna pattern (SAP) characterises the
potential resolving power of an aperture in azimuth ρβ . This parameter is deter-
mined by the width of the aperture response to a point target at zero noise. In
practice, a 3 dB SAP width is most often used as a criterion for evaluation of a
potential resolution, but there are other approaches, too. The potential resolution
is usually evaluated with a uniform weighting function H (t) ≡ 1 to get
ρβ ≈ λ/(2L sin γ ), (8.49)
where L is the projection of the synthesis step onto the normal to the view line
and γ is the incidence angle of microwave radiation. If the weighting function is
non-uniform, the major lobe width becomes 1.2–2.5 times larger, depending on
the type of the weighting function.
2. The integral level of side lobes
 
bi = [∫_(−π)^(π) I²(β) dβ − ∫_(−ρβ/2)^(ρβ/2) I²(β) dβ] / ∫_(−π)^(π) I²(β) dβ (8.50)

1 Section 8.4 was written by E. F. Tolstov and A. S. Bogachev.


Table 8.1 The main characteristics of the synthetic aperture pattern

Type of weighting function   Relative SAP width   bi       20 lg(bm)
Uniform                      1.0                  0.0705   −13.3
Parabolic                    1.3                  —        −20.6
Henning’s                    1.6                  0.0103   −32.0
Hamming’s                    1.45                 0.178    −42.0

characterises the SAP maximum relative to the background created by the side
lobes.
3. The maximum side lobe level is
bm = Ims /Im , (8.51)
where Ims and Im are the maximum side-lobe and major-lobe levels, respectively. This parameter is effective in sensing microwave-contrast targets against a weakly reflecting background. The integral and maximum levels of the side lobes, as well as the major lobe width, vary with the weighting function used in the SAR (Table 8.1). The relative width in the Table is the SAP width normalised to that for a uniform weighting function.
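The 20 lg(bm) column of Table 8.1 can be reproduced by treating the SAP as the Fourier transform of the weighting function. In the sketch below, 'Henning's' and 'Hamming's' functions are taken to be the usual Hann and Hamming windows — an assumption on our part about the book's naming.

```python
import numpy as np

# Maximum side-lobe level of the synthetic antenna pattern for several
# weighting functions, computed from the zero-padded FFT of the window.

def max_sidelobe_db(window, pad=64):
    m = len(window)
    spectrum = np.abs(np.fft.fft(window, pad * m))
    spectrum /= spectrum.max()
    half = 20.0 * np.log10(spectrum[: (pad * m) // 2] + 1e-300)
    # step off the main lobe: descend to the first null, then take the peak
    i = 1
    while i < len(half) and half[i] <= half[i - 1]:
        i += 1
    return float(half[i:].max())

n = np.arange(1024)
uniform = np.ones(1024)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / 1023)        # 'Henning's'
hamming = 0.54 - 0.46 * np.cos(2 * np.pi * n / 1023)   # 'Hamming's'

for name, w in (("uniform", uniform), ("Hann", hann), ("Hamming", hamming)):
    print(name, round(max_sidelobe_db(w), 1))
# uniform ~ -13.3 dB, Hann ~ -31.5 dB, Hamming ~ -42.7 dB,
# in line with the 20 lg(bm) column of Table 8.1
```

The same computation with the window's 3 dB main-lobe width would reproduce the 'relative SAP width' column.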
4. The azimuthal sample characteristic is
ka = ρβ/Δρ, (8.52)

where Δρ is the step between the azimuthal counts of an image digital signal. According to the sampling theorem, this step must meet the condition Δρ < ρβ.
This parameter denotes the number of digital signal counts per azimuthal
resolution element and describes the radar capability to reconstruct an image. The
larger the sample characteristic, the greater the image contrast. However, a larger
coefficient entails a greater complexity of the image reconstruction design. The
optimal value of this parameter is taken to be ka = 1.2.
5. Image stability characterises the ability of an image digital reconstruction device
to sense and count the relative positions of partial frame centres and to provide the
proper scale over all the sample characteristics when partial frames are matched
and superimposed.
6. The gain in the signal-to-noise ratio in coherent and incoherent integration is cal-
culated from the variations of this parameter at the processor output. It is assumed
that the echo and image signals are integrated linearly in both coherent [17] and
incoherent integration [59], whereas noise is integrated in quadratures. Therefore,
the total gain in the signal-to-noise ratio Kg is

Kg = √(Nn), (8.53)


where n is the number of echo counts over a synthesis step in one range channel
and N is the number of incoherently integrated partial frames.
In real flight conditions, the actual aperture characteristics differ from the potential
ones. The reason for this is the noise from processing and micronavigation devices,
as well as the limitations of imaging systems.

8.4.2 Radar characteristics determined from images


The real performance characteristics of a radar system are evaluated from the results
of statistical processing of image parameters registered during experimental flights
over a test ground (such as Willcox Playa in the United States). The radar
characteristics to be found experimentally are usually as follows.
1. The real aperture sharpness (resolution) is taken to be the minimal distance between
two corner reflectors discernible on an image along the respective coordinate, provided that
the reflectors produce pulses of equal intensity and that the power of the reflected
signals is much greater than the noise. Note that the sharpness evaluation is
affected by the sample characteristic, which can normally be varied by the operator
during the test.
2. The intensity of speckle noise on an image is defined as the ratio of the standard
deviation to the mean signal intensity on an image for a statistically uniform area
on the earth. The speckle arises from the presence in a resolution element of numerous
point scatterers with approximately identical radar cross sections (RCSs), whose
re-emitted fields sum with random geometry-dependent phases. The speckle effect can be
reduced by filtering or by incoherent integration of several independent images of the
same region on the earth. Independent images can be obtained at different radiation
frequencies, polarisations or aspect angles. Depending on the SAR application, the number
of such images varies from 3 to 4 for military applications to 70 for resources survey tasks.
3. The dark level on an image is an average intensity of a signal from a region
of the lowest reflectivity. Sometimes, the dark level is taken to be the average
image intensity with a zero echo signal at the input (the noise dark level). This
parameter is related to the side lobe size in the synthesised antenna pattern and
to the processing noise.
4. The dynamic range is defined as the maximum-to-minimum signal intensity
ratio on an image. It depends on the design of the transmitter–receiver unit, the
processor characteristics, the receiver gain control, etc.
5. The contrast of adjacent samples is found as the ratio of the maximum signal
intensity from a point target (much above the noise level) to the average inten-
sity of the adjacent samples. This parameter characterises the SAR ability to
reconstruct the maximum space frequency on an image.
6. The mean image power is a parameter affected not only by the transmitter
power, the antenna gain, the receiver sensitivity and the signal-to-noise gain at
the processor output, but also by the post-processing before a signal is displayed
(especially, at the stage of defining its minimum threshold).

176 Radar imaging and holography

7. The intrinsic aperture noise level is the mean image signal level when there is
only noise at the aperture input and its gain corresponds to the mean image
signal. This parameter covers the total effect of the aperture noise during the
synthesis.
8. The radar swath width is determined by the screen parameters (the number of
lines and the number of pixels in a line) and by the discretisation step in range
and azimuth. An acceptable number of image pixels on a screen normally varies
from 512 × 512 to 1024 × 1024.
9. Geometrical distortions of an image are defined as the standard deviation of the
positions of reference scatterers relative to their actual positions. The central
reference mark is superimposed with the real reference. The standard deviation
value is affected by the range, the view angle, altitude, the distance between the
reference and the image centre, as well as by the imaging time.
10. The imaging time is an important parameter of an aperture operating in real time.
A typical test ground for the study of aperture characteristics is a statistically
uniform surface with trihedral corner reflectors (Fig. 8.5) arranged at different
distances from each other (for evaluation of the aperture sharpness). The reflectors
possess different reflectivities, so one can measure the dynamic range of
the system. In addition to a uniform background, a test ground usually includes
some common objects such as roads, fields, smooth surfaces, railways, etc.

In order to understand better the difference between the potential and real char-
acteristics of a synthetic aperture and a SAR as a whole, we shall make use of test
results with digital image reconstruction (the AN/APQ-102A modification) [53]. Its
potential resolution was 12.2 m along the azimuth and range coordinates. The dis-
cretisation step for evaluation of a real azimuthal resolution was taken to be 3.04 m.
Figure 8.6 shows an azimuthal signal from two corner reflectors. When the valley

[Diagram: corner reflectors spaced 1600 m apart along and across the flight direction]

Figure 8.5 A schematic test ground with corner reflectors for investigation of SAR
performance

[Plot: radar image intensity (0.25–1.0) versus azimuth channel number (242–262)]

Figure 8.6 A 1D SAR image of two corner reflectors

between their images was 2 dB, the azimuthal resolution was found to be 21.28 m, or
7 pixels in an image line.
Part of the test ground image was obtained by a 14-fold incoherent integration
with the mean signal value of 0.671 and a standard deviation of 0.201. The evaluated
speckle was found to be 0.3, which is a sufficiently low level.
The dark level was typically 23 dB of the grey-level value. Hence, the SAR
dynamic range is 33 dB, with the contrast of adjacent samples being 2.8 or 4.5 dB. For
a synthetic aperture with strongly suppressed side lobes, this parameter was 6–10 dB.
The large standard deviation in this case is due to the use of corner reflectors with a
large RCS.
Figure 8.7 shows a histogram of the noise distribution at the aperture output, and
one may suggest that the probability density has a Rayleigh pattern. The mean value
of 0.21 was taken to be the dark level. One of the dark regions exhibits a Rayleigh
distribution with a mean value of 0.42. A screen with 384 × 360 pixels covered a
view zone of 4.8 × 4.5 km. The errors in the measurement of the range positions of
the corner reflectors were 14 m and 18 m at a distance of 1600 m from the image
centre, whereas the radar was at 14.5 km from it. The azimuth measurement error
was ∼50 m under the same conditions.

8.4.3 Integral evaluation of image quality


The authors of Reference 99 have suggested a method of integral evaluation of radar
images. With this method one can compare images and establish a certain standard


Figure 8.7 A histogram of the noise distribution in a SAR receiver

for the transformation of resolution to the number of incoherent integrations or to
a parameter related to the dynamic range of an image signal. It is shown that the
interpretability U, or the operator's ability to interpret an image, is related to the
grey-level volume V as

U = U0 exp(−V/Vc), (8.54)

where U0 is the maximum image interpretability and Vc is the critical grey-level
volume.
It has been found empirically that the interpretability is related to the grey-level
volume defined as

V = pa pr pg, (8.55)

where pa, pr are the linear resolutions in azimuth and range, respectively, and pg
is the grey-level resolution (in half-tones). The new image parameter, the grey-level
resolution, can be expressed as the ratio of the level that a signal exceeds in 90 per cent of
cases to the level it exceeds in 10 per cent of cases, taken over independent samples.
This parameter can be found from the formula:

pg ≈ (√N + 1.282)/(√N − 1.282). (8.56)
However, the calculated value differs noticeably from the measurements made at
N < 4 (Fig. 8.8). The experimental interpretability scale ranged from 0 for an
uninterpretable image to 4 for a fully interpretable one. Therefore, the maximum
interpretability U0 should be 4.
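A quick numerical check of Eq. (8.56) (a sketch with our own function name) shows pg falling towards 1 as the number of incoherent integrations N grows:

```python
import math

def grey_level_resolution(N: int) -> float:
    """Approximate grey-level (half-tone) resolution, Eq. (8.56)."""
    root = math.sqrt(N)
    return (root + 1.282) / (root - 1.282)

for N in (2, 4, 16, 100):
    print(N, round(grey_level_resolution(N), 2))   # ~ 20.39, 4.57, 1.94, 1.29
```

At N = 1 the denominator is negative and the formula is meaningless, consistent with the remark above that the calculated value differs noticeably from measurements at N < 4 (the measured single-look value is 22).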
The authors of Reference 1 have obtained a more complex equation for pg:

pg = 10 lg{[1 + (N/e)^N Σ(k=1..N) 3k[(N − k)!N^(k+1)]^(−1)] / [1 − (N/e)^N Σ(k=1..N) 3k[(N − k)!N^(k+1)]^(−1)]},

where e ≈ 2.718. This result, however, is based on information theory and additionally
takes into account the properties of the photointerpreter's visual analyser. It was
discovered that, according to the criterion of the maximum image information capacity,
N = 2 is optimal.

[Plot: approximation and experimental curves of pg versus N, N = 1–100]

Figure 8.8 The grey-level (half-tone) resolution versus the number of incoherently
integrated frames N

An important experimental finding was the critical volume Vc for a single frame
synthesised by the aperture (N = 1). For the majority of frames, the side length of a
square resolution element giving a 37 per cent interpretability was found to be 9.14 m.
Such objects were vegetation and urban areas, low-contrast regions, communication
lines, city and country roads, etc. Exceptions were the boundaries of water bodies
and vegetation covers, which showed a 37 per cent interpretability even at the lowest
linear resolution in azimuth and range (13.72 m). Since the grey-level resolution at
N = 1 (Fig. 8.8) is 22, it is easy to find the critical volume:

Vc = pa pr pg ≈ 9.14² × 22 ≈ 1850. (8.57)
With this, the final interpretability expression takes the form:
U = 4 exp{−pa pr pg /1850}. (8.58)
Note that the calculation of the critical volume used the linear resolution of 9.14 m.
Figure 8.9 shows the interpretability plotted against the linear resolution pa = pr = p
for different numbers of incoherent integrations.
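Equation (8.58) implies that different mixes of azimuth, range and half-tone resolution can yield the same interpretability. The hedged sketch below (our own illustration, not from Reference 99) evaluates U for two such combinations:

```python
import math

U0, Vc = 4.0, 1850.0   # maximum interpretability and critical volume, Eq. (8.57)

def interpretability(pa: float, pr: float, pg: float) -> float:
    """Interpretability U of Eq. (8.58) for azimuth/range/grey-level resolutions."""
    return U0 * math.exp(-pa * pr * pg / Vc)

# Different resolution mixes with nearly the same grey-level volume V = pa*pr*pg
print(round(interpretability(9.14, 9.14, 22.0), 2))   # single-look frame, V ~ Vc
print(round(interpretability(18.0, 10.0, 10.0), 2))   # coarser spatial, finer half-tone
```

Both combinations give U ≈ 1.5, about 37 per cent of U0, because their grey-level volumes are close to Vc, illustrating the trade-off discussed below.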
When analysing the plots in Fig. 8.9, one should bear in mind that both the
measurements and the calculations were based on some a priori assumptions. For
example, the half-tone scale was chosen on the assumption that a photograph had the
maximum interpretability, corresponding to an infinite number of incoherent integrations
(N = ∞) and the half-tone resolution pg = 1 (Fig. 8.8). An image synthesised without
incoherent integrations (N = 1) was thought to have the poorest half-tone resolution,
but the resolution was to be finite (pg < ∞), since the image preserved some, though
very low, interpretability. It was established experimentally that the poorest half-tone
resolution was equal to 22 (Fig. 8.8).
The interpretability was evaluated by three qualified and experienced interpreters
of radar and optical images, using the four-level scale (from 0 to 4) mentioned above.
The interpreters worked with prints of 20.32 cm × 25.40 cm in size. The resolution
elements varied in shape from square to rectangular (with a side ratio of 1:10), and
the number of incoherent integrations varied from 1 to ∞. All the experiments

[Plot: U/U0 (0.2–0.8) versus p (0–50 m) for N = 1, 3, 10 and ∞]

Figure 8.9 The image interpretability U/U0 versus the linear resolution pa = pr = p
for different numbers of incoherent integrations N

were carried out using a quadratic detector, because the detection was performed on
photographic film, which has a square-law response. It can be demonstrated theoretically,
however, that the experimental data can also be useful in linear detection of image
signals if the half-tone resolution is calculated by another approximate formula:

pgl ≈ (√N + 0.6175)/(√N − 0.6175). (8.59)

The major result of this series of investigations [99] was the experimental support of
the idea that image interpretability depended only on the half-tone volume resolution,
or on the product of the azimuthal, range and half-tone resolutions. Therefore, this
parameter varies with the area rather than the shape of a resolution element (square
or rectangular). On the other hand, it depends on the resolution element area and the
number of incoherent integrations. So one can make a compromise when choosing
the resolution in azimuth pa , in range pr and in half-tones pg [99]. Identical inter-
pretabilities can be achieved by using different combinations of these parameters.
This conclusion proved to be quite unexpected and may play an important role in
solving some applied problems when one has to choose between the complexity and
the cost of aperture processing techniques.
Indeed, if this conclusion is correct, it is worth making an effort to achieve a
high image interpretability by improving low-cost resolutions. To illustrate, a higher
range resolution and an incoherent integration in spaceborne SARs can be achieved
in a simpler way than a higher azimuthal resolution. For example, one can fix the
azimuthal resolution but improve the range resolution or increase the number of
incoherent integrations.
We shall give a good example to illustrate the effectiveness of resolution redistri-
bution with reference to a side-looking synthetic aperture. In this type of aperture, the
azimuthal resolution depends linearly on the number of incoherent integrations N :

pa (N ) = λro N /2Lm = po N , (8.60)

[Plot: half-tone resolution pg (up to ~20) versus N = 1–9]

Figure 8.10 The dependence of the half-tone resolution on the number of incoherent
integrations over the total real antenna pattern

where λ is the wavelength, ro is the oblique range, Lm is the maximum possible length
of the aperture, and po = λro/(2Lm) is the best aperture resolution. If we now fix the
range resolution, the minimum of the product pa(N)pg will show the optimal combination
of azimuthal resolution and incoherent integration (Fig. 8.10). This optimum is found
to lie at N = 3; hence, pa = 3po.
The integral criterion for image evaluation from the half-tone volume resolution
is convenient and relatively simple. But when using it in practice, one should bear in
mind that the available amount of statistical data is insufficient, so the estimations of
image quality may be quite subjective.

8.5 Speckle noise and its suppression

Synthetic aperture radar remote sensing of the earth is becoming increasingly popular
in many areas of human activity (Chapter 9.1). The analysis of images may be made
in terms of a qualitative or quantitative approach [2].
A qualitative analysis is largely made by conventional methods of visual inter-
pretation of aerial photography, combined with the researcher’s knowledge and
experience. Although radar images have much in common with aerophotographs
(Chapter 1), the physical mechanisms of their synthesis set limits on the applicability
of interpretation methods elaborated for optical imagery. Additional difficulties arise
from the presence of speckle noise.
A quantitative analysis is based on the measurement of target characteristics for
various backgrounds and objects [2], followed by computerised processing of video
information. The latter is normally used to solve the following tasks. One often
has to improve image quality and interpretation procedures at the pre-processing
stage, which includes various corrections, noise reduction, contrast enhancement,
highlighting contours, etc. It may also be necessary to compress and code images to

be transmitted through communication channels. Besides, one may have to identify


some of the items on an image and classify various elements present on it. This is
usually done by image segmentation, cluster analysis and so on. Obviously, this kind
of image subdivision is always somewhat arbitrary.
Here we shall discuss methods of solving the first type of task with emphasis
on those techniques specific to radar imagery, such as speckle suppression. Some
others, like geometrical and radiometrical correction, have already been dealt with in
the literature [2,31]. Some of the image processing techniques are quite versatile and
have also been discussed in detail [2].

8.5.1 Structure and statistical characteristics of speckle


There has been much effort to understand the image speckle structure. The available
publications on this subject can be classified into two groups according to the specific
problems being tackled. The more extensive group covers work on speckle as a
noise, suggesting various ways of its filtering. The other group includes publications
on useful properties of speckle, in particular, on the possibility to derive from it
information about the area of interest. Naturally, there are problems in each trend that
remain poorly understood. A feature common to all the publications is the description
of statistical characteristics of speckle.
Let us consider the statistical characteristics of an echo signal in terms of a general
reflection model when a resolution element contains many echo signals from different
point scatterers. The signals are random, independent and have about the same inten-
sity. Then the total signal represents a Gaussian random quantity and its amplitude
has a Rayleigh pattern. This kind of reflection model is often termed the Rayleigh
model. When a synthetic aperture changes its position relative to a target, the intensity
fluctuations of the total echo signal give rise to a characteristic speckle pattern on an
image. Clearly, the intensity I of individual pixels will obey the exponential law of
the probability density distribution:

pI(x) = [1/(2σo²)] exp[−x/(2σo²)] (8.61)

with the mean value Ī = 2σo² and the dispersion σI² = 4σo⁴, while the phase θ of
the image pixels is equiprobable in the range from −π to +π .
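The Rayleigh model is easy to verify numerically. The sketch below (our own illustration, with illustrative variable names) sums many independent unit scatterer contributions and checks that the resulting pixel intensity matches Eq. (8.61), with Ī = 2σo² and σI² = 4σo⁴:

```python
import numpy as np

rng = np.random.default_rng(1)
scatterers, pixels = 50, 100_000

# Each resolution cell sums many unit-amplitude echoes with random phases;
# by the central limit theorem the total field is complex Gaussian.
phases = rng.uniform(-np.pi, np.pi, (pixels, scatterers))
field = np.exp(1j * phases).sum(axis=1)

I = np.abs(field) ** 2                 # pixel intensity
sigma_o2 = np.var(field.real)          # per-quadrature variance sigma_o^2

print(round(I.mean() / (2 * sigma_o2), 2))    # ~ 1: mean equals 2*sigma_o^2
print(round(I.var() / (4 * sigma_o2**2), 2))  # ~ 1: dispersion equals 4*sigma_o^4
print(round(I.std() / I.mean(), 2))           # ~ 1: unity signal-to-noise ratio
```

The last line confirms the unity signal-to-noise ratio of fully developed speckle quoted further below.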
Another reflection model is applied when a resolution element has one bright
point together with other point scatterers, such that the total echo signal contains
one dominant signal of much higher intensity along with many random independent
signals of nearly the same lower intensity. Then the amplitude of the total signal is
described by the Rice distribution, or by a generalised Rayleigh distribution. This
kind of model is called the Rice reflection model.
The distribution of the intensity probability density at single pixels is

pI(x) = [1/(2σo²)] exp[−(x + so)/(2σo²)] Io(√(x so)/σo²) (8.62)

with the mean value Ī = 2σo² + so and the dispersion σI² = 4σo⁴(1 + 2r), where so
is the squared amplitude of the highest-intensity component of the signal, r = so/(2σo²),
and Io(·) is a modified zero-order Bessel function of the first kind. The distribution of
the phase probability density is

pθ(x) = (1/2π) exp(−a²/2) + [(a cos x)/√(2π)] Φ(a cos x) exp(−a² sin²x/2), (8.63)

where a = √so/σo and

Φ(t) = (1/√(2π)) ∫−∞^t exp(−τ²/2) dτ

is the Laplace function.


Since the signal-to-noise ratio, defined as the ratio of the mean intensity Ī to the standard
deviation σI, is equal to 1 and (1 + r)/√(1 + 2r) for the Rayleigh and Rice models, respectively,
the intensity fluctuation amplitude in the speckle structure is commensurable
with the useful signal intensity for a complex target at r ≈ 1. For this reason, images
of such targets have a well-pronounced speckle structure. Since it is difficult to analyse
an echo signal from a target with the Rice reflection, most authors discuss targets
with the Rayleigh reflection.
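The Rice-model moments quoted above can be checked with a short simulation (a sketch with our own example values; so is the squared amplitude of the dominant component):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_o, so = 1.0, 2.0        # per-quadrature noise std; dominant power (our choices)
r = so / (2 * sigma_o**2)
pixels = 500_000

# One dominant steady phasor of amplitude sqrt(so) plus complex Gaussian clutter.
field = np.sqrt(so) + sigma_o * (rng.normal(size=pixels) + 1j * rng.normal(size=pixels))
I = np.abs(field) ** 2

print(round(I.mean(), 2))            # ~ 2*sigma_o^2 + so = 4.0
print(round(I.var(), 1))             # ~ 4*sigma_o^4*(1 + 2r) = 12.0
print(round(I.mean() / I.std(), 2))  # ~ (1 + r)/sqrt(1 + 2r) ≈ 1.15
```

With r = 1 the signal-to-noise ratio is only 2/√3 ≈ 1.15, which is why such targets still show a pronounced speckle structure.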
It is worth noting that the above expressions for the probability density distribution
in the case of a uniform and isotropic background are valid for both an ensemble of
images at each resolution element and a single image over a multiplicity of resolution
elements. For a non-uniform background, however, these expressions are valid only
for an ensemble of image realisations.
When N independent images of the same earth area are summed up,
the probability density distribution of the speckle structure takes the form:

pI(x) = x^(N−1) exp[−x/(2σo²)] / [(2σo²)^N Γ(N)] (8.64)

with the mean value Ī = 2Nσo² and the dispersion σI² = 4Nσo⁴, where Γ(·) is the gamma-function,
Γ(N) = (N − 1)! for integer N. In this case, the signal-to-noise
ratio is √N. The probability density distribution in Eq. (8.64) corresponds to the
gamma-distribution with the parameters N and 1/(2σo²), or to the χ²-distribution
with 2N degrees of freedom at σo² = 1. A general expression for the initial moments
of distribution (8.64) has the form:

MkN = [(N + k − 1)!/(N − 1)!](2σo²)^k,

where MkN is the kth initial moment.
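A short simulation (ours, with illustrative parameters) confirms the moments of Eq. (8.64) and the √N signal-to-noise improvement of N-look summation:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_o2, N, pixels = 1.0, 4, 300_000

# N independent single-look (exponential) intensities per pixel, summed.
looks = rng.exponential(scale=2 * sigma_o2, size=(pixels, N))
I = looks.sum(axis=1)

print(round(I.mean(), 2))            # ~ 2*N*sigma_o^2 = 8.0
print(round(I.var(), 1))             # ~ 4*N*sigma_o^4 = 16.0
print(round(I.mean() / I.std(), 2))  # ~ sqrt(N) = 2.0
```

The sum of N exponential variables is exactly the gamma distribution of Eq. (8.64).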


Reference 2 presents the fluctuation spectrum of the speckle amplitude and its
autocorrelation function. Suppose a point scatterer is described by the Dirac δ-function
and F(k) is the transfer function of a synthetic aperture, where k = 2π/λ is the wave
number and λ is the wavelength of the echo signal. Then the amplitude spectrum of
the echo signal from a point scatterer located at a point with the coordinate x relative

to the SAR carrier track is F′(k) = (1/2)F(k) exp(jxk). For randomly arranged point
scatterers, the signal received by the aperture is defined as

F′(k) = (1/2) Σ(l=1..L) F(k) exp(jx_l k).

At L → ∞, the speckle power density spectrum can be determined within the
accuracy of a constant factor:

S(k) = |F′(k)|² = |F(k)|².


In other words, it is unambiguously dependent on the aperture transfer function.
The autocorrelation function of speckle is related to its spectral power density by a
Fourier transform. Therefore, the speckle autocorrelation function can be used to find
the aperture impulse response directly.
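This spectrum/transfer-function relation can be illustrated with a 1D simulation (our own construction under simplifying assumptions: unit scatterers with random positions and phases, and an assumed Gaussian transfer function F(k)):

```python
import numpy as np

rng = np.random.default_rng(4)
M, L, realisations = 256, 40, 400

k = np.fft.fftfreq(M)                       # wave-number grid (arbitrary units)
F = np.exp(-(k / 0.15) ** 2)                # assumed aperture transfer function F(k)

psd = np.zeros(M)
for _ in range(realisations):
    scene = np.zeros(M, complex)            # delta scatterers at random positions,
    pos = rng.integers(0, M, L)             # each with a random phase
    np.add.at(scene, pos, np.exp(1j * rng.uniform(0, 2 * np.pi, L)))
    image = np.fft.ifft(np.fft.fft(scene) * F)
    psd += np.abs(np.fft.fft(image)) ** 2   # accumulate the speckle power spectrum

target = np.abs(F) ** 2
corr = np.corrcoef(psd / psd.max(), target / target.max())[0, 1]
print(round(corr, 2))   # ~ 1.0: the speckle PSD reproduces |F(k)|^2
```

Averaged over realisations, the speckle power spectrum tracks |F(k)|², so its inverse transform (the autocorrelation) yields the aperture impulse response, as stated above.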
The statistical characteristics of speckle for a background represented as an array
of randomly moving point scatterers are considered in Reference 2. It is shown that
the concept of spatial resolution loses its meaning if the phase fluctuations of the signals
from the point scatterers are large (the phase changes by 2π several times during the
synthesis).

8.5.2 Speckle suppression


The available methods of suppression or smoothing out of image speckle can be
subdivided into two groups. Some methods are based on the averaging of several
independent images of the same background. This group is not large but these methods
have been extensively used owing to their relative simplicity. The other group of
methods is much larger and includes so-called a posteriori procedures in which speckle is
suppressed by spatial filtering.
Independent images of the same earth area can be obtained in different ways based
on a common principle of image segmentation with respect to a particular parameter,
for example, the Doppler frequency, the carrier frequency or polarisation (i.e. sensing
a background at different polarisations of probing radiation). The first technique is
known as a multibeam processing and it is most commonly used in practice [99].
A specific feature of multibeam processing is a proportional decrease of the along-track
aperture resolution when the Doppler frequency band is subdivided into N identical
non-overlapping subbands. The specificity of speckle suppression procedures is
that the signal-to-noise ratio increases by a factor of √N if N independent images
are averaged.
The methods of the first group can use other procedures, for example, median
filtering [2], in addition to the averaging of N independent images.
A wide application of a posteriori techniques is primarily due to the rapid development
of image processing technology. The lack of an adequate model of speckle
structure and useful signal makes it difficult to design effective algorithms for speckle
suppression. Until recently, nearly all researchers working on speckle problems
have regarded speckle as a multiplicative noise to a useful signal. However, there are

more complex models. The authors consider the possibility of employing Wiener's
and Kalman's filtering algorithms, homomorphic processing and various heuristic
techniques to suppress speckle.
However, a lack of objective criteria for evaluation of image quality by visual
perception creates additional difficulties. For this reason, nearly all the researchers
cited below compare the processing results with expertise, which makes a comparative
analysis of the suggested algorithms quite problematic.
The first attempts to suppress speckle by a posteriori techniques used the Wiener
filtering algorithm, which varies with the signal [2]. The workers analysed an additive,
signal-dependent noise model and a multiplicative noise model. In the former,
a distorted image is described by the expression:

z(x, y) = s(x, y) ∗ h(x, y) + f [s(x, y) ∗ h(x, y)]n(x, y), (8.65)

where h(x, y) is the space impulse response, f is commonly a non-linear function and
n(x, y) is noise independent of the signal s(x, y). By introducing the designations
n′(x, y) = s′(x, y)n(x, y) and s′(x, y) = f [s(x, y) ∗ h(x, y)], we transform Eq. (8.65) to

z(x, y) = s(x, y) ∗ h(x, y) + n′(x, y).

In the second noise model, an image is described as

z(x, y) = n(x, y)[s(x, y) ∗ h(x, y)], (8.66)

where n(x, y) is signal-independent multiplicative noise. The Wiener's filter has the
transfer function M(µ, ν) = Φzs(µ, ν)/Φzz(µ, ν) and minimises the standard deviation
of the filtering, provided that z(x, y) and s(x, y) are wideband spatially uniform
random fields; Φzs and Φzz are the respective power density spectra. With Eq. (8.65),
the first noise model gives the following transfer function of a Wiener's filter:

M1(µ, ν) = Φss(µ, ν)H*(µ, ν) / [Φss(µ, ν)|H(µ, ν)|² + Φs′s′(µ, ν) ∗ Φnn(µ, ν)] (8.67)

on the assumption that n̄(x, y) = 0. Here n(x, y) is statistically independent of s(x, y),
s′(x, y) is a uniform wideband field, and H(µ, ν) = F[h(x, y)] is the system's
transfer function. At f [s(x, y) ∗ h(x, y)] = s(x, y) ∗ h(x, y), we have Φs′s′(µ, ν) =
Φss(µ, ν)|H(µ, ν)|², and Eq. (8.67) can be re-written as

M1(µ, ν) = Φss(µ, ν)H*(µ, ν) / {Φss(µ, ν)|H(µ, ν)|² + [Φss(µ, ν)|H(µ, ν)|²] ∗ Φnn(µ, ν)}. (8.68)

If the noise is uniform, wideband and signal-independent, the transfer function of a
Wiener's filter in the second model will be

M2(µ, ν) = n̄Φss(µ, ν)H*(µ, ν) / {Φnn(µ, ν) ∗ [Φss(µ, ν)|H(µ, ν)|²]}. (8.69)

It is clear from (8.69) that at n̄(x, y) = 0 the filter transfer function is M2(µ, ν) = 0.
Suppose we have n1(x, y) = n(x, y) − n̄; then

M2(µ, ν) = Φss(µ, ν)H*(µ, ν)/n̄ / {Φss(µ, ν)|H(µ, ν)|² + (1/n̄²)Φn1n1(µ, ν) ⊗ [Φss(µ, ν)|H(µ, ν)|²]}. (8.70)

Obviously, at n̄ = 1 filters with the transfer functions (8.68) and (8.70) are equivalent.
Modelling has shown that a Wiener's filter for signal-dependent noise with the
characteristics M1 and M2 is better than that for additive, signal-independent noise.
But the essential limitations of the former are the need for a large amount of a priori
information about the signal and the noise, as well as vast computations. Kalman's
filtering algorithms [2] suffer from similar disadvantages.
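As a concrete, much simplified illustration, the sketch below applies the classical frequency-domain Wiener filter for additive, signal-independent noise, the baseline against which M1 and M2 above are compared. It is our own example: the signal power spectrum is assumed known, which is exactly the kind of a priori information whose cost the text notes.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 512

# Synthetic 1D "signal": a smooth random profile, plus white additive noise.
s = np.cumsum(rng.normal(size=n)); s -= s.mean()
z = s + rng.normal(scale=5.0, size=n)

S = np.abs(np.fft.fft(s)) ** 2          # assumed-known signal power spectrum
Nn = np.full(n, 25.0 * n)               # white-noise power spectrum (var * n per bin)

M = S / (S + Nn)                        # Wiener transfer function (H = 1 case)
s_hat = np.real(np.fft.ifft(np.fft.fft(z) * M))

err_raw = np.mean((z - s) ** 2)
err_wiener = np.mean((s_hat - s) ** 2)
print(err_wiener < err_raw)             # True: filtering reduces the mean square error
```

The filter passes frequencies where the signal dominates and attenuates those where the noise dominates, which is why the mean square error drops well below that of the raw observation.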
The possibility of a homomorphic image processing is discussed in Reference 2.
A homomorphic processing is supposed to be any conversion of observable quantities
if the signal fluctuations are transformed to additive and signal-independent noise.
Within the multiplicative speckle model, Eq. (8.64) yields

p(I) = [N^N/(Γ(N)Ī)](I/Ī)^(N−1) exp(−NI/Ī) (8.71)

with σI² = Ī²/N. Then the homomorphic transformation reduces to taking the
logarithms. The distribution density of the quantity D = ln I is described as

p(D) = [N^N/Γ(N)] exp[−N(D − Do)] exp{−N exp[−(D − Do)]} (8.72)

with Do = ln Ī. Practically, the distribution of signal-dependent noise is often
approximated by a normal distribution with a signal-dependent dispersion. At any value of
N, the approximation accuracy for the normal distribution (8.72) is greater than that
for the distribution (8.71).
in the model of additive and signal-independent noise. It is pointed out in Reference 2
that the application of Wiener’s filtering algorithm with a preliminary homomorphic
processing of an image provides better results than a separate application of each
algorithm.
The authors of Reference 2 believe that a homomorphic transformation is a rea-
sonable alternative to image processing in signal-dependent noise. On the other hand,
experience indicates that this does not give an essential advantage over heuristic meth-
ods to be discussed below. Moreover, the necessity to use both direct and inverse
transformations increases the computation costs considerably.
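The effect of the logarithmic transformation is easy to demonstrate numerically (a sketch with our own parameter values): for multiplicative N-look speckle the standard deviation of I scales with the local signal level, while that of ln I does not.

```python
import numpy as np

rng = np.random.default_rng(6)
N, pixels = 4, 200_000

stds_I, stds_logI = [], []
for mean_intensity in (1.0, 10.0, 100.0):
    # N-look multiplicative speckle: unit-mean gamma noise times the signal level.
    speckle = rng.gamma(shape=N, scale=1.0 / N, size=pixels)
    I = mean_intensity * speckle
    stds_I.append(I.std())
    stds_logI.append(np.log(I).std())

print([round(v, 2) for v in stds_I])      # grows with the signal level: ~0.5, 5, 50
print([round(v, 3) for v in stds_logI])   # ~0.533 throughout: signal-independent
```

After the log, the fluctuations are additive and signal-independent, so any standard additive-noise algorithm can be applied to D = ln I, as stated above.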
There is another way of suppressing speckle noise: the local statistics technique [2].
Within the multiplicative speckle model, every element zij of an image is represented
as the product of the signal sij and the noise nij. The noise has the mean n̄ = 1 and the
dispersion σn². On the assumption that the signal and the noise are independent, the
authors have derived the expressions

z̄ = s̄n̄ = s̄

and

σz² = M[(sn − s̄n̄)²] = M[s²]M[n²] − s̄²n̄².

If the signal intensity averaged over the processing window is constant, the
expressions are

M[s²] = s̄² and σz² = s̄²(M[n²] − n̄²) = s̄²σn², or σn = σz/z̄.
This model is consistent with the data obtained from the analysis of uniform surface
imagery. The standard deviation σn is found to be about 0.28, which is due to a multi-
beam processing and the use of other algorithms for improving images synthesised
by the SAR SEASAT-A. Using the local statistics technique for a selected window
(usually with 5 × 5 or 7 × 7 resolution elements), one can find the moving local
average z̄ and the dispersion σz². Then one gets

s̄ = z̄/n̄, σs² = (σz² + z̄²)/(σn² + n̄²) − s̄². (8.73)
The expansion of z into a Taylor series with the account of the first-order terms only
yields
z = n̄s + s̄(n − n̄). (8.74)
According to Eqs (8.73) and (8.74), the minimisation of the mean square error of
speckle suppression leads to the following formula for ŝ:
ŝ = s̄ + k(z − n̄s̄) (8.75)
with

k = n̄σs²/(s̄²σn² + n̄²σs²).
Then at n̄ = 1, one gets

ŝ = s̄ + k(z − s̄), k = σs²/(s̄²σn² + σs²). (8.76)
The heuristic algorithm derived from the local statistics approach is especially effec-
tive for speckle suppression on images of uniform and isotropic surfaces. It does not
remove the contours of extended proper targets. This algorithm has provided good
results when processing imagery from the SAR SEASAT-A. Its major advantages
are simplicity and adaptive properties associated with the computation of the local
statistics. It has, however, a serious limitation: it cannot predict the error behaviour
during the speckle suppression. Besides, the necessity of computing the local average
and, especially, the dispersion in a common 7 × 7 window considerably reduces the
algorithm efficiency.
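The estimator (8.76) is essentially what is known in the literature as the Lee filter. A minimal sketch under the stated assumptions (multiplicative unit-mean noise, a uniform moving window; function names are ours):

```python
import numpy as np

def local_mean(img: np.ndarray, w: int) -> np.ndarray:
    """Moving average over a w x w window via a 2D cumulative-sum trick."""
    pad = w // 2
    p = np.pad(img, pad, mode="reflect")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # zero row/column for the subtraction
    return (c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]) / (w * w)

def lee_filter(z: np.ndarray, sigma_n: float, w: int = 7) -> np.ndarray:
    """Local-statistics speckle filter, Eqs (8.73) and (8.76), with unit-mean noise."""
    m = local_mean(z, w)                        # moving local average z_bar
    var_z = local_mean(z * z, w) - m * m        # moving local dispersion
    var_s = np.maximum((var_z + m * m) / (sigma_n**2 + 1.0) - m * m, 0.0)  # Eq. (8.73)
    k = var_s / (m * m * sigma_n**2 + var_s + 1e-12)
    return m + k * (z - m)                      # s_hat = s_bar + k (z - s_bar)
```

In homogeneous areas σs² ≈ 0, so k → 0 and the output tends to the local mean; near contours σs² is large, k → 1 and the observation passes through, which is why the algorithm preserves the contours of extended targets, as noted above.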
In order to decrease the computational costs inherent in local statistics algo-
rithms, some workers have suggested using a sigma-filter. For a moving window

of (2m1 + 1) × (2m2 + 1) in size (m1 and m2 are integers) with the central
resolution element zij, the signal estimate ŝij is found from the formula:

ŝij = [Σ(k=i−m1..i+m1) Σ(l=j−m2..j+m2) δkl zkl] / [Σ(k=i−m1..i+m1) Σ(l=j−m2..j+m2) δkl], (8.77)

where

δkl = 1 at (1 − 2σn)zij ≤ zkl ≤ (1 + 2σn)zij, and δkl = 0 otherwise.

It is clear that a filter with the characteristic (8.77) will be more cost-effective than that with (8.76). An 11 × 11 window was used in Reference 2 to estimate σn . It was found
that two passes of a sigma-filter were sufficient to get a satisfactory suppression
of speckle noise without smearing the contours. When the number of passes was
increased to four and more, the image was damaged.
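A minimal sketch of the sigma-filter of Eq. (8.77) follows, assuming nonnegative image intensities and a rectangular window; the brute-force per-pixel loops stand in for whatever optimised implementation a real processor would use:

```python
import numpy as np

def sigma_filter(z, sigma_n=0.28, m1=1, m2=1):
    """Sigma-filter of Eq. (8.77): the estimate for the central pixel z_ij is the
    mean of those window pixels z_kl that fall inside the two-sigma intensity band
    [(1 - 2*sigma_n) * z_ij, (1 + 2*sigma_n) * z_ij]; delta_kl selects them.
    The window size is (2*m1 + 1) x (2*m2 + 1)."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    rows, cols = z.shape
    for i in range(rows):
        for j in range(cols):
            w = z[max(i - m1, 0):i + m1 + 1, max(j - m2, 0):j + m2 + 1]
            lo, hi = (1.0 - 2.0 * sigma_n) * z[i, j], (1.0 + 2.0 * sigma_n) * z[i, j]
            sel = w[(w >= lo) & (w <= hi)]                # pixels with delta_kl = 1
            out[i, j] = sel.mean() if sel.size else z[i, j]
    return out
```

Two passes of such a filter were found sufficient in Reference 2; applying the function twice reproduces that scheme, while the per-pixel loop here is kept only for clarity.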
The following modification of the sigma-filter was discussed in Reference 2 for filtering impulse noise together with speckle suppression. One chooses a threshold B. If the number of elements retained in accordance with Eq. (8.77) is smaller than or equal to the threshold B, the average of the four neighbouring elements is ascribed to the estimated central element of the moving window. The choice of the threshold is critical because it affects the contours. It is pointed out in this work that the threshold value for a 7 × 7 window should be less than 4, and for a 5 × 5 window less than 3. The use of a sigma-filter with an 11 × 11 window followed by another sigma-filter with a 3 × 3 window at the threshold B = 1 proved to be most effective. A small window
allows suppression of impulse noise in the vicinity of sharp contours. Other filter
modifications are also possible. This type of filter was compared with a filter having the characteristic (8.76), as well as with a median filter and an averaging filter. Expert evaluation led to the conclusion that the sigma-filter provides better results. Its disadvantage is that one cannot estimate a priori the behaviour of the speckle suppression error. Important merits of this type of filter are its simplicity, high computational efficiency and adaptive properties. These characteristics make the filter suitable for application in digital image processing in a real-time mode.
The local statistics method can also be implemented with a linear filter minimising the mean square error of the filtering. In addition to the algorithms described above, there is a large number of heuristic algorithms for speckle suppression. Among these are algorithms for median filtering, averaging over a moving window with various weighting functions, algorithms for a nonlinear transformation of the initial image, the reduction of an image histogram to a symmetric form, etc. Most heuristic algorithms are simple to use and have a fairly high computational efficiency, but all of them possess a serious drawback: they practically ignore the specific process of SAR imaging, so that while suppressing noise they partly suppress the useful signal. It is usually hard to estimate the speckle suppression error when using such algorithms.
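The difference between a median filter and plain moving-window averaging, two of the heuristic algorithms just listed, can be seen on a hypothetical image with isolated impulse outliers. This is a made-up example; SciPy's ndimage filters are used for brevity:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

# Uniform background with isolated impulse outliers (spacing wider than the window)
img = np.full((32, 32), 5.0)
img[::7, ::7] = 50.0

med = median_filter(img, size=5)     # impulses are removed outright
avg = uniform_filter(img, size=5)    # impulses are merely smeared over each 5 x 5 window
```

The median leaves the uniform background at its original level, while the moving average spreads each outlier over the whole window; this is why median filtering is a popular heuristic for impulse-like noise, even though, like the other heuristics, it also smooths part of the useful signal.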
To conclude, image processing covers a wide range of tasks and problems, many of
which have not been dealt with in this chapter. Among these are the processing based
on the properties of the human visual analyser, criteria for image quality and image optimisation, quantitative evaluation of the information contained in an image, etc. Owing to the rapid development of cybernetics, information theory, iconics and computer science, these areas of investigation are constantly testing new approaches. For example, researchers have applied some concepts of artificial intelligence to the processing of earth remote sensing data, used radar imagery as a database for visual interpretation, and fused images obtained in different wavelength ranges. The results of such studies can provide more information about the earth and other planets.
Chapter 9
Radar imaging application
9.1 The earth remote sensing1

9.1.1 Satellite SARs
Synthetic aperture radar imagery from satellites and aircraft has a high spatial
resolution and is independent of light and clouds. Nearly real-time information and a
comprehensive SAR image analysis are of importance not only for scientific studies but also for practical purposes, providing information for companies
dealing with off-shore oil and gas exploration, deep-ocean mining, fishing, marine
transportation, weather forecast, etc. [65]. In 1972 the NASA Office of Applications
initiated the Earth and Oceans Dynamics Applications Program for the development
of techniques of global monitoring of oceanographic phenomena and the design of
an operational ocean dynamics monitoring system. Satellite SAR studies of the earth
environment began in 1978, when the first series of images was obtained by the
SEASAT during its 3 months of operation. This L-band, horizontally polarised radar
operated at a wavelength of 23 cm at an incidence angle of 20◦ . It was primarily
designed for ocean wave imaging, although SAR imagery was also acquired over
ice and terrestrial surfaces. It demonstrated the potential of satellite radar data in
scientific and operative applications. The SEASAT data supported the notion that
wind and wave conditions over the ocean could be measured from a satellite with
an accuracy comparable to that achieved from surface platforms [5]. Various SAR
instruments operating at different wavelengths, polarisations and incidence angles
were mounted on board the Space Shuttles (Table 9.1). In November 1981 and
October 1984, the SIR-A and SIR-B radars, which used the SEASAT technology
1 Sections 9.1.1 and 9.1.2 were written by V. Y. Alexandrov, O. M. Johannessen and S. Sandven, Nansen International Environmental and Remote Sensing Centre, St Petersburg, Russia, and Nansen Environmental and Remote Sensing Centre, Bergen, Norway. Section 9.1.3 was written by D. B. Akimov, Nansen International Environmental and Remote Sensing Centre, St Petersburg, Russia.
Table 9.1 Technical parameters of SARs borne by the SEASAT and Shuttle

Parameter                      SEASAT    SIR-A     SIR-B    SIR-C            X-SAR
Orbit inclination (°)          108       38        57       57               57
Altitude (km)                  800       260       225      225              225
Incidence angle (°)            20–26     47–53     15–60    20–55            20–55
Frequency (GHz)                1.28      1.28      1.28     1.25 and 5.3     9.6
Polarisation                   HH        HH        HH       HH, VV, VH, HV   VV
Swath width (km)               100       50        30–60    15–90            15–45
Pixel size for four looks (m)  25 × 25   40 × 40   25       25               30 × (10–20)
Table 9.2 Parameters of the Almaz-1 SAR

Parameter                              Value
Satellite altitude (km)                270–380
Orbit inclination (°)                  72.7
Wavelength (cm)                        9.6
Polarisation                           HH
Radiometric resolution, one look (dB)  2–3
Swath width (km)                       40
Spatial resolution, one look (m)       10–15
with the 23 cm wavelength and HH (Horizontal–Horizontal) polarisation, provided
data targeted at land applications [77]. The SIR-C mission using a two-frequency
multipolarisation SAR with a variable incidence angle, together with the X-band VV
(Vertical–Vertical) SAR, operated in three flights during the period of 1994–1996.
The SIR-C was of interest to ocean remote sensing, and its data were used to extend the
understanding of radar backscatter from the ocean and SAR imaging of oceanographic
processes [117].
The first USSR SAR mission started in July 1987 with a launch of the Cosmos-
1870 satellite equipped with a S-band SAR. Its operation ended in July 1989 and was
followed by the Almaz-1 satellite, which operated from May 1991 until October 1992
(Table 9.2). The raw data of 300 km long and 40 km wide stripes with a 10–15 m spatial
resolution (one look) could be stored aboard and transmitted to a receiving ground
station near Moscow as analogue radio holograms, with SAR images presented as
photographic hard copies. Applications of SAR data included studies of various ocean
phenomena and sea ice [36].
Table 9.3 The parameters of the ERS-1/2 satellites

Parameter                            Value
Satellite altitude (km)              785
Orbit inclination (°)                98.52
Wavelength (cm)                      5.66
Polarisation                         VV
Angle of incidence (°)               20–26
Swath width (km)                     100
Spatial resolution, three looks (m)  26 × 30
The first European Space Agency ERS-1 satellite with a C-band SAR aboard
operated successfully from its launch in July 1991 until 1996 and provided a large
amount of global and repeated observations of the environment. The focus was on
ocean studies and sea ice monitoring [62,64]. In the high-resolution imaging mode,
the ERS-1 SAR provides three-look, noise-reduced images with a spatial resolution of
26 m in range (across-track) and 30 m in azimuth (along-track) (Table 9.3). Because of
the absence of onboard data storage, a network of ground receiving stations enabled a
wide coverage by SAR images. ERS-2, a second satellite of this series, was launched
in April 1995 and since mid-August 1995 both satellites operated in a tandem mode,
when ERS-2 imaged the same area as ERS-1 one day later.
The RADARSAT launched by the Canadian Space Agency in November 1995
was the first SAR satellite with a clear operational objective to deliver data on various
earth objects. Using the onboard data storage, it provides a much wider coverage than
the ERS SAR [77]. Processed SAR data could be delivered to users within several
hours after acquisition. The RADARSAT operates in the C-band and HH-polarisation,
and in several imaging modes with different combinations of the swath width and
resolution (Table 9.4). One of its main applications is sea ice monitoring [42].
The advanced SAR (ASAR) onboard the European Space Agency ENVISAT
satellite has been acquiring images since 2002 [43]. While its major
parameters are similar to those of the RADARSAT, the ASAR can also operate at
multipolarisation modes using two out of five polarisation combinations: VV, HH,
VV/HH, HV/HH and VH/VV. The five major modes are: global, wide swath, image,
alternating polarisation and wave modes (Table 9.5). In the image and alternating
polarisation modes the ASAR gives high-resolution data (30 m and 3 look) in a rela-
tively narrow swath (60–100 km), which can be located at different distances from the
subsatellite track at the incidence angles from 15◦ to 45◦ . The alternating polarisation
mode provides two versions of the same scene, at HH, VV and/or cross-polarisation.
The wide swath mode provides a 420 km swath with a spatial resolution of 150 m
and 12 looks. In the global monitoring mode, the ASAR continuously gives a 420 km
swath with a spatial resolution of 1000 m and 8 looks.
Table 9.4 SAR imaging modes of the RADARSAT satellite

RADARSAT-1 modes with selective polarisation: transmit H or V; receive H or V, or (H and V).

Beam mode               Nominal swath   Incidence angles to     Number      Spatial resolution
                        width (km)      left or right side (°)  of looks    (approx.) (m)
Standard                100             20–50                   1 × 4       25 × 28
Wide                    150             20–45                   1 × 4       25 × 28
Small incidence angle   170             10–20                   1 × 4       40 × 28
High incidence angle    70              50–60                   1 × 4       20 × 28
Fine                    50              37–48                   1 × 1       10 × 9
ScanSAR wide            500             20–50                   4 × 2       100 × 100
ScanSAR narrow          300             20–46                   2 × 2       50 × 50
Table 9.5 The ENVISAT ASAR operation modes

Parameter              Image mode   Alternating/         Wide swath   Global       Wave mode
                                    cross-polarisation                monitoring
Polarisation           VV or HH     VV/HH, HH/HV         VV or HH     VV or HH     VV or HH
                                    or VV/VH
Spatial resolution     28 × 28      29 × 30              150 × 150    950 × 980    28 × 30
(along- and
across-track) (m)
Radiometric            1.5          2.5                  1.5–1.7      1.4          1.5
resolution (dB)
Swath width (km)       Up to 100    Up to 100            400          ≥400         5 (vignette;
                       (seven       (seven               (five        (five        seven
                       subswaths)   subswaths)           subswaths)   subswaths)   subswaths)
Incidence angle (°)    15–45        15–45                15–45
At present, SAR data from the ERS, RADARSAT and ENVISAT satellites are
widely used in earth observations and monitoring of various natural objects and
phenomena. With its fine-scale resolution, a SAR is capable of observing a number
of unique oceanic phenomena [117]. These include wind and waves [46,75], ocean
circulation [63], internal waves [33], oil spills [40,41], shallow sea bathymetry [6], etc.
Imaging radars are also used in a number of land applications, such as the study of soil
moisture [84], forestry [97] and the study and monitoring of urban areas [135]. The use of satellite SAR data for monitoring the Arctic sea ice is briefly described below.
9.1.2 SAR sea ice monitoring in the Arctic

9.1.2.1 The use of satellite SAR for sea ice monitoring
The use of visible-range images for sea ice monitoring in the Arctic is limited by the lack of daylight in winter, while the cloud cover precludes sea ice observations in the visible and infrared
ranges during approximately 80 per cent of time in summer [18,37,123]. Therefore,
the development of remote radar sensing is essential for the polar regions. The first
satellite SAR images were acquired by the SEASAT satellite which produced over
100 passes over the Beaufort Sea on nearly a daily basis for the analysis of sea ice
motion and changes in the ice distribution. The SIR-B SAR gave data on the Antarc-
tic sea ice margin for October 1984 [45]. Several SAR surveys were made over the
Antarctic and Arctic with the Kosmos-1870 and Almaz-1 SARs in spite of the fact
that the satellite orbits precluded coverage of the high-latitude northern and southern
regions. The Almaz-1 SAR data were used to support an emergency operation in
the Antarctic, when the research vessel Mikhail Somov got stuck in the ice. During
this operation, it was possible to detect icebergs and estimate their size, as well as to
derive several sea ice parameters, such as the ice extent, the boundaries of stable and
unstable fast ice, the ice types (nilas, young and first-year ice), prevailing ice forms,
ridges and areas of strongly deformed ice [3].
The SAR images obtained from ERS-1/2 were used in a number of sea ice studies
in the Arctic, Antarctic and in the ice-covered seas in different parts of the World
Ocean [48,68,76,93,120]. The ERS-1 SAR proved to be a very powerful instrument
for sea ice observations. Although the ERS satellite was not designed for operational
service, the data were applied in sea ice monitoring in the United States, Canada,
Finland and several other countries [18,27].
With the launch of the Canadian RADARSAT in 1995, the first satellite with
operational ice monitoring as a prime objective, ice monitoring in the United States,
Canada, Greenland, Norway, Finland, Sweden and some other countries entered
a new era. The ScanSAR mode with a swath of 450 km wide and with a 100-m
resolution at 8 looks allows daily mapping of the whole polar region north of 70◦ N,
and it is used for operational ice services in the Canadian Arctic, the Greenland Sea,
the Baltic Sea and other areas with ice [18,48,111]. With a systematic acquisition
of ScanSAR images over large Arctic sea ice areas and the use of the RADARSAT
geophysical processor, it was possible to estimate the sea ice motion, deformation
and thickness from sequential imagery for several years from 1996 [79]. Within 6 h,
the US National Ice Center routinely receives ScanSAR images from the Alaska SAR
Facility and the Gatineau and Tromsø satellite stations, which together provide almost total Arctic coverage [18]. The sea ice analysis is made by integrating all available remote sensing
and in situ data, using the SUN SPARC and Ultra Workstations, and a system of
satellite image processing. The RADARSAT improved the Ice Patrol’s reconnaissance
efficiency, although the radar iceberg identification remains problematic even with
modern techniques. The RADARSAT ScanSAR wide data provide a daily coverage of
the Canadian Arctic, and higher resolution modes are used for sea ice monitoring near
the ports, in several selected routes and in the rivers. SAR images are synthesised
at the receiving stations Prince Albert and Gatineau and are transmitted to the Ice
Centre within 2.5 h to be processed and transmitted to the icebreakers of the Canadian
Coast Guard and the department of ice operations for visualisation and analysis. Sea
ice monitoring is the most successful online application of the RADARSAT data in
Canada, which provides the best combination of geographic coverage and resolution
to save about 6 million dollars annually, as compared with airborne radar survey [38].
From February 1996 until the end of 2003, the Canadian Ice Service (CIS) used approximately 25,000 scenes
for this purpose [42]. During 2003, a special service carried out iceberg detection
and monitoring from satellite SAR imagery, and the International Ice Patrol was the
user of this information [42]. Now the RADARSAT ScanSAR imagery is the main
data source for sea ice mapping in the Greenland waters. Wind conditions may be
an important limitation to the operational use of radar satellite imagery in this area.
Small (<50 m across) yet thick ice floes in concentrations of less than 7/10 are frequently undetectable in radar images, as they are obscured by the strong backscatter from sea waves. Therefore, active research into filtering and enhancement techniques has
been undertaken to improve discrimination between ice and water [48,49].
The ENVISAT ASAR imagery with almost the same swath as that of the
RADARSAT ScanSAR in the VV- and HH-polarisations is an example of further
development of SAR technology. The wide swath mode of the ENVISAT satellite
is especially suitable for sea ice monitoring, providing a practically daily cover-
age of most of the Arctic with a high spatial resolution. In mid-2003, the Canadian
Ice service began to receive the ENVISAT ASAR data to be used as an additional
source to the RADARSAT-1 data for routine production of ice charts, bulletins and
forecasts [43].
The Nansen Centres in Bergen and St Petersburg, in collaboration with the
European Space Agency and Murmansk Shipping Company, have done a series of
projects to demonstrate the possibilities of SAR data for sea ice monitoring and for
supporting navigation in the Northern Sea Route (NSR) [64–66]. The NSR, which
is a major Russian transport corridor in the Arctic, includes routes suitable for ice
navigation confined to the entries to the Novaya Zemlya straits and to the meridian
north of Cape Zhelaniya in the west and to the region of the Bering Strait in the east.
In August 1991, just after the launch of the ERS-1 satellite, SAR imagery was trans-
mitted in near-real time aboard the French vessel L’Astrolabe via the INMARSAT
communication system during her voyage from Europe to Japan in selecting her route
in ice [66]. During the period from July 1993 to September 1994, the European Space
Agency provided approximately 1000 SAR scenes for sea ice monitoring. Three spe-
cific demonstration campaigns in the NSR in the periods of freeze-up, winter and
late summer, revealed the ERS SAR capability to map the key ice parameters. The
SAR imagery was successfully used to solve tasks of navigation through hard ice. In
1996 the ESA and the Russian Space Agency initiated their first joint project, named
ICEWATCH with an overall objective to integrate SAR data into the Russian sea
ice monitoring system to support ice navigation in the NSR [65]. During January–
February 1996, an experiment was made aboard the icebreakers Vaygach and Taymyr,
when the ERS-1 and ERS-2 SARs were operating in a ‘Tandem mission’, giving a
unique opportunity to have SAR coverage over the same area with only a 1-day inter-
val. However, the narrow 100 km swath of the ERS SAR resulted in a substantial
spatial and temporal discontinuity in coverage [64].
In August–September 1997, the RADARSAT ScanSAR data were used to sup-
port the icebreaker Sovetsky Soyuz operations in the Laptev Sea [119]. With its
wide swath, the ScanSAR provided a much better coverage than the ERS SAR, and
the selection of scenes along a given ship route was simplified significantly. The
ScanSAR data proved to be a very useful supplement to conventional ice maps and
could contribute significantly to the ice information. Starting from April 1998, the
ScanSAR and the ERS-2 SAR data were acquired and analysed to support the expedi-
tions aboard the icebreaker Sovetsky Soyuz from Murmansk to the Yenisey Gulf [4]
and the EC ARCDEV expedition with the Finnish tanker Uikku and the icebreaker
Kapitan Dranitsyn from Murmansk to Sabeta in the Ob River [107]. Throughout the
expedition, ScanSAR imagery received aboard the icebreaker was used to detect some important ice parameters, such as the ice types, old and fast ice boundaries, flaw polynyas,
wide leads, single ice floes and large areas of rough ice and to solve tactical tasks
of navigation. Areas of level and deformed fast ice were identified in the Ob estu-
ary, and an optimal sailing route was selected through the areas with level ice [107].
These expeditions clearly showed that ScanSAR imagery is particularly important
for supporting navigation in difficult ice conditions, such as those in the Kara Sea
during April–May 1998.
During the summer of 2003, the ENVISAT Wide Swath ASAR imagery was
acquired and transmitted aboard the icebreaker Sovetsky Soyuz during her voyage in
the Kara Sea, together with visible AVHRR NOAA images. The satellite images and
ice maps were displayed in the electronic cartographic navigation system, such that
the navigator could see the current icebreaker location overlaid on a satellite image
and ice chart in order to select the sailing route.
A series of demonstration campaigns conducted in the NSR since 1991 has
shown that high-resolution light- and weather-independent SAR imagery can be effec-
tively used for sea ice monitoring. The sea ice conditions were interpreted and found
quite useful for selecting a sailing route. The speed of convoys significantly depends
on the ice conditions and varies from about 11–14 knots in polynyas to 4–6 knots in
areas with a medium and thick level FY ice and 2 knots in heavily ridged ice [4].
The onboard use of satellite SAR imagery significantly increases the convoy speed
in the pack ice (Fig. 9.1). High-latitude telecommunication systems are the main
‘bottleneck’ in using SAR imagery aboard the icebreakers operating in the NSR: the imagery must be averaged and compressed to about 100–200 kB for digital transmission.
During the first half of 2004, the ENVISAT ASAR imagery was used for sea ice mon-
itoring of the NSR on an experimental basis. Preliminarily processed images were
transferred by e-mail to the Murmansk Shipping Company and then were transmitted
via the TV channels of the Orbita system to the nuclear icebreakers Yamal, Sovet-
sky Soyuz, Arktika, Vaygach and Taymyr. The icebreaker navigators could interpret
Figure 9.1 The mean monthly convoy speed in the NSR changes from V0 (without
satellite data) to V1 (SAR images used by the icebreaker’s crew to select
the route in sea ice). The mean ice thickness (hi ) is shown as a function
of the season. (N. Babich, personal communications)
them, adequately selecting the easiest sailing route through level thin ice and along leads and polynyas with prevailing nilas and grey ice. As a result, the speed of the convoys increased by 40–60 per cent on average.
9.1.2.2 Interpretation of satellite SAR imagery of sea ice
A successful application of SAR imagery to support navigation required the ability to
recognise the major sea ice parameters and processes from the images. Characteristic sig-
natures of major sea ice types and features in ERS, RADARSAT and ENVISAT SAR
imagery were described and validated with subsatellite data during field campaigns.
The major stages of ice development described in the WMO Ice Nomenclature
include new ice, nilas, young, first-year and old ice. The sea ice recently formed on the
water surface may have dark and light SAR signatures. The grease ice that represents
an agglomeration of frazil crystals into a soupy layer precludes the formation of short
waves (Fig. 9.2(a)) and can be detected as dark stripes and spots among bright SAR
signatures of wind-roughened water surface (Fig. 9.2(b)). Slush and shuga have a high
backscatter coefficient due to their rough surface and are seen in the SAR images as
bright elongated stripes. Nilas represents an elastic ice crust less than 10 cm thick,
bending under the wave action (Fig. 9.3); it has a low backscatter coefficient and a
dark SAR signature (Fig. 9.4). Young ice represents the next stage of development;
it is subdivided into grey ice and grey–white ice with thicknesses of 10–15 cm and
Figure 9.2 (a) Photo of grease ice and (b) a characteristic dark SAR signature of grease ice (labelled areas: grease ice, open water). © European Space Agency
Figure 9.3 Photo of typical nilas with finger-rafting
15–30 cm, respectively. During winter, young ice is quite common in polynyas and
fractures. It has a relatively high backscatter coefficient [102] and can be distinguished
from both nilas and first-year ice due to its bright SAR signature (Fig. 9.4). The first-
year ice, which is subdivided into thin (30–70 cm), medium (70–120 cm) and thick
(over 120 cm) first-year ice, has a typical dark tone. It is difficult to separate thin,
medium and thick first-year ice using only their SAR signatures, so knowledge of sea
ice conditions in different Arctic regions is used to partly solve this problem. Old ice
that has survived melting during at least one summer is often reliably discriminated
from first-year ice due to its brighter tone, rounded floes and distinctive texture
(Fig. 9.5). When old and first-year ice breaks into small ice floes with size less than
the SAR spatial resolution, their separation is impossible. SAR signatures of second-
year and multiyear ice are quite similar, and it is hard to distinguish these types of
ice [102].
The backscatter from the ice of the same age depends on its prevailing forms (floe
size) and surface roughness. Pancake ice has a rough surface due to characteristic
raised pancake rims at the plate edges that lead to a high backscatter and a bright
tone in a SAR image (Fig. 9.6). Areas of small ice floes unresolved by radar may
have a specific bright SAR signature. When the size of ice floes greatly exceeds the
radar spatial resolution, they can be detected in SAR imagery. Single ice floes of
even relatively small size can be detected from the dark tone on a bright radar image
of wind-roughened water surface, whereas their detection in calm water surface is
more difficult. The analysis of ice floes becomes complicated when they touch each
other [120,128]. The backscatter of deformed ice is much higher than that of level ice,
Figure 9.4 A RADARSAT ScanSAR Wide image of 25 April 1998, covering an area of 500 km × 500 km around the northern Novaya Zemlya (labelled areas: nilas, young ice, first-year ice). A geographical grid and the coastline are superimposed on the image. © Canadian Space Agency
therefore, areas of weakly, moderately and strongly deformed ice are detectable in
ERS, RADARSAT and ENVISAT SAR imagery (Fig. 9.7). Identification of strongly
deformed ice hazardous to navigation is particularly important.
Detection of open water areas among sea ice, such as fractures, leads and polynyas,
is necessary for selection of an icebreaker’s route. Shore and flaw polynyas can be
detected reliably, and their width, as well as the type of sea ice can be determined. For
example, flaw polynya along the western coast of Novaya Zemlya is clearly evident
in RADARSAT ScanSAR imagery (Fig. 9.4) together with a number of fractures
covered with nilas (dark tone) or young ice (light tone). It was found that the detection
of 100-m wide leads in compact first-year ice is feasible in ScanSAR images.
In winter, fast ice covers large areas in the coastal zones of the Eurasian Arctic
Seas. The SAR signature of fast ice is similar to that of drifting ice, and it changes with
the surface roughness and, to some degree, with salinity. Level fast ice has a uniformly
dark tone, and its boundary can often be identified in SAR images (Fig. 9.7).
The ice edge presents a boundary between open water and sea ice of any type and
concentration; it may be both compact and diverged, separating open ice from water.
Figure 9.5 A RADARSAT ScanSAR Wide image of 3 March 1998, covering the boundary between old and first-year sea ice in the area north of Alaska (labelled areas: multiyear ice, first-year ice, mainland). © Canadian Space Agency
The ice edge may be well-defined or diffuse, straight or meandering, with ice eddies
and ice tongues, extending into open water [67]. Ice tongues at the ice edge in the
Barents Sea are evident in ENVISAT ASAR imagery (Fig. 9.8). With frequent SAR
images, one can investigate the ice edge development in much detail [120]. The sea
ice concentration and ice edge location are the most important parameters during the
summer; they can be derived from SAR images together with large ice floes, stripes
of ice in water, ice drift vectors and areas of convergence/divergence [119].
A high-resolution SAR is considered to be an optimal remote sensing instrument
for detection of icebergs. An iceberg's backscatter coefficient significantly exceeds that of sea ice and a calm sea surface; icebergs that are much larger than the radar spatial resolution are evident as bright spots. In some cases, iceberg shadows and tracks in the sea ice can
be detected [125]. Identification of smaller icebergs is complicated by speckle-noise
of SAR systems. Areas of iceberg spreading in Franz Josef Land, east of Severnaya
Zemlya, and in the northwest Novaya Zemlya have been identified from ERS and
RADARSAT SAR data. ERS-2 SAR imagery of Severnaya Zemlya (Fig. 9.9) shows
a number of icebergs as bright spots in the Red Army Strait.
Recent studies have shown that the sea ice classification can be improved by
using the ENVISAT alternating polarisation mode. Cross-polarisation will improve

Figure 9.6 (a) Photo of a typical pancake ice edge and (b) a characteristic ERS SAR signature of pancake ice. A mixed bright and dark backscatter signature is typical for pancake and grease ice found at the ice edge. © European Space Agency
the potential for distinguishing ice from open water, which can sometimes be diffi-
cult to do only with HH or VV polarisation. In addition to the backscatter variation
in single polarisation data, a proper combination of VV and HH dual polarisa-
tion and cross-polarisation imagery provides additional information on the sea ice
parameters [54,101,122].
Figure 9.7 A RADARSAT ScanSAR Wide image of 8 May 1998, covering the southwestern Kara Sea (labelled areas: moderately hummocked ice, strongly hummocked ice, fast ice, open water). © Canadian Space Agency
Some of the sea ice parameters cannot be found from SAR imagery. For example,
it is quite difficult to distinguish thin, medium and thick first-year ice, or second-year
and multiyear ice types. It is impossible to determine the snow depth on sea ice and
some other parameters. In some cases large ridges and narrow leads covered with
grey ice may have similar SAR signatures.

9.1.2.3 Conclusions
The studies have clearly shown that a satellite SAR is a powerful instrument for sea
ice monitoring, and SAR data are widely used for this purpose in countries with a
perennial or seasonal ice cover. Modern SARs provide practically daily coverage of
the Arctic regions. The most important sea ice parameters can be derived from SAR
imagery, and their use increases the safety of navigation and the speed of convoys in
severe Arctic ice conditions.

9.1.3 SAR imaging of mesoscale ocean phenomena


The SAR imagery allows a global view of most oceanographic phenomena: waves,
currents, fronts, eddies and slicks, and it reveals hidden features such as internal
waves and bottom topography. Although most of the imaging mechanisms are now
well understood, there are still gaps in our knowledge of certain details, and some
aspects remain obscure, requiring further research.
The high spatial resolution and sensitivity of modern satellite SAR systems make it
possible to observe mesoscale and small-scale features of the sea surface. This allows
the use of SAR imagery for investigation of wind speed over the open ocean and


Figure 9.8 An ENVISAT ASAR image of 28 March 2003, covering the ice edge in
the Barents Sea westward and southward of Svalbard. © European Space
Agency

coastal zone, surface roughness characteristics and surface polluted zones of different
nature. SAR data help to monitor ocean dynamic processes, frontal boundaries,
convergence zones, etc.
The normalised radar cross-section (NRCS) is a measure of intensity of the echo
signal. In the range of the microwave frequencies, a radar is sensitive to small per-
turbations of the ocean surface. The NRCS is directly related to the sea roughness,
that is, to statistical properties of the sea surface. This allows a radar to detect a larger
number of near-surface phenomena than any other remote sensing tool. On the other
hand, this makes the radar data extremely hard to interpret, especially quantitatively,
and requires the use of sophisticated models.
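A rough quantitative feel for this sensitivity comes from the first-order Bragg resonance condition, λw = λr/(2 sin θ), which selects the water wavelength to which the radar responds. The sketch below is only illustrative; the C-band wavelength and incidence angle are typical published ERS values assumed here, not figures from this chapter:

```python
import math

def bragg_wavelength(radar_wavelength_m, incidence_deg):
    """First-order Bragg resonance: the water wavelength selected by
    a radar of the given wavelength at the given incidence angle."""
    return radar_wavelength_m / (2.0 * math.sin(math.radians(incidence_deg)))

# Assumed typical ERS C-band parameters: lambda ~ 5.66 cm, mid-swath incidence ~ 23 deg
lam_b = bragg_wavelength(0.0566, 23.0)
print(f"Bragg water wavelength: {lam_b * 100:.1f} cm")  # ~7.2 cm ripples
```

At steeper incidence the resonant ripples are longer, which is one reason the NRCS contrast of the same surface feature varies across the swath.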
When dealing with the ocean, one has to consider surface velocities. The motion
associated with travelling waves significantly affects the SAR imaging mechanisms.
In particular, motion of the target in the range direction produces an azimuthal image
shift. The radial velocities involved are small enough to leave the pulse compression
unaffected, yet large enough to influence the aperture synthesis. Both the azimuthal
shift and a reduction in the signal amplitude are thus associated with the motion of the
target in the range direction. Wave motion in the azimuthal direction is
also a source of image degradation but is of less importance. It is known as azimuth


(Image annotations: outlet glacier, Red Army Strait)

Figure 9.9 An ERS-2 SAR image of 11 September 2001, covering the Red Army
Strait in the Severnaya Zemlya Archipelago. © European Space Agency

defocusing and is due to the difference between the Doppler history of the target and
the reference signal.
A satellite-borne SAR can monitor large- and small-scale structural fluctuations
through the description of the energy distribution of the ocean waves in the spec-
tral domain. The latter is formally described by the wave action balance equation
for the spectrum evolution under the combined influence of wind forcing, dissipa-
tion, resonant wave–wave interaction, the presence of surfactants and surface current
velocity gradients. The possibility of identifying oceanic processes is directly related
to changes in the surface scattering characteristics which depend on these processes.
For this reason, the detection becomes impossible when no wind is present.
When these phenomena are known, an imaging model can be used to derive
the wave spectrum from the image spectrum. Unfortunately, the mechanisms respon-
sible for the spectrum modulation are not fully understood. The analysis of a SAR
image is always complicated by interpretation ambiguity. The reason is that one
and the same NRCS contrast may be caused by the variation in different physical
parameters. Moreover, one and the same phenomenon may manifest itself in some
observation conditions and not in others. One of the generally recognised features
of radar imagery is the fact that surface phenomena are more clearly observed in the
horizontal polarisation than in the vertical one.
A simultaneous study of synchronous SAR images and other data sources (e.g.
infrared and visible images, weather maps) helps in getting a correct interpretation.
It should be added that since the influence of current velocity gradients, sea surface
temperature, surfactant concentration and other environmental parameters on the
wind wave spectrum depends upon the wavelength, a radar using a combination of
different wavelengths may be quite useful in revealing the mechanisms responsible
for the NRCS contrast.
A number of mechanisms have been suggested which are responsible for man-
ifestation of dynamic ocean phenomena in radar images. It is assumed that the
wave–current interaction reveals most processes having the scale of the current non-
uniformity of about 0.1–10 km. The following phenomena fall into this category:
internal waves, current boundaries, convergence zones, eddies and deep-sea convec-
tion. The degree of the ocean front manifestation in a SAR is strongly determined by
the atmospheric boundary and by its transformation over the sea surface temperature
non-uniformities. In any case, the comparative significance of a mechanism depends
on the whole set of factors, including the observed process, wind conditions, regional
specificity and unknown circumstances (e.g. Reference 16).
Below we give several examples of how different ocean phenomena may become
apparent in SAR images. The ERS-2 SAR image in Fig. 9.10, taken on 24 June
2000 over the Black Sea (east of the Crimea peninsula), illustrates the manifestation
of temperature fronts, zones of upwelling and slicks of natural films. The fronts
are clear from both the bright and dark departures from the background NRCS. As
was mentioned before, a correct image interpretation needs additional information.
Figure 9.11 shows the sea surface temperature (SST) from the NOAA AVHRR data a
few hours after ERS-2 passage. It gives the temperature distribution helpful in image
interpretation. The spatial resolution of the infrared image is 1 km as compared with
100 m provided by a SAR. An upwelling is clearly visible in the upper right corner,
partially covered with clouds (with an SST of about 16°C). The black square is the
position of the SAR image and the black curved lines are the distinctive features
taken from the SAR image. There appears to be a remarkable correlation between the
features in the SST and NRCS fields. The insignificant shift is due to the difference
in the time of imaging.
The dark region in the upper left corner of the SAR image shows upwelling, when
strong winds force the warm water of the upper layer away from the shore and the cold
deep water comes up from below. Upwellings are known to occur quite often near
the region of the Crimean shoreline. A patch of cold water manifests itself through
a modulation of the so-called friction velocity. This quantity may be described as an
'effective wind' because it is the friction velocity that determines the energy flux from
the wind to the waves. The stratification of the atmospheric boundary layer over cold
water is more stable than over the surrounding warm water. This results in a lower
friction velocity, which means that the wind of the same speed (at a given height)
would generate lower waves over cold water than over warm water. The surface roughness
of the upwelling zone is thus decreased, reducing its NRCS. Other conditions being equal, cold
water will appear darker than warm water on a radar image (e.g. Reference 16). This
feature allows a radar to sense the temperature non-uniformities of the sea surface in
general.
There are dark stretched features all over the SAR image. The accumulation of
surfactants is assumed to be the cause of these areas of low backscatter. It may take
place in regions of high biological activity. When natural (organic) substances reach


Figure 9.10 An ERS-2 SAR image (100 km × 100 km) taken on 24 June 2000 over
the Black Sea (region to the east of the Crimea peninsula) and showing
upwelling and natural films

the surface, they tend to be adsorbed at the air–water interface and remain there as a
microlayer. Waves travelling across a film-covered surface compress and expand the
film, giving rise to surface tension gradients, which lead to vertical velocity gradients
within the surface layers. This induces viscous damping and attenuation of short
Bragg waves. As a result, the scattered signal returning to the SAR is very much
reduced. Natural films are usually dissolved at wind speeds above 7 m/s. Because
currents easily redistribute them, such slicks often configure into spatial structures
related to the surface current circulation pattern.
Figure 9.12 illustrates how very long ocean waves, the swell, are imaged by a
SAR. This image was obtained on 30 September 1995 over the Northern Sea; the
land on the right is the Norwegian coast.
We have pointed out that ocean surface roughness of the centimetre scale is
due to the local wind (wind stress). Small-scale roughness is modulated by large-
scale structures (longer waves or swells). Three mechanisms are considered to be


Figure 9.11 SST retrieved from a NOAA AVHRR image on 24 June 2000.

responsible for the longer wave imaging: the tilt modulation, the hydrodynamic effect
and velocity bunching. The first mechanism is that long waves tilt the resonant ripples
so that the local incident angle changes, modifying the backscatter. The hydrodynamic
interaction between the long waves and the scattering ripples leads to the accumulation
of scatterers on the up-wind face of the swell. This effect is greatest (as for the tilt
modulation) for range travelling waves, and there is no modulation if the ripples
are perpendicular to the swell. These first two mechanisms, responsible for swell
manifestation, reveal themselves in both synthetic and real aperture imagery. The
latter – the so-called velocity bunching effect – is responsible for swell manifestation
in the case of long waves travelling close to the azimuthal direction; this effect is
observable only in SAR images.
A SAR creates a high-resolution image by recording the phase and amplitude
of the electromagnetic radiation reflected by the scatterers and by processing it with
a compression filter. The filter is designed to match the phase perfectly for a static
target. For the dynamic ocean surface, the motion of each scatterer within the scene
distorts the expected phase function with two important implications. First, the linear
component of the target motion shifts the azimuth of the imaged location of each


Figure 9.12 A fragment of an ERS-2 SAR image (26 km × 22 km) taken on
30 September 1995 over the Northern Sea near the Norwegian coast
and showing swell

target. This leads to a strong wave-like modulation in the SAR image due to a periodic
forward and backward shift of the scatterer positions. This mechanism is exactly
what is known as the velocity bunching. The other implication of the distorted phase
function is the degradation of the image azimuthal resolution due to higher order
components of the target motion (e.g. Reference 56).
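The azimuthal displacement described above is commonly written as Δx = −(R/V)·u_r, where R is the slant range, V the platform velocity and u_r the scatterer's radial velocity; this is a standard SAR ocean-imaging relation rather than a formula quoted in this chapter. A sketch with assumed low-orbit geometry values:

```python
def azimuth_shift_m(slant_range_m, platform_speed_mps, radial_velocity_mps):
    """Azimuthal displacement of a moving scatterer in a SAR image
    (periodic forward/backward shifts of this kind produce velocity bunching)."""
    return -(slant_range_m / platform_speed_mps) * radial_velocity_mps

# Assumed typical low-orbit SAR geometry: R ~ 850 km, V ~ 7.5 km/s
shift = azimuth_shift_m(850e3, 7.5e3, 1.0)  # scatterer moving 1 m/s towards the radar
print(f"azimuth shift for 1 m/s radial velocity: {shift:.0f} m")  # ~ -113 m
```

A radial velocity of only 1 m/s thus displaces the scatterer by more than a hundred metres in azimuth, which is why orbital wave motions dominate the SAR imaging of azimuth-travelling swell.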
The SAR image enables one to study swell transformation as it approaches the
coast. The wavelength decreases as the swell comes to shallow water, so the wave-
length is about 350 m at point A while near the coast at point B it is only 90 m
(Fig. 9.12). Another observable feature is the swell refraction on the sea bottom
relief. This effect is due to the fact that the wave velocity decreases with decreasing
depth. The wave crests rotate so as to be parallel to the isobaths. It is clearly visible at
points B and C that the swell goes parallel to the curved shore line, though initially it
was not. Finally, at point D we can see an interference pattern produced by two swell
systems going in approximately perpendicular directions.
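The shortening of the swell from about 350 m offshore to about 90 m at the coast follows the finite-depth dispersion relation ω² = gk·tanh(kh). A numerical sketch solving it by bisection; the 15 s period is inferred here from the 350 m deep-water wavelength and is an assumption, not a figure from the text:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wavelength_at_depth(period_s, depth_m):
    """Solve the finite-depth dispersion relation w^2 = g*k*tanh(k*h)
    for the wavelength L = 2*pi/k by bisection on the wavenumber k."""
    omega = 2.0 * math.pi / period_s
    k_lo = omega ** 2 / G   # deep-water wavenumber (waves only shorten with depth)
    k_hi = 100.0 * k_lo
    for _ in range(100):
        k = 0.5 * (k_lo + k_hi)
        if G * k * math.tanh(k * depth_m) < omega ** 2:
            k_lo = k
        else:
            k_hi = k
    return 2.0 * math.pi / k

T = 15.0  # swell period implied by a ~350 m deep-water wavelength (assumed)
print(f"deep water: {wavelength_at_depth(T, 1000.0):.0f} m")  # ~351 m
print(f"5 m depth:  {wavelength_at_depth(T, 5.0):.0f} m")     # shortens to ~100 m
```

The same calculation underlies the refraction effect: the phase velocity L/T decreases with depth, so wave crests rotate towards the isobaths.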
Figure 9.13 shows the manifestation of the mentioned ocean features and
some new ones. This SAR image was acquired on 28 September 1995 over the
Northern Sea.


Figure 9.13 An ERS-2 SAR image (100 km × 100 km) taken on 28 September 1995
over the Northern Sea and showing an oil spill, wind shadow, low wind
and ocean fronts

The first distinctive feature marked as 'A' in Fig. 9.13 can definitely be identified
as an oil spill. Oil slicks are seen as patches of different shapes with very low NRCS
and relatively sharp borders. Quite often, the spill source (ship or oil drill platform)
is visible nearby. As compared to natural films, oil films have a higher viscosity,
damping short waves more effectively and remaining observable at higher winds
when natural slicks would disappear. Another characteristic to distinguish between
oil and natural films is that the latter nearly never appear as single localised features
but tend to cover vast areas of intricate patterns produced by currents. Anthropogenic
oil spills on the sea surface may originate from leaks from ships, offshore oil plants
and ship wrecks. In the case of ship wreck, a SAR can contribute to oil spill detection
and monitoring, keeping track of the drift and spread of the slicks.
Usually, the shorter the radar wavelength, the more intense is the backscattering
reduction due to oil presence. The reduction in the radar backscattering also depends
on the incidence angle. Optimum range of angles is defined by the radar wavelength.


One of the strongest obstacles to oil spill detection is the state of the sea. At low
(2–3 m/s) wind speeds, SAR images of the ocean become dark because the Bragg
scattering waves are not present. In this case almost no features can be distinguished
on the sea surface. At high winds, most kinds of oil are dispersed into the water
column by the wind waves and also become unobservable (e.g. Reference 39).
The second feature in Fig. 9.13 ('B') reveals a clearly lined dark zone near the shore
which seems to have the same direction as the dominating wind. The mountainous
coastal landscape and the sharp outline allow attributing this feature to wind sheltering
by land. It can be seen that the NRCS becomes larger as the distance from the shore
along the wind direction increases and the sea roughness becomes better developed.
The dark areas ‘C1’ and ‘C2’ have blurred contours and may be interpreted as low
wind zones.
Besides this, one can see numerous manifestations of the current boundaries
(‘D1’, ‘D2’, ‘D3’). At moderate wind speeds (3–10 m/s), the SAR is capable of
revealing the current boundaries, meanders and eddies. The NRCS variation in the
vicinity of the current boundary/front is associated with several phenomena, including
changes of the stability of the atmospheric boundary layer, wave–current interaction
and surfactant accumulation. The exact view of the ocean front on a radar image is
affected by many factors: the radar parameters, the observation geometry, the wind
conditions, surface current and temperature gradients, etc. Nevertheless, some simple
rules of thumb exist. One of them was already mentioned: cold water looks darker
than warm water. Another is that convergent current fronts usually appear bright,
while divergent fronts appear dark. It is assumed that the features ‘D1’ and ‘D3’
are the ocean fronts where the non-uniform current distribution is combined with
SST changes. Lack of additional sources of information (e.g. IR images) retains the
interpretation ambiguity since a dark area can also be associated with low winds.
Sometimes, atmospheric phenomena may be observable on SAR images, when
they affect the near-surface wind. Depending on the observation conditions, such
phenomena increase or decrease the radar backscattering by intensifying or damping
the Bragg waves. One example is present in the ERS-1 SAR image of Fig. 9.14, taken
on 29 September 1995 over the Northern Sea. There are several rain cells of different
size scattered throughout the scene. The falling rain drops entrain the air to form a
downward flux of cold air. When hitting the ocean surface, the flux transfers cold air
mass away from the cell centre to form a wind squall – a line of abrupt increase in
the wind speed. The rain cells become visible because the background wind at their
boundaries is summed with the wind due to the rain cold air motion. As a result, the
wind squall on the lee side of the cell increases the background wind, decreasing it on
the opposite side. Thus, one half of the rain cell becomes brighter than the background
while the opposite side becomes darker. The distinct boundaries between the wind
squalls and the surrounding background water are called squall lines. When the rain
is heavy, the centre of a rain cell may appear dark because the falling drops create a
turbulence in the upper water layer, damping the Bragg waves. Such phenomena are
typical of subtropical regions but may be encountered anywhere else [62].
Figure 9.15 shows an ERS-2 SAR image taken on 30 November 1995 over the
Northern Sea. Points ‘A’, ‘B’ and ‘C’ are examples of internal waves on the SAR


Figure 9.14 An ERS-1 SAR image (100 km × 100 km) taken on 29 September 1995
over the Northern Sea showing rain cells

imagery. Internal waves are one of the most interesting ocean features revealed by
SAR imagery. At the beginning of SAR history their detection was entirely unex-
pected. At present, they are found on SAR images in many regions of the World Ocean
at various wind speeds and water depths. They appear as dark crests (troughs) against
a lighter background or as light ones against a dark background. The crests always
occur as packets called trains. In this image, three trains can be observed. Often,
internal waves run parallel to the bottom topography, as they are caused
by the interaction between the tidal currents and abrupt topographic features. The
distance between individual dark and light bands varies from several hundred metres
to a few kilometres, decreasing from a leading wave to a trailing edge (e.g. [126]).
Orbital motions induced by an internal wave train generate an intermittent pattern
of convergent and divergent zones on the sea, which moves with the phase velocity
of the internal wave. Convergent zones are generated behind the internal wave crest
and divergent zones are behind the troughs. It is these zones that make internal
waves visible on radar imagery. There are few commonly accepted explanations about


Figure 9.15 An ERS-2 SAR image (18 km × 32 km) taken on 30 September 1995
over the Northern Sea showing an internal wave and a ship wake

how this may happen. According to one point of view, surfactants are accumulated
in the convergence zones, which results in short wave damping and makes these
zones appear dark on radar images. Another theory states that convergence zones
appear bright because these are zones of enhanced roughness due to intensified wave
breaking there. The question of which imaging mechanism dominates and under what
conditions is still open.
The next distinctive feature clearly observable on the image (‘D’), is a ship wake.
The ship itself is seen as an extremely bright spot because of many metallic structures
that serve as corner reflectors. The wake is a narrow V-shaped feature associated with
the ship's track. It appears on radar images only in low wind conditions due to the
short lifetime of the Bragg waves and the common ship speeds. The major result of
the ship movement is the appearance of the stern wake. This turbulent wake damps
the Bragg waves, producing an area of dark return, which is sometimes surrounded
by two bright lines. The lines of high backscatter originate from the Bragg waves
induced by vortices from the ship’s hull. However, there is generally a large diversity
of ship wake patterns including combinations of dark and bright stripes on the SAR
images and depending on the observational and sea conditions.
Thus, during the last decades the role of SAR data in earth observations has
increased considerably, and the SAR has become a major remote sensing tool for
environmental monitoring. Improvement of image interpretation techniques, automa-
tised data interpretation, improvement of high-latitude telecommunication systems
and a convenient presentation of the information products to the user are necessary
for further development of SAR earth monitoring.

9.2 The application of inverse aperture synthesis for radar imaging

The imaging techniques we have discussed in Chapters 5 and 6 did not use holographic
or tomographic principles but were developed within a purely radar approach in the
United States about 40 years ago. The first device was designed and constructed by
the Westinghouse company and represented a narrowband radar with a discrete vari-
ation of the carrier frequency and a synthesised spectrum. At about the same time, the
Willow Run Laboratory in the United States initiated work on constructing a radar for
aircraft imaging; the model radars were tested on an open test ground. Somewhat later,
two experimental types of radar were designed for spacecraft identification. One was
constructed at the US Air Force Research Center in collaboration with the General
Electric Company and the Syracuse Research Corporation (the design of the data pro-
cessor). The other type of radar was made by the Aerospace Corporation; it had the car-
rier frequency of 94 GHz, the radiation bandwidth of 1 GHz and a pulse base of 10⁶.
The first quality images of low-orbit satellites were obtained by ALCOR radar
with the range resolution of 50 cm in the early 1970s. Further efforts by the designers
(the Lincoln Laboratory, the Massachusetts Institute of Technology and the Syracuse
Research Corporation) to improve this system within a global program for space object
identification resulted in the creation, in the late 1970s, of a long-range imaging radar
(LRIR) [20,52,83] with better characteristics (Table 9.6).
The major advantages of this radar system are a high-frequency stability, a pulse
repetition rate higher than the maximum Doppler frequency of an echo signal, and a
controlled repetition rate necessary for time discretisation of transmitted and received
pulses. Besides, an LRIR system provides imaging of targets on far-off orbits (including
geostationary orbits) and having high rotation rates.
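Two of the figures quoted for such radars can be cross-checked from first principles: the slant-range resolution δr = c/(2B) for a signal bandwidth B, and the pulse compression ratio as the time–bandwidth product. A sketch using the 1 GHz bandwidth and 250 µs pulse duration listed for the LRIR in Table 9.6:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Slant-range resolution of a pulse-compression radar: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def compression_ratio(pulse_s, bandwidth_hz):
    """Pulse compression ratio = time-bandwidth product of the chirp."""
    return pulse_s * bandwidth_hz

print(f"range resolution at 1 GHz bandwidth: {range_resolution_m(1e9) * 100:.0f} cm")  # ~15 cm
print(f"compression of a 250 us, 1 GHz chirp: {compression_ratio(250e-6, 1e9):.0f}")   # 250000
```

The 250,000 figure agrees with the "pulse compressibility" entry in Table 9.6.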
The Doppler-range method of echo signal processing for 2D imaging of the
Russian orbiting stations Salut-7 and Kosmos-1686 was implemented in a radar with
a 1 GHz probing pulse bandwidth [91]. A theoretical and experimental investigation of
the imaging of stabilised low-orbit satellites was described in Reference 124, using
narrowband probing pulses. The processing algorithms were based on holographic
principles. The authors believe that current interest in microwave holography is due
to the fact that many available radar systems can acquire a new function – 2D imaging


Table 9.6 The LRIR characteristics

Antenna type (primary reflector)         Paraboloidal
Aperture shape                           Circular
Aperture diameter (m)                    36.6
Wavelengths (GHz)                        K-band
  Narrowband mode (NBM)                  5.5–6.5
  Wideband mode (WBM)                    9.5–10.5
Aperture field distribution              Cosine
Sidelobe level (dB)                      −22.4
Polarisation (in transmission            Circular
  and in reception)
Frequency band (GHz)                     1
Pulse duration (µs)                      250
Transmitter pulse power in               0.5, 0.8
  modes 1 and 2 (MW)
Average power (kW)                       200
Secondary processing in WBM              Coherent integration
Signal modulation                        Linear frequency type
Range gate (m)                           30, 60, 120
  (frequency filter band (MHz))          (0.8, 1.6, 3.2)
Pulse repetition rate (Hz)               1600 (determined by range
                                         measurement unambiguity)
Pulse compressibility                    250,000
Sidelobe level of matched filter         32
  (in range) (dB)
Interpulse instability (°)               3–2
Impulse filling (τimp/Trep) (%)          50
Way of target tracking                   Single pulse
Reception loss (dB)                      7.9
Aim of the mode
  NBM                                    Detection, tracking, range measurement
  WBM-1                                  Target classification
  WBM-2                                  Target classification (from images)
Possible radar frequency                 Up to 40
  extension (GHz)
Radar location                           Westford, USA (Lincoln Laboratory,
                                         space survey facilities)

of space targets – without being radically modernised. An echo signal in such radars
is processed by inverse synthesis of microwave holograms owing to the target angle
variation during the satellite motion along its orbit. The algorithm uses an original
technique for synthesising a 2D image, in the view-flight path plane, from 1D images
obtained along a lengthy target path. The summation of partial 1D images produces
intensity maxima at the beam interception points corresponding to various angles of
the target scatterers. A numerical simulation has shown that this algorithm provides
a resolution of about 10 cm for the viewing time of about 2 min.
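In inverse aperture synthesis, the cross-range resolution is set by the total change of viewing aspect Δθ accumulated during the synthesis interval: δcr = λ/(2Δθ). The sketch below is illustrative only; the 0.2 rad example is chosen to show what a 10 cm cell at a 4 cm wavelength would require, and is not a figure from the cited simulation:

```python
def cross_range_resolution_m(wavelength_m, aspect_change_rad):
    """Cross-range resolution of an inverse synthetic aperture,
    set by the total change of viewing aspect during synthesis."""
    return wavelength_m / (2.0 * aspect_change_rad)

# At a 4 cm wavelength, a 10 cm cross-range cell needs 0.2 rad of aspect change
print(f"{cross_range_resolution_m(0.04, 0.2) * 100:.0f} cm")  # 10 cm
```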
The experiments on testing this type of radar used a radar interferometer consisting
of three antennas of 2.5 m in diameter with a base of 500 m [124]. The antennas were
co-phased to provide a coherent transmission and reception of quasi-monochromatic
signals with a 4 cm wavelength. The radiation power was 75 kW. The experiment
included several observation runs of the Progress spacecraft during its departure from
the Mir orbiting station. An optimal 2D image was obtained from 55 1D partial images,
each having the synthesis time of about 2 s. The time step between consecutive images
was 1.1 s, during which the vision line was rotated by about 0.01 rad. It appeared that
some of the scatterers of this nearly cylindrical target were not resolved well enough,
the boundaries between them were smeared, and the resulting image represented a
bright surface. Still, the image allowed evaluation of the target’s dimensions consistent
with the real ones.
Therefore, the available radars designed for entirely different applications can
be successfully used for spacecraft imaging. For example, the image reconstruction
algorithms can operate on the base of a phasometric device originally designed for
coordinate measurements. It is important to emphasise that inverse aperture synthesis
is also employed successfully in radar viewing of planets. In particular, a pioneering
experimental imaging of Venus was carried out by the specialists at the Jet Propulsion
Laboratory, California Institute of Technology, USA.

9.3 Measurement of target characteristics

Problems involving the analysis of radar performance require a priori information
about the scattering properties of a target. These properties are described
by a whole combination of independent radar responses to the target of interest.
Today, experimental and theoretical investigation of responses is a rapidly devel-
oping area of radar science and technology. It involves the search for new forms
of description of radiation scattering by various targets and novel methods of their
measurement [11,12,30,90,138].
The key position among the many radar responses is occupied by the scattering
matrix, which characterises the transformation of the amplitude, phase and polari-
sation of an arbitrary planar monochromatic wave scattered by a small-size (point)
object. The knowledge of the scattering matrix is important for the computation of
dynamic and static responses for many applications: the justification of the radar
design, the development of methods and devices for antiradar measures, designing
of processing algorithms, etc. Besides, a scattering matrix is necessary to go over to
responses which describe the target’s scattering of probing pulses having complex
spectra [138]. It is also indispensable in the computation of local responses to find
the scattering properties of individual parts of a target [12].
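The scattering matrix can be viewed as the 2×2 complex (Sinclair) matrix S that maps the incident polarisation vector onto the scattered one, E_s ∝ S·E_i. A minimal sketch for canonical point targets; the sphere and dihedral matrices below are textbook examples, not results computed in this chapter:

```python
import numpy as np

def scatter(S, e_incident):
    """Scattered Jones vector (H, V basis) for an incident field e_incident,
    given the 2x2 complex scattering matrix S of a point target."""
    return S @ e_incident

# Textbook scattering matrices, up to a complex constant (illustrative):
S_sphere   = np.eye(2, dtype=complex)              # sphere / flat plate
S_dihedral = np.diag([1.0, -1.0]).astype(complex)  # dihedral corner reflector

e_45 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # 45-deg linear polarisation
print(scatter(S_sphere, e_45))    # polarisation preserved
print(scatter(S_dihedral, e_45))  # rotated to the orthogonal 45-deg state
```

The amplitude, phase and polarisation transformation mentioned in the text is fully contained in the four complex elements of S.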
Theoretically, the exact values of matrix elements can be found only for targets
of simple geometry (spheres, cylinders, etc.). So a common way of determining radar


responses is by measuring the physical characteristics. For small-size targets, such
measurements are commonly made during flight and ground tests. Natural flight tests
provide the most complete and reliable data on the target in question but they are very
costly and need special equipment and testing conditions.
Radar responses are often measured in special setups on open and closed test
grounds. Open tests are carried out either with real targets or their models of natural
size. This allows a detailed study of the scattering characteristics and their behaviour
under different conditions. However, the response data are often affected by the
current weather conditions, background signals from the surrounding objects, natural
and artificial noise, etc. Common limitations of an open test ground are the lack of
an exact frame of reference for the angular position of the target under study, poor
coupling between normally polarised measurement channels, as well as a low data
accuracy because of the background effects. Moreover, a measurement run for one
target takes a long time, from 4 to 6 h, and is quite costly because of the necessity to
maintain the test equipment and facilities.
Closed tests are made in an anechoic chamber (AEC), whose inner walls are cov-
ered with a microwave-absorbing material, allowing simulation of wave propagation
in free space [98]. But two conditions are to be met in such experiments: the probing
wave front is to be planar near the target and the background noise is to be kept below
a permissible level. The measurements made in an AEC do not have the limitations
of an open ground, and one run takes 4–5 times less time. Such chambers have
found a wide application because they are screened from outside noise, providing an
electromagnetic compatibility. Since the electromagnetic, mechanical and climatic
conditions in an AEC can be kept constant for a long time, the measurements can
be readily automatised and the targets used may be both real objects and models (of
natural size or diminished). The choice of the type of target is primarily determined by
the size ratio of the target and the so-called echo-free zone in the chamber, that is, the
zone where the incident field meets certain requirements as to the wave front geometry
and the background signal intensity. This ratio largely determines the response data
accuracy. The echo-free zone size is, in turn, determined by the chamber dimensions
and the way the wave front is collimated. When the target of choice is larger than the
echo-free zone, one usually employs a scaling method, using a smaller model object
and a shorter radiation wavelength. One serious disadvantage of this technology is the
difficulty of measuring radar responses to targets with absorbing or semiconducting
coatings and, sometimes, of making suitable model targets.
The measuring facilities using AECs have some common disadvantages:

1. The measurement accuracy is quite low because of a strong background signal in
the chamber working area, associated with the insufficiently low reflectivity
(−20 to −30 dB) of the microwave-absorbing materials.
2. The echo-free zone is small because the collimators have a small aperture and
the chambers a small size; as a result, such measurements cannot be made with
real targets.
3. The frequency band of transmitted pulses is limited and bistatic measurements
are restricted.

zino: “chap09” — 2005/11/7 — 15:38 — page 218 — #28


Radar imaging application 219

It is clear from this analysis that a closed test ground is preferable for making response
measurements for various targets, especially for aircraft and spacecraft. These facil-
ities employ large AECs providing a high accuracy of all matrix elements for a real
target, and there is no need to use scaling.
On the other hand, many applied radar problems, especially the estimation of
efficiencies of methods and devices for target detection and recognition, often require
a numerical simulation of the whole radar channel, including the microwave path,
tracking conditions and so on. To do this, one should combine analogue and digital
simulation means, including a radar measurement ground (the analogue component)
and a computer with appropriate software packages (the digital component). If such
equipment is designed for the measurement of reflected signals with their amplitudes
and phases, it essentially represents a radar capable of microwave hologram recording,
in other words, of inverse aperture synthesis. For imaging, it is sufficient to include
in the software the image reconstruction algorithms described in this book.
The next procedure at the imaging stage is the measurement of local responses,
or scattering matrices and their elements, to obtain data on individual target scatter-
ers [12,138]. Objects of simple geometry, whose local responses can be calculated
precisely, can be used as standards for calibration of measuring devices. Practically,
it is reasonable to use cylinders as standard targets. An illustration of the calculation
of local responses for cylinders by the EWM suggested by P. Ufimtzev is given in
Chapter 2.
The typical measurement facilities include:

• an AEC;
• devices for pulse generation and transmission and for reception of echo signals
of various frequencies, including superwideband pulses;
• equipment for making measurements, such as a rotating support, a target rotation
control device, etc.
• hard- and software to control measurement runs, to keep records of the incoming
and operational data, processors, etc.

The body of work on the measurement of scattering parameters of targets consists of
five stages:

• preparatory operations
• preliminary measurements
• major measurements
• control measurements
• data processing.

The preparatory stage is aimed at preparing the measuring devices for a success-
ful performance. Preliminary measurements are to provide information on the device
ability to make the necessary measurements, to choose the appropriate operation
mode and to calibrate the devices. The aim of the major measurements is to produce
microwave holograms of the target with a prescribed accuracy. Control measurements
are made in order to check the validity of the data obtained. If the amplitude and phase
errors fit into the admissible limits for this particular run, the major measurements are
considered to be valid and are fed into a processor together with the calibration data.
Primary processing is performed to bring relative data to their absolute values,
that is, to calibrate the measurements and to evaluate the errors. The final results are
set into a local database for classified storage. Further processing can be made by
various algorithms for the reconstruction of images of different dimensionalities (by
using holographic and tomographic processing of the scattering matrix elements) in
order to analyse and measure the local responses.
However, the analogue–digital simulation complex can also be used for the following tasks:
• to process the results of measurement in order to get statistical data on the scatter-
ing characteristics of the target (average values, dispersion, integral distributions,
histograms and so on) for given target angles;
• to compute the angular positions of the target during its motion with respect to
the ground radar in order to simulate the dynamic behaviour of the echo signal
and the radar viewing devices;
• to simulate the target recognition devices by using various methods to find the
target recognition parameters (from images, too) and to design decision-making
schemes.
As a result, one can get online information about various probabilistic characteristics
necessary for target detection and recognition.
Methods for direct imaging and for measurement of local responses in an AEC are
described in detail in Reference 138. So we shall restrict ourselves to a brief review
of the measurement procedures and some of the results obtained.
The best way of producing an image in an AEC is to record multiplicative Fourier
holograms and to subject them to a digital processing. The recording can be based
on one of the schemes shown in Fig. 2.4, and the reconstruction can be made by the
algorithm presented in Fig. 9.16.
The input data are two quadrature components hr1(ϕ) and hr2(ϕ) of a 1D complex
microwave hologram hr(ϕ) and the calibration results (the calibration curve). The
sampling step for the functions hr1(ϕ) and hr2(ϕ) should meet the condition Δϕ ≤
λ/lmax, where lmax is the maximum linear size of the target.
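As a quick numerical check of this condition, the admissible sampling step and the resulting number of aspect samples can be computed directly; a minimal sketch in which the wavelength, target size and full-rotation assumption are purely illustrative:

```python
import math

def max_aspect_step(wavelength_m, l_max_m):
    """Largest admissible hologram sampling step (rad): dphi <= lambda / l_max."""
    return wavelength_m / l_max_m

def min_samples_per_rotation(wavelength_m, l_max_m, rotation_rad=2 * math.pi):
    """Minimum number of aspect samples needed over the given rotation."""
    return math.ceil(rotation_rad / max_aspect_step(wavelength_m, l_max_m))

# Illustrative values: a 3 cm wavelength and a 6 m target
step = max_aspect_step(0.03, 6.0)          # admissible step, rad
n = min_samples_per_rotation(0.03, 6.0)    # samples over a full rotation
```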
We can synchronise the quadrature components by using the subroutine for jus-
tifying the data file. Normally, a microwave hologram is recorded when the target is
rotated by 2π rad and further processing is performed for a sequence of samples, whose
number corresponds to the optimal size of the synthetic aperture and the position in
the data file corresponds to the required target aspect.
The chosen sequence is normalised, because a microwave hologram can be mea-
sured with different receiving channel gain, depending on the recorded signal value.
This should be taken into account when measuring a local response in the RCS
units. In order to visualise the scatterers and to measure their relative intensities at a
given aspect angle, we should reduce the ranges of the functions hr1(ϕ) and hr2(ϕ)
to [−1,1].
For a direct image reconstruction, one is to use a fast Fourier transform (FFT),
which is simple to implement when the number of initial readouts is 2^m, where m is
a natural number. The necessary number is made up from an arbitrary set of initial
samples, using an interpolation block. In order to minimise the measurement error in
the local response, the chosen sample is multiplied by a weighting function.

Figure 9.16 The scheme of the reconstruction algorithm (blocks: input of A, sin ϕ
and cos ϕ; input of calibration data; correction of measurement nonsynchronism;
formation of the quadrature components of the complex radio-hologram
hr = hr1 + ihr2 = A exp(iϕ); choice of the synthesis interval and object aspect
angle; normalisation; interpolation; multiplication by a weight function;
computation of the FFT; calculation of the image intensity; determination of the
local response; data output)
Having found the Fourier transform of the complex function hr (ϕ), we form the
files ReV and ImV , defining the complex amplitudes of the field V (ν) scattered by
the target surface. The image intensity, W , is found as the squared modulus of the
function V(ν). The image sample interval is Δν = λ/2ψs, where ψs is the synthetic
aperture angle. With the calibration data, the image intensities of individual scatterers
can be represented in the RCS units.
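The normalisation, weighting, FFT and squared-modulus steps just described can be sketched end to end. The two-scatterer test hologram and the Hamming weighting function below are illustrative choices, and the calibration step is omitted:

```python
import numpy as np

def reconstruct_1d_image(h1, h2):
    """Reconstruct a 1D image intensity profile W from the quadrature
    components h1, h2 of a complex microwave hologram (sketch)."""
    h = np.asarray(h1, float) + 1j * np.asarray(h2, float)
    peak = np.max(np.abs(h))
    if peak > 0:
        h = h / peak                       # reduce the range to [-1, 1]
    w = np.hamming(len(h))                 # weighting against sidelobes
    V = np.fft.fftshift(np.fft.fft(h * w))  # complex field amplitudes V(nu)
    return np.abs(V) ** 2                  # image intensity W = |V|^2

# Two point scatterers simulated over a synthetic aperture angle of pi/6;
# the stronger one has unit amplitude, the weaker one half of that
phi = np.linspace(0, np.pi / 6, 256)
h = np.exp(1j * 200 * phi) + 0.5 * np.exp(-1j * 400 * phi)
W = reconstruct_1d_image(h.real, h.imag)
```

The two intensity peaks appear on opposite sides of the zero-frequency bin, mirroring the two scatterer projections in a real 1D image.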
Figure 9.17 illustrates typical 1D images of a perfectly conducting cylinder,
obtained in an AEC at the aspect angle φobs = 105◦ , with the image intensity plot-
ted along the ordinate and the normalised target size along the abscissa. The image
intensity peaks correspond to the projections of scatterers 1, 2 and 3 onto the normal
to the view line (Fig. 2.1).

Figure 9.17 A typical 1D image of a perfectly conducting cylinder (l is the length
and a the radius of the cylinder): (a) E-polarisation; (b) H-polarisation. The
theoretical and experimental image intensities W (relative units) are plotted
against the normalised coordinate ν/λ for l = 6λ, a = λ/2 and ψs = π/6.

The analysis of these images has shown that the scatterers
are localised just at the cylinder edges. Scatterers 2 and 3 at the ends of the cylinder
generating line are well resolved. The images of 1 and 2 merge because they are sep-
arated by a distance smaller than the resolution limit of the method. The difference
in the intensities of individual points can be interpreted in terms of the EWM or the
GTD. The dashed lines in Fig. 9.17 are for the former intensities and the latter com-
putations yield similar results. Our findings agree well with experimental data. The
polarisation properties of the scatterers manifest themselves in the varying image
intensity due to the changes in the illumination polarisation. Such images can be
used to estimate the target size and, with a more detailed analysis, its geometry, the
‘brightest’ construction elements and surface patches.
Figures 9.18 and 9.19 present the measured local scattering characteristics for a
metallic cylinder, the RCS diagram for a selected scatterer, and the simulation results
(Sections 5.2 and 5.3). The estimated standard deviation for the experimental local
responses was 1.8 dB. In addition to a methodological error of 0.5 dB, the total error
includes components due to the background echo signals in the AEC, imperfect polar-
isation channel insulation, etc. It is obvious that the theory, simulation and experiment
gave similar results within the accuracy of the total measurement error. Such measure-
ments provide data on local scattering characteristics of targets of complex geometry.
The results presented can be used for calibration of measuring setups.

9.4 Target recognition

Recognition of targets is a very important task in radar science and practice. By recog-
nition we mean the procedure of attributing the object being viewed to a certain class
in a prescribed alphabet of target classes, using the radar data obtained. According to
the general theory of pattern recognition, radar target recognition should include the
following stages:
• compiling a classified alphabet of radar targets to be recognised;
• viewing of targets;
• determination (measurement) of some target responses from the recorded echo
signal parameters to compile target descriptions, or patterns;


Figure 9.18 The local scattering characteristics for a metallic cylinder
(E-polarisation); the subscripts 1, 2, 3 at σ denote scattering centres. The
experimental and simulated values of 10 log(σnE/πa²), dB, are plotted against
the aspect angle, deg.

• identification and selection of informative signs (features) from the compiled lists;
• target classification or attribution of a particular target to one of the classes on the
basis of discriminating signs.

The problem of making up an alphabet of target classes and selecting informative signs
to describe each class reliably is quite complicated and is to be solved by qualified and
experienced specialists. Of course, classification may be based on various principles.
One of them is to group targets in terms of their function and application. For example,
a successful management of air traffic needs a classification of aircraft: heavy and
light passenger planes, military planes, helicopters, etc.
Each class of radar targets can be described by a definite set of discriminating char-
acteristics to be used for classification: configuration, the presence of well-defined
and readily observable parts, dynamic parameters (e.g. altitude, flight velocity), etc.
A specific feature of all radar targets is that the radar input senses a target pattern in the
echo signal domain. The size scale of this domain and the physical meaning of each
of its components differ considerably from those of the parameter vectors of the target
class and each characteristic individually.

Figure 9.19 The local scattering characteristics for a metallic cylinder
(H-polarisation); the subscripts 1, 2, 3 at σ denote scattering centres. Panels (a),
(b) and (c) plot the experimental and simulated values of 10 log(σ1H/πa²),
10 log(σ2H/πa²) and 10 log(σ3H/πa²), dB, against the aspect angle, deg.

No matter how many identification signs
a target possesses, one can get information only about those characteristics that are
contained in the recorded echo signal parameters. We believe that a holographic


approach to designing target recognition radars is capable of removing this
limitation.
The target description (pattern) in a radio vision system is a microwave hologram
function, which is generally a vector, non-stationary random function. It is manifested
at the radar input as a pattern of a certain class of objects. Such patterns are practically
unsuitable for classification because they have a complex probabilistic structure,
a large and varying size, etc. Besides, the individual values of the hologram functions
may also include minor, unimportant details of a target that may introduce additional
recognition errors.
Like in many other target recognition problems, a key task is to reveal the most
informative, discriminating target signs. The subsystem of sign identification must
include compression and preliminary processing of the initial radar data [12], such
that the classification subsystem input would receive a size-fixed array of signs char-
acterising the essential, most typical properties of a particular target. The role of a
sign ‘identifier’ may be played by the operator of image reconstruction from a holo-
gram, which can generally be reduced to an integral Fourier transform. The distances
between individual scatterers and the local target characteristics measured from the
images will form a discrete vector domain of a relatively small size, whose elements
can be considered as recognition signs. They have a clear physical meaning, a factor
important for creating a library of standards for the classifier operation. The target
recognition then becomes a holographic process with a clear physical meaning. One
does not need a priori data on the statistical structure of the echo signal, and this
method of sign discrimination may be considered as distribution-free.
The final stage in the recognition process is to design a procedure for target clas-
sification, that is, finding the criteria for attributing a particular target to a class in
a given alphabet. The classification is based on a key rule attributing the array of
discriminating signs (i.e. the target itself) to one of the possible target classes. Mod-
ern pattern recognition theory has at its disposal a powerful mathematical apparatus
including deterministic, probabilistic and heuristic procedures, as well as various sets
of criteria for detecting similarities and differences between classes.
Therefore, radar target recognition can be represented as a block diagram that can
serve as the basis for a mathematical model of a radar recognition device (Fig. 9.20).
This idea has been tested using an analogue and a digital model of a recognition
radar. The simulation included the measurement of microwave holograms of different
model objects in an AEC, the mathematical modelling of the object motion and the
computation of dynamic realisations of the microwave hologram functions with static
measurements (for random initial conditions of motion), the modelling of the radar
receiving channel, image reconstruction and construction of sign vectors, as well as
the classification of the objects.
The relative positions of three ‘brightest’ scatterers (geometrical characteristics)
and their image intensities for each image were found to be
     
R_12^kl = |R_1^kl − R_2^kl|,   R_13^kl = |R_1^kl − R_3^kl|,   R_23^kl = |R_2^kl − R_3^kl|,   A_1^kl, A_2^kl, A_3^kl,

where k, l = 1, 2 are the polarisation indices.
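For a single polarisation channel, the construction of such a sign vector can be sketched as follows; the positions and intensities are illustrative, and the final normalisation makes signs of different physical nature comparable:

```python
import numpy as np

def sign_vector(positions, intensities):
    """Form a recognition sign vector from the three 'brightest' scatterers
    of a 1D image: pairwise distances R12, R13, R23 and intensities A1..A3.
    A single polarisation channel is assumed in this sketch."""
    brightest = np.argsort(intensities)[::-1][:3]     # three brightest
    p = np.asarray(positions, float)[brightest]
    a = np.asarray(intensities, float)[brightest]
    r = [abs(p[0] - p[1]), abs(p[0] - p[2]), abs(p[1] - p[2])]
    v = np.array(r + list(a))
    return v / np.linalg.norm(v)   # normalise to compare mixed-nature signs

# Illustrative scatterer positions (normalised units) and image intensities
v = sign_vector(positions=[0.0, 1.2, 3.0, 4.1],
                intensities=[0.9, 0.2, 1.0, 0.7])
```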


Figure 9.20 A mathematical model of a radar recognition device (block diagram:
targets → receptor → target descriptions → separation of features → features →
decision making (classifier))

The set of sign vectors was stored in the recognition device to be used for creating
a teaching or testing standard of sign vectors. The vectors were normalised such that
one could compare vectors made up of signs of different physical nature. Smaller-
scale sign vectors were created for further use. Table 9.7 presents the vectors for
the entire sign domain constructed to minimise the sign vectors and compare their
informative characteristics for further recognition. The minimum size was 3 and the
maximum 9.
A sequence of recognition sign vectors arrives at the classifier input. We
employed a Bayes classifier and a nonparametric classifier based on the method of
potential functions. The former is optimal in the sense that it minimises the average
risk of wrong decisions. The teaching of the Bayes classifier included the evaluation
of unknown parameters of the conditional probability distribution p(x|Ai) of the
sign vector x in the class Ai, which was taken to be normal. This decision rule
is Bayes-optimal at the equal cost of errors for a more general distribution; in prac-
tice, however, the difference between the actual and normal distributions is usually
neglected if the former is smooth and has one maximum [12]. The other classifier
was used when there was no information on the sign vector distribution function. It
was assumed that the general decision function was known and its parameters were
estimated from the teaching samples [12].
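As an illustration of the first of these classifiers, a Gaussian Bayes decision rule can be sketched in a few lines. The class labels, teaching samples, equal priors and the small regularisation term are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class GaussianBayesClassifier:
    """Sketch of a Bayes classifier with normal class-conditional densities
    p(x|Ai); parameters are estimated from the teaching sample, and equal
    priors and equal error costs are assumed."""

    def fit(self, teaching):
        # teaching: {class_label: array of shape (n_vectors, n_signs)}
        self.params = {}
        for label, x in teaching.items():
            x = np.asarray(x, float)
            mu = x.mean(axis=0)
            cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
            self.params[label] = (mu, np.linalg.inv(cov),
                                  np.linalg.slogdet(cov)[1])
        return self

    def classify(self, x):
        # Attribute x to the class with the largest Gaussian log-likelihood
        def loglik(label):
            mu, inv_cov, log_det = self.params[label]
            d = np.asarray(x, float) - mu
            return -0.5 * (d @ inv_cov @ d + log_det)
        return max(self.params, key=loglik)

# Illustrative teaching samples of 3-component sign vectors for two classes
rng = np.random.default_rng(0)
teaching = {1: rng.normal(0.0, 0.3, (50, 3)),
            2: rng.normal(1.0, 0.3, (50, 3))}
clf = GaussianBayesClassifier().fit(teaching)
```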
Each experimental run provided a K × K matrix of decisions (K is the number of
classes) at the classifier output. The element kij of the matrix is the number of objects
in the ith class attributed to the jth class. From the matrix K, we can estimate the
probability of correct recognition events, the probability of a false alarm, etc.
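The decision-matrix bookkeeping can be illustrated with hypothetical counts for K = 2 classes; the numbers below are invented for the example:

```python
import numpy as np

# Hypothetical 2x2 decision matrix: K[i, j] = number of class-(i+1)
# objects attributed by the classifier to class (j+1)
K = np.array([[80, 20],
              [12, 88]])

# Probability of correct recognition for each class: diagonal over row sums
p_correct = np.diag(K) / K.sum(axis=1)

# False-alarm probability for each class: objects of the other classes
# wrongly attributed to it, over the total number of such objects
p_false = (K.sum(axis=0) - np.diag(K)) / (K.sum() - K.sum(axis=1))
```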
The model suggested was used to test the recognition capabilities for various
objects. We also planned to estimate the efficiency of recognition, to compare
the information contents of different sign vectors and investigate the stability of
the classification algorithms in terms of the size of the teaching sample. For this, we
employed metallic cones with a spherical apex (class 1) and a spherical base (class 2) of
about the same length. The probabilistic structure of the sign domain was estimated by
constructing experimental holograms. Their unimodal character was tested to justify
the use of a Bayes classifier. The size of an experimental series was 100 in all the runs.
Table 9.8 compares the valid recognition probability for objects of both classes
and the size of the teaching sequence at different sign vectors for the case of a
Bayes classifier. One can see that the largest vectors made up of local responses are
most effective. The geometrical characteristics gave poorer results, as was expected,
because the objects in both classes were of about the same size. When the number of
teaching vectors is decreased, there is a tendency for a lower recognition efficiency.

zino: “chap09” — 2005/11/7 — 15:38 — page 226 — #36


Table 9.7 The variants of the sign vectors (the superscripts kl are the
polarisation indices)

Type AR: (1) A_1^11, A_2^11, R_12^11; (2) A_1^22, A_2^22, R_12^22;
(3) A_1^12, A_2^12, R_12^12; (4) the signs of variants 1 and 2 combined;
(5) the signs of variants 1, 2 and 3 combined.
Type A: (1) A_1^11, A_2^11, A_3^11; (2) A_1^22, A_2^22, A_3^22;
(3) A_1^12, A_2^12, A_3^12; (4) variants 1 and 2 combined; (5) variants 1, 2
and 3 combined.
Type R: (1) R_12^11, R_13^11, R_23^11; (2) R_12^22, R_13^22, R_23^22;
(3) R_12^12, R_13^12, R_23^12; (4) variants 1 and 2 combined; (5) variants 1, 2
and 3 combined.


Table 9.8 The valid recognition probability (a Bayes classifier)

Number of    Type of        Polarisation
teaching     sign vector    1       2       3       4       5
vectors

50           AR             0.54    0.68    0.68    0.68    0.80
             A              0.63    0.61    0.66    0.63    0.78
             R              0.56    0.68    0.55    0.66    0.63
40           AR             0.56    0.64    0.63    0.61    0.77
             A              0.60    0.61    0.67    0.63    0.78
             R              0.56    0.68    0.55    0.66    0.63
30           AR             0.53    0.63    0.52    0.63    0.78
             A              0.60    0.60    0.69    0.64    0.77
             R              0.57    0.67    0.51    0.64    0.63
20           AR             0.53    0.63    0.51    0.64    0.71
             A              0.57    0.58    0.68    0.63    0.73
             R              0.52    0.70    0.52    0.62    0.63
10           AR             0.44    0.59    0.58    0.60    0.50
             A              0.52    0.55    0.59    0.60    0.55
             R              0.50    0.68    0.56    0.54    0.50

Table 9.9 The valid recognition probability (a classifier based
on the method of potential functions)

Number of    Type of        Polarisation
teaching     sign vector    1       2       3       4       5
vectors

30           AR             0.88    0.90    0.87    0.83    0.81
20                          0.82    0.71    0.71    0.78    0.72
10                          0.75    0.63    0.67    0.76    0.71
30           A              0.87    0.90    0.90    0.89    0.94
20                          0.67    0.82    0.85    0.81    0.84
10                          0.64    0.80    0.72    0.75    0.82
30           R              0.80    0.84    0.69    0.72    0.80
20                          0.62    0.66    0.54    0.53    0.68
10                          0.55    0.60    0.56    0.52    0.63

Table 9.9 shows similar results for a classifier based on the method of potential
functions. The recognition efficiency is higher but the time necessary for the teaching
is an order of magnitude longer.
The sequence of operations in this model can be used as a procedure for an
estimation of recognition efficiency for various targets at the stage of designing the


radar or the targets. This model provides a greater efficiency of pre-tests at the device
designing stage because one can
• obtain statistical data on possible recognition of various targets in a short time at
lower cost;
• get teaching or experimental sequences of practically any size;
• evaluate the effective parameters of antirecognition devices during direct statis-
tical experiments, etc.

References

1 AKHMETYANOV, V. R., and PASMUROV, A. Ya.: ‘Radar imaging analysis
based on theory of information’. Proceedings of sixth All-Union seminar on
Optical information processing, Frunze, USSR, 1986, part 2, p. 59 (in Russian)
2 AKHMETYANOV, V. R., and PASMUROV, A. Ya.: ‘Radar imagery processing
for earth remote sensing’, Zarubezhnaya Radioelectronica, 1987, 1, pp. 70–81
(in Russian)
3 ALEXANDROV, V. Y., LOSHCHILOV, V. S., and PROVORKIN, A. V.:
‘Studies of icebergs and sea ice in Antarctic using “Almaz-1” SAR data’, in
POPOV, I. K., and VOEVODIN, V. A. (Eds): ‘Icebergs of the world ocean’
(Hydrometeoizdat, St Petersburg, 1996), pp. 30–36 (in Russian)
4 ALEXANDROV, V. Y., SANDVEN, S., JOHANNESSEN, O. M.,
PETTERSSON, L. H., and DALEN, O.: ‘Winter navigation in the Northern
Sea Route using RADARSAT data’, Polar Record, 2000, 36 (199), pp. 333–42
5 ALLAN, T. D. (Ed.): ‘Satellite microwave remote sensing’ (John Wiley & Sons,
New York, 1983)
6 ALPERS, W., and HENNINGS, I.: ‘A theory of the imaging mechanisms of
underwater bottom topography by real and synthetic aperture radar’, Journal of
Geophysical Research, 1984, 89, pp. 10529–46
7 ARSENOV, S. M., and PASMUROV, A. Ya.: ‘Investigation of local scattering
characteristics of lumped objects from their radar images’. Proceedings of All-
Union symposium on Waves and diffraction, Moscow, USSR, 1990, pp. 153–55
(in Russian)
8 ARSENOV, S. M., and PASMUROV, A. Ya.: ‘Signal processing for aircraft radar
imaging’, Zarubezhnaya Radioelectronica, 1991, 1, pp. 71–83 (in Russian)
9 ARSENOV, S. M., and PASMUROV, A. Ya.: ‘Tomographic signal processing
for ISAR’, in GUREVICH, S. B. (Ed.): ‘Optical and optico-electronic means of
data processing’ (USSR Academy of Sciences, Leningrad, 1989), pp. 258–66
(in Russian)
10 ARSENOV, S. M., and PASMUROV, A. Ya.: ‘Compensation of aircraft radial
motion for ISAR’. Proceedings of second All-Union conference on Theory and
practice of spatial-time signal processing, Sverdlovsk, USSR, 1989, pp. 217–19
(in Russian)


11 ASTANIN, L. Yu., and KOSTYLEV, A. A.: ‘Ultrawideband radar mea-
surements. Analysis and processing’ (The Institution of Electrical Engineers,
London, 1997)
12 ASTANIN, L. Yu., KOSTYLEV, A. A., ZINOVIEV, Yu. S., and
PASMUROV, A. Ya.: ‘Radar target characteristics: measurements and appli-
cations’ (CRC Press, Boca Raton, 1994)
13 AUSHERMAN, D. A., KOZMA, A., WALKER, J. L., JONES, H. M.,
and POGGIO, E. C.: ‘Developments in radar imaging’, IEEE Transactions on
Aerospace and Electronic Systems, 1986, AES-20 (4), pp. 363–99
14 BAKUT, P. A., BOLSHAKOV, I. A., GERASIMOV, B. M. et al.: ‘Statistical
theory of radiolocation’ (Sovetskoe radio, Moscow, 1963, vol. 1) (in Russian)
15 BATES, R. H. T., GARDEN, K. L., and PETERS, T. M.: ‘Overview of com-
puterized tomography with emphasis on future developments’, Proceedings of
IEEE, 1983, 71 (3), pp. 356–72
16 BEAL, R., KUDRYAVTSEV, V., THOMPSON, D. et al.: ‘The influence of
the marine atmospheric boundary layer on ERS-1 synthetic aperture radar
imagery of the Gulf Stream’, Journal of Geophysical Research, 1997, 102 (C3),
pp. 5799–5814
17 BELOCERKOVSKY, S. M., KOCHETKOV, Yu. A., KRASOVSKY, A. L., and
NOVITSKIY, V. V.: ‘Introduction in aeroautoelasticity’ (Nauka, Moscow, 1980)
(in Russian)
18 BERTOIA, C., FALKINGHAM, J., and FETTERER, F.: ‘Polar SAR data for
operational sea ice mapping’, in TSATSOULIS, C., and KWOK, R. (Eds): ‘Anal-
ysis of SAR data of the polar oceans. Recent advances’ (Springer-Praxis, Berlin,
Heidelberg, 1998), pp. 201–34
19 BORN, M., and WOLF, E.: ‘Principles of optics’ (Pergamon Press, New York,
1980)
20 BROMAGHIM, D. R., and PERRY, J. P.: ‘A wideband linear FM ramp generator
for the long-range imaging radar’, IEEE Transactions on Microwave Theory and
Techniques, 1978, MTT-26 (5), pp. 322–25
21 BROWN, W. M., and FREDERICKS, R. J.: ‘Range-Doppler imaging with
motion through resolution cells’, IEEE Transactions on Aerospace and Elec-
tronic Systems, 1969, AES-5 (1), pp. 98–102
22 BROWN, W. M., and GHIGLIA, D. C.: ‘Some methods for reducing
propagation-induced phase errors in coherent imaging systems’, Journal of the
Optical Society of America, 1988, 5 (6), pp. 924–41
23 BROWN, W. M., and RIORDAN, J. E.: ‘Resolution limits with propagation
phase errors’, IEEE Transactions on Aerospace and Electronic Systems, 1970,
AES-6 (5), pp. 657–62
24 BROWN, W. M.: ‘Synthetic aperture radar’, IEEE Transactions on Aerospace
and Electronic Systems, 1967, AES-3 (2), pp. 217–30
25 BUNKIN, B. V., and REUTOV, A. P.: ‘Trends of radar develop-
ment’, in SOKOLOV, A. V. (Ed.): ‘Problems of perspective radiolocation’
(Radiotekhnika, Moscow, 2003), pp. 12–19 (in Russian)


26 BYKOV, V. V.: ‘Digital modelling for statistical radio engineering’ (Sovetskoe
Radio, Moscow, 1971) (in Russian)
27 CARSEY, F., HARFING R., and WALES, C.: ‘Alaska SAR facility: The US
center for sea ice SAR data’, in TSATSOULIS, C., and KWOK, R. (Eds): ‘Anal-
ysis of SAR data of the polar oceans. Recent advances’ (Springer-Praxis, Berlin,
Heidelberg, 1998), pp. 189–200
28 CHERNYKH, M. M., and VASILIEV, O. V.: ‘Experimental estimation of aircraft
echo signal coherence’, Radiotekhnika, 1999, 2, pp. 75–78 (in Russian)
29 COLLIER, R. J., BURCKHARDT, C. B., and LIN, L. H.: ‘Optical holography’
(Academic Press, New York, London, 1971)
30 CURLANDER, I. C., and Mc DONOUGH, R. N.: ‘Synthetic aperture radar
systems and signal processing’ (John Wiley & Sons, New York, London, 1991)
31 CURRIE, N. C. (Ed.): ‘Radar reflectivity measurement: techniques and
applications’ (Artech House, Norwood, USA, 1989)
32 CUTRONA, L. J., LEITH, E. N., PORCELLO, L. J., and VIVIAN, W. E.: ‘On
the application of coherent optical processing techniques to synthetic aperture
radar’, Proceedings of IEEE, 54 (8), 1966, pp. 1026–32
33 DA SILVA, J. C. B., ROBINSON, I. S., JEANS, D. R. G., and SHERWIN, T.:
‘The application of near-real-time ERS-1 SAR data for predicting the location
of internal waves at sea’, International Journal of Remote Sensing, 1997, 18
(10), pp. 3507–17
34 DESAI, M., and JENKINS, W. K.: ‘Convolution back – projection image
reconstruction for synthetic aperture radar’. Proceedings of IEEE International
symposium on Circuits and systems, Montreal, Canada, 1984, vol. 1, pp. 161–63
35 DESCHAMPS, G.: ‘About microwave holography’, Proceedings of IEEE, 55
(4), 1967, pp. 58–59
36 DIKINIS, A. V., IVANOV, A. Y., KARLIN, L. N. et al.: ‘Atlas of synthetic
aperture radar images of the ocean acquired by ALMAZ-1 satellite’ (GEOS,
Moscow, 1999) (in Russian)
37 DRINKWATER, M. R.: ‘Satellite microwave radar observations of Antarctic
Sea ice’, in TSATSOULIS, C., and KWOK, R. (Eds): ‘Analysis of SAR data of
the polar oceans. Recent advances’ (Springer-Praxis, Berlin, Heidelberg, 1998),
pp. 35–68
38 EDEL, H., SHAW, E., FALKINGHAM, J., and BORSTAD, G.: ‘The Canadian
RADARSAT program’, Backscatter, 2004, 15 (1), pp. 11–15
39 ERMAKOV, S. A., SALASHIN, S. G., and PANCHENKO, A. R.: ‘Film slicks
on the sea surface and some mechanisms of their formation’, Dynamics of
Atmosphere and Ocean, 1992, 16 (2), pp. 279–304 (in Russian)
40 ESPEDAL, H. A., and JOHANNESSEN, O. M.: ‘Detection of oil spills near off-
shore installations using synthetic aperture radar (SAR)’, International Journal
of Remote Sensing, 2000, 21 (11), pp. 2141–44
41 ESPEDAL, H. A., JOHANNESSEN, O. M., JOHANNESSEN, J. A. et al.:
‘COASTWATCH’95: A tandem ERS-1/SAR detection experiment of natural
film on the ocean surface’, Journal of Geophysical Research, 1998, 103 (C11),
24969–82


42 FLETT, D., and VACHON, P. W.: ‘Marine applications of SAR in Canada’,
Backscatter, 2004, 15 (1), pp. 16–21
43 FLETT, D., De ABREU, R., and FALKINGHAM, J.: ‘Operational experience
with ENVISAT ASAR wide swath data at the CIS’. Abstracts of ENVISAT
Symposium, Salzburg, Austria, 2004, Abstract No. 363
44 FREIDEY, A. I., CONROY, B. L., HOPPE, D. I., and BRANJI, A. M.: ‘Design
concepts of a 1-MW CW X-band transmit/receiver system for planetary radar’,
IEEE Transactions on Microwave Theory and Techniques, 1992, MTT-40 (6),
pp. 1047–55
45 FROM PATTERN TO PROCESS: The strategy of the Earth observing system.
EOS science steering committee report, vol. 2, NASA, 1988
46 FUREVIK, B. R., JOHANNESSEN, O. M., and SANDVIK, A. D.: ‘SAR –
retrieved wind in polar regions – comparison with in situ data and atmospheric
model output’, IEEE Transactions on Geoscience and Remote Sensing, 2002,
GE-40 (8), pp. 1720–32
47 GHIGLIA, D. C., and BROWN, W. D.: ‘Some methods for reducing propaga-
tion – induced phase errors in coherent imaging systems. II. Numerical results’,
Journal of the Optical Society of America, 1988, A5 (6), pp. 942–56
48 GILL, R. S., and VALEUR, H. H.: ‘Ice cover discrimination in the Greenland
waters using first-order texture parameters of ERS SAR images’, International
Journal of Remote Sensing, 1999, 20 (2), pp. 373–85
49 GILL, R. S., VALEUR, H. H., and NIELSEN, P.: ‘Evaluation of the RADARSAT
imagery for the operational mapping of sea ice around Greenland’. Proceedings
of symposium on Geomatics in the era of RADARSAT, Ottawa, Canada, 1997,
pp. 230–34
50 GOODMAN, J. W.: ‘An introduction to the principles and applications of
holography’, Proceedings of IEEE, 1971, 59 (9), pp. 1292–304
51 GOODMAN, J. W.: ‘Introduction to Fourier optics’ (McGraw-Hill Book
Company, New York, 1968)
52 GOUDEY, K. R., and SCIAMBI, A. F.: ‘High power X-band monopulse tracking
feed for the Lincoln laboratory long-range imaging radar’, IEEE Transactions
on Microwave Theory and Techniques, 1978, MTT-26 (5), pp. 326–32
53 GRIFFIN, C. R.: ‘Image quality parameters for digital synthetic aperture radar’.
Proceedings of symposium on RADAR, 1984, pp. 430–35
54 HAAS, C., DIERKING, W., BUSCHE, T., HOELEMANN, J., and
WEGENER, C.: ‘Monitoring polynya processes and sea ice production in the
Laptev sea’, Abstracts of ENVISAT Symposium, Salzburg, Austria, 2004,
Abstract No. 137
55 HARGER, R. O.: ‘Synthetic aperture radar systems. Theory and design’
(Academic Press, New York, 1970)
56 HASSELMANN, K., RANEY, R. K., PLANT, W. J. et al.: ‘Theory of syn-
thetic aperture radar ocean imaging: A MARSEN view’, Journal of Geophysical
Research, 1985, 90 (10), pp. 4659–86
57 HERMAN, G. T.: ‘Image reconstruction from projections. The fundamentals of
computerized tomography’ (John Wiley & Sons, New York, 1980)


58 ILYIN, A. L., and PASMUROV, A. Ya.: ‘Fluctuated objects and SAR charac-
teristics’, Izvestiya vysshykh uchebnykh zavedeniy – Radioelectronica, 1989, 32
(2), pp. 65–68 (in Russian)
59 ILYIN, A. L., and PASMUROV, A. Ya.: ‘Mapping of partial coherence
extended targets by SAR’, Zarubezhnaya Radioelectronica, 1985, 6, pp. 3–15
(in Russian)
60 ILYIN, A. L., and PASMUROV, A. Ya.: ‘Radar imagery characteristics of fluc-
tuated extended targets’, Radiotekhnika i Electronica, 1987, 31 (1), pp. 69–76
(in Russian)
61 IVANOV, A. V.: ‘On the synthetic aperture radar imaging of ocean surface
waves’, IEEE Journal of Oceanic Engineering, 1982, OE-7 (2), pp. 96–103
62 JOHANNESSEN, J., DIGRANES, G., ESPEDAL, H., JOHANNESSEN, O. M.,
and SAMUEL, P.: ‘SAR ocean feature catalogue’ (ESA Publications Division,
ESTEC, Noordwijk, The Netherlands, 1994)
63 JOHANNESSEN, J. A., SHUCHMAN, R. A., JOHANNESSEN, O. M.,
DAVIDSON, K. L., and LYZENGA, D. R.: ‘Synthetic aperture radar imaging
of upper ocean circulation features and wind fronts’, Journal of Geophysical
Research, 1991, 96 (9), pp. 10411–22
64 JOHANNESSEN, O. M., SANDVEN, S., PETTERSSON, L. H. et al.: ‘Near-
real time sea ice monitoring in the Northern Sea Route using ERS-1 SAR
and DMSP SSM/I microwave data’, Acta Astronautica, 1996, 38 (4–8),
pp. 457–65
65 JOHANNESSEN, O. M., VOLKOV, A. M., BOBYLEV, L. P. et al.: ‘ICE-
WATCH – Real-time sea ice monitoring of the Northern Sea Route using
satellite radar (a cooperative earth observation project between the Russian and
European Space Agencies)’, Earth Observations and Remote Sensing, 2000,
16 (2), pp. 257–68
66 JOHANNESSEN, O. M., and SANDVEN, S.: ‘ERS-1 SAR ice routing
of L’Astrolabe through the Northeast Passage’, Arctic News-Record, Polar
Bulletin, 8 (2), pp. 26–31
67 JOHANNESSEN, O. M., CAMPBELL, W. J., SHUCHMAN, R. et al.:
‘Microwave study programs of air–ice–ocean interactive processes in the sea-
sonal ice zone of the Greenland and Barents Seas’, in ‘Microwave remote sensing
of sea ice’ (American Geophysical Union, Washington, DC., 1992, Geophysical
Monograph No. 68), pp. 261–89
68 JOHANNESSEN, O. M., SANDVEN, S., DROTTNING, A., KLOSTER, K.,
HAMRE, T., and MILES, M.: ‘ERS-1 SAR sea ice catalogue’ (European Space
Agency, SP-1193, 1997)
69 KELL, R. E.: ‘On the derivation of bistatic RCS from monostatic
measurements’, Proceedings of IEEE, 1965, 53 (8), pp. 983–88
70 KELLER, J. B.: ‘Geometrical theory of diffraction’, Journal of the Optical
Society of America, 1962, 52 (2), pp. 116–30
71 KOCK, W. E.: ‘Pulse compression with periodic gratings and zone plane
gratings’, Proceedings of IEEE, 1970, 58 (9), pp. 1395–96
72 KONDRATENKOV, G. S.: ‘The signal function of a holographic radar’,
Radiotekhnika, 1974, 29 (6), pp. 90–92 (in Russian)


73 KONDRATENKOV, G. S.: ‘Synthetic aperture antennas’, in
VOSKRESENSKY, D. I. (Ed.): ‘Phased antenna arrays design’ (Radiotekhnika,
Moscow, 2003), pp. 399–416 (in Russian)
74 KONDRATENKOV, G. S., POTEKHIN, V. A., REUTOV, A. P., and
FEOKTISTOV, Yu. A.: ‘Earth surveying radars’ (Radio i Svyaz, Moscow, 1983)
(in Russian)
75 KORSBAKKEN, E., JOHANNESSEN, J. A., and JOHANNESSEN, O. M.:
‘Coastal wind field retrievals from ERS synthetic aperture radar images’, Journal
of Geophysical Research, 1998, 103 (C4), pp. 7857–74
76 KORSNES, R.: ‘Some concepts for precise estimation of deformations/rigid
areas in polar pack ice based on time series of ERS-1 SAR images’, International
Journal of Remote Sensing, 1994, 15 (18), pp. 3663–74
77 KRAMER, H.: ‘Observation of the Earth and its Environment. Survey of
Missions and Sensors’ (Springer, Berlin, 1996)
78 KURIKSHA, A. A.: ‘Moving target 2D radar imaging by combination of the
aperture synthesis and tomography’, Radiotekhnika i Electronica, 1994, 39 (4),
pp. 613–18 (in Russian)
79 KWOK, R., and CUNNINGHAM, G. F.: ‘Seasonal ice area and volume pro-
duction of the Arctic Ocean: November 1996 through April 1997’, Journal of
Geophysical Research, 2002, 107 (C10), pp. 8038–42
80 LANDSBERG, G. S.: ‘Optics’ (Nauka, Moscow, 1970, 6th edn) (in Russian)
81 LARSON, R. W., ZELENKA, J. S., and JOHANSEN, E. L.: ‘A microwave
hologram radar system’, IEEE Transactions on Aerospace and Electronic
Systems, 1972, AES-8 (2), pp. 208–17
82 LARSON, R. W., ZELENKA, J. S., and JOHANSEN, E. L.: ‘Microwave
holography’, Proceedings of IEEE, 1969, 57 (12), pp. 2162–64
83 LARUE, A., HOFFMAN, K. N., HURLBUT, D. E., KIND, H. J., and
WINTROUB, A.: ‘94-GHz radar for space object identification’, IEEE Transactions
on Microwave Theory and Techniques, 1969, MTT-17 (12), pp. 1145–49
84 LE HEGARAT-MUSCLE, S., ZRIBI, M., ALEM, F., WEISSE, A., and
LOUMAGNE, C.: ‘Soil moisture estimation from ERS/SAR data: Toward
an operational methodology’, IEEE Transactions on Geoscience and Remote
Sensing, 2002, GE-40 (12), pp. 2647–58
85 LEITH, E. N.: ‘Quasi-holographic techniques in the microwave region’,
Proceedings of IEEE, 1971, 59 (9), pp. 1305–18
86 LEITH, E. N., and INGALLS, F. L.: ‘Synthetic antenna data processing by
wavefront reconstruction’, Applied Optics, 1968, 7 (3), pp. 539–44
87 LEITH, E. N.: ‘Side-looking synthetic aperture radar’, in CASASENT, D. (Ed.):
‘Optical data processing applications’ (Springer-Verlag, Berlin, Heidelberg,
New York, 1978) Chapter 4
88 LEWITT, R. M.: ‘Reconstruction algorithms: transform methods’, Proceedings
of IEEE, 1983, 71 (3), pp. 390–408
89 LIKHACHEV, V. P., and PASMUROV, A. Ya.: ‘Aircraft radar imaging under
signal partial coherence conditions’, Radiotekhnika i Electronica, 1999, 44 (3),
pp. 294–300 (in Russian)


90 MAYZELS, E. N., and TORGOVANOV, V. A.: ‘Measurement of
scattering characteristics of radar targets’ (Sovetskoe Radio, Moscow, 1972)
(in Russian)
91 MEHRHOLZ, D., and MAGURA, K.: ‘Radar tracking and observation of non-
cooperative space objects by reentry of Salut-7-Kosmos-1686’. Proceedings
of International workshop of European Space Operations Center, Darmstadt,
Germany, 1991, pp. 1–8
92 MEIER, R. W.: ‘Magnification and third-order aberrations in holography’,
Journal of the Optical Society of America, 1965, 55 (7), pp. 987–91
93 MELLING, H.: ‘Detection of features in first-year pack ice by synthetic aper-
ture radar (SAR)’, International Journal of Remote Sensing, 1998, 19 (6),
pp. 1223–49
94 MENSA, D. L.: ‘High resolution radar cross-section imaging’ (Artech House,
Dedham, USA, 1991)
95 MERSEREAU, R. M., and OPPENHEIM, A. V.: ‘Digital reconstruction of
multidimensional signals from their projections’, Proceedings of IEEE, 1974,
62 (10), pp. 1319–38
96 MILER, M.: ‘Holography’ (SNTL, Prague, Czechoslovakia, 1974) (in Czech)
97 MILES, V. V., BOBYLEV, L. P., MAKSIMOV, S. V., JOHANNESSEN, O. M.,
and PITULKO, P. M.: ‘An approach for assessing boreal forest conditions based
on combined use of satellite SAR and multispectral data’, International Journal
of Remote Sensing, 2003, 24 (22), pp. 4447–66
98 MITSMAKHER, M. Yu., and TORGOVANOV, V. A.: ‘Microwave anechoic
chambers’ (Radio i Svyaz, Moscow, 1982) (in Russian)
99 MOORE, R. K.: ‘Tradeoff between picture element dimensions and noncoherent
averaging in side-looking airborne radar’, IEEE Transactions on Aerospace and
Electronic Systems, 1979, AES-15 (5), pp. 697–708
100 MUNSON, D. C., JR., O’BRIEN, J. D., and JENKINS, W. K.: ‘A tomographic
formulation of spotlight-mode synthetic aperture radar’, Proceedings of IEEE,
1983, 71 (8), pp. 917–25
101 NGHIEM, S.: ‘On the use of ENVISAT ASAR for remote sensing of sea ice’.
Abstracts of ENVISAT Symposium, Salzburg, Austria, 2004, Abstract No. 672
102 ONSTOTT, R. G.: ‘SAR and scatterometer signatures of sea ice’, in CARSEY, F.
(Ed.): ‘Microwave remote sensing of sea ice’ (AGU Geophysical Monograph
68, AGU, 1992), pp. 73–104
103 PAPOULIS, A.: ‘Systems and transforms with applications in optics’
(McGraw-Hill, New York, 1968)
104 PASMUROV, A. Ya.: ‘Aircraft radar imaging’, Zarubezhnaya Radioelectronica,
1987, 12, pp. 3–30 (in Russian)
105 PASMUROV, A. Ya.: ‘Microwave holographic process modelling based on the
edge waves method’, Radiotehnika i Electronica, 1971, 26 (10), pp. 2030–33
(in Russian)
106 PASMUROV, A. Ya.: ‘Tomographic methods for radar imaging’. Proceedings
of the first All-Union conference on Optical information processing, Leningrad,
USSR, 1988, pp. 85–86 (in Russian)


107 PETTERSSON, L. H., SANDVEN, S., DALEN, O., MELENTYEV, V. V.,
and BABICH, N. G.: ‘Satellite radar ice monitoring for ice navigation of
the ARCDEV tanker convoy in the Kara sea’. Proceedings of the fifteenth
international conference on Port and ocean engineering under Arctic conditions,
Espoo, Finland, 1999, vol. 1, pp. 181–90
108 POLYANSKY, V. K., and KOVALSKY, L. V.: ‘Information content of
optical radiation’. Proceedings of the third All-Union School on Holography,
Leningrad, USSR, 1972, pp. 53–71 (in Russian)
109 POPOV, S. A., ROZANOV, B. A., ZINOVIEV, J. S., and PASMUROV, A. Ya.:
‘Basic principles of microwave holograms inverse synthesis’. Proceedings of the
eighth All-Union School on Holography, Leningrad, USSR, 1976, pp. 275–89
(in Russian)
110 PORCELLO, L. J.: ‘Turbulence-induced phase errors in synthetic aperture
radars’, IEEE Transactions on Aerospace and Electronic Systems, 1970, AES-6
(5), pp. 634–44
111 RAMSAY, B. R., WEIR, L., WILSON, K., and ARKETT, M.: ‘Early results of
the use of RADARSAT ScanSAR data in the Canadian Ice Service’. Proceedings
of the fourth Symposium on Remote sensing of the polar environments, Lyngby,
Denmark, 1996, ESA SP-391, pp. 95–117
112 RANEY, R. K.: ‘SAR processing of partially coherent phenomena’, Interna-
tional Journal of Remote Sensing, 1980, 1 (1), pp. 29–51
113 RINO, C. L., and FREMOUW, E. J.: ‘The angle dependence of singly scattered
wave fields’, Journal of Atmospheric and Terrestrial Physics, 1977, 39 (5),
pp. 859–68
114 RINO, C. L., and OWEN, J.: ‘Numerical simulations of intensity scintillation
using the power law phase screen model’, Radio Science, 1984, 19 (3),
pp. 891–908
115 RINO, C. L., GONZALEZ, V. H., and HESSING, A. R.: ‘Coherence band-
width loss in transionospheric radio propagation’, Radio Science, 1981, 16 (2),
pp. 245–55
116 RINO, C. L.: ‘On the application of phase screen models to the interpretation of
ionospheric scintillation data’, Radio Science, 1982, 17 (4), pp. 855–67
117 ROBINSON, I. S.: ‘Measuring the oceans from space. The principles and
methods of satellite oceanography’ (Springer-Praxis, Chichester, UK, 2004)
118 RYLE, M.: ‘Radio telescopes of large resolving power’, Reviews of Modern
Physics, 1975, 47 (7), pp. 557–66
119 SANDVEN, S., DALEN, O., LUNDHAUG, M., KLOSTER, K.,
ALEXANDROV, V. Y., and ZAITSEV, L. V.: ‘Sea ice investigations in the
Laptev sea area in late summer using SAR data’, Canadian Journal of Remote
Sensing, 2001, 27 (5), pp. 502–16
120 SANDVEN, S., JOHANNESSEN, O. M., MILES, M. W., PETTERSSON, L. H.,
and KLOSTER, K.: ‘Barents sea seasonal ice zone features and processes from
ERS-1 synthetic aperture radar: Seasonal ice zone experiment 1992’, Journal of
Geophysical Research, 1999, 104 (C7), pp. 15843–57


121 SAPHRONOV, G. S., and SAPHRONOVA, A. P.: ‘An introduction to
microwave holography’ (Sovetskoe Radio, Moscow, 1973) (in Russian)
122 SCHEUCHL, B., CAVES, R., FLETT, D., DE ABREU, R., ARKETT, M., and
CUMMING, I.: ‘The potential of cross-polarization information for operational
sea ice monitoring’. Abstracts of ENVISAT Symposium, Salzburg, Austria,
2004, Abstract No. 493
123 SEA ICE INFORMATION SERVICES IN THE WORLD. WMO N 574.
Secretariat of the World Meteorological Organization, Geneva, Switzerland,
2000
124 SEKISTOV, V. N., GAVRIN, A. L., ANDREEV, V. Yu. et al.: ‘Low-orbit satellite
radar imaging with narrow-band signals’, Radiotehnika i Electronica, 2000, 45
(7), pp. 830–36 (in Russian)
125 SEPHTON, A. J., and PARTINGTON, K. C.: ‘Towards operational monitoring
of Arctic sea ice by SAR’, in TSATSOULIS, C., and KWOK, R. (Eds): ‘Analysis
of SAR data of the polar oceans. Recent advances’ (Springer-Praxis, Berlin,
Heidelberg, 1998), pp. 259–79
126 SHUCHMAN, R. A., LYZENGA, D. R., LAKE, B. M., HUGHES, B. A.,
GASPAROVICH, R. F., and KASISCHKE, E. S.: ‘Comparison of joint Canada–
U.S. ocean wave investigation project synthetic aperture radar data with internal
wave observations and modeling results’, Journal of Geophysical Research,
1988, 93 (C10), pp. 12283–91
127 SCUDDER, H. J.: ‘Introduction to computer aided tomography’, Proceedings
of IEEE, 1978, 66 (6), pp. 628–37
128 SOH, L.-K., TSATSOULIS, C., and HOLT, B.: ‘Identifying ice floes and
computing ice floe distribution in SAR images’, in TSATSOULIS, C., and
KWOK, R. (Eds): ‘Analysis of SAR data of the polar oceans. Recent advances’
(Springer-Praxis, Berlin, Heidelberg, 1998), pp. 9–34
129 STEINBERG, B. D.: ‘Microwave imaging with large antenna arrays. Radio
camera principles and techniques’ (John Wiley & Sons, New York, 1983)
130 STEINBERG, B. D.: ‘Aircraft radar imaging with microwaves’, Proceedings of
IEEE, 1988, 76 (12), pp. 1578–92
131 STROKE, G. W.: ‘An introduction to coherent optics and holography’
(Academic Press, New York, London, 1966)
132 TATARSKY, V. I.: ‘Wave propagation in a turbulent atmosphere’ (Nauka,
Moscow, 1967) (in Russian)
133 TATARSKY, V. I.: ‘Wave propagation in a turbulent medium’ (McGraw-Hill,
New York, 1961)
134 THOMPSON, M. C., and JANES, H. B.: ‘Measurements of phase front distor-
tion on an elevated line-of-sight path’, IEEE Transactions on Aerospace and
Electronic Systems, 1970, AES-6 (5), pp. 645–56
135 TISON, C., NICOLAS, J.-M., TUPIN, F., and MAITRE, H.: ‘A new statisti-
cal model for Markovian classification of urban areas in high-resolution SAR
images’, IEEE Transactions on Geoscience and Remote Sensing, 2004, GE-42
(10), pp. 2046–57


136 TITOV, M. P., TOLSTOV, E. F., and FOMKIN, B. A.: ‘Mathematical modelling
in aviation’, in BELOCERKOVSKY, S. M. (Ed.): ‘Problems of cybernetics’
(Nauka, Moscow, 1983), pp. 139–45
137 UFIMTZEV, P. Ya.: ‘Method of edge waves in physical diffraction theory’
(Sovetskoe Radio, Moscow, 1962) (in Russian)
138 VARGANOV, M. E., ZINOVIEV, J. S., ASTANIN, L. Yu. et al.: ‘Aircraft radar
characteristics’ (Radio i Svyaz, Moscow, 1985) (in Russian)
139 WIRTH, W. D.: ‘High resolution in azimuth for radar targets moving on a straight
line’, IEEE Transactions on Aerospace and Electronic Systems, 1980, AES-16
(1), pp. 101–3
140 WALKER, J. L.: ‘Range-Doppler imaging of rotating objects’, IEEE Transactions
on Aerospace and Electronic Systems, 1980, AES-16 (1), pp. 23–52
141 YEH, K. C., and LIN, C. H.: ‘Radio wave scintillation in the ionosphere’,
Proceedings of IEEE, 1982, 70 (4), pp. 324–60
142 YU, F. T. S.: ‘Introduction to diffraction, information processing, and hologra-
phy’ (The MIT Press, Cambridge, MA, 1973)
143 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Holographic principles appli-
cation for SAR analysis’, in POTEKHIN, V. A. (Ed.): ‘Image and signal
optical processing’ (USSR Academy of Sciences, Leningrad, 1981), pp. 3–15
(in Russian)
144 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Evaluation of SAR phase fluctu-
ations caused by turbulent troposphere’, Radiotehnika i Electronica, 1975, 20
(11), pp. 2386–88 (in Russian)
145 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Method for recording and pro-
cessing of 1D Fourier microwave holograms’, Pisma v Zhurnal Tekhnicheskoy
Fiziki, 1977, 3 (1), pp. 28–32 (in Russian)
146 ZINOVIEV, J. S., and PASMUROV, A. Ya.: ‘Methods of inverse aperture synthe-
sis for radar with narrow-band signals’, Zarubezhnaya Radioelectronica, 1985,
3, pp. 27–39 (in Russian)



List of abbreviations

1D One-dimensional
2D Two-dimensional
3D Three-dimensional
AB Adaptive beamforming
AEC Anechoic chamber
CAT Computer-aided tomography
CBP Convolutional backprojection method
CCA Circular convolution algorithm
CIS Canadian Ice Service
DFT Discrete Fourier transform
ECP Extended coherent processing
ESA European Space Agency
EWM Edge waves method
FCC Frequency contrast characteristics
FFT Fast Fourier transform
GSSR Goldstone solar system radar
GTD Geometrical theory of diffraction
IFT Inverse Fourier transform
ISAR Inverse synthetic aperture radar
LFM Linear frequency modulation
LRIR Long-range imaging radar
NRCS Normalised radar cross-section
NBM Narrowband mode
PH Partial hologram
PRR Pulse repetition rate
RCS Radar cross-section
RLOS Radar line of sight
SAP Synthetic antenna pattern
SAR Synthetic aperture radar
SCF Space carrier frequency
SCS Specific cross-section
SGL Spatial grey level
SST Sea surface temperature
WBM Wideband mode
WMO World Meteorological Organization

zino: “abbreviations” — 2005/11/7 — 18:58 — page 241 — #1


Index

Abbe’s formula 21, 108
adaptive beamforming 33
adaptive beamforming algorithm 21
aerodynamic target 148, 151–2
airborne radars 60, 79, 196
aircraft 60
aircraft imaging 21–3, 215
algorithms
   adaptive beamforming 21
   Calman’s 186
   circular convolution 34–5
   convolution back-projection 18–19, 73–5, 118–19
   interpolation 18, 73–5
   processing 130–45
   range-Doppler 34–5
   reconstruction 221
   tomographic 70, 72–7
   Wiener 185–6
all-weather mapping 24
Almaz-1 192, 195
amplitude factor 12
anechoic camera 114, 116
anechoic chamber 30, 113, 218–22
   echo-free zone 218
   reconstruction algorithm 221
Antarctic 195
antenna approach 33, 147
antenna arrays 20–2, 60
antirecognition devices 229
aperture angle 33
aperture characteristics 173–8
aperture noise 176
aperture performance 173
aperture synthesis 31–3
aposterior techniques 184–5
archaeological surveys 24
Arctic 195–206
   sea ice monitoring 195–8
artificial reference wave 38–9, 51
ASAR 193–4, 196–7, 205
   operation modes 193–4
aspect variation 126
autocorrelation coherence 84
autocorrelation function 84
averaging of resolution elements 87, 94
azimuth ambiguity function 172
azimuth defocusing 205–6
azimuthal resolution 180–1
azimuth-range 49
back projection 131
bathymetry 195
Bayes classifier 226, 228
Bessel functions 29
bistatic radar 101–2
bistatic scattering 28–9
Calman’s filtering algorithms 186
Carman’s model 160–2
carrier track instabilities 57–8
CAT radar 76
circular convolution algorithm 34–5
classification: see target classification
cloud effects 166
coherence 40
coherence length 40
coherence stability 40–1, 43
coherent imaging 47–8
aperture performance 173 coherent imaging 47–8

zino: “index” — 2005/11/7 — 15:38 — page 243 — #1



coherent radar 21–3
   holographic processing 36–41
   tomographic processing 41–8
coherent signal 40–1
coherent summation of partial components
   1D 139
   2D viewing geometry 131–42
   3D viewing geometry 141–5
   complexity 136–7, 140–2, 145
complex microwave Fourier hologram 110–15
complex targets 27–8
computer-aided tomography 74, 76
computerised tomography 14–20, 48
   remote-probing 15
   remote-sensing 15
   see also tomographic processing
contrast 94–9, 175, 177
convolution back-projection algorithm 18–19, 73–5, 118–19
correlated processing 35, 49
correlation function 96
critical volume 179
cross range resolution 148–51
cross-correlation approach 33–4
cylinder 29–31, 219, 221–4
   local scattering characteristics 223–4
dark level 175, 177
deformed ice 201, 204
density distribution 14–16, 19
diffraction 29, 116
diffraction-limited image 127
digital processing 112–16, 145
direct synthesis 31–2
distortion 176
Doppler frequency shift 27
Doppler-range method: see range-Doppler method
dynamic range 175, 177
earth surface imaging 20, 34, 60
   satellite SARs 191–215
earth surface survey 34, 70–1, 79
echo signal 27, 46, 148, 182–3
edge wave method 29
electron density fluctuations 166–7
ENVISAT 193–4, 196–7, 205
ERS-1 20, 193, 195, 197, 212
ERS-2 20, 193, 195, 197, 206, 208, 210–11, 214
   mesoscale ocean phenomena 208, 210–11, 214
   sea ice 206
extended coherent processing 35–6
extended targets 28–9, 31, 79–85
   compact 28–9
   partially coherent 85–6
   proper 28, 31
fast ice 201, 204
first-year ice 200–2
flop 136
focal depth 8–10, 14, 67–70
focal length 7
focal point 7
focused aperture 54
focusing depth 59
forestry 195
Fourier microwave hologram 39–40, 52–3
   complex 110–15
   rotating target 101–9
   simulation 112–16
Fourier space 16, 18
Fourier transform 18
Fraunhofer microwave hologram 39–40, 52–3
frequency stability 40–1
frequency-contrast characteristic 95–8
Fresnel lens 49
Fresnel microwave hologram 39–40, 52
Fresnel zone plate 33, 50
Fresnel-Kirchhoff diffraction formula 38
friction velocity 207
front-looking holographic radar 60–70
   hologram recording 60–3
   image reconstruction 62–7
   resolution 61–2
gain in the signal-to-noise ratio 174–5
geological structures 24
geometric accuracy 80
geometrical theory of diffraction 29
globules 158–9
Goldstone Solar System Radar 41
grease ice 198–9
grey-level resolution 178–81
half-tone resolution 178–81
Hankel transform 75–6


heuristic algorithm 187–8
hologram 11–14
   real image 12–13, 49–50, 62–6
   virtual image 12–13, 49–50, 62–6
   wideband 123–4
   see also microwave hologram
hologram function 11–12
hologram modulation index 12
hologram recording 10–11
   1D 51
   front-looking holographic radar 60–3
   SAR 50–3
holographic image 14
holographic processing
   coherent radar 36–41
   front-looking radar 60–3
   ISAR 35–6
   rotating targets 101–16
   SAR 33–4
holographic technique 1–2
holography 10–14
homomorphic image processing 186
Huygens-Fresnel integral 53
ice edge 201–2, 205
ice floes 200–2
ice monitoring 195–8
ice navigation 196–7, 201
ice parameters 195, 197
icebergs 195–6, 202, 206
icebreakers 197–8, 201
ICEWATCH 196
image
   computerised tomography 14–20
   holographic 10–14
   microwave 20–5
   optical 7–10
   thin lens 7–8
image intensity 81, 87–96
image interpretability 178–80
image interpretation 80
image quality 24, 57–60, 77, 80–2, 173–81
   integral evaluation 177–81
image reconstruction 11–14, 16, 36, 124
   coherent summation of partial components 126–30
   digital simulation 112–16
   front-looking holographic radar 62–7
   microwave hologram 53–6
   spotlight SAR 72–7
image smoothing 88, 91, 93–4
image stability 174
imaging radars 7
imaging time 176
impulse response 152–3
incoherent signal integration 81, 87, 90–4
inertia region 159–61
INMARSAT 196
integral image 132–5
interference pattern 11, 13
internal waves 212–14
interpolation algorithm 18, 73–5
interpretability 178–80
intrinsic aperture noise level 176
inverse aperture synthesis 23, 147–8, 215–17
inverse Fourier transform 118
inverse source problem 16
inverse synthesis 31–2
   rotating target 101–9
inverse synthetic aperture radar: see ISAR
ionosphere 147
   electron density fluctuations 166–7
   turbulence 166–7, 172
   turbulence parameter 167
ISAR 32–3, 148
   instability 40–1
   signal processing 34–6
   tomographic processing 41
Kell’s theorem 102
kernel function 139
Kosmos-1870 192, 195
linear filtering model 124
linear filtration theory 95, 98
linearly moving target 147
local responses 30
local statistics technique 186–8
long-range imaging radar 215–16
low contrast targets 94–9
magnification 8, 63–7
mean image power 175
median filtering 184
microholograms 13–14
micronavigation noise 173
microwave holograms 2, 36–40
   1D 101–12
   amplitude-phase 38
   Fourier 39–40, 46, 52–3, 101–16


microwave holograms (continued)
   Fraunhofer 39–40, 52–3
   Fresnel 39–40, 52
   multiplicative 37, 50
   narrowband 131–2, 134, 136–7, 140, 145
   phase-only 37–8
   quadrature 37–8, 106, 115
   wideband 133, 136–42
microwave holographic receiver 38–9
microwave image 7, 20, 23–4
microwave imaging 20–5, 101
microwave radars 7
microwaves 1
monostatic scattering 28–9
moving targets 58–9
   rotating 101–45
   straight line 147–56
multibeam processing: see multi-ray processing
multiplicative noise 98
multi-ray processing 87, 94, 184
narrowband microwave hologram 131–2, 134, 136–7, 140, 145
Newton’s formulae 7–8
nilas 198, 200–1
noise dark level 175
noise distribution 177–8
nonparametric classifier 226, 228
normalised radar cross-section 205–7
Northern Sea Route 196–8
ocean circulation 194–5
ocean currents 94, 212
ocean dynamics 191
ocean phenomena
   mesoscale 204–15
   surface velocity 205
   see also sea surface imaging
ocean waves 191, 194–5
   internal 212–14
   see also wave imaging
oceanography 191–2
oil spills 94, 195, 211–12
old ice 200, 202
optical image 7–10, 23
   real 10
   virtual 10
orthoscopic image 10, 12, 14
pancake ice 200, 203
panoramic radars 20
partial coherence 79, 84
partial holograms 127–45
   spectral components 136
partial images 130, 134, 137, 140–3, 145
   radial 142
   transverse 137, 140–3
partially coherent signal 40, 148–51
   radar image modelling 152–6
partially coherent target imaging 79
   extended 83–6
   low contrast 94–9
   mathematical model 85–6
   statistical image characteristics 87–94
path instabilities 151–6
pattern recognition theory 225
phase 12, 14
phase errors 157–72
   turbulent ionosphere 172
   turbulent troposphere 167–72
phase fluctuations 167–70, 172
phase noise 152, 156
phase-only hologram 37–8
pixels 176
planet surveys 41, 217
plate contrast coefficient 12
point targets 27, 58
polar format processing 35–6
polar grid 18
potential functions 226, 228
potential SAR characteristics 173–5
principal planes 7–8
probing 14–18
projection slice theorem 17–18, 47, 72, 74
pseudoscopic image 10, 12, 14, 64
quasi-holographic radar systems 2, 31, 49–60
   hologram recording 50–3
   image reconstruction 53–6
radar characteristics 175–8
radar cross-section 28, 30
radar data processing 124–6
radar imaging 2
   basic concepts 7–25
   methods 27–48
   microwave 20–5
   partially coherent signals 152–6


radar interferometer 217
radar responses 217–19
   closed tests 218–19
   open tests 218
RADARSAT 193–7, 201–2, 204
radio camera 21
radio telescope 16
radio vision 1
radiometric precision 80
radiometric resolution 81, 94
rain cells 212–13
range resolution 180–1
range-Doppler algorithm 34–5
range-Doppler method 1–3, 33, 215
Rayleigh model 182–3
real antennas 20
real apertures 20–1
recognition: see target recognition
reconstruction algorithm 221
reference voltage 22
reference wave 11–13
refractive index, troposphere 157–66
resolution 23, 33, 59–60, 177
   azimuthal 180–1
   cross range 148–51
   defocused microwave image 108–9
   front-looking holographic radar 61–2
   grey-level 178–81
   half-tone 178–81
   path instabilities 153–6
   potential 173
   radiometric 81, 94
   range 180–1
   spatial 80, 94
   spotlight SAR 76
   synthesised Fourier hologram 107–8
resolution redistribution 180–1
resolving power: see resolution
Rice reflection model 182–3
rotating target imaging
   holographic approach 101–16
   tomographic approach 117–45
sample characteristic 174
sampling theorem 21
SAR 31–2, 85–6
   holographic approach 33–4
   instability 40
   low contrast targets 94, 97–8
   potential characteristics 173–5
   satellite SARs 191–5
   signal processing 33–4
   spaceborne 167–8
   test ground 176
   turbulence 167–8
   see also side-looking synthetic aperture radars
   see also spot-light SAR
satellite imaging 79, 120–4, 129, 131, 143, 215
   aspect variation 120–1
satellite SARs 191–5
scaling 64–67, 69
ScanSAR 195–7, 201–2, 204
scatterers 28–31
scattering matrix 217
sea currents 94, 212
sea ice 192–3
   classification 198–204
   imagery 198–206
   monitoring 195–8
   parameters 198, 203–4
sea surface imaging 79–80, 99, 204–5
   rough sea surface 82–5
   see also ocean phenomena
   see also wave imaging
sea surface temperature 207, 209
SEASAT 191–2, 195
sharpness 175
ship wakes 214–15
ship wrecks 211
side-looking radar 1–3, 21
side-looking synthetic aperture radars 31–2, 49–60, 79
   hologram recording 50–3
   image reconstruction 53–6
   see also SAR
sigma-filter 187–8
sign vectors 225–8
signal processing 33–6
signal-to-noise ratio 88–90, 92–4
   gain 174–5
SIR-A 191–2
SIR-B 191–2, 195
SIR-C 192
Smith-Wentraub formula 157
soil classification 24
soil moisture 195
space carrier frequency 12, 109


space frequency 44–8
Space Shuttle 191–2
spaceborne SAR 167–8
spacecraft identification 79, 126, 215–17
   2D 215–17
   see also satellite imaging
spatial resolution 80, 94
specific cross-section 80–1, 95–6
speckle 24–5, 80, 175, 177, 181–9
   statistical characteristics 182–4
   suppression 184–9
speckle field 14
spot-light SAR 32, 70–7
   image reconstruction 72–7
   resolution 76
squall lines 212
statistical image characteristics 87–94
structure function 159, 163–5
subsurface probing 24
subwater imaging 24
surface hologram 124
swath width 176
swell 208–10
synthesis range 97–8
synthetic antenna 21–2
synthetic antenna pattern 173
synthetic aperture length 23
synthetic aperture pattern 23, 33, 173–4
synthetic aperture radar imaging 19
synthetic aperture radars: see SAR
synthetic apertures 20–3
   see also aperture synthesis
target characteristics 217–24
target classification 222–8
target models 27–31
target recognition 222–9
   efficiency 226, 228
   mathematical model 225–6
   probability 226, 228
   sign vectors 225–8
target reflectivity 43–5
target viewing 42
targets
   aerodynamic 148, 151–2
   complex 27–8
   extended 28–9, 31, 79–82
   low contrast 94–9
   moving 58–9
   moving in a straight line 147–56
   partially coherent 79, 83–99
   point 27, 58
   rotating 101–45
three-dimensional images 10, 12, 14, 20, 25, 69, 127
three-dimensional viewing geometry 119–26
tomographic algorithms
   spot-light SAR 70, 72–7
tomographic processing
   coherent radar 41–8
   frequency domain 117–18
   ISAR 35–6, 41
   rotating targets 117–45
   SAR 33
   space domain 118–19
   spot-light SAR 70–7
   see also computerised tomography
tomographic techniques 2
tomography 14
transmittance 12
troposphere 157–72
   near-earth 159
   phase errors 167–72
   refractive index distribution 157–66
   turbulence 158–72
true image 14
turbulence 158–63, 165–72
   inner-scale size 159
   ionosphere 166–7, 172
   isotropic 158
   outer-scale size 158
   troposphere 158–72
turbulent flows 148
two-dimensional image 127
two-dimensional viewing geometry 41–8
   rotating targets 131–42
uncertainty function 61
unfocused aperture 54
unistatic radar 101–3
upwelling 207–9
urban area imaging 24–5
urban area monitoring 195
velocity bunching effect 209–10


wave imaging 82–5, 205–6, 208–9
   see also ocean waves
whirls 159–61, 166–7, 172
wideband hologram 123–4
wideband microwave hologram 133, 136–42
   processing algorithms 138–9
Wiener filtering algorithm 185–6
Wiener-Khinchin theorem 166
wind slicks 94
wind squall 212
wind stress 208
X-ray imaging 2
X-ray tomography 17–19
young ice 198, 200–1



Radar, Sonar and Navigation Series 19

Radar Imaging and Holography
A. Pasmurov and J. Zinoviev

Increasing information content is an important scientific problem in modern observation systems development. Radar, or microwave, imaging, a technique which combines radar techniques with digital or optical information processing, can be used for this purpose. Drawing on their own research, the authors provide an overview of the field and explain why a unified approach based on wave field processing techniques, including holographic and tomographic approaches, is necessary in high resolution radar design. Such techniques use the complex field incident on an observation surface to produce a hologram, which can be used to reconstruct an image of the object or to restore some of its physical parameters. This makes it possible to extract the size, coordinates and radar cross-section of individual scattering centres.

The book focuses on holography and tomography for quasimonochromatic and broadband signals, and gives detailed coverage of the basic physical methods, inverse problems and mathematical principles. It also contains discussion of new areas in imaging radar theory, holographic radar, the questions of estimation and improving radar image quality, and finally various practical applications in the fields of space, airborne radar, air traffic control, medical diagnostics and remote sensing.

Alexander Ja. Pasmurov is Executive Director of A.S. Popov Institute of Radio Broadcasting Reception and Acoustics, St Petersburg, Russia.

Julius S. Zinoviev is the Scientific Adviser of A.S. Popov Institute of Radio Broadcasting Reception and Acoustics, St Petersburg, Russia.

The Institution of Engineering and Technology
www.theiet.org
ISBN 0 86341 502 4
ISBN 978-0-86341-502-9
