Automatic Reacquisition of Satellite Positions by Detecting Their Expected Streaks in Astronomical Images
Martin P. Lévesque
Defence R&D Canada – Valcartier, 2459 Boul. Pie XI North, Québec, QC, G3J 1X5, Canada,
martin.levesque@drdc-rddc.gc.ca
ABSTRACT
Artificial satellites, and particularly space junk, drift from their known orbits. In the surveillance-of-space context,
they must be observed frequently to ensure that their corresponding orbital elements are up-to-date. Autonomous
ground-based optical systems are regularly tasked to observe these objects, measure their positions, and then update
their orbital parameters accordingly. The real satellite positions are provided by the detection of the satellite streaks
in the astronomical images specifically acquired for this purpose. This paper presents the image processing
techniques used to detect and extract the satellite positions. The methodology comprises several processing steps:
image background estimation and removal, star detection and removal, an iterative matched filter for streak
detection, and finally false-alarm rejection algorithms. This detection methodology is able to detect very faint
objects. Simulated data were used to evaluate the methodology's performance and to determine the sensitivity limits
within which the algorithm can perform detection without false alarms, which is essential to avoid corrupting the
orbital-parameter database.
1. INTRODUCTION
With the development of space exploitation technologies, Earth is now surrounded by thousands of orbiting
objects, and their number will keep increasing because new satellites are launched regularly. A high level of
situational awareness is required to minimize the risk of a collision in space. The recent collision between Cosmos
2251 and Iridium 33 is a reminder of the need for this type of situational awareness.
The U.S. Space Surveillance Network (SSN) has the mission of maintaining up-to-date knowledge of the orbital
parameters of every detectable Earth-orbiting object. External forces perturb object motion and change the orbital
parameters; thus the satellites must be periodically re-observed and their orbital elements updated to keep the
database current. These observations are performed with ground-based telescopes, radars, and also with optical
satellites.
Canada is currently developing remotely-operated low-cost Deep Space optical sensors (Ground Based Optical
project) along with automatic processing and reporting capabilities using commercially available sensors,
communication and computing technologies for Surveillance of Space [1-4]. An example of such a system is a
series of small COTS observatories [3] that each use 0.35m telescopes along with an Apogee CCD camera and
computer controlled robotic mounts manufactured by Software Bisque. Such a system can detect satellites to
magnitude 15 [5], and magnitude 16 under clear, moonless skies. This computer-controlled system can acquire
hundreds of images in a single tracking session. Initial developmental issues pertaining to automatic acquisition
have recently been solved; what remains is to process and analyze the data collected by the sensors. The automatic
detection and reporting of satellite positions is the issue addressed in this paper.
2. ACQUISITION PROCESS
The sensor’s acquisition task is programmed with a list of resident space objects (RSO; active satellites or other
space debris) which require new observations. This list is usually made up of a selection of obsolete (i.e., becoming
erroneous or “stale”) TLEs (Two-Line Element sets; a data format that describes the mean orbital parameters used
with the Simplified General Perturbations theory). Usually the degradation rate of the orbital parameters is known,
and a TLE is declared obsolete when the cumulative error rises above a predetermined threshold.
This error must be maintained under a certain limit, ideally less than the sensor field-of-view (FOV) to ensure a
successful reacquisition.
Once the task planning is completed, the sensor starts the series of acquisitions. It is pointed at the appropriate time
and position where the RSO is expected, and one or several images are acquired. The algorithms then detect the
RSOs in the images and measure and report angles-only positions. The position accuracy is assured by the
calibrated astrometry provided by the background image stars. Using this new information, new TLEs are generated
and the database is updated. This observation cycle is illustrated in Fig. 1.
This acquisition procedure offers an advantage that the processing algorithms can exploit: the sought RSO is already
known. Its exact position is not known, but its angular rate and direction are, and its approximate brightness is also
known. The satellite rotation (tumbling) is easy to measure, but it is not currently recorded in the database. The
angular speed and direction are used (along with the optical FOV, exposure time, and pointing parameters) to
predict the length and direction of the satellite streak, and this information is used to build a corresponding matched
filter. Thus, the algorithms presented in this paper were developed specifically for the reacquisition task; they are
not appropriate for the detection of unexpected objects with unknown orbital parameters.
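As an illustration of this prediction step, the minimal sketch below derives the expected streak geometry from the known angular rate; the function name, the plate-scale parameter and the numeric values are hypothetical, not taken from the paper.

```python
import numpy as np

def predict_streak(rate_arcsec_s, direction_deg, exposure_s, plate_scale):
    """Predict the streak length (pixels) and on-detector displacement from
    the RSO's known angular rate and the exposure time.
    plate_scale: arcsec per pixel (assumed known for the sensor)."""
    length_px = rate_arcsec_s * exposure_s / plate_scale
    dx = length_px * np.cos(np.radians(direction_deg))
    dy = length_px * np.sin(np.radians(direction_deg))
    return length_px, (dx, dy)

# Example: 15 arcsec/s observed for 10 s at 1.5 arcsec/pixel -> 100-pixel streak
length_px, (dx, dy) = predict_streak(15.0, 30.0, 10.0, 1.5)
```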
An example of an image acquired for the detection of a satellite is presented in Fig. 2. In this image, the local
signal-to-noise ratio ‘SNRij’ (SNRij is the SNR of a single pixel, while SNR is evaluated over the entire object) of
the satellite streak is around 3. It is easily detected by the algorithms described below but a human observer may
fail to see it, particularly if hundreds of similar images have to be inspected every day.
The streak detection is performed using a matched filter technique [6-7]. However, the performance of a matched
filter without any pre-processing is quite limited and generates several false alarms. This is particularly true in the
presence of stars, which have the properties of a Dirac delta and therefore always produce a good response to any
matched filter. However, the nature of these images allows another strategy: the negative approach. One can detect
every non-streak object (i.e., every star), which is easy to detect, and erase these objects until only the ‘yet
undetected’ streak remains. This is the processing scheme illustrated in Fig. 3.
Fig. 1. Observation cycle showing how an old TLE generates an observation task that will be used to update the
database.
Fig. 2. An example of an image acquired for the detection of a satellite.
Fig. 3. Streak-detection processing scheme: sensor-artifact correction, background estimation and removal, star
detection and removal, iterative matched filtering (convolution and image clipping), and false-alarm rejection.
3. DETECTION ALGORITHMS
The processing begins with the correction of sensor artifacts (dead pixels). Then an accurate image background is
estimated and subtracted. Next, the stars are detected and erased. This process is delicate because the streak must
not be altered during this step, and the star-erasing process must not leave residual artifacts. Once the image is clear
of undesirable objects, the matched filter is applied. This leaves false alarms, most of which can be easily
discarded: the image is segmented into individual objects, which are analyzed and rejected if they do not show the
expected characteristics. Finally, for the remaining alarms, detection confidence levels are evaluated and the ranked
detections are reported. In this processing sequence, four steps required special attention, and ad hoc algorithms
were developed specifically for them: the background estimation, the star detection and erasing process, the
‘iterative’ matched filter, and the false-alarm rejection algorithms (steps 2 to 5 in Fig. 3).
The image background is a nuisance for the detection algorithms and must be removed. The most popular method
consists of acquiring and subtracting a dark frame, i.e., an image acquired under the same conditions of CCD
temperature and exposure time but without a signal (the shutter remains closed). However, [8] demonstrates that
this method is not accurate enough because it leaves a background residue with an amplitude higher than the noise
level, mainly caused by small temperature variations over the CCD. Furthermore, this method adds the dark-frame
noise to the acquired image.
The iterative background removal method [6, 8] offers better results. This method assumes that the background is
smooth while the other objects are sharp. It may fail in the presence of bright nebulae but is very accurate otherwise.
In effect, the local average pixel value is already a good background estimate. This average can be obtained with a
polynomial fit [6] or with local statistics (local means and standard deviations) [8]. However, it is corrupted by the
presence of bright objects, so the image values above the average (plus a noise tolerance margin) are clipped and the
average is calculated again with reduced corruption. After several iterations, an excellent estimate of the
background is obtained; [8] estimates that the left-over background residue is less than 1/5 of the noise level, which
is better than the other commercial methods tested. The background removal efficiency can be seen in Fig. 6B
and 6C.
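A minimal sketch of the iteration follows, assuming a simple box-filter local mean and a global noise estimate; the window size, tolerance factor and iteration count are illustrative choices, not the tuned values of [8].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_background(img, win=65, k=2.0, n_iter=5):
    """Iterative background estimation: clip pixels above the local mean
    plus a noise margin, then re-estimate, so bright objects stop biasing
    the average. Returns the smooth background to subtract."""
    work = img.astype(float).copy()
    for _ in range(n_iter):
        local_mean = uniform_filter(work, size=win)  # local average (box filter)
        sigma_n = np.std(work - local_mean)          # crude global noise level
        work = np.minimum(work, local_mean + k * sigma_n)  # clip bright objects
    return uniform_filter(work, size=win)
```

Subtracting this estimate from the original image yields the background-free image used by the subsequent steps.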
The next step consists of detecting and erasing the stars without affecting the streaks. As indicated in [6], the
detection is performed with the double-gate filter illustrated in Fig. 4. A star is detected when the average signal in
the inner window (μin) is well above the noise level (μin > 3σn) while the outer window measures only the
background (μout < 2σn). A streak is present simultaneously in both the inner and outer windows and cannot satisfy
these two conditions; hence a streak is neither detected nor erased. However, to avoid erasing a faint streak that is
detected in the inner window but not in the outer one, the sensitivity of the outer window (which contains more
background pixels) must be increased. The outer window is therefore divided into several windows of the same size
as the inner window. If any one of these outer windows detects a signal, the detection is canceled, as the object may
be a streak rather than a star.
Fig. 4. Star detection filter with streak rejection capability, designed for a PSF width of 3 pixels at half height: a
3x3 inner window inside a 7x7 gap, surrounded by twelve 3x3 outer windows.
Some very bright stars may deceive this filter: because of the shape of the PSF, part of the object energy spreads
into the outer window. Therefore, an additional detection rule is required; the PSF peak must be significantly
brighter than the average background, i.e., μin > 10μout. The combination of these two rules produces the following
detection criterion:

Star detected if: [μin > 10μout] or [ (μin > 3σn) and (max(μout-k) < 2σn) ] (1)

where μout-k is the mean of the k-th outer window.
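A sketch of this test follows, assuming the faint-star geometry of Fig. 4 (3x3 inner window, 7x7 gap, twelve 3x3 outer windows in a 13x13 patch); the exact window placement here only approximates the figure.

```python
import numpy as np

def is_star(patch, sigma_n):
    """Double-gate star test (Eq. 1) on a 13x13 patch centred on a peak."""
    assert patch.shape == (13, 13)
    mu_in = patch[5:8, 5:8].mean()                   # 3x3 inner window
    # Approximate top-left corners of the twelve 3x3 outer windows
    corners = [(0, 0), (0, 3), (0, 7), (0, 10),      # top
               (3, 10), (7, 10),                     # right
               (10, 10), (10, 7), (10, 3), (10, 0),  # bottom
               (7, 0), (3, 0)]                       # left
    mu_out_k = [patch[r:r + 3, c:c + 3].mean() for r, c in corners]
    bright_rule = mu_in > 10 * np.mean(mu_out_k)     # PSF peak >> background
    faint_rule = (mu_in > 3 * sigma_n) and (max(mu_out_k) < 2 * sigma_n)
    return bright_rule or faint_rule                 # else: possibly a streak
```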
However, this filter is not yet perfect. Because of a sensor blooming artifact, the brightest stars have a
larger-than-normal PSF, and the gap between the inner and outer windows must be enlarged for them. For faint
stars, the central gap is twice the normal PSF width, i.e., the 7x7-pixel window indicated in Fig. 4. For very bright
stars (close to saturation), the size of this gap is doubled: the central gap becomes 13x13 pixels and is surrounded by
twenty 3x3 outer windows. The result of this filter is illustrated in Fig. 6D, where all detected stars are copied into
this image plane.
Once a star is detected, it must be erased from the image. Because of the PSF width, the area affected by the star is
large and the erasing process is not simple. The central group of pixels cannot simply be set to zero: this leaves a
halo-shaped artifact, which is harmful for the detection [6]. The area where the pixels contain measurable signal
from the star extends several PSF widths from the central point. Thus, the star profile must be measured and
subtracted from the image. This measurement is done by applying a median filter to groups of pixels at an
equivalent distance ‘r’ from the star’s central point, which provides the most probable value of the star intensity
‘I(r)’ as a function of radius. This function is subtracted out to six PSF widths from the central point. Finally, only
the central pixels, inside a radius of two PSF widths, are set to zero. This method completely erases a star without
leaving an artifact and without affecting nearby objects. Once a maximum of stars has been erased, the
streak-detection matched filter has a much better chance of success. This can be seen in Fig. 6E, where only pairs of
stars (or combinations of stars and background or noise residue) remain.
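A sketch of the erasing step, under simplifying assumptions (float image, star far enough from the border for the window to fit):

```python
import numpy as np

def erase_star(img, cy, cx, psf_w):
    """Erase a detected star: measure its radial profile I(r) as the median
    over pixels at each integer radius, subtract it out to six PSF widths,
    and set the core (within two PSF widths) to zero. Modifies img in place."""
    r_max = int(6 * psf_w)
    y, x = np.ogrid[-r_max:r_max + 1, -r_max:r_max + 1]
    r = np.hypot(x, y).round().astype(int)           # integer radius map
    sub = img[cy - r_max:cy + r_max + 1, cx - r_max:cx + r_max + 1]
    # Median at each radius: the most probable star intensity I(r),
    # robust against nearby objects crossing the annulus
    profile = np.array([np.median(sub[r == k]) for k in range(r_max + 1)])
    sub -= np.where(r <= r_max, profile[np.minimum(r, r_max)], 0.0)
    sub[r <= int(2 * psf_w)] = 0.0                   # zero the central pixels
    return img
```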
With the observation cycle shown in Fig. 1, the speed and direction of the sought satellite are known parameters,
and a matched filter can be designed to detect it. First, the image astrometry needs to be known. The image pointing
and orientation are approximately known from the telescope pointing parameters, but it is the recognition of the
stars in the observed field of view that provides the best pointing reference. This is done with commercial software
like PinPointTM, developed by DC-3 Dreams [9]. Once the astrometry is accurately known, the satellite TLE is used
to calculate two expected positions: at the instant the camera shutter opened and when it closed. These positions are
converted into pixel coordinates and define the endpoints of the line segment that represents the expected streak.
This line segment is the template used for the detection (i.e., to design the matched filter). It is generated with a
normalized intensity (the sum of all pixel values equals one), so it is easy to relate the intensity of the convolution
peak to the mean object intensity.
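A minimal sketch of the template construction follows; here the streak is parameterized by its predicted length and direction rather than the two TLE-derived endpoints, an equivalent but simplified formulation.

```python
import numpy as np

def streak_template(length_px, angle_deg):
    """Rasterize the expected streak as a unit-sum line-segment kernel,
    so a convolution peak reads directly as a mean object intensity."""
    size = int(length_px) + 3                        # kernel just fits the streak
    kernel = np.zeros((size, size))
    c = size // 2
    dx, dy = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    # Oversample points along the segment to avoid gaps when rasterizing
    for t in np.linspace(-length_px / 2, length_px / 2, 4 * int(length_px)):
        kernel[int(round(c + t * dy)), int(round(c + t * dx))] = 1.0
    return kernel / kernel.sum()                     # sum of pixel values = 1
```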
A first iteration is obtained by convolving the image (with background and stars removed) with the line-segment
template. This is done with the Fourier-transform method. The result is illustrated in Fig. 6G. Unfortunately, some
very bright stars (brighter than the streak) produce strong false alarms. However, the values and shapes of the
convolution peaks are informative. The streak (a rectangle function) convolved with the line-segment template (a
normalized rectangle function) produces a peak with a triangular shape, whose maximum intensity is the average
streak intensity. This is illustrated in Fig. 5. For a star (a Dirac delta function), the result is completely different:
the intensity of the convolution peak is severely attenuated, because the convolution kernel overlaps a large
background area around the star.
This suggests that the intensity of the convolution peak can be used to blindly clip the original signal: the streak
would be preserved while the remaining stars would be attenuated. After clipping, the signal can be convolved
again, because the streak remains unchanged while the stars are further attenuated. This idea is the principle of the
iterative matched filter developed for streak detection. One could build a clipping mask in which the convolution
peaks are replaced by calibrated line segments (with intensity set by the value of the corresponding convolution
peak), but there is a simpler solution: Fig. 5 suggests that twice the intensity of the convolved image will do the job.
The clipped image is simply the minimum between the image before convolution and the convolved image
multiplied by two, i.e.:

Iclipped(i,j) = min[ I(i,j), 2 (I ∗ T)(i,j) ] (2)

where I is the image before convolution and T is the normalized streak template.
Fig. 5. Shapes of the objects (star and streak), their convolution peaks, and the clipping functions.
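A sketch of the iterative matched filter, combining the convolution and the clipping of Eq. (2); the iteration count is illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def iterative_matched_filter(img, template, n_iter=3):
    """Alternate matched-filter convolution (Fourier method) with clipping
    at twice the convolved value (Eq. 2). The streak, whose convolution
    peak equals its mean intensity, survives each clip; point sources,
    whose peaks are diluted over the kernel area, are attenuated."""
    work = img.astype(float)
    for _ in range(n_iter):
        conv = fftconvolve(work, template, mode='same')  # convolution step
        work = np.minimum(work, 2.0 * conv)              # clipping step (Eq. 2)
    return conv, work    # last convolved image (Fig. 6I) and clipped image (6J)
```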
After the application of the iterative matched filter, some false signals always remain. Their morphologies indicate
clearly that most of them are false alarms, but a simple intensity threshold cannot extract the real detection from
among them. Therefore, the alarms are individually extracted (Fig. 6L) and analyzed further. This is done with the
extraction mask of Fig. 6K, which is a thresholded version of the last convolved image (Fig. 6I). The convolution
peaks (with their shapes and sizes) indicate where the groups of pixels of interest must be isolated and extracted. A
number of sub-images are created, each containing only one alarm. Then several parameters are evaluated to
determine the nature of the objects: the moment of intensity (like the moment of inertia, but calculated with the
pixel intensity), the ratio between the moments of intensity about the two principal axes (parallel and perpendicular
to the streak direction), the length and orientation of the object, and the SNR. The details of the extraction method
and parameter evaluations are provided in [7].
For relatively bright objects (SNRij > 1), the object compactness given by the moment ratio is useful for
discriminating between pairs of stars and a real streak. For a single star, this ratio is exactly one (a compact object
with no privileged direction). Pairs of stars (those that eluded the star detection filter) have small moment ratios,
typically between 1 and 10. Streaks always have higher moment ratios, typically above 100. In these cases, the
level for the moment-ratio threshold is obvious, at least for bright objects. However, because maximum sensitivity
is desired, fainter and fainter objects (corrupted by noise) need to be detected, and the discrimination provided by
the moment ratio decreases with the SNR. At low SNR values, another parameter has proven more stable and
straightforward to evaluate: the ratio between the length of the extraction mask (or length of the convolution peak)
and the expected length of the streak (defined in the matched filter). Thousands of analyzed alarms showed that the
streak extraction mask is always longer than the expected streak (because of the triangular shape of its convolution
peak), which is never the case for the extraction masks of other objects. This can be seen in Fig. 6K. Following a
convolution, streaks tend to have triangular peaks twice as long as the initial streak, while other objects (point
sources or close-by star pairs) tend to have much shorter and flatter peaks. The discrimination capability of this
criterion reaches its limit when the SNR is so low that it cannot separate real streaks from random noise patterns. So
the SNR of the extracted object is itself a criterion that must be evaluated to limit the generation of false alarms.
For a stable and accurate measurement, the SNR is evaluated over a standard object area (the same for all alerts): it
is measured over an equivalent streak area, so the SNRs of different alerts can be compared. In summary, a
combination of SNR, moment ratio and length ratio is used to determine whether an alert is a streak or not.
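As one plausible reading of the moment-ratio criterion (the exact definition is in [7]), the sketch below compares the intensity moments along and across the expected streak direction:

```python
import numpy as np

def moment_ratio(patch, angle_deg):
    """Ratio of intensity moments along vs. across the expected streak
    direction: ~1 for a single star, >100 for a real streak (per the text)."""
    ys, xs = np.indices(patch.shape)
    w = np.clip(patch, 0.0, None)                    # intensities as weights
    total = w.sum()
    cy, cx = (ys * w).sum() / total, (xs * w).sum() / total
    a = np.radians(angle_deg)
    along = (xs - cx) * np.cos(a) + (ys - cy) * np.sin(a)
    across = -(xs - cx) * np.sin(a) + (ys - cy) * np.cos(a)
    m_along = (w * along**2).sum() / total           # spread along the streak
    m_across = (w * across**2).sum() / total         # spread across it
    return m_along / (m_across + 1e-12)
```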
Fig. 6. Processing sequence for the detection of the satellite streak. A: original image; B: estimated background;
C: background-free image; D: detected stars; E: image without stars; F: identified stars (for astrometric and
photometric calibrations).
Fig. 6. (Continued) G: first convolved image; H: first clipped image; I: third convolution iteration; J: third iteration
of image clipping; K: object extraction mask; L: individual extracted alerts.
5. PERFORMANCE EVALUATION
The detection algorithms were implemented in MatlabTM. The program reports the alarms along with several
measured parameters such as the SNR, SNRij, length ratio, and moments. It also includes a model of the
probability-of-detection functions (PDf) that were obtained by analyzing the detections performed on thousands of
simulated images (Fig. 7). When a detection is declared, an estimate of the PDf is provided, indicating the degree
of confidence in the result. The algorithm can detect a streak as faint as SNRij = 0.5, i.e., a streak whose mean
intensity is only half the noise level. However, the streak is an extended object, and after integration of the signal
over the entire area, a 100-pixel streak has a total SNR = 5, which is easily detectable with a matched filter.
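This figure follows from the usual square-root gain of integrating N independent noisy pixels (assuming uncorrelated pixel noise):

SNR = SNRij × √N = 0.5 × √100 = 5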
Thousands of simulated images (containing stars, pairs of stars, and streaks of various lengths and intensities) were
generated for the performance testing. The objects were generated with controlled photon and CCD noise and
optical PSF. Because the characteristics of the simulated objects are precisely known, it was possible to draw the
performance figure exactly. Real astronomical images were also used to confirm the detection capability and
reliability of the algorithms, but because of the uncertainty in the measurement of the object brightness (when
SNRij is lower than 1), they were not used for the PDf evaluation.
Fig. 7 shows the detection performance measured with these simulations. The PDf presented in Fig. 7 is the net
PDf, not the raw PDf. A raw PDf simply measures the capability to detect a target when one is present in the
image, without considering the confusion created by the presence of false alarms. In fact, the matched-filter
technique is so sensitive that a streak with an SNRij as low as 0.1 is, remarkably, always present in the list of
possible detections, but the algorithm fails to report it because there are always stronger false alarms. A net PDf is
the probability of declaring a target without confusion, after the elimination of all other false alarms; this is the plot
reported in Fig. 7. The detection algorithms presented above reach a PDf of almost 100% for a long streak
(130 pixels) with SNRij > 0.5, which is very good performance: this is six times fainter than the streak presented in
Fig. 2. Performance is better for long streaks, which is not unexpected, because length is the characteristic that
provides the discrimination capability of the matched filter.
Fig. 7. Net probability of detection as a function of SNRij, for streak lengths of 50, 80 and 130 pixels.
6. CONCLUSION
The processing described in this paper comprises several algorithms for automatic detection, which complete the
automatic observation loop: tasking, acquisition, detection, and TLE update. These detection algorithms always
detect satellite streaks without false alarms when the streaks are brighter than the noise level (SNRij > 1) and long
enough (> 80 pixels). For longer streaks (> 130 pixels), they are reliable down to SNRij > 0.5. These calibrated
results were obtained with simulated images containing objects with known properties, and were subsequently
confirmed with the processing of real images [7]. The performance of the detection algorithms was modeled, and
an estimate of the detection reliability (PDf) is reported along with each detection. Hence, the detection of
sufficiently bright streaks is fully automatic: they are declared with a score of 100%. For streaks with lower
brightness, the report indicates that the detection may need to be confirmed by an analyst. For very faint streaks, a
cueing mode is also provided (for streaks as faint as SNRij = 0.2) in which the detection is indicated along with
other false alarms. An analyst could visually confirm such detections, but most of the time would not have noticed
the presence of such a faint satellite in the imagery unaided. For streaks with lower SNRij, in the range of 0.1 to
0.2, the simulations demonstrated that the algorithms are usually able to indicate the target in a list of possible
detections, but the analyst would no longer necessarily be able to see and confirm them.
7. REFERENCES
1. Earl, M. and Racey, T. (2000). “The Canadian Automatic Small Telescope for Orbital Research (CASTOR): A
Raven System in Canada”, http://www.rmc.ca/academic/physics/castor/castor1_e.html, accessed April 2008.
2. Kervin, P., “RAVEN Automated Small Telescope Systems”, AFRL Directed Energy Directorate, Optical and
Imaging Division, Space Surveillance Systems Branch, 2000.
3. Wallace, B., Rody, J., Scott, R., Pinkney, F., Buteau, S., Lévesque, M. P., “A Canadian Array of Ground-Based
Small Optical Sensors for Deep Space Monitoring”, 2003 AMOS Technical Conference.
4. Wallace, B., “The DRDC Ottawa Space Surveillance Observatory”, AMOS Technical Conference 2007, Maui,
HI.
5. Scott, R. and Wallace, B., “Small Aperture Optical Photometry of Canadian Geostationary Satellites”, Canadian
Aeronautics and Space Journal, 2009.
6. Lévesque, M. P. and Buteau, S., “Image Processing Technique for Automatic Detection of Satellite Streaks”,
DRDC Valcartier TR 2005-386, Defence R&D Canada – Valcartier.
http://cradpdf.drdc.gc.ca/PDFS/unc64/p527352.pdf, accessed Feb. 2009.
7. Lévesque, M. P. and Lelievre, M., “Improving satellite-streak detection by the use of false alarm rejection
algorithms”, DRDC Valcartier TR 2006-587, Defence R&D Canada – Valcartier.
http://pubs.drdc.gc.ca/PDFS/unc76/p530206.pdf, accessed Feb. 2009.
8. Lévesque, M. P. and Lelievre, M., “Evaluation of the iterative methods for image background removal in
astronomical images”, DRDC Valcartier TN 2007-344, Defence R&D Canada – Valcartier.
http://pubs.drdc.gc.ca/PDFS/unc69/p529054.pdf, accessed Feb. 2009.
9. PinPoint astrometric engine, DC-3 Dreams, http://pinpoint.dc3.com/, accessed Feb. 2009.