Image Processing Paper
* Work partially supported by the Spanish Government under Project TIN2004-07860-C02-01 (Medusa).
2. ALGORITHM DESCRIPTION
This method is designed to work with different point sizes, which can be either pixels or pixel blocks. The choice of point size depends on the requirements of the application. Grouping pixels into blocks provides greater efficiency and robustness against noise, although the resulting mask does not have pixel accuracy. Working with individual pixels provides a finer segmentation, at the cost of efficiency and robustness.

The workflow of the full method is illustrated in Figure 1. The first stage performs a temporal change detection, which tries to detect the moving foreground. This first segmentation is detailed in subsection 2.1. The mask obtained from this first stage is used as context information in the second stage. The second stage consists of a context-aware background subtraction algorithm, which yields the final segmentation. It is a modified running gaussian average algorithm that takes into account not only the background model and the incoming frames, but also external context information, currently representing a priori confidence about moving objects. This algorithm is described in subsection 2.2.

2.1. Temporal change detection

This method is efficient and produces few false positives (background points classified as foreground), but it fails in homogeneous (low-textured) areas of both the foreground and the non-moving objects, where points are very similar between consecutive frames and frame difference is not able to robustly detect changes. In order to improve robustness against noise and against the choice of parameter values, we perform background subtraction with a single-gaussian model instead of the fixed thresholding method used in [10]. The appearance of speckles in the classified areas, which is usually associated with the gaussians in the background model, is prevented by the combination with the frame difference masks.

2.2. Context-aware background subtraction

Assuming a non-complex background in the input sequences (static camera, no moving elements in the background), a single gaussian model for each point is sufficient. As explained in [7], a mixture-of-gaussians algorithm with a single gaussian in its model is equivalent to a running gaussian average. In this way, we model the averaged value of each point (the mean) and an estimation of its noise over time (the standard deviation).
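As a minimal sketch of this per-point model (illustrative Python with numpy; the function name, variable names and the learning-rate form 1/τacc are assumptions, not taken from [7] or [8]), the running gaussian average keeps a mean and a variance per point and updates both recursively:

import numpy as np

def update_background_model(frame, mean, var, tau_acc=150):
    # Running gaussian average: per-point mean (averaged value) and
    # variance (noise estimate over time), adapted with rate 1/tau_acc.
    alpha = 1.0 / tau_acc
    frame = frame.astype(np.float32)
    mean = (1.0 - alpha) * mean + alpha * frame
    var = (1.0 - alpha) * var + alpha * (frame - mean) ** 2
    return mean, var  # standard deviation is np.sqrt(var)

For block-sized points, the same update would be applied to block-averaged values rather than to individual pixels.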
Context information aims to express the confidence that each point belongs to the foreground. In this sense, it forms an a priori confidence mask, represented by Mconf. Currently, this mask results from the segmentation performed in the temporal change detection stage (i.e., Mconf = Mtc). However, we are testing the inclusion of other context-based criteria in the same framework. Luminance and texture homogeneity, object connectivity and compactness, or coherence of the object motion (extracted via a tracking algorithm [9]) could be easily integrated into the system, improving robustness and sensitivity.
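With Mconf = Mtc, the a priori confidence mask comes directly from the first stage. The sketch below shows only the frame-difference part of that stage, thresholded with Ttc from Table 2; the combination with the single-gaussian subtraction described in subsection 2.1 is omitted, and the exact form is an assumption rather than the authors' code:

import numpy as np

def temporal_change_mask(frame, prev_frame, t_tc=20):
    # M_tc = 1 where the point changed between consecutive frames, 0 elsewhere.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > t_tc).astype(np.uint8)

# used as the a priori confidence mask of the second stage:
# m_conf = temporal_change_mask(frame, prev_frame)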
Our algorithm is based on the one described in [8], modified in two ways to account for context information, both in the classification and in the background model updating phases, as explained in the following subsections.
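As a rough illustration of how the confidence mask could enter the classification phase, the sketch below chooses the per-point confidence factor k from Mconf (kmin for changed points, kmax for non-changed points, and the blind factor k0 when no context is available, following the meanings listed in Table 2) and labels as foreground the points deviating from the model by more than k standard deviations. This is one plausible reading of those parameters, not the authors' exact formulation:

import numpy as np

def classify_points(frame, mean, var, m_conf=None, k0=2.5, k_min=1.5, k_max=4.0):
    # Foreground where |I - mean| > k * std, with k chosen per point
    # from the a priori confidence mask M_conf (if available).
    std = np.sqrt(var)
    if m_conf is None:
        k = np.full(frame.shape, k0, dtype=np.float32)             # blind confidence factor
    else:
        k = np.where(m_conf > 0, k_min, k_max).astype(np.float32)  # changed -> k_min, non-changed -> k_max
    return np.abs(frame.astype(np.float32) - mean) > k * std

A natural counterpart in the background model updating phase (again an assumption) is a selective update: points classified as background refresh the gaussian model as sketched earlier, while foreground points leave it unchanged or adapt it more slowly.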
Image size                      352x288     640x480
Proposed method (pixels)        62 fps      20 fps
Proposed method (8x8 blocks)    >300 fps    95 fps
GMM [3]                         25 fps      8 fps
Statistical approach [11]       20 fps      5 fps

Table 1: Efficiency of the tested algorithms.

Typical values for the execution parameters are shown in Table 2: Ttc is the threshold for the frame difference, τacc is the length of the adaptation window for the background subtraction operations [8], k0 is the value of the confidence factor in the background subtraction described in section 2.1, and kmax and kmin are explained in section 2.2.

Symbol    Value      Meaning
Ttc       15-30      Temporal change threshold
τacc      100-200    Length of background adaptation window
k0        2.5        Blind confidence factor
kmax      3-6        Non-changed point confidence factor
kmin      1-2        Changed point confidence factor

Table 2: Parameter values for a typical indoor application.

REFERENCES

[4] J. Rymel, J. Renno, D. Greenhill, J. Orwell, G.A. Jones, "Adaptive eigen-backgrounds for object detection," Proc. IEEE International Conference on Image Processing (ICIP '04), vol. 3, pp. 1847-1850, 24-27 Oct. 2004.

[5] M. Piccardi, T. Jan, "Mean-shift background image modelling," Proc. IEEE International Conference on Image Processing (ICIP '04), vol. 5, pp. 3399-3402, 24-27 Oct. 2004.

[6] B. Han, D. Comaniciu, L. Davis, "Sequential kernel density approximation through mode propagation: applications to background modeling," Proc. ACCV - Asian Conference on Computer Vision, 2004.

[7] M. Piccardi, "Background subtraction techniques: a review," Proc. IEEE International Conference on Systems, Man and Cybernetics, vol. 4, pp. 3099-3104, 10-13 Oct. 2004.

[8] S. Huwer, H. Niemann, "Adaptive Change Detection for Real-Time Surveillance Applications," Proc. Third IEEE International Workshop on Visual Surveillance, pp. 37-46, 2000.