Implementation and Comparative
ABSTRACT
In remote sensing, images acquired by various earth observation satellites tend to have either high
spatial and low spectral resolution or vice versa. Pansharpening is a technique that aims to improve the
spatial resolution of a multispectral image. The challenge in pansharpening is not only to improve the
spatial resolution but also to preserve the spectral quality of the multispectral image. In this paper,
various pansharpening algorithms are discussed and classified based on the approaches they adopt.
Several state-of-the-art pansharpening algorithms are implemented using the MATLAB image processing
toolbox. The quality of the pansharpened images is assessed visually and quantitatively. The correlation
coefficient (CC), root mean square error (RMSE), relative average spectral error (RASE) and universal
quality index (Q) are used to measure spectral quality, while the spatial correlation coefficient (SCC) is
used to measure spatial quality. Finally, the paper concludes with useful remarks.
KEYWORDS
Pansharpening, Multispectral image, panchromatic image.
1. INTRODUCTION
Nowadays, various earth observation satellites such as IKONOS, Quickbird, SPOT, Landsat, etc.
provide images at different spatial, temporal and spectral resolutions [1]. The spatial resolution of an
image is expressed as the area of the ground covered by one pixel of the image. As the pixel size is
reduced, objects in the image are delineated with higher accuracy. The instantaneous field of view
(IFOV) is the portion of the ground which is sensed by the sensor, and spatial resolution depends on
the IFOV. The finer the IFOV, the better the spatial resolution, and the more accurately objects in the
image can be classified [2]. For example, the LANDSAT-7 satellite can capture images with 15 m
spatial resolution, while the GeoEye-1 satellite provides 0.41 m spatial resolution. Normally, a pixel
size of less than 4 m is considered high spatial resolution, while a pixel size of more than 30 m is
considered low spatial resolution. Spectral resolution characterizes the ability of a sensor to resolve
reflectance over a range of wavelengths [3]; the narrower the bandwidth, the higher the spectral
resolution [3].
is narrower [3]. A Panchromatic (PAN) image contains one band of reflectance data that covers a
broad spectral range; because the wide band maintains a high signal-to-noise ratio, smaller detectors
can be used. Therefore, a PAN image usually has low spectral resolution but high spatial resolution [4].
The other principal category of remotely sensed images is the multispectral (MS) image. An MS image
contains more than one band, most commonly three. The spectral range of each band of an MS image is
narrower than that of the PAN image, resulting in high spectral resolution but low spatial resolution [5].
Numerous remote sensing satellites therefore acquire images in one PAN band of high spatial resolution
and a few MS bands of high spectral resolution. Image fusion is used effectively in this field and has
become an established solution to the growing requirement for images with both high spatial and high
spectral resolution; when applied to PAN and MS data, this method is called pansharpening.
Pansharpening is the procedure of merging a high-resolution panchromatic image with lower-resolution
multispectral images to produce a single high-resolution colour image. A variety of image fusion
techniques have been dedicated to combining multispectral and panchromatic images [6], [7], [8], [9].
The motivation behind pansharpening is to obtain information of greater quality; it is a vital tool for
information enhancement, spatial resolution improvement, multi-data integration, and change detection.
In recent years, numerous image fusion techniques, such as principal component analysis,
intensity-hue-saturation, the Brovey transform and multi-scale transforms, have been proposed to fuse
the PAN and MS images successfully. Most of the attention has been paid to image enhancement with
different remote sensing images. With such images, particularly for image interpretation or
classification, it is much better to utilize all the information contained in the original data rather than to
obtain an optimum image display from other expensive high spatial resolution images. However, there
is still no universally accepted technique to enhance the spatial information in these images without
losing their spectral resolution.
This paper is organized as follows. Pre-processing is discussed in Section 2 whereas different
state-of-the-art pansharpening techniques are described in Section 3. Subjective and objective
quality assessment parameters used to measure the spectral and spatial quality of the resultant
pansharpened image are presented in Section 4. Results are provided in Section 5. Finally,
conclusions are drawn in Section 6.
2. PREPROCESSING
PAN and MS images are to be pre-processed before pansharpening. Pre-processing may involve
image registration, resampling and histogram matching of the input images, and pre-processing
before pansharpening is a broad area of research in itself [10]. Initially, in image registration, the
input images are adjusted for spatial alignment such that pixels in the input images refer to the
same points and objects on the ground. This is followed by resizing the multispectral image to the
size of the panchromatic image using interpolation or other up-sampling techniques [11]. In
some cases, histogram matching is performed before applying the pansharpening technique.
Histogram matching of the MS and PAN images may reduce spectral distortion in the resultant
pansharpened image [10, 11].
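As an illustration, a minimal MATLAB sketch of these pre-processing steps is given below; the file
names, the bicubic interpolation and the simple moment-based histogram match are assumptions of this
sketch, not details taken from the experiments reported in this paper.

```matlab
% Pre-processing sketch: co-registered inputs are resampled and histogram matched.
% 'ms.tif' and 'pan.tif' are placeholder file names (assumed).
ms  = im2double(imread('ms.tif'));    % low-resolution MS image (co-registered)
pan = im2double(imread('pan.tif'));   % high-resolution PAN image

% Resample the MS image to the PAN grid (bicubic interpolation).
ms_up = imresize(ms, [size(pan,1) size(pan,2)], 'bicubic');

% Simple moment-based histogram matching of PAN to the MS intensity.
I     = mean(ms_up, 3);
pan_m = (pan - mean(pan(:))) * (std(I(:)) / std(pan(:))) + mean(I(:));
```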
1. Carry out up-sampling to increase the size of the MS image to that of the PAN image.
2. Perform the forward transform to separate the spatial and spectral components of the MS image.
3. Substitute the spatial component of the MS image with that of the histogram-matched PAN image.
4. Perform the backward transform to obtain the MS image with improved spatial resolution, called the
   pansharpened image.
Various pansharpening algorithms such as intensity-hue-saturation (IHS) [17], adaptive IHS
[18], and principal component analysis (PCA) [19] are examples of the component substitution
family.
In intensity-hue-saturation pansharpening, the IHS colour space is used because it cleanly separates the
intensity component (I) from the spectral components (H and S) of the input MS image. Intensity (I)
represents the total luminance of the image, hue represents the dominant wavelength contributing
to the colour, and saturation describes the purity of the colour relative to grey. The basic idea of the
IHS transform is to replace the intensity component (I) of the MS image with the histogram-matched
PAN image. The RGB version of the resultant merged MS image is obtained by computing the
reverse IHS-to-RGB transform. The intensity band I is calculated using the following equation:
$$I = \sum_{i=1}^{N} \alpha_i M_i$$

Here, $M_i$ are the multispectral image bands, $\alpha_i$ are the weighting coefficients, and $N$ is the
number of bands. The value of the coefficient $\alpha_i$ is taken as 1/3 for N = 3. For images captured
with more than three bands, such as IKONOS images, the value of $\alpha_i$ must be determined
experimentally. To calculate adaptive values of $\alpha_i$, based on the number of bands available in the
MS image, an approach known as adaptive IHS is proposed in [18]. In the adaptive IHS approach, the
values of $\alpha_i$ are determined such that the intensity band I approximates the corresponding PAN
image as closely as possible. The mathematical formulation to determine the adaptive values of
$\alpha_i$ is given below.
Signal & Image Processing : An International Journal (SIPIJ) Vol.6, No.5, October 2015
$$\{\alpha_i\} = \arg\min_{\alpha_i}\ \left\| \mathrm{PAN} - \sum_{i=1}^{N} \alpha_i M_i \right\|^2, \qquad I = \sum_{i=1}^{N} \alpha_i M_i$$
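As a concrete illustration of the component substitution idea, the following MATLAB sketch implements
the fast IHS variant of [17] for a three-band image; the equal weights and the moment-based histogram
matching are simplifying assumptions of the sketch, not the exact implementation used in the experiments.

```matlab
% Fast IHS substitution sketch for a three-band MS image.
% ms_up : MS image resampled to the PAN grid; pan : PAN image (both double).
alpha = [1/3 1/3 1/3];                      % equal weights, N = 3 (assumption)
I = alpha(1)*ms_up(:,:,1) + alpha(2)*ms_up(:,:,2) + alpha(3)*ms_up(:,:,3);

% Histogram-match PAN to the intensity band (first/second moments only).
pan_m = (pan - mean(pan(:))) * (std(I(:)) / std(pan(:))) + mean(I(:));

% Inject the difference between the matched PAN and I into every band; for a
% linear intensity this is equivalent to replacing I and inverting the transform.
fused = ms_up + repmat(pan_m - I, [1 1 size(ms_up, 3)]);
```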
Another method, principal component analysis (PCA), also falls into the component substitution
category. PCA is basically a mathematical transformation [19-21]. It is widely used in statistical
applications as well as in signal processing. In PCA, a multivariate data set with correlated variables is
transformed into a data set with new uncorrelated variables. The first principal component has the
highest variance and therefore carries the maximum amount of information from the original image
[20]. In PCA-based pansharpening, the first principal component of the MS image is replaced by the
histogram-matched PAN image before the inverse transform is applied.
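A minimal MATLAB sketch of PCA-based substitution is given below, assuming the MS image has
already been resampled to the PAN grid; the variable names and the moment-based match of the PAN
image to the first principal component are assumptions of the sketch.

```matlab
% PCA-based component substitution sketch.
% ms_up : MS image resampled to the PAN grid; pan : PAN image (both double).
[r, c, N] = size(ms_up);
X  = reshape(ms_up, r*c, N);                % pixels as rows, bands as columns

% Principal component transform of the band space.
mu = mean(X, 1);
Xc = X - repmat(mu, r*c, 1);
[V, D] = eig(cov(Xc));
[~, order] = sort(diag(D), 'descend');
V  = V(:, order);                           % first column = PC with max variance
P  = Xc * V;                                % principal component scores

% Replace PC1 with the histogram-matched PAN image.
pc1  = P(:, 1);
panv = pan(:);
panv = (panv - mean(panv)) * (std(pc1) / std(panv)) + mean(pc1);
P(:, 1) = panv;

% Inverse transform back to the band space (V is orthonormal).
fused = reshape(P * V' + repmat(mu, r*c, 1), r, c, N);
```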
In the spectral contribution approach, the Brovey transform multiplies each MS band by the ratio of the
PAN image to the sum of all MS bands:

$$R_{new} = \frac{R}{R+G+B}\,\mathrm{PAN}, \qquad G_{new} = \frac{G}{R+G+B}\,\mathrm{PAN}, \qquad B_{new} = \frac{B}{R+G+B}\,\mathrm{PAN}$$
The Brovey transform provides good contrast visibility, but it greatly distorts the spectral
characteristics [24]. It gives satisfactory performance when the MS image contains only three bands.
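A minimal MATLAB sketch of the Brovey transform, following the equations above, is shown below;
the small constant added to the denominator is an assumption to avoid division by zero.

```matlab
% Brovey transform sketch for a three-band (R, G, B) MS image.
% ms_up : MS image resampled to the PAN grid; pan : PAN image (both double).
R = ms_up(:,:,1);  G = ms_up(:,:,2);  B = ms_up(:,:,3);
S = R + G + B + eps;                  % eps guards against division by zero (assumption)

fused = cat(3, (R ./ S) .* pan, ...
               (G ./ S) .* pan, ...
               (B ./ S) .* pan);
```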
Signal & Image Processing : An International Journal (SIPIJ) Vol.6, No.5, October 2015
3. Obtain the pansharpened image by adding the resultant high-pass filtered PAN image to each
of the bands of the MS image.
The mathematical model for this family is as below:

$$MS_k^{new} = MS_k + \mathrm{PAN}_{HP}$$

where $\mathrm{PAN}_{HP}$ is the high-pass filtered PAN image and $k$ indexes the MS bands.
This method preserves a great amount of the spectral characteristics of the MS image, since the spectral
information is associated with the low spatial frequencies of the MS image. The cut-off frequency
of the high-pass filter influences how much of the spectral information of the MS image is retained.
Some recently reported methods use this high-frequency injection as a first step to extract the spatial
detail from the PAN image that is not present in the MS image.
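The following MATLAB sketch illustrates this high-frequency injection; the 5 x 5 averaging kernel used
to build the high-pass filter is an assumption, since the paper leaves the cut-off frequency as a design
choice.

```matlab
% High-frequency injection sketch.
% ms_up : MS image resampled to the PAN grid; pan : PAN image (both double).
lp = imfilter(pan, fspecial('average', 5), 'replicate');  % 5x5 box low-pass (assumed size)
hp = pan - lp;                                            % high-pass detail of PAN

% Add the PAN detail to every MS band.
fused = ms_up + repmat(hp, [1 1 size(ms_up, 3)]);
```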
3.4. Statistical technique
Statistics-based pansharpening techniques explore the statistical characteristics of the MS and PAN
images. In [25], Price proposed the first statistics-based approach, known as the Price method. Later, it
was improved by Park et al. [26] with a spatially adaptive algorithm. In the Price method, all
high-resolution (HR) pixels are modelled as linearly related, through some weighting factor, to one
low-resolution (LR) pixel; because of this assumption, the method sometimes produces blocking
artifacts. The spatially adaptive algorithm [26] was proposed to overcome this limitation; it adaptively
exploits the local correlation between pixels of the input images. Besides these, a Bayesian method
based on probability theory was proposed for estimating the final pansharpened image [27]. In [28],
pansharpening is treated as an ill-posed problem that requires regularization for optimal results; the
authors chose a total variation (TV) regularization model, which produces a pansharpened image that
preserves the fine details of the PAN image.
Signal & Image Processing : An International Journal (SIPIJ) Vol.6, No.5, October 2015
1. Apply the forward transform to the PAN and MS images using a sub-band and directional
   decomposition such as the wavelet/contourlet transform.
2. Apply a fusion rule to the transform coefficients.
3. Obtain the pansharpened image by the inverse transform.

The fusion rule in step 2 involves either substituting the original MS coefficient bands with the
coefficients of the PAN image, or adding these coefficients with some weight depending upon the
contribution of the PAN and MS bands. Based on the applied fusion rule, these methods are known as
additive or substitutive wavelet/contourlet methods. Sometimes a hybrid approach is used that combines
the best aspects of the various fusion rules.
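As an illustration, a substitutive wavelet fusion can be sketched in MATLAB as below (requires the
Wavelet Toolbox); the single decomposition level and the db4 basis are assumptions of this sketch.

```matlab
% Substitutive wavelet fusion sketch (one decomposition level, Wavelet Toolbox).
% ms_up : MS image resampled to the PAN grid; pan : PAN image (both double,
% even image dimensions assumed so that reconstruction sizes match exactly).
wname = 'db4';                                        % assumed wavelet basis
fused = zeros(size(ms_up));
for k = 1:size(ms_up, 3)
    [cA,  cH,  cV,  cD ] = dwt2(ms_up(:,:,k), wname);  % MS band decomposition
    [cAp, cHp, cVp, cDp] = dwt2(pan,           wname); % PAN decomposition
    % Fusion rule: keep the MS approximation, take the PAN detail coefficients.
    fused(:,:,k) = idwt2(cA, cHp, cVp, cDp, wname);
end
```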
4. QUALITY ASSESSMENTS
It is desirable to improve the spatial resolution of the MS image to that of the PAN image. Wald et al.
[36] formulated some useful properties to verify the quality of the pansharpened image. They are:

1. If the pansharpened image is downsampled to its original spatial resolution, it should be
   similar to the original MS image.
2. The pansharpened image should be as similar as possible to the MS image that would have been
   captured by a sensor (assuming it exists) with the corresponding high spatial resolution.

The first is the consistency property, while the second is the synthesis property. The quality of the
pansharpened image is measured against an ideal reference image if such a reference is available.
Otherwise, the quality can be measured against the input MS image, which is called non-reference-based
quality assessment. Normally, the latter approach is followed.
Signal & Image Processing : An International Journal (SIPIJ) Vol.6, No.5, October 2015
comparing the pansharpened image with input MS image, it is possible to verify and observe the
spectral (color information) and spatial (sharpness) quality of the image. Visual assessment
technique is a subjective technique.
) &#F( F*&
( #R ( R
) are the mean value of the images F and R, respectively, while m and n is size of
Here, F* and R
images. Value of CC should be as close to 1 as possible.
2) Root mean square error (RMSE): It measures the changes in the radiance of the pixel values for each
band of the input MS image R relative to the pansharpened image F. It is an important indicator when
the images under consideration contain homogeneous regions. Its value should be as close to zero as
possible. RMSE is computed as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(R(i,j) - F(i,j)\right)^2}$$
3) Relative average spectral error (RASE): It is computed from the per-band root mean square error
(RMSE) as per the equation given below:

$$\mathrm{RASE} = \frac{100}{\bar{M}}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\mathrm{RMSE}^2(B_i)}$$

Here, $\bar{M}$ is the mean radiance of the N spectral bands and $B_i$ represents the ith band of the
input MS image. The desired value of this parameter is zero.
4) Universal quality index (UQI): In [37], an image quality index is suggested; for the final
pansharpened image F with respect to the input MS image R it is given by:

$$Q(R,F) = \frac{4\,\sigma_{RF}\,\bar{R}\,\bar{F}}{\left(\sigma_R^2 + \sigma_F^2\right)\left(\bar{R}^2 + \bar{F}^2\right)} = \frac{\sigma_{RF}}{\sigma_R\,\sigma_F}\cdot\frac{2\,\bar{R}\,\bar{F}}{\bar{R}^2 + \bar{F}^2}\cdot\frac{2\,\sigma_R\,\sigma_F}{\sigma_R^2 + \sigma_F^2}$$

The first factor is the correlation coefficient (CC) between the images R and F, the second factor
measures the luminance distortion, and the third factor represents the contrast distortion between the two
images. In the above equation, $\sigma_{RF}$ denotes the covariance between the images R and F,
$\bar{R}$ and $\bar{F}$ are the means, while $\sigma_R$ and $\sigma_F$ signify the standard deviations
of R and F, respectively. The best value of Q is 1, and it is achieved if R = F for all pixels.
5) Relative dimensionless global error in synthesis (ERGAS): ERGAS is a global quality index that is
sensitive to mean shifting and dynamic range change. Its value indicates the amount of spectral
distortion in the image:

$$\mathrm{ERGAS} = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\mathrm{RMSE}(i)}{\bar{M}(i)}\right)^2}$$

where $h/l$ is the ratio of the pixel sizes of the input PAN and MS images, $\bar{M}(i)$ is the mean of
the ith band, and N is the number of bands. The desired value of ERGAS is as close to zero as possible.
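To make the assessment concrete, the spectral metrics above can be computed with a short MATLAB
sketch such as the following; R and F are assumed to be the reference MS image and the pansharpened
image stored as double arrays of identical size, the universal quality index is computed globally rather
than over the sliding windows used in [37], and the 1:4 pixel size ratio is an assumption matching
512 x 512 PAN and 128 x 128 MS images.

```matlab
% Spectral quality metrics sketch: per-band CC and RMSE, then RASE, a global
% universal quality index Q, and ERGAS.
% R : reference MS image, F : pansharpened image (same size, double arrays).
h = 1;  l = 4;                                   % PAN and MS pixel sizes (assumed 1:4)
N  = size(R, 3);
cc = zeros(1, N);  rmse = zeros(1, N);  mu = zeros(1, N);  q = zeros(1, N);
for k = 1:N
    r = R(:,:,k);  f = F(:,:,k);
    err     = r - f;
    cc(k)   = corr2(r, f);                       % correlation coefficient per band
    rmse(k) = sqrt(mean(err(:).^2));             % root mean square error per band
    mu(k)   = mean(r(:));                        % mean radiance of band k

    C    = cov(r(:), f(:));                      % 2x2 covariance matrix
    q(k) = (4 * C(1,2) * mean(r(:)) * mean(f(:))) / ...
           ((var(r(:)) + var(f(:))) * (mean(r(:))^2 + mean(f(:))^2));
end
rase  = (100 / mean(R(:))) * sqrt(mean(rmse.^2));     % RASE
ergas = 100 * (h / l) * sqrt(mean((rmse ./ mu).^2));  % ERGAS
Q     = mean(q);                                      % Q averaged over the bands
```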
Signal & Image Processing : An International Journal (SIPIJ) Vol.6, No.5, October 2015
and objective assessment techniques are applied. Visual inspection technique is applied to
observe final pansharpened image for subjective assessment. But, it is difficult to match the
colors of pansharpened image to the original MS image. In objective assessment, Correlation
coefficient (CC), RMSE, RASE, universal quality index Q, and ERGAS parameters are
calculated to estimate spectral quality while spatial-CC (SCC) is computed to approximate spatial
quality. In our experiments, geometrically registered three data sets images are considered.
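The paper does not spell out its exact SCC computation; one common choice, following [38], is to
correlate high-pass filtered versions of the pansharpened and PAN images, as in the sketch below, where
the Laplacian kernel is an assumption.

```matlab
% Spatial correlation coefficient (SCC) sketch: correlate the high-frequency
% content of each pansharpened band with that of the PAN image.
% F : pansharpened image, pan : PAN image (same grid, double arrays).
lap    = [-1 -1 -1; -1 8 -1; -1 -1 -1];       % Laplacian high-pass kernel (assumed)
pan_hp = imfilter(pan, lap, 'replicate');
scc = zeros(1, size(F, 3));
for k = 1:size(F, 3)
    f_hp   = imfilter(F(:,:,k), lap, 'replicate');
    scc(k) = corr2(f_hp, pan_hp);             % per-band spatial correlation
end
SCC = mean(scc);                              % average over the bands
```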
Several state-of-the-art pansharpening algorithms are implemented and their results are observed.
Image 1(a) and (b) show WorldView satellite urban-area and seaside MS images, respectively, while
Image 1(c) shows a Quickbird satellite MS image. Image 2(a-c) shows the corresponding PAN images.

In all three datasets, the PAN images have a size of 512 x 512 pixels while the MS images have a size
of 128 x 128 pixels. In pre-processing, the MS images are resized to the size of the PAN images using
interpolation. The Brovey transform, IHS, adaptive IHS, PCA and discrete wavelet transform (DWT)
methods are implemented. Quantitative assessment parameters for the various pansharpening algorithms
on all three datasets are calculated and shown in Table 1. In the calculation of the ERGAS parameter for
each dataset, the ratio of the PAN and MS pixel sizes follows from these image sizes.
Table 1. Quantitative assessment results.
Image-1: Worldview urban area image

Parameter   Brovey     IHS        Adaptive-IHS   PCA        DWT
CC          0.8909     0.8922     0.8941         0.8917     0.9306
ERGAS       4.1140     7.1312     7.0991         8.3854     6.0464
Quality     0.8904     0.8922     0.8940         0.7789     0.9282
RASE        28.5199    28.5126    28.4608        33.5569    24.1675
RMSE        26.4290    26.4381    26.3901        31.1155    22.4092
SCC         0.9907     0.9986     0.9815         0.9862     0.9095

Image-2: Worldview seaside image

Parameter   Brovey     IHS        Adaptive-IHS   PCA        DWT
CC          0.8286     0.8288     0.8759         0.8283     0.9393
ERGAS       8.1074     7.6741     6.4912         8.0171     4.6738
Quality     0.8277     0.8288     0.8758         0.7213     0.9387
RASE        31.0202    30.6398    26.0137        32.0135    18.6590
RMSE        27.2404    26.9064    22.8440        28.1128    16.3854
SCC         0.9963     0.9988     0.9170         0.9877     0.7236

Image-3: Quickbird image

Parameter   Brovey     IHS        Adaptive-IHS   PCA        DWT
CC          0.7335     0.7605     0.8908         0.8039     0.9522
ERGAS       5.0523     5.1351     3.4613         5.0980     2.3865
Quality     0.7237     0.7600     0.8902         0.7317     0.9512
RASE        22.6808    20.3282    13.6959        19.5795    9.4678
RMSE        13.9451    12.4986    8.4208         12.0383    5.8212
SCC         0.9435     0.9761     0.7287         0.8942     0.6996
The best value obtained for each parameter can be identified in Table 1. It is observed that the
multi-resolution approach (DWT) preserves spectral information better, while the component
substitution approach (IHS) gives the highest spatial quality (SCC) for the input MS images. The
resultant fused images are shown in Figure 2.
Signal & Image Processing : An International Journal (SIPIJ) Vol.6, No.5, October 2015
45
Signal & Image Processing : An International Journal (SIPIJ) Vol.6, No.5, October 2015
6. CONCLUSIONS
In this paper, various pansharpening techniques and their classification are discussed. The techniques
are classified based on the approach they use. IHS is a classical CS-based pansharpening technique, and
its major drawback is the spectral distortion it introduces during the pansharpening process. The reason
for this spectral distortion appears to be the large radiometric difference between the I and PAN bands.
It could be reduced by computing a high spatial resolution intensity image I, which would ultimately
reduce the difference between the I and PAN bands. In PCA, the first principal component of the image
is replaced with the histogram-matched panchromatic (PAN) image. The first principal component has
the largest variance and therefore contains most of the information. The remaining principal components
carry band-specific information and are kept unaltered. One possible research issue is the optimal
replacement of principal component(s) with the PAN image. In the spectral contribution-based approach,
the Brovey transform (BT) works well when the image contains three bands and preserves spectral
information in the resultant pansharpened image. In statistical methods, it is desirable to estimate an
accurate model for the relationship between the pansharpened image and the input MS and PAN images.
It is observed that the multi-resolution based pansharpening approach generates better results.
ACKNOWLEDGEMENT
The author is thankful to Prof. Tanish Zaveri, Associate Professor, Nirma University, for his support in
collecting the datasets.
REFERENCES
[1]
[2]
Signal & Image Processing : An International Journal (SIPIJ) Vol.6, No.5, October 2015
[12] C. Pohl, J. L. van Genderen, "Multi-sensor image fusion in remote sensing: Concepts, methods, and
     applications," International Journal of Remote Sensing, 19(5), 823-854, 1998.
[13] R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, 3rd edn,
     Orlando, FL: Academic, 1997.
[14] T. Ranchin, L. Wald, "Fusion of high spatial and spectral resolution images: The ARSIS concept
     and its implementation," Photogrammetric Engineering & Remote Sensing, 66, 49-61, 2000.
[15] V. K. Shettigara, "A generalized component substitution technique for spatial enhancement of
     multispectral images using a higher resolution data set," Photogrammetric Engineering & Remote
     Sensing, 58(5), 561-567, 1992.
[16] W. Dou, Y. Chen, X. Li, D. Z. Sui, "A general framework for component substitution image fusion:
     An implementation using the fast image fusion method," Computers & Geosciences, 33, 219-228,
     2007. doi:10.1016/j.cageo.2006.06.008
[17] T. M. Tu, S. C. Su, H. C. Shyu, P. S. Huang, "A new look at IHS-like image fusion methods,"
     Information Fusion, 2(3), 177-186, 2001.
[18] S. Rahmani, M. Strait, D. Merkurjev, M. Moeller, T. Wittman, "An adaptive IHS pan-sharpening
     method," IEEE Geoscience and Remote Sensing Letters, 2010.
[19] V. P. Shah, N. H. Younan, R. L. King, "An efficient pan-sharpening method via a combined adaptive
     PCA approach and contourlets," IEEE Transactions on Geoscience and Remote Sensing, 46(5),
     1323-1335, 2008.
[20] L. I. Smith, "A tutorial on principal components analysis," Technical report,
     http://www.cs.otago.ac.nz/cosc453/studenttutorials/principal components.pdf
[21] S. Zebhi, M. R. Agha, M. T. Sadeghi, "Image fusion using PCA in CS domain," Signal & Image
     Processing: An International Journal (SIPIJ), 3(4), August 2012.
[22] A. Medina, J. Marcello, F. Eugenio, "Evaluation of spatial and spectral effectiveness of pixel-level
     fusion techniques," IEEE Geoscience and Remote Sensing Letters, 2012.
[23] M. Dehghani, "Wavelet-based image fusion using the à trous algorithm," Technical report, Map
     India Conference Poster Session.
[24] V. Vijayaraj, C. G. O'Hara, N. H. Younan, "Quality analysis of pansharpened images," Proc. IEEE
     International Geoscience and Remote Sensing Symposium (IGARSS'04), 1, 20-24, 2004.
[25] J. C. Price, "Combining multispectral data of differing spatial resolution," IEEE Transactions on
     Geoscience and Remote Sensing, 37(3), 1199-1203, May 1999.
[26] J. Park, M. Kang, "Spatially adaptive multi-resolution multispectral image fusion," International
     Journal of Remote Sensing, 25(23), 5491-5508, 2004.
[27] D. Fasbender, J. Radoux, P. Bogaert, "Bayesian data fusion for adaptable image pansharpening,"
     IEEE Transactions on Geoscience and Remote Sensing, 46, 1847-1857, 2008.
[28] F. Palsson, J. R. Sveinsson, M. O. Ulfarsson, "A new pansharpening algorithm based on total
     variation," IEEE Geoscience and Remote Sensing Letters, 2013.
[29] S. G. Mallat, "A theory for multiresolution signal decomposition: The wavelet representation,"
     IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7), 674-693, 1989.
[30] A. L. da Cunha, J. Zhou, M. N. Do, "The nonsubsampled contourlet transform: Theory, design, and
     applications," IEEE Transactions on Image Processing, 15(10), 3089-3101, 2006.
[31] P. J. Burt, E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on
     Communications, COM-31(4), 532-540, 1983.
[32] M. González-Audícana, X. Otazu, "Comparison between Mallat's and the 'à trous' discrete wavelet
     transform based algorithms for the fusion of multispectral and panchromatic images," International
     Journal of Remote Sensing, 26(3), 595-614, 2005.
[33] M. N. Do, M. Vetterli, "The contourlet transform: An efficient directional multiresolution image
     representation," IEEE Transactions on Image Processing, 14(12), 2091-2106, 2005.
[34] H. Yin, S. Li, L. Fang, "Simultaneous image fusion and super-resolution using sparse
     representation," Information Fusion, Elsevier, 2012.
[35] Y. Zhang, "Theory of compressive sensing via l1-minimization: A non-RIP analysis and extensions,"
     Department of Computational and Applied Mathematics, Rice University, technical report.
[36] L. Wald, T. Ranchin, M. Mangolini, "Fusion of satellite images of different spatial resolutions:
     Assessing the quality of resulting images," Photogrammetric Engineering & Remote Sensing, 63,
     691-699, 1997.
[37] Z. Wang, A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters, 9(3),
     81-84, 2002.
[38] J. Zhou, D. L. Civco, J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT
     panchromatic data," International Journal of Remote Sensing, 19(4), 743-757, 1998.
[39] P. S. Pradhan, R. L. King, N. H. Younan, D. W. Holcomb, "Estimation of the number of
     decomposition levels for a wavelet-based multiresolution multisensor image fusion," IEEE
     Transactions on Geoscience and Remote Sensing, 44(12), 3674-3686, 2006.