Intensity Transformation of Images
1 Associate Professor, Department of Biomedical Engineering, Prince Dr. K. Vasudevan College of Engineering and Technology, Chennai 600 127, Tamil Nadu, India
2,3 Undergraduate Students, Department of Biomedical Engineering, Prince Dr. K. Vasudevan College of Engineering and Technology, Chennai 600 127, Tamil Nadu, India
Abstract: Intensity transformation is one of the best-known image-processing techniques in the spatial domain and is widely applied to enhance images across many fields. The visual appearance of an image is generally characterized by two properties: brightness and contrast. Brightness refers to the overall intensity level and is therefore determined by the individual gray-level (intensity) values of the pixels within an image, while contrast refers to the difference in intensity between image regions.
Keywords: Image, Image Processing, Gray Level, Image Intensity, Image Transforms,
Brightness and Contrast Adjustments.
I. Introduction
2.2 Image Negatives – Image negatives are discussed in this article. Assume that an image has intensity levels ranging from 0 to (L-1); typically L = 256 for an 8-bit image. The negative transformation is then described by the expression s = L-1-r, where r is the initial intensity level and s is the final intensity level of a pixel. Applying this mapping to every pixel produces the equivalent of a photographic negative.
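As a sketch, the negative transformation s = L-1-r can be applied to an entire 8-bit image with NumPy (the small sample array below is an illustrative placeholder, not data from this article):

```python
import numpy as np

L = 256
# A small synthetic 8-bit grayscale "image" (values chosen for illustration).
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# s = L - 1 - r, applied to every pixel at once.
# The cast to int32 avoids uint8 wrap-around during the subtraction.
negative = (L - 1 - img.astype(np.int32)).astype(np.uint8)
print(negative)  # [[255 191]
                 #  [127   0]]
```

For uint8 images, the shorthand 255 - img gives the same result, since 255 fits in a uint8 and the subtraction cannot underflow.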
III. Proposed Work
Gamma correction is important for displaying images correctly on a screen: it prevents images from appearing washed out or overly dark on monitors with different display settings. The correction is needed because the human eye perceives brightness nonlinearly, following an approximately power-law ("gamma") curve, whereas camera sensors record light in a linear fashion. The transformation has the form s = c·r^γ, where r is the normalized input intensity, s is the output intensity, and c is a scaling constant.
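A minimal NumPy sketch of this transformation follows (the helper function, its name, and the small test array are illustrative choices, not the article's original code):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Apply s = 255 * (r / 255) ** gamma to an 8-bit image array."""
    normalized = img.astype(np.float64) / 255.0   # r scaled into [0, 1]
    corrected = 255.0 * normalized ** gamma       # s = c * r^gamma with c = 255
    return np.clip(np.rint(corrected), 0, 255).astype(np.uint8)

# Example on a small synthetic grayscale image.
img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(gamma_correct(img, 2.2))  # mid-tones darker than the input
print(gamma_correct(img, 0.5))  # mid-tones lighter than the input
```

In practice the same mapping is often precomputed into a 256-entry lookup table and applied with cv2.LUT, which is much faster than transforming each pixel with floating-point arithmetic.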
Gamma-corrected outputs were generated for gamma values of 0.1, 0.5, and 2.2 (output images omitted).
As can be observed from the outputs as well as the graph, for gamma > 1 (the curves labelled 'nth power' on the graph) the intensity of the pixels decreases, i.e. the image becomes darker. Conversely, for gamma < 1 (the curves labelled 'nth root' on the graph), the intensity increases, i.e. the image becomes lighter.
Contrast stretching is a piecewise-linear transformation defined by the parameter pairs (r1, s1) and (r2, s2). The function stretches the intensity range by decreasing the intensity of the dark pixels and increasing the intensity of the light pixels. If r1 = s1 = 0 and r2 = s2 = L - 1, the function reduces to the identity line (shown dotted in the graph), which leaves the image unchanged. The function is kept monotonically increasing so that the ordering of intensity levels between pixels is preserved. Below is the Python code to perform contrast stretching.
import cv2
import numpy as np

# Define parameters (the two control points of the piecewise-linear map).
r1 = 70
s1 = 0
r2 = 140
s2 = 255

# Map every pixel through the segments (0, 0) -> (r1, s1) -> (r2, s2) -> (255, 255).
# 'input.jpg' is a placeholder for the input image file.
img = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)
stretched = np.interp(img, [0, r1, r2, 255], [0, s1, s2, 255]).astype(np.uint8)
cv2.imwrite('contrast_stretched.jpg', stretched)
Output: (contrast-stretched image omitted)
V. Conclusions
Basic intensity transformations are essential tools in image processing that enable the
adjustment of pixel intensity values in images. These operations serve as the building blocks for
enhancing and manipulating the visual appearance of various types of images, including
grayscale and color ones. In the realm of image processing, basic intensity transformations play
a crucial role in tailoring images to meet specific needs and applications. They are particularly
useful when images have a limited intensity range, and they help to stretch or compress this
range to bring out finer details. These transformations can be used to modify overall image
brightness, create binary images, and even correct the gamma of monitors or cameras. Despite
their simplicity, basic intensity transformations are powerful tools for image enhancement and
manipulation, making them an integral part of various fields, including computer vision, medical
imaging, and digital photography.
Conflict of Interests
The authors declare that there are no conflicts of interest and no ethical issues regarding the publication of this manuscript.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of
the authors.
Acknowledgements
The authors would like to thank the Trustee and Chairman Dr. K. Vasudevan, M.A., B.Ed., Ph.D., the Treasurer and Vice Chairman Dr. V. Vishnu Karthik, M.D., and the Managing Director Er. V. Prasanna Venkatesh, B.E., M.Sc. (U.K.), as well as the Principal, Vice Principal, Dean, and HODs of Prince Dr. K. Vasudevan College of Engineering and Technology, Chennai, India, colleagues in the Faculty of Information and Communication Engineering (ICE) and the Department of Biomedical Engineering, and the anonymous reviewers and associate editor for their comments that greatly improved the manuscript.