End-to-End Image Steganography Using Deep Convolutional Autoencoders
ABSTRACT Image steganography hides a secret image inside a cover image in plain sight. Traditionally, the secret data are converted into binary bits and the cover image is manipulated statistically to embed them. Overloading the cover image may introduce distortions that make the secret information visible, so the hiding capacity of traditional methods is limited. In this paper, a light-weight yet simple deep convolutional autoencoder architecture is proposed both to embed a secret image inside a cover image and to extract the embedded secret image from the stego image. The proposed method is evaluated on three datasets - COCO, CelebA and ImageNet. Peak Signal-to-Noise Ratio (PSNR), hiding capacity and imperceptibility results on the test set are used to measure the performance. The proposed method has also been evaluated on standard test images, including Lena, airplane, baboon and peppers, and compared against traditional image steganography methods. The experimental results demonstrate that the proposed method achieves higher hiding capacity, security and robustness, and imperceptibility than other deep learning image steganography methods.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
VOLUME 9, 2021 135585
N. Subramanian et al.: End-to-End Image Steganography Using Deep Convolutional Autoencoders
FIGURE 1. Overall workflow of the proposed method. The sending end consists of the preprocessing module and the embedding network. The preprocessing module extracts features from the cover image and the secret image and merges them using the concatenation layer. The embedding network reconstructs the stego image from the fused features. Finally, the extraction network at the receiving end extracts the secret image from the stego image.
should communicate in a way that is understandable only to them. For Eve, it should look like a normal message [1].
Image steganography has attracted significant interest from the research community, resulting in considerable enhancements in line with the recent advances of digital media technologies. Initially, traditional methods such as Least Significant Bit (LSB) substitution, Pixel Value Differencing (PVD) and Discrete Wavelet Transformation (DWT) were used for hiding the secret information inside a carrier image by exploiting the pixel values of the cover image [2]. However, the hiding capacity of the traditional methods is limited, and exceeding it can lead to the exposure of the presence of the secret media. Deep learning is another paradigm which is extensively used in computer vision applications. Deep learning methods, including autoencoders and generative adversarial networks (GANs), have been adapted and employed in image steganography. These methods have increased the hiding capacity, security and robustness compared to traditional methods. Reversible image steganography is a technique where steganography and steganalysis are performed together [3]. Reversible image steganography using deep learning has further increased hiding capacity, security and robustness. In this paper, an autoencoder-decoder architecture is proposed for performing steganography and steganalysis. The main aim of this paper is to design a reversible end-to-end image steganography model. The contributions of this work are listed as follows:
• A simple and light-weight autoencoder-decoder architecture is used for embedding and extracting the secret image inside the cover image.
• Instead of concatenating the raw images, features extracted from the cover image and the secret image are concatenated. This reduces the amount of redundant data in the raw images and uses only the most important features of the images.
• A customized loss function is designed and exchanged as feedback to the training model to optimize the learning function.

II. RELATED WORKS
Steganography is not a new topic and has been in existence since 440 BC. It has historically evolved from shaving the heads of subordinates and engraving secret messages with invisible inks during the world wars, to digital steganography in recent times. Statistical methods, including LSB substitution and PVD, are used to hide secret messages in cover images. In the LSB method, the secret information is converted into binary bits which are embedded in the least significant bits of the carrier image [4]. It is assumed that the resolution of the cover image is high enough that exploiting the three least significant bits of every byte in the image will not reduce the precision or arouse any suspicion [5], [6]. Another variant of LSB applies Huffman encoding to the secret information before embedding [7]. PVD is another statistical method, which hides a larger number of binary bits of the secret information in the edges of a cover image and a smaller number in the smoother regions [8].
Information hiding on quantum images with modification of the direction technique [9], [10] and local binary patterns [11] are some of the other techniques used. Most of the traditional methods can handle only text as the secret message, with reduced hiding capacity. Reverse engineering attacks are possible because the embedding works by statistically exploiting the cover image. A combination of cryptography and steganography has also been proposed to increase the overall anti-detection property [10]. An integer wavelet transform (IWT) is applied on the cover image and the chaotic
map is used to determine the pixels for embedding the binary bits of the secret message [12]. An inverse integer wavelet transform is then performed to produce the stego image. During extraction, the chaotic map used for embedding is generated again and the secret message is recovered with its help. Similarly, a 3D sine chaotic map is used by Valandar et al. [13]. Framelet transformation is applied on the cover image to identify the transform coefficients in which to embed the secret message bits without causing any visual distortion. A novel method called the Pixel Density Histogram (PDH) has been proposed for halftone images [14], [15].
Deep learning (DL) methods, which have been widely used recently, have emerged as a viable tool in image steganography applications, showing attractive performance and efficiency. Typically, CNN-based and GAN-based architectures are used for performing end-to-end steganography and steganalysis. The autoencoder model is a popular network architecture in image steganography [3], [16]–[19]. Three components are needed for end-to-end image steganography and steganalysis [20], [21]: (i) the preparation of the input images - cover and secret images, (ii) the hiding network for embedding the secret image inside the cover image and (iii) the reveal network for extracting the secret image from the container steganography image. Pixelwise CNN [22] and stylenet [23] are other networks used for the implementation of image steganography. To improve the anti-detection property of the steganography algorithm, the secret image can first be transformed using DCT and then encrypted using Elliptic Curve Cryptography (ECC) [24]. A SegNet architecture is used as the backbone of the hiding and extraction networks. The inputs to the hiding network are the encrypted secret image and the cover image, and the output is the container image. The container is the input to the extraction network, whose output is the encrypted version of the secret image. Finally, the decryption algorithm is applied to the revealed image to obtain the final secret image. A pyramid pooling layer placed between the down-sampling and the up-sampling blocks has been shown, through an ablation study, to increase the performance of the model [25]. Generative Adversarial Networks (GANs) are also widely used in the field of image steganography [26], and various GAN architectures have been proposed; for example, DCGAN [27], [28], WGAN [29], [30] and cycleGAN [31]–[33]. Embedding simulators in the place of steganalyzers have also been implemented [34], [35]. A sender-receiver scheme [36], [37], coverless steganography [3], [38], [39] and cryptography-based schemes [40], [41] are some of the other variations on the implementation of image steganography with a GAN model as the base. A detailed description of the recent advances in image steganography can be found in [2].
The major shortcoming of traditional image steganography methods is the low hiding capacity. Trying to hide more information by tweaking a greater number of bits in the cover may expose the hidden secret information. Since the hiding happens by statistically exploiting the pixel values, steganalysis can easily be performed by reverse engineering. This affects the security of the method. The quality of the extracted secret information may also suffer, affecting the robustness. Yet another drawback is that the medium of the secret communication is mostly text; even when an image is used, it is usually grayscale. Hiding the pixel values of a three-channel secret image inside another three-channel cover image is quite difficult with traditional methods. However, the hiding capacity, which is an issue with most of the traditional methods, can be addressed with deep learning based methods. The capacity is increased at the cost of storage space, memory and computation time because of the complexity of the models. In this paper, a simple autoencoder architecture that can hide a secret image inside a cover image is proposed with increased hiding capacity. In addition, the security and robustness of the proposed method are also higher than those of the traditional methods.

III. PROPOSED METHODOLOGY
The overall workflow of the proposed method is given in figure 1 and consists of three modules - the preprocessing module, the embedding network and the extraction network. The preprocessing module prepares the cover image and secret image for the embedding network to reconstruct the stego image. The purpose of the embedding network is to reconstruct the stego image, which hides the secret image inside the cover one. The extraction network recovers the hidden secret image from the container stego image. The preprocessing module together with the embedding network is placed at the sending end to produce the stego image. The extraction network is deployed at the receiving end to extract the secret image from the stego image. More details on each of the modules are given in the subsections below.
Mathematically, the proposed solution can be expressed as follows. Let c be the cover image and s be the secret image; the preprocessing module produces features f(c) and f(s) for the cover and the secret image. The final output of the preprocessing module is the aggregate of the extracted features, f(c) + f(s). The main aim of the embedding network is to produce a stego image c′ such that c′ ≈ c, and that of the extraction network is to extract a secret image s′ such that s′ ≈ s.

A. PREPROCESSING MODULE
Instead of processing the raw form of the cover and the secret images, features are extracted from them using the preprocessing module. High resolution images often contain redundant data, and extracting the most meaningful features reduces the burden on the embedding network. The input size should be of the format m × m × n, which represents the three dimensions - width, height and depth. The width and height should be of the same size, hence both are represented by m. After a thorough analysis of the existing literature [3], [21], [24], [39], the input size of the cover image is fixed to 256 × 256. The input secret image can be of any size; the preprocessing module resizes the secret image
FIGURE 2. The architecture of the preprocessing module and the embedding network of the proposed method.
to 256 × 256, since the cover image and the secret image should be of the same size. The resize function from the skimage library is used to resize the cover image and the secret image to a fixed size of 256 × 256. Instead of representing the input images as colour gradients, the preprocessing module converts them into useful features that can be used by the embedding network. The preprocessing module consists of one input layer and three convolutional layers with an increasing number of filters. The choice of the number of filters, filter size and stride is purely dependent on the application. The main purpose of the preprocessing module is to extract usable and meaningful features through convolutional layers with different filter sizes. Initially, lower-level local features such as edges are extracted using smaller filter sizes. The filter size is then increased to help the model learn more sophisticated features. The numbers of filters used are 8, 16 and 32. The cover image and the secret image are passed through the preprocessing module in parallel. Finally, a merge layer is designed which concatenates the features extracted from the cover image and the secret image.

B. EMBEDDING NETWORK
The preprocessing module and the embedding network together are designed based on the autoencoder architecture concept. The embedding network along with the preprocessing module has an hourglass structure with an expanding phase and a contracting phase. The autoencoder network takes the input and extracts the features using the encoder part. The latent space in an autoencoder is the feature representation of the input. The decoder part of the autoencoder is used to reconstruct the output image from the latent space. Image steganography applications do not require any dimensionality changes; the latent space should be the combined feature representation of the cover image and the secret image. The embedding network takes the concatenated features from the preprocessing module as input to produce a latent space and reconstructs the stego image (which closely resembles the cover image) from the latent space. Every bit of the secret image is hidden across every available bit of the cover image. The embedding network is designed with two convolutional layers with an increasing number of filters. The latent space at the end of the encoder represents the finer features of both the cover image and the secret image concatenated. The decoder part of the embedding network has five convolutional layers with a decreasing number of filters, since there is no need for any dimensionality change. The numbers of filters in the encoder part are 64 and 128, and the decoder part of the embedding network has 128, 64, 32, 16 and 8 filters. A ReLU activation is added at the end of the convolutional layers to introduce non-linearity by giving the maximum value for positives and 0 for negatives. ReLU is used because it makes the training easier with better performance, as it overcomes the vanishing gradient problem which is common in architectures with multiple layers. ReLU can be given as h(c) = max(0, c). A convolutional layer with 3 filters is placed at the end of the embedding network to convert the 256 × 256 × 8 feature vector into the 256 × 256 × 3 stego image output. Figure 2 shows the architecture of the preprocessing module and the embedding network together.

C. EXTRACTION NETWORK
The extraction network aims to extract the secret image hidden inside the stego image. After conducting controlled experiments, an architecture identical to the embedding
network was found to give the best results in extracting the secret image with minimum information loss. The extraction network has an expanding phase and a contracting phase. The number of filters, filter size, stride and other hyperparameters are fine-tuned based on the experimental results. The architecture which produced the best result is described here. The expanding encoder part of the extraction network has five convolutional layers with an increasing number of filters (8, 16, 32, 64, 128). The decoder part has five convolutional layers with a decreasing number of filters (128, 64, 32, 16, 8). Each layer is designed with a ReLU activation. The decoder of the extraction network is followed by a convolutional layer with 3 filters to construct the extracted secret image. The extraction network architecture is given in figure 3.

D. CUSTOMIZED LOSS FUNCTION
Unlike conventional image reconstruction, the image steganography process requires two input images and two output images. Therefore, a regular loss function may not be suitable for this purpose. A customized loss function is introduced to increase the performance of the architecture. There are two losses to be calculated: the embedding loss and the extraction loss. The embedding loss is calculated between the input cover image and the output stego image produced by the embedding network. The extraction loss is calculated between the input secret image and the secret image extracted by the extraction network. The overall loss is the sum of the embedding and extraction losses.
Let i be the cover image and i′ the reconstructed cover image with the secret image generated by the embedding network. Also, let h be the secret image and h′ the secret image extracted by the extraction network. The loss function has to be customized in such a way that it helps the model optimize the learning function. The loss is a feedback measure given back to the model during training in each epoch, as a measure of how well the model is performing, through back-propagation.
The loss of the embedding network, Lemb, is given by equation 1 and the loss of the extraction network, Lext, is given by equation 2. Finally, the overall loss, L, is calculated using equation 3.

Lemb = |i − i′|  (1)
Lext = |h − h′|  (2)
L = Lemb + α ∗ Lext = |i − i′| + α ∗ |h − h′|  (3)

where α is the error adjustment and is fixed to 0.3. Initial experiments were conducted by varying the value of α over 0.3, 0.6 and 0.9. Increasing the value of α increased the loss, and the value 0.3 produced the optimal loss. The embedding network's loss is given back to the embedding network, and the overall loss is given to the extraction network to minimize the distortions of the extracted secret image.

IV. EXPERIMENTAL SETUP
The experiments were conducted on an ASUS laptop with an Intel Core i7 CPU and an NVIDIA GeForce graphics card. Python 3 with the Keras library on a TensorFlow backend was used throughout the experiments. The Adam optimizer is used and the model was trained for 5 epochs.
Three important factors have to be evaluated when analysing the performance of a steganography model: the hiding capacity, the security and robustness, and the imperceptibility. The hiding capacity is defined as the amount of secret information that can be hidden without distorting the cover image. In other words, this is the capacity per pixel of the steganography model. Since the cover image and the secret image have the same size (256 × 256), the capacity of the proposed method is 1. The hiding capacity can be calculated using equation 4:

capacity = L / (H ∗ W ∗ C)  (4)

where L is the length of the secret information - in this case, the product of the height, width and number of channels of the secret image - and H, W and C represent the height, width and number of channels of the cover image.
Security is the ability to hide the data such that it can only be accessed by authorized users, while robustness is the ability of the model to embed and retrieve the secret media without distortions. The Peak Signal-to-Noise Ratio (PSNR) is a popular metric used to measure the similarity between two images. First, the Mean Squared Error (MSE) is calculated, and the PSNR value is then calculated from the MSE. The equations to calculate the MSE and PSNR are given in equations 5 and 6, respectively:

MSE = Σ_{r,c} [I1(r, c) − I2(r, c)]² / (R ∗ C)  (5)
PSNR = 10 ∗ log10(E² / MSE)  (6)

where I1 and I2 are the two images of size R × C and E is the maximum possible pixel value.
Imperceptibility is a measure used to verify the visibility of the hidden secret message. Image results produced by the proposed model on the test set of each dataset are given to evaluate the imperceptibility.

TABLE 1. Details on the datasets.

Three datasets are used for training and testing the performance of the proposed architecture. Table 1 provides the necessary details on the datasets used. 45000 image pairs from the datasets are taken for training and 5000 image pairs are used for testing. From the training image pairs, 80% are chosen at random for training and 20% for validation.
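As a concrete illustration, the customized loss of equations (1)-(3) can be sketched in plain Python with NumPy. Note this is a sketch, not the authors' code; the paper does not state how the per-pixel absolute differences are reduced to a scalar, so the mean reduction below is an assumption:

```python
import numpy as np

ALPHA = 0.3  # error-adjustment weight; 0.3 reported to give the optimal loss


def embedding_loss(cover, stego):
    """L_emb = |i - i'|: distortion of the stego image w.r.t. the cover (eq. 1)."""
    return np.mean(np.abs(cover - stego))


def extraction_loss(secret, extracted):
    """L_ext = |h - h'|: reconstruction error of the extracted secret (eq. 2)."""
    return np.mean(np.abs(secret - extracted))


def total_loss(cover, stego, secret, extracted, alpha=ALPHA):
    """L = L_emb + alpha * L_ext (eq. 3)."""
    return embedding_loss(cover, stego) + alpha * extraction_loss(secret, extracted)
```

In training, the same expression would be written with the framework's tensor ops so that gradients flow to both networks; the NumPy form above only illustrates the arithmetic.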
TABLE 2. Result comparison between the proposed method and other deep learning methods.
TABLE 3. Result comparison between the proposed method and other traditional methods.
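The metrics behind these comparisons follow equations (4)-(6) of Section IV; a minimal NumPy sketch, assuming 8-bit images so that the peak value E is 255:

```python
import numpy as np


def hiding_capacity(secret_shape, cover_shape):
    """Capacity = L / (H * W * C), eq. (4): secret payload per cover pixel-channel."""
    L = np.prod(secret_shape)  # length of the secret information
    H, W, C = cover_shape
    return L / (H * W * C)


def mse(img1, img2):
    """Mean squared error over all R x C pixel positions, eq. (5)."""
    diff = img1.astype(np.float64) - img2.astype(np.float64)
    return np.mean(diff ** 2)


def psnr(img1, img2, peak=255.0):
    """PSNR = 10 * log10(E^2 / MSE), eq. (6); E is the maximum possible pixel value."""
    return 10.0 * np.log10(peak ** 2 / mse(img1, img2))
```

For equal-sized 256 × 256 × 3 cover and secret images, `hiding_capacity` returns 1, matching the capacity reported for the proposed method.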
FIGURE 5. Critical time analysis on (a) the embedding network, and (b) the extraction network.

TABLE 4. MSE and PSNR values of the proposed method.

... the proposed model, and the PSNR value of the stego image generated for each image is calculated. The average PSNR value is calculated and used for comparison against other ...

VI. CONCLUSION
In this paper, a light-weight but simple architecture is proposed to achieve end-to-end image steganography. The proposed architecture is inspired by the deep convolutional variation of the autoencoder. The whole system comprises the preprocessing module, the embedding network and the extraction network. The preprocessing module prepares the input images for the embedding network to hide the secret image inside the cover image. The extraction network extracts the secret image from the stego image produced by the embedding network. The higher PSNR value of the proposed method shows its higher security and robustness compared to other traditional and deep learning image steganography methods. The capacity of the proposed method is 1, which is the highest when compared with the traditional methods. The proposed method has the upper hand in terms of invisibility as well, and can produce stego images very similar to the input cover image.
ACKNOWLEDGMENT
This work was made possible by NPRP11S-0113-180276 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the author. Open Access funding was provided by the Qatar National Library.

REFERENCES
[1] R. Böhme, Principles of Modern Steganography and Steganalysis. Berlin, Germany: Springer, 2010, pp. 11–77.
[2] N. Subramanian, O. Elharrouss, S. Al-Maadeed, and A. Bouridane, ‘‘Image steganography: A review of the recent advances,’’ IEEE Access, vol. 9, pp. 23409–23423, 2021.
[3] X. Duan, K. Jia, B. Li, D. Guo, E. Zhang, and C. Qin, ‘‘Reversible image steganography scheme based on a U-Net structure,’’ IEEE Access, vol. 7, pp. 9314–9323, 2019.
[4] O. Elharrouss, N. Almaadeed, and S. Al-Maadeed, ‘‘An image steganography approach based on k-least significant bits (k-LSB),’’ in Proc. IEEE Int. Conf. Informat., IoT, Enabling Technol. (ICIoT), Feb. 2020, pp. 131–135.
[5] N. F. Johnson and S. Jajodia, ‘‘Exploring steganography: Seeing the unseen,’’ Computer, vol. 31, no. 2, pp. 26–34, Feb. 1998.
[6] S. Gupta, G. Gujral, and N. Aggarwal, ‘‘Enhanced least significant bit algorithm for image steganography,’’ Int. J. Comput. Eng. Manage., vol. 15, no. 4, pp. 40–42, 2012.
[7] R. Das and T. Tuithung, ‘‘A novel steganography method for image based on Huffman encoding,’’ in Proc. 3rd Nat. Conf. Emerg. Trends Appl. Comput. Sci., Mar. 2012, pp. 14–18.
[8] H.-S. Huang, ‘‘A combined image steganographic method using multi-way pixel-value differencing,’’ in Proc. 6th Int. Conf. Graphic Image Process. (ICGIP), Mar. 2015, pp. 267–271.
[9] S. Wang, J. Sang, X. Song, and X. Niu, ‘‘Least significant qubit (LSQb) information hiding algorithm for quantum image,’’ Measurement, vol. 73, pp. 352–359, Sep. 2015.
[10] N. Patel and S. Meena, ‘‘LSB based image steganography using dynamic key cryptography,’’ in Proc. Int. Conf. Emerg. Trends Commun. Technol. (ETCT), Nov. 2016, pp. 1–5.
[11] A. Qiu, X. Chen, X. Sun, S. Wang, and W. Guo, ‘‘Coverless image steganography method based on feature selection,’’ J. Inf. Hiding Privacy Protection, vol. 1, no. 2, p. 49, 2019.
[12] M. Y. Valandar, P. Ayubi, and M. J. Barani, ‘‘A new transform domain steganography based on modified logistic chaotic map for color images,’’ J. Inf. Secur. Appl., vol. 34, pp. 142–151, Jun. 2017.
[13] M. Y. Valandar, M. J. Barani, P. Ayubi, and M. Aghazadeh, ‘‘An integer wavelet transform image steganography method based on 3D sine chaotic map,’’ Multimedia Tools Appl., vol. 78, no. 8, pp. 9971–9989, Apr. 2019.
[14] W. Lu, Y. Xue, Y. Yeung, H. Liu, J. Huang, and Y. Shi, ‘‘Secure halftone image steganography based on pixel density transition,’’ IEEE Trans. Dependable Secure Comput., vol. 18, no. 3, pp. 1137–1149, May/Jun. 2019.
[15] C. Kim, D. Shin, L. Leng, and C.-N. Yang, ‘‘Separable reversible data hiding in encrypted halftone image,’’ Displays, vol. 55, pp. 71–79, Dec. 2018.
[16] P. Wu, Y. Yang, and X. Li, ‘‘Image-into-image steganography using deep convolutional network,’’ in Proc. 19th Pacific-Rim Conf. Multimedia (PCM), Hefei, China, Sep. 2018, pp. 792–802.
[17] P. Wu, Y. Yang, and X. Li, ‘‘StegNet: Mega image steganography capacity with deep convolutional network,’’ Future Internet, vol. 10, no. 6, p. 54, Jun. 2018.
[18] R. Rahim and S. Nadeem, ‘‘End-to-end trained CNN encoder-decoder networks for image steganography,’’ in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 1–6.
[19] N. Subramanian, O. E. Harrouss, and S. El-Seoud, ‘‘Image steganography using auto encoder-decoder based deep learning method,’’ in Proc. Int. Conf. Interact. Collaborative Blended Learn., Feb. 2021, pp. 520–530.
[20] S. Baluja, ‘‘Hiding images in plain sight: Deep steganography,’’ in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 2069–2079.
[21] R. Zhang, S. Dong, and J. Liu, ‘‘Invisible steganography via generative adversarial networks,’’ Multimedia Tools Appl., vol. 78, no. 7, pp. 8559–8575, Apr. 2019.
[22] K. Yang, K. Chen, W. Zhang, and N. Yu, ‘‘Provably secure generative steganography based on autoregressive model,’’ in Proc. Int. Workshop Digit. Watermarking, South Korea, 2018, pp. 55–68.
[23] Z. Wang, N. Gao, X. Wang, J. Xiang, and G. Liu, ‘‘STNet: A style transformation network for deep image steganography,’’ in Proc. Int. Conf. Neural Inf. Process., China, 2019, pp. 3–14.
[24] X. Duan, D. Guo, N. Liu, B. Li, M. Gou, and C. Qin, ‘‘A new high capacity image steganography method combined with image elliptic curve cryptography and deep neural network,’’ IEEE Access, vol. 8, pp. 25777–25788, 2020.
[25] X. Duan, W. Wang, N. Liu, D. Yue, Z. Xie, and C. Qin, ‘‘StegoPNet: Image steganography with generalization ability based on pyramid pooling module,’’ IEEE Access, vol. 8, pp. 195253–195262, 2020.
[26] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, ‘‘Generative adversarial nets,’’ in Proc. Adv. Neural Inf. Process. Syst., 2014, pp. 2672–2680.
[27] D. Volkhonskiy, I. Nazarov, and E. Burnaev, ‘‘Steganographic generative adversarial networks,’’ in Proc. 12th Int. Conf. Mach. Vis. (ICMV), Jan. 2020, Art. no. 114333.
[28] D. Volkhonskiy, B. Borisenko, and E. Burnaev, ‘‘Generative adversarial networks for image steganography,’’ in Proc. ICRL Conf., France, 2016.
[29] H. Shi, J. Dong, W. Wang, Y. Qian, and X. Zhang, ‘‘SSGAN: Secure steganography based on generative adversarial networks,’’ in Proc. Pacific Rim Conf. Multimedia, China, 2017, pp. 534–544.
[30] H. Shi, X.-Y. Zhang, S. Wang, G. Fu, and J. Tang, ‘‘Synchronized detection and recovery of steganographic messages with adversarial learning,’’ in Proc. Int. Conf. Comput. Sci., Portugal, 2019, pp. 31–43.
[31] P. G. Kuppusamy, K. C. Ramya, S. Sheebha Rani, M. Sivaram, and V. Dhasarathan, ‘‘A novel approach based on modified cycle generative adversarial networks for image steganography,’’ Scalable Comput., Pract. Exper., vol. 21, no. 1, pp. 63–72, Mar. 2020.
[32] R. Meng, Z. Zhou, Q. Cui, X. Sun, and C. Yuan, ‘‘A novel steganography scheme combining coverless information hiding and steganography,’’ J. Inf. Hiding Privacy Protection, vol. 1, no. 1, p. 43, 2019.
[33] C. Chu, A. Zhmoginov, and M. Sandler, ‘‘CycleGAN, a master of steganography,’’ 2017, arXiv:1712.02950.
[34] J. Yang, D. Ruan, J. Huang, X. Kang, and Y.-Q. Shi, ‘‘An embedding cost learning framework using GAN,’’ IEEE Trans. Inf. Forensics Security, vol. 15, pp. 839–851, 2020.
[35] W. Tang, S. Tan, B. Li, and J. Huang, ‘‘Automatic steganographic distortion learning using a generative adversarial network,’’ IEEE Signal Process. Lett., vol. 24, no. 10, pp. 1547–1551, Oct. 2017.
[36] X. Zhao, C. Yang, and F. Liu, ‘‘On the sharing-based model of steganography,’’ in Proc. Int. Workshop Digit. Watermarking, Feb. 2021, pp. 94–105.
[37] J. Zhu, R. Kaplan, J. Johnson, and L. Fei-Fei, ‘‘HiDDeN: Hiding data with deep networks,’’ in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 657–672.
[38] M.-M. Liu, M.-Q. Zhang, J. Liu, Y.-N. Zhang, and Y. Ke, ‘‘Coverless information hiding based on generative adversarial networks,’’ 2017, arXiv:1712.06951.
[39] X. Duan, H. Song, C. Qin, and M. K. Khan, ‘‘Coverless steganography for digital images based on a generative model,’’ Comput., Mater. Continua, vol. 55, no. 3, pp. 483–493, Jul. 2018.
[40] Z. Wang, N. Gao, X. Wang, X. Qu, and L. Li, ‘‘SSteGAN: Self-learning steganography based on generative adversarial networks,’’ in Proc. Int. Conf. Neural Inf. Process., Cambodia, 2018, pp. 253–264.
[41] J. Hayes and G. Danezis, ‘‘Generating steganographic images via adversarial training,’’ in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 1954–1963.
[42] K. A. Zhang, A. Cuesta-Infante, L. Xu, and K. Veeramachaneni, ‘‘SteganoGAN: High capacity image steganography with GANs,’’ 2019, arXiv:1901.03892.
[43] T. Rabie and I. Kamel, ‘‘High-capacity steganography: A global-adaptive-region discrete cosine transform approach,’’ Multimedia Tools Appl., vol. 76, no. 5, pp. 6473–6493, 2017.
[44] C.-C. Lin and P.-F. Shiu, ‘‘High capacity data hiding scheme for DCT-based images,’’ J. Inf. Hiding Multimedia Signal Process., vol. 1, no. 3, pp. 220–240, 2010.
[45] B. Yang, M. Schmucker, W. Funk, C. Busch, and S. Sun, ‘‘Integer DCT-based reversible watermarking for images using companding technique,’’ Proc. SPIE, vol. 5306, pp. 405–415, Jun. 2004.
[46] T. Rabie, I. Kamel, and M. Baziyad, ‘‘Maximizing embedding capacity and stego quality: Curve-fitting in the transform domain,’’ Multimedia Tools Appl., vol. 77, no. 7, pp. 8295–8326, 2018.
[47] R. Nur, ‘‘An approach of securing data using combined cryptography and steganography,’’ Int. J. Math. Sci. Comput., vol. 6, no. 1, pp. 1–9, Feb. 2020.
[48] G. Swain, ‘‘Very high capacity image steganography technique using quotient value differencing and LSB substitution,’’ Arabian J. Sci. Eng., vol. 44, no. 4, pp. 2995–3004, Apr. 2019.
[49] S. K. Ghosal, A. Chatterjee, and R. Sarkar, ‘‘Image steganography based on Kirsch edge detection,’’ Multimedia Syst., vol. 27, pp. 73–78, Feb. 2020.
NANDHINI SUBRAMANIAN (Member, IEEE) received the bachelor's degree in electrical and electronics engineering from the PSG College of Technology, India, and the master's degree in computing from Qatar University, Doha. She is currently working as a Research Assistant with Dr. Somaya Al-Maadeed at Qatar University. Her interests include computer vision, artificial intelligence, machine learning, and cloud computing. She won the first rank (Track-2) in the National-Level Artificial Intelligence Competition (Qatar).

SOMAYA AL-MAADEED (Senior Member, IEEE) received the Ph.D. degree in computer science from Nottingham, U.K., in 2004. She is currently the Coordinator of the Computer Vision and AI Research Group. She enjoys excellent collaboration with national and international institutions and industry. She is the principal investigator of several funded research projects generating approximately five million. She has published extensively in the field of pattern recognition and delivered workshops on teaching programming for undergraduate students. She attended workshops related to higher education strategy, assessment methods, and interactive teaching. In 2015, she was elected as the IEEE Chair for Qatar Section.