
Journal of Survey in Fisheries Sciences 10(4S) 1567-1579 2023

Performance Analysis of Autoencoders in Wireless Communication Systems with Deep Learning Techniques

K. Srinivasa Rao1, R.K. Goswami2, S.V. Rama Rao3, Koteswararao Seelam4
1Professor, ECE, Dhanekula Institute of Engineering and Technology, Vijayawada, A.P.
2Professor, ECE, Gayatri Vidya Parishad College of Engineering for Women, Visakhapatnam, A.P.
3Associate Professor, ECE, NRIIT, Vijayawada, A.P.
4Professor, ECE, Kallam Haranath Reddy Institute of Technology, Guntur, A.P.

Abstract: Wireless experts worldwide have become interested in using Autoencoders (AEs) to model communication systems as an end-to-end reconstruction task. This approach optimizes the transmitter and receiver components simultaneously, offering flexibility and convenience for representing complex channel models. Traditional communication systems rely on conventional models and assumptions that limit their utilization of scarce frequency resources and hinder their ability to adapt to new wireless applications. However, with the rise of Artificial Intelligence, new wireless systems are capable of learning from wireless spectrum data and optimizing their performance. In this paper, the use of deep learning with autoencoders is explored to create an end-to-end communication system that replaces traditional transmitter and receiver activities. The autoencoder architecture effectively addresses channel impairments and enhances overall performance. Simulation results indicate that autoencoders surpass conventional communication systems in terms of Block Error Rate performance, even when facing impairments in the autoencoder's channel layer and using different neural network optimization algorithms.
Index Terms—Deep learning, autoencoders, wireless systems, physical layer, channel estimation.

1. Introduction
The transformational impact of wireless communication and related services on modern digital society cannot be overstated. However, emerging technologies such as smart cities, autonomous vehicles, and remote medical diagnosis pose challenges to traditional communication methods in terms of reliability, flexibility, energy efficiency, latency, and connection density. To meet these challenges, novel architectures, approaches, and algorithms are necessary at all layers of the communication system. In the past decade, machine learning, particularly deep learning, has been widely applied in various fields, including wireless communication [1, 2].
Researchers have investigated the applications of ML algorithms in channel coding, decoding, MIMO detection, and communication systems [3-9]. The communication field has a wealth of expert knowledge in information theory, probability, statistics, and mathematical modelling, with many approaches demonstrated for the physical layer, channel modelling [10], and optimal signalling [11]. The main purpose of a communication system is to send a message, like a stream of bits, accurately from the source to the destination through a channel, using a transmitter and receiver. For optimal performance, the transmitter


and receiver are divided into multiple independent blocks, each responsible for a specific task like channel coding, modulation, demodulation, or channel estimation [11]. While this block-based approach allows for individual optimization and control of each block, it may not always lead to optimal performance. According to [12], the block-based approach is sub-optimal in certain cases. A communication system based on deep learning, however, optimizes the transmitter and receiver jointly, without the separate blocks that follow the traditional design of the communication system [12][13].
In this paper, a novel approach to communication systems that utilizes deep learning is introduced. Instead of employing separate encoding and decoding modules, this approach utilizes an autoencoder, which is a deep neural network comprising an encoder and a decoder. The encoder learns a latent representation of the data, which is subsequently used by the decoder to reconstruct the input data. We suggest using an autoencoder to jointly optimize the communication between the transmitter and receiver, rather than optimizing their individual modules. The proposed design uses a convolutional encoder-decoder that considers channel impairments and optimizes the transmitter and receiver operations jointly for a single-antenna system. We evaluated how well our end-to-end AE performs in terms of block error rate (BLER) on an additive white Gaussian noise (AWGN) channel. The simulation results indicate that the proposed AE-based model has a BLER similar to that of conventional models using modulation methods like BPSK and 16PSK. Furthermore, the study demonstrates that the proposed model has a better BLER compared to previous studies [12], [14], [15]. These findings demonstrate the possibility of using AE-based end-to-end communication systems as a substitute for standard block-based wireless communication systems.
The paper is structured as follows: Section 2 examines relevant literature. Section 3 offers a concise introduction to the AE-based communication system and examines regularization. Section 4 outlines the proposed model, while Section 5 details the simulations and performance evaluation of the implemented AE system. Lastly, Section 6 concludes the paper.

2. Related works
T. O'Shea et al. introduced the idea of employing autoencoders (AEs) in communication systems in their works [12] and [13]. In [12], the authors view the communication system as an AE and propose designing a communication system as an end-to-end reconstruction task, optimizing both the transmitter and receiver components simultaneously in a single process. They utilize a feedforward neural network to replace the functions of the transmitter and receiver. In [13], the primary approach for developing end-to-end radio communication systems is through the utilization of the AE channel. The authors tackle the task of learning as an unsupervised machine learning problem and concentrate on enhancing the reconstruction loss by introducing synthetic impairment layers. They include various regularizing layers that simulate the typical impairments encountered in wireless channels. Additionally, [17] examines an optical wireless


communication system that serves a single user, utilizing AEs. In conditions where the channel response is unknown or not easily modelled, the authors in [19] proposed an extended channel AE model for end-to-end learning. An adversarial approach was used to approximate the channel response and encode information, allowing both tasks to be learned simultaneously over a wide range of channel conditions. The authors demonstrated the effectiveness of this model in an over-the-air system through training and validation. Another study in [20] investigated the impact of optimizers on AE convergence speed for high-mobility and short-coherence channel applications. End-to-end learning has also been applied in molecular and optical communications with promising performance, indicating the potential of deep learning in complex communication scenarios [21, 22].
Furthermore, we assess various channel uses and modulation techniques in our design and the constellations produced by the autoencoder. The findings highlight the effectiveness of optimizing with deep learning techniques in creating innovative methods for wireless communication design.

3. Channel autoencoder in wireless communications
3.1. Conventional wireless communication system
A wireless communication system typically includes three components: a transmitter, channel, and receiver. The transmitter sends a message s, selected from a set of M possible messages s ∈ M = {1, ..., M}, to the receiver through n uses of the channel. The message s is subjected to digital modulation f: M ↦ Rn, resulting in a vector x = f(s) ∈ Rn that is transmitted. This modulation maps the input symbols from a discrete alphabet to complex numbers that indicate points on the constellation diagram. The transmitter imposes power constraints on x, such as an energy constraint ‖x‖₂² ≤ n or an average power constraint E[|xi|²] ≤ 1 for all i. Each message s can be represented using k = log2(M) bits, so the system operates at a communication rate of R = k/n, measured in bits/channel use. The channel introduces distortions to the transmitted symbols. Upon reception, the receiver produces an estimate ŝ of the originally transmitted message s. The Block Error Rate (BLER) Pe can be defined as the probability that ŝ does not match s, as given below in (1).

Pe = (1/M) Σs Pr(ŝ ≠ s | s)    (1)

As shown in Fig. 1, the conventional communication system consists of multiple independent blocks. The source encoder compresses the input data and eliminates redundancy, while the channel encoder adds controlled redundancy to the output of the source encoder. Channel coding, also known as forward error correction, is typically used in wireless communication systems to ensure that the received data is the same as the transmitted data, as wireless links are prone to fading and interference, which can lead to errors. To overcome this, the transmitter adds extra information to the data before sending it, a process called coding. This helps to mitigate the adverse effects of the communication medium. An uncoded communication system, on the other hand, does not include additional information to mask the data being sent. The modulator block changes the characteristics of the signal based on the selected data rate and the signal level received at the receiver,


provided that the modulation method used at the transmitter is adaptable. The channel distorts and weakens the transmitted signal before additional noise is added due to hardware impairments when the signal reaches the receiver. Each communication block at the transmitter prepares the signal to withstand the effects of the communication medium and receiver noise while maximizing system efficiency. The receiver performs similar operations in reverse order to reconstruct the transmitted information.

Fig. 1. A conventional wireless communication system model illustrating channel coding and modulation blocks.

3.2. An End-to-end Optimization Process with Autoencoder
An autoencoder (AE) is a type of Feed-forward Neural Network (FNN) whose input and output are equivalent. The original AE is an unsupervised deep learning algorithm that compresses the inputs to learn a reduced representation, which can be used to reconstruct the original inputs at the output layer [23]. The AE has a hidden layer that represents the input code. The network typically consists of two parts: an encoder function y = f(x) that converts the input x into a compressed form y, and a decoder function r = g(y) that produces a reconstruction r from y. The simplest form of an AE consists of one hidden layer and is defined by two weight matrices W and two bias vectors b:

y = f(x) = s1(W(1)x + b(1)),    (2)
r = g(y) = s2(W(2)y + b(2)),    (3)

where s1 and s2 represent the activation functions, which are generally nonlinear.
Thus, it is possible to view the communication system as an Autoencoder (AE) that aims to reconstruct the transmitted messages at the receiver with minimal error. The encoder and decoder can be seen as performing the functions of the transmitter and receiver, respectively. A typical AE structure that can be utilized for end-to-end learning of a communication system is illustrated in Fig. 2, as proposed by [12]. In this system, the transmitter is represented as a Feedforward Neural Network (FNN) with dense layers and a normalization layer that is set to meet the physical constraints of the transmit vector x.

Fig. 2. Structure of a wireless communication system represented as an autoencoder.


As previously discussed, a conventional communication system is comprised of different blocks for channel coding/decoding and modulation/demodulation functions. In contrast, the AE-based system does not have explicit blocks but aims to optimize the system in an end-to-end process, which aligns with the system parameters such as the input message size, the number of channel uses per message, and transmit signal power constraints. These parameters are utilized to implement the autoencoder models, which are comparable to the standard coded and uncoded communication systems, and their performance is compared over an AWGN channel. The input, encoder layer, channel, and decoder layer are represented as s, enc(s), cha(s), and dec(s), respectively. The AE is trained using the Adam (Adaptive moment estimation) optimizer to produce an output dec(cha(enc(s))) that minimizes an arbitrary loss function L(s, dec(cha(enc(s)))) [24].
The constellation diagrams produced by a single-antenna autoencoder system are not predetermined, but are instead learned based on the desired performance metric to be minimized at the receiver, such as symbol error rate, coherence time, distance, and propagation loss. The transmitter's hardware imposes specific limitations as outlined in reference [25]. The transmitter enforces the constraints mentioned below:
(a) an energy constraint ‖x‖₂² ≤ n,
(b) an amplitude constraint |xi| ≤ 1 ∀i,
(c) an average power constraint E[|xi|²] ≤ 1 ∀i on x.
The data rate of the system is calculated using the formula R = k/n [bit/channel use]. The parameter k represents the number of input bits, and is equal to log2(M), where M is the number of possible messages that can be sent. The parameter n includes both the input bits and additional redundant bits used to reduce channel effects. The (n, k) notation indicates that the system sends one message from M possible messages (k bits) over n channel uses. Fig. 2 shows a block diagram of the channel autoencoder, which learns from the distribution of the communication channel data to compensate for impairments. The communication channel is defined by the conditional probability density p(y|x), where y ∈ Rn represents the signal at the receiver. The message is detected as y at the receiver, and the operation r: Rn → M is applied to estimate the value of the transmitted message s. The channel autoencoder is optimized to map x to y, which allows s to be recovered by minimizing the probability of error. The autoencoder components are summarized as follows:
a) Input: The symbol s is transformed into a one-hot vector, meaning that it can only have valid combinations of values where one bit is set to '1' and all the others are set to '0'. This particular encoding enables a state machine to operate at a faster clock rate compared to other encodings. Moreover, determining the state of a one-hot vector requires accessing only one flip-flop, which has a low and consistent cost.
b) Transmitter: The transmitter consists of a feedforward neural network which has several dense layers. The output of the last dense layer is modified to represent two complex numbers for every modulated input


symbol. These numbers represent the real (in-phase, I) and imaginary (quadrature, Q) parts. The normalization layer is added to guarantee that the physical restrictions on x are satisfied.
c) Channel: The channel layer is fixed and cannot be trained. It is modelled as a layer of Additive White Gaussian Noise (AWGN) that is added to the signal. The variance of this noise is determined by a parameter β = (2R·Eb/N0)^(-1), which is calculated using the ratio of energy per bit (Eb) to noise power spectral density (N0). The value of β changes for each training example, and the noise is only applied during the forward pass to simulate signal distortion; it is not considered during the backward pass.
d) Receiver: Similar to the transmitter, it is constructed using a Fully Connected Neural Network (FNN). Its final layer employs the softmax activation function to generate a probability vector p ∈ (0, 1)^M representing all potential messages. The value in p with the greatest probability is designated as ŝ.
e) Training: The Adaptive Moment (Adam) optimizer is used to train the autoencoder and modify the weights of the FNN, and the performance is evaluated. The training batch comprises all potential messages s ∈ M.

4. Simulation Results and Performance Evaluation
The autoencoder operates by utilizing the data created during transmission and the identical data at the receiving end. As the data used is not externally labelled, the autoencoder is classified as an unsupervised learning system. This approach enables the autoencoder to acquire knowledge without any prior information. The input message is represented as a vector with only one element being "1" and the rest being "0". This is known as a one-hot vector. The channel through which the message passes is an Additive White Gaussian Noise (AWGN) channel. The AWGN channel adds noise to the message in order to achieve a specific energy per bit to noise power density ratio. In [26], researchers presented a (7,4) autoencoder network that utilizes energy normalization and a training Eb/N0 of 3 dB. To achieve optimal results with minimal complexity, both the encoder (transmitter) and the decoder (receiver) consist of two fully connected layers. The input layer (featureInputLayer) accepts a one-hot vector of length M. The first fully connected layer of the encoder has M inputs and M outputs, followed by a ReLU layer. The second fully connected layer has M inputs and n outputs, followed by a normalization layer. After the encoder layers, the AWGN channel layer is applied. The channel's output is then fed into the decoder layers, starting with a fully connected layer that has n inputs and M outputs, followed by a ReLU layer. The second fully connected layer has M inputs and M outputs, followed by a softmax layer (softmaxLayer), which produces the probability of each of the M symbols. Finally, the classification layer determines the most likely transmitted symbol from 0 to M-1.
A (2,2) autoencoder is trained using the specific parameters below, including energy normalization:
• Adam (adaptive moment estimation) optimizer,
• Initial learning rate of 0.01,


• Maximum epochs of 15,
• Minibatch size of 20*M,
• Piecewise learning schedule with drop period of 10 and drop factor of 0.1.
The Adaptive Moment Estimation (Adam) optimizer algorithm can be used to train the parameters of the autoencoder to minimize the reconstruction error of the received signal. The Adam optimizer uses a combination of momentum and an adaptive learning rate to converge to the minimum of the cost function efficiently. The cost function in the case of an autoencoder is the reconstruction error between the input and output signal.
During training, the Adam optimizer algorithm updates the parameters of the autoencoder using the following steps:
1. Initialize the first and second moment estimates: initial first moment vector m0 = 0 and initial second moment vector v0 = 0.
2. Forward pass through the autoencoder to obtain the output signal.
3. Compute the reconstruction error between the input and output signal.
4. Compute the gradient of the cost function with respect to the parameters: gt = ∇θ J(θt).
5. Update the first moment estimate: mt = β1·mt−1 + (1 − β1)·gt, where β1 is the exponential decay rate for the first moment estimate, typically set to 0.9.
6. Update the second moment estimate: vt = β2·vt−1 + (1 − β2)·gt², where β2 is the exponential decay rate for the second moment estimate, typically set to 0.999.
7. Compute the bias-corrected first moment estimate m̂t = mt/(1 − β1^t) and second moment estimate v̂t = vt/(1 − β2^t), where t is the current iteration step, and β1 and β2 are the exponential decay rates defined above.
In addition to the standard hyperparameters used in the Adam optimizer, such as the learning rate and the exponential decay rates for the first and second moment estimates, there are additional hyperparameters that are specific to the application of autoencoders for wireless communication. These include the Signal-to-Noise Ratio (SNR) and the batch size.
In Fig. 3, the training process at a noise level of 3 dB is illustrated; it can be observed that the validation accuracy quickly surpasses 90%, while the validation loss consistently decreases. This pattern indicates that the training Eb/N0 was set low enough to produce some errors but not so low as to prevent convergence.

Fig. 3. The training process plot.
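The update steps above can be sketched in NumPy on a toy reconstruction objective. Note that the listing stops at bias correction, so the final parameter update used here, θ ← θ − α·m̂t/(√v̂t + ε), is the standard Adam step rather than one stated in the text; the toy target and learning settings are illustrative assumptions.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam iteration following steps 5-7 above, closed with the
    standard update theta <- theta - lr * m_hat / (sqrt(v_hat) + eps)."""
    m = beta1 * m + (1 - beta1) * grad        # step 5: first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # step 6: second moment estimate
    m_hat = m / (1 - beta1 ** t)              # step 7: bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy "reconstruction" objective: J(theta) = 0.5 * ||theta - target||^2,
# whose gradient is simply theta - target (standing in for steps 2-4).
target = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
m, v = np.zeros(3), np.zeros(3)               # step 1: initialize moments
for t in range(1, 5001):
    grad = theta - target
    theta, m, v = adam_step(theta, grad, m, v, t)
```

After the loop, theta has moved from the origin to the neighbourhood of the target, illustrating how the same update rule drives the autoencoder's weights toward minimum reconstruction error.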


Fig. 4 displays the layer diagrams of the complete autoencoder, including its encoder and decoder networks, represented by the objects generated from the trained network. The encoder network is also known as the transmitter, while the decoder network is known as the receiver.

Fig. 4. The objects generated by the trained network.

The constellation learned by the autoencoder to send symbols through the AWGN channel was plotted together with the received constellation. For a (2,2) configuration, the autoencoder learned a QPSK (M = 2² = 4) constellation with a phase rotation. The received constellation was essentially the activation values at the output of the channel layer, obtained using the activation function and treated as interleaved complex numbers.

Fig. 5. The constellation diagrams produced by the Autoencoder.

The Block Error Rate (BLER) performance of the (2,2) autoencoder was simulated by generating random integers in the range [0, 1] to represent random information bits. These information bits were then encoded into complex symbols. The real-valued vector was mapped into a complex-valued vector such that the odd and even elements were mapped into the in-phase and quadrature components of a complex symbol, respectively; the array was treated as an interleaved complex array. The encoded complex symbols were passed through an AWGN channel to simulate channel impairment. The channel-impaired complex symbols were then decoded, and the simulation was run at each point until at least 10 block errors occurred, in order to compare the results with those of an uncoded QPSK system with block length 2.

Fig. 6. The BLER plot for QPSK (2,2) and AE (2,2).

From the well-formed constellation and the BLER results in Fig. 6, it can be inferred that training for 15 epochs was enough to get a satisfactory


convergence. Learned constellations of several autoencoders, normalized to unit energy and unit average power, have also been generated and are shown in Fig. 7. It is to be noted that the (2,4) autoencoder was normalized to unit energy.

Fig. 7. Comparisons of constellation diagrams of several autoencoders.

The (2,2) autoencoder has been trained to reach a convergence point on a QPSK constellation, which has a phase shift that is optimal for the encountered channel conditions. On the other hand, the (2,4) autoencoder with energy normalization converges on a 16PSK constellation with a phase shift. It is important to note that energy normalization is used to ensure that every symbol has the same energy and is placed on the unit circle. Under this constraint, the best constellation is a PSK constellation with equal angular distance between symbols. Finally, the (2,4) autoencoder with average power normalization converges to a three-tier constellation consisting of 1-6-9 symbols. The BLER performance of a (7,4) autoencoder was simulated and compared with that of a (7,4) Hamming code with QPSK modulation for both hard-decision and maximum likelihood (ML) decoding. An uncoded (4,4) QPSK was used as a baseline. The (4,4) uncoded QPSK was essentially a PSK-modulated system that sent blocks of 4 bits and measured BLER.

Fig. 8. (7,4) Autoencoder BLER performance comparison.

Subsequently, the BLER performance of autoencoders with R = 1 was simulated and compared with that of uncoded QPSK systems. The uncoded (2,2) and (8,8) QPSK were used as baselines. The BLER performance of these systems was compared with that of (2,2), (4,4) and (8,8) autoencoders.

Fig. 9. Autoencoder R = 1 BLER performance comparison.
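The uncoded QPSK baselines used in these comparisons can be reproduced with a simple Monte-Carlo simulation. The sketch below follows the procedure described for Fig. 6 (random bits mapped to interleaved I/Q components, AWGN impairment, hard decisions per block), but it is illustrative only: a fixed number of blocks replaces the 10-block-error stopping rule, and all parameter values are assumptions.

```python
import numpy as np

def qpsk_bler(EbN0_dB, n_blocks=20000, block_len=2, seed=1):
    """Monte-Carlo BLER of uncoded QPSK over AWGN for blocks of
    block_len symbols (2 bits per symbol)."""
    rng = np.random.default_rng(seed)
    k = 2 * block_len                                   # bits per block
    bits = rng.integers(0, 2, size=(n_blocks, k))
    # Odd/even bit positions -> in-phase and quadrature components (+/-1)/sqrt(2).
    sym = ((2 * bits[:, 0::2] - 1) + 1j * (2 * bits[:, 1::2] - 1)) / np.sqrt(2)
    EbN0 = 10 ** (EbN0_dB / 10)
    sigma = np.sqrt(1 / (4 * EbN0))                     # per-dimension noise std (Es = 1, Eb = 1/2)
    noise = sigma * (rng.standard_normal(sym.shape)
                     + 1j * rng.standard_normal(sym.shape))
    y = sym + noise                                     # AWGN channel impairment
    bits_hat = np.empty_like(bits)
    bits_hat[:, 0::2] = (y.real > 0).astype(int)        # hard decision on I
    bits_hat[:, 1::2] = (y.imag > 0).astype(int)        # hard decision on Q
    return np.any(bits_hat != bits, axis=1).mean()      # fraction of blocks in error
```

For a 4-bit block the simulated values track the standard theory, BLER = 1 − (1 − BER)^4 with BER = Q(√(2Eb/N0)), which gives roughly 0.05 at Eb/N0 = 4 dB.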


The bit error rate of QPSK was found to be the same for both the (8,8) and (2,2) cases; however, the BLER was observed to depend on the block length n, and became worse as n increased, in accordance with the relationship BLER = 1 − (1 − BER)^n. As expected, the BLER performance of (8,8) QPSK was observed to be worse than that of the (2,2) QPSK system. It was also observed that the BLER performance of the (2,2) autoencoder matched the BLER performance of (2,2) QPSK. On the other hand, the (4,4) and (8,8) autoencoders were found to optimize the channel coder and the constellation jointly in order to obtain a coding gain in comparison to the corresponding uncoded QPSK systems.

Fig. 10. Autoencoder with Hamming code performance comparison.

The (7,4) autoencoder was trained with energy normalization under different Eb/N0 values and the BLER performance has been compared. The BLER performance has also been plotted in Fig. 10, together with the theoretical upper bound for the hard-decision decoded Hamming (7,4) code and the simulated BLER of the Maximum Likelihood Decoded (MLD) Hamming (7,4) code. As the Eb/N0 decreased from 10 dB to 1 dB, the BLER performance of the (7,4) autoencoder was observed to get closer to that of the Hamming (7,4) code with MLD, and at that point it almost matched the MLD Hamming (7,4) code. This is a quite significant result, as it establishes the possibility of learning joint coding and modulation schemes by autoencoders in an unsupervised manner.

5. Conclusion and Future scope
In this paper, the use of deep learning architectures in optimizing communication systems has been brought out. The authors propose the implementation of an Autoencoder as a transmitter and receiver for the physical layer of communication. Instead of optimizing individual blocks of a conventional communication system, an end-to-end optimization approach has been suggested to minimize the reconstruction loss. The efficacy of this approach has also been demonstrated in capturing channel impairments in single-antenna systems and matching modulation techniques using off-the-shelf DNNs. It has been concluded that autoencoders are capable of designing the end-to-end communication system in an unsupervised manner by learning 'coding and modulation' as one entity. With regards to future work, multiple learning strategies can be explored on the autoencoder side, including different weight initialization, hyperparameter selection, and various emerging autoencoder architectures. Further, additional autoencoders can be utilized to extend this approach to multi-user and multiple-antenna systems. This work can also be applied to specific domains such as satellite communications, backhaul radios, dense urban wireless, and 5G MIMO, amongst others.


References
1. E.A.A. Alaoui, S.C.K. Tekouabou, S. Hartini, Z. Rustam, H. Silkan, S. Agoujil, Improvement in automated diagnosis of soft tissues tumors using machine learning, Big Data Min. Anal. 4 (1) (2021) 33–46, http://dx.doi.org/10.26599/BDMA.2020.9020023.
2. S. Shorewala, A. Ashfaque, R. Sidharth, U. Verma, Weed density and distribution estimation for precision agriculture using semi-supervised learning, IEEE Access 9 (2021) 27971–27986, http://dx.doi.org/10.1109/ACCESS.2021.3057912.
3. M. Varasteh, J. Hoydis, B. Clerckx, Learning to communicate and energize: Modulation, coding, and multiple access designs for wireless information-power transmission, IEEE Trans. Commun. 68 (11) (2020) 6822–6839, http://dx.doi.org/10.1109/TCOMM.2020.3017020.
4. J. Ren, G. Yu, G. Ding, Accelerating DNN training in wireless federated edge learning systems, IEEE J. Sel. Areas Commun. 39 (1) (2021) 219–232, http://dx.doi.org/10.1109/JSAC.2020.3036971.
5. M.E. Morocho-Cayamcela, H. Lee, W. Lim, Machine learning for 5G/B5G mobile and wireless communications: Potential, limitations, and future directions, IEEE Access 7 (2019) 137184–137206.
6. M.E. Morocho Cayamcela, W. Lim, Artificial intelligence in 5G technology: A survey, in: 2018 International Conference on Information and Communication Technology Convergence (ICTC), 2018, pp. 860–865.
7. M.E. Morocho-Cayamcela, H. Lee, W. Lim, Machine learning to improve multihop searching and extended wireless reachability in V2X, IEEE Commun. Lett. (2020) 1.
8. J.N. Njoku, M.E. Morocho-Cayamcela, W. Lim, CGDNet: Efficient hybrid deep learning model for robust automatic modulation recognition, IEEE Netw. Lett. 3 (2) (2021) 47–51, http://dx.doi.org/10.1109/LNET.2021.3057637.
9. J.N. Njoku, M.E. Morocho-Cayamcela, W. Lim, Automatic radar waveform recognition using the Wigner-Ville distribution and AlexNet-SVM, in: Proceedings of the KICS Summer Conference, Pyeongchang, South Korea, 2020, pp. 1–4.
10. P. Popovski, A mathematical view on a communication channel, in: Wireless Connectivity: An Intuitive and Fundamental Guide, Wiley, 2020, pp. 145–173, http://dx.doi.org/10.1002/9781119114963.ch6.
11. F. Farzaneh, A. Fotowat, M. Kamarei, A. Nikoofard, M. Elmi, The amazing world of wireless systems, in: Introduction to Wireless Communication Circuits, River Publishers, 2020, pp. 1–26.
12. T. O'Shea, J. Hoydis, An introduction to deep learning for the physical layer, IEEE Trans. Cogn. Commun. Netw. 3 (4) (2017) 563–575.
13. T.J. O'Shea, K. Karra, T.C. Clancy, Learning to communicate: Channel autoencoders, domain specific regularizers, and attention, in: 2016 IEEE International Symposium on Signal Processing and Information


Technology (ISSPIT 2016), 2017, pp. 1–6.
14. H. Zhang, L. Zhang, Y. Jiang, Overfitting and underfitting analysis for deep learning based end-to-end communication systems, in: 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), 2019, pp. 1–6, http://dx.doi.org/10.1109/WCSP.2019.8927876.
15. L. Liu, T. Lin, Y. Zhou, A deep learning method-based receiver design, in: 2020 IEEE 6th International Conference on Computer and Communications (ICCC), 2020, pp. 975–979, http://dx.doi.org/10.1109/ICCC51575.2020.9344965.
16. L. Liu, Y. Luo, X. Shen, M. Sun, B. Li, β-Dropout: A unified dropout, IEEE Access 7 (2019) 36140–36153, http://dx.doi.org/10.1109/ACCESS.2019.2904881.
17. M. Soltani, W. Fatnassi, A. Aboutaleb, Z. Rezki, A. Bhuyan, P. Titus, Autoencoder based optical wireless communications systems, in: 2018 IEEE Globecom Workshops (GC Wkshps), 2018, pp. 1–6, http://dx.doi.org/10.1109/GLOCOMW.2018.8644104.
18. T.J. O'Shea, T. Erpek, T.C. Clancy, Physical layer deep learning of encodings for the MIMO fading channel, in: 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2017, pp. 76–80, http://dx.doi.org/10.1109/ALLERTON.2017.8262721.
19. T.J. O'Shea, T. Roy, N. West, B.C. Hilburn, Physical layer communications system design over-the-air using adversarial networks, in: 2018 European Signal Processing Conference (EUSIPCO), 2018, pp. 529–532, http://dx.doi.org/10.23919/EUSIPCO.2018.8553233.
20. M.E. Morocho Cayamcela, J.N. Njoku, J. Park, W. Lim, Learning to communicate with autoencoders: Rethinking wireless systems with deep learning, in: 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 2020, pp. 308–311.
21. S. Mohamed, J. Dong, A.R. Junejo, D.C. Zuo, Model-based: End-to-end molecular communication system through deep reinforcement learning auto encoder, IEEE Access 7 (2019) 70279–70286.
22. H. Lee, S.H. Lee, T.Q.S. Quek, I. Lee, Deep learning framework for wireless systems: Applications to optical wireless communications, IEEE Commun. Mag. 57 (3) (2019) 35–41.
23. D. Wu, M. Nekovee, Y. Wang, Deep learning-based autoencoder for m-user wireless interference channel physical layer design, IEEE Access 8 (2020) 174679–174691, http://dx.doi.org/10.1109/ACCESS.2020.3025597.
24. D.J. Ji, J. Park, D.-H. Cho, ConvAE: A new channel autoencoder based on convolutional layers and residual connections, IEEE Commun. Lett. 23 (10) (2019) 1769–1772, http://dx.doi.org/10.1109/lcomm.2019.2930287.


25. T. O'Shea, J. Hoydis, An introduction to deep learning for the physical layer, IEEE Trans. Cogn. Commun. Netw. 3 (4) (2017) 563–575, http://dx.doi.org/10.1109/TCCN.2017.275837.
26. T.J. O'Shea, T. Erpek, T.C. Clancy, Physical layer deep learning of encodings for the MIMO fading channel, in: 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2017, pp. 76–80.

