
Nonlinear Dynamic System Identification Using Legendre Neural Network

Jagdish C. Patra and Cedric Bornand

Abstract— We propose a computationally efficient Legendre neural network (LeNN) for identification of nonlinear dynamic systems. Due to its single-layer architecture, the LeNN offers much less computational complexity than that of a multilayer perceptron (MLP). By taking several plant models of increasing complexity and with extensive simulations, we show the superior performance of the LeNN-based plant model in comparison to that of an MLP model in terms of estimated output, mean square error (MSE) and computational complexity, in the presence of additive noise.

I. INTRODUCTION

Accurate identification of a plant is crucial for its control and stability. In many practical applications, e.g., robotics, process control and autonomous systems, obtaining an accurate model of the plant is a challenging task because of its nonlinear and dynamic characteristics and the absence of any a priori knowledge. Since the early 1990s, artificial neural networks (ANNs) have been the subject of numerous investigations because of their biological analogy and their record of success in areas such as machine learning and pattern recognition. The interest of the control community in neural networks increased after the appearance of the pioneering work of [1] and [2]. It has been shown that ANN approaches to control and identification have high potential, especially when the plant information is not known a priori. Among the recently developed intelligent methods for control, ANNs have played a prevailing role in solving problems whose solutions would be hard to find with classical techniques [1]. Using 3-layer multilayer perceptrons (MLPs), the authors of [1] have shown that these networks are capable of effectively modeling complex dynamical systems. Some other reports on ANN applications to system identification and control can be found in [3], [4], [5].

Besides the MLP, many other ANN architectures have been used for pattern recognition, classification, and system identification and control [6]. One of the major drawbacks of the MLP is that it is computationally intensive and needs a large number of iterations for its training. In order to reduce the computational complexity, Pao et al. introduced the functional-link ANN (FLANN) and showed that it is capable of producing performance similar to that of an MLP but with much less computational cost [7], [8]. A FLANN-based nonlinear dynamic system identification with satisfactory results has been reported [9]. It is shown that the performance of the FLANN is similar to that of an MLP, but with faster convergence and lesser computational complexity. A comprehensive survey of various applications of the FLANN can be found in [10].

Because of its better representation capability, the Chebyshev polynomial-based FLANN, called ChNN, has been proposed for static function approximation [11] and pattern classification tasks [12]. It is shown that the ChNN has the universal approximation capability. The ChNN has been extensively applied to nonlinear dynamic system identification [13], [14], [15], [16], digital communication channel equalization [17], [18], [19] and intelligent sensors [20], with quite satisfactory results. A FLANN-based neurofuzzy network for nonlinear system control [21] and a ChNN-based fuzzy recurrent network for modeling of a servo-mechanism [22] have also been proposed.

Yang and Tseng reported a novel Legendre polynomial-based linear ANN for static function approximation [23]. They have shown that this network has fast convergence and provides high accuracy. Recently, a Legendre polynomial ANN, named LeNN, has been proposed for the channel equalization problem, and its superiority over an MLP-based equalizer has been shown [24], [25].

In this paper, we apply a novel LeNN for identification of nonlinear dynamic systems. By taking several nonlinear system models from [1] and with extensive computer simulation, we show that the LeNN-based model performs as well as the MLP-based model but provides advantages in terms of computational complexity and faster convergence. The rest of this paper is organized as follows. In Section 2 we explain the basic problem of nonlinear system identification. In Section 3 we describe the basic architectures of the FLANN and LeNN and the learning algorithm. In Section 4 we provide details of the modeling examples and the results of the ANN-based models. Finally, conclusions are drawn in the last Section.

J. C. Patra is with the School of Computer Engineering, Nanyang Technological University, Singapore. E-mail: aspatra@ntu.edu.sg.
C. Bornand is with HEIG-VD, University of Applied Sciences, Yverdon-les-Bains, Switzerland. E-mail: Cedric.Bornand@heig-vd.ch.

II. SYSTEM MODELING

The system under study is first characterized by a mathematical model. Let us express the system model by an operator P from an input space U into an output space Y. The objective is to characterize the class 𝒫 to which P belongs. For a given class 𝒫 with P ∈ 𝒫, the identification problem is to determine a subclass 𝒫̂ ⊂ 𝒫 and a P̂ ∈ 𝒫̂ such that P̂ approximates P in some desired sense. In a static system, the spaces U and Y are subsets of R^n and R^m, respectively, whereas in a dynamic system they are assumed to be bounded Lebesgue integrable functions on the interval [0, T] or [0, ∞]. However, in both cases, the operator P is defined implicitly by the specified input-output pairs [1].



Fig. 1. Identification scheme of a dynamic system.

Fig. 2. Schematic diagram of an MLP.

A typical example of identification of a static system is the problem of pattern recognition. By a decision function P, compact input sets U_i ⊂ R^n are mapped into elements y_i ∈ R^m, i = 1, 2, ..., in the output space. The elements of U_i denote the pattern vectors corresponding to class y_i. In a dynamic system, on the other hand, the input-output pairs of the time functions {u(t), y(t)}, t ∈ [0, T], implicitly define the operator P describing the dynamic plant. The main objective in both types of identification is to determine P̂ such that

‖y − ŷ‖ = ‖P(u) − P̂(u)‖ < ε,    (1)

where u ∈ U, ε is some desired small value > 0, and ‖·‖ is a defined norm on the output space. In Eq. (1), P̂ and P denote the output of the identified model and the plant, respectively. The error e = y − ŷ is the difference between the observed plant output and the output generated by the model.

A schematic diagram of identification of a time-invariant, causal, discrete-time plant is shown in Fig. 1. The input and output of the plant are represented by u and P(u), respectively, where u is assumed to be a uniformly bounded function of time. The stability of the plant is assumed, with a known parameterization but with unknown parameter values. The objective is to construct a suitable model generating an output P̂(u) which approximates the plant output P(u). In the present study we considered single-input-single-output (SISO) plants, and four models are introduced below:

• Model 1:
y_p(k+1) = Σ_{i=0}^{n−1} α_i y_p(k−i) + g[u(k), u(k−1), ..., u(k−m+1)]

• Model 2:
y_p(k+1) = f[y_p(k), y_p(k−1), ..., y_p(k−n+1)] + Σ_{i=0}^{m−1} β_i u(k−i)

• Model 3:
y_p(k+1) = f[y_p(k), y_p(k−1), ..., y_p(k−n+1)] + g[u(k), u(k−1), ..., u(k−m+1)]

• Model 4:
y_p(k+1) = f[y_p(k), y_p(k−1), ..., y_p(k−n+1); u(k), u(k−1), ..., u(k−m+1)]

The input and the output of the SISO plant at the kth time instant are denoted by u(k) and y_p(k), respectively, and m ≤ n. In this study two neural networks (MLP and LeNN) have been used to construct the nonlinear functions f and g so as to approximate such mappings over compact sets. It is assumed that the plant is bounded-input-bounded-output (BIBO) stable. In contrast, the stability of the ANN model cannot be assured. Therefore, in order to guarantee that the parameters of the ANN model converge, a series-parallel scheme is utilized. In this scheme the output of the plant, instead of that of the ANN model, is fed back into the model during training of the ANN [1].

III. ANN STRUCTURES FOR SYSTEM IDENTIFICATION

In this Section we briefly describe the MLP and LeNN architectures used for system identification.

A. The MLP

The MLP-based neural network, as shown in Fig. 2, is a multi-layered architecture in which each layer consists of a certain number of nodes or processing units. A 2-layer MLP is denoted as {n_0, n_1, n_2}, where n_0, n_1 and n_2 denote the number of nodes (excluding the bias unit) in the input, hidden and output layers, respectively. The numbers of nodes in the input and output layers are equal to the dimensions of the input and output patterns, respectively. In a fully-connected MLP, each neuron of a lower layer is connected to all nodes of the upper layer through a set of connecting weights. Since no processing is done in the input layer, the output of its nodes is equal to the input pattern itself. In the nodes of the other layers, processing is carried out as follows: the weighted sum of the node outputs of the lower layer is computed and then passed through a continuous nonlinear function (usually a tanh(·) or sigmoid function).

During the training phase, an input pattern is applied to the MLP, and the node outputs of all the layers are computed. The MLP output (the output of the output-layer nodes) is compared with the desired (or target) output to generate an error signal e_k. This error is used to update the weights of the MLP using a learning algorithm, e.g., the back-propagation (BP) or Levenberg-Marquardt (LM) algorithm [6]. Updating of the connection weights is carried out until the MSE reaches a predefined small value.
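To make the training procedure described above concrete, the following minimal sketch (not part of the original paper) implements a fully-connected MLP with one hidden layer, tanh hidden nodes, a linear output node, and BP updates with a momentum term. The class name, the initialization range and the choice of a single hidden layer are illustrative assumptions, not the authors' implementation.

# Minimal sketch of an {n0, n1, n2} MLP trained with BP and momentum,
# assuming the instantaneous cost E_k = 0.5 * e_k^2 used later in the paper.
import numpy as np

class MLP:
    def __init__(self, n0, n1, n2, alpha=0.05, beta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.uniform(-0.5, 0.5, (n1, n0 + 1))   # hidden weights (incl. bias)
        self.W2 = rng.uniform(-0.5, 0.5, (n2, n1 + 1))   # output weights (incl. bias)
        self.alpha, self.beta = alpha, beta               # learning rate, momentum
        self.dW1_prev = np.zeros_like(self.W1)
        self.dW2_prev = np.zeros_like(self.W2)

    def forward(self, x):
        x1 = np.append(x, 1.0)                 # input pattern plus bias unit
        h = np.tanh(self.W1 @ x1)              # hidden-layer outputs
        h1 = np.append(h, 1.0)
        y = self.W2 @ h1                       # linear output node(s)
        return x1, h1, y

    def train_step(self, x, d):
        x1, h1, y = self.forward(x)
        e = d - y                                          # error signal e_k
        g2 = np.outer(e, h1)                               # negative gradient w.r.t. W2
        delta_h = (self.W2[:, :-1].T @ e) * (1.0 - h1[:-1] ** 2)
        g1 = np.outer(delta_h, x1)                         # negative gradient w.r.t. W1
        dW2 = self.alpha * g2 + self.beta * self.dW2_prev  # BP update with momentum
        dW1 = self.alpha * g1 + self.beta * self.dW1_prev
        self.W2 += dW2
        self.W1 += dW1
        self.dW2_prev, self.dW1_prev = dW2, dW1
        return 0.5 * float(e @ e)                          # instantaneous cost E_k

Training would simply call train_step once per input pattern until the MSE over the training set falls below the chosen threshold.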
Fig. 3. Schematic diagram of an LeNN.

B. The Legendre ANN

The structure of the LeNN, shown in Fig. 3, is a single-layer flat structure in which the hidden layers are eliminated by transforming the input pattern to a higher-dimensional space. In the projected higher-dimensional space, the patterns are linearly separable. Due to the absence of a hidden layer, the LeNN provides a computational advantage over the MLP. The pattern enhancement in the LeNN is done using a set of Legendre orthogonal functions, which take an element or the entire pattern as their argument.

The Legendre polynomials are denoted by L_n(x), where n is the order and −1 < x < 1 is the argument of the polynomial. They constitute a set of orthogonal polynomials that are solutions of the differential equation

d/dx [(1 − x²) dy/dx] + n(n + 1) y = 0.    (2)

The zeroth- and first-order Legendre polynomials are given by L_0(x) = 1 and L_1(x) = x, respectively. The higher-order polynomials are

L_2(x) = (1/2)(3x² − 1)
L_3(x) = (1/2)(5x³ − 3x)    (3)
L_4(x) = (1/8)(35x⁴ − 30x² + 3).

The recursive formula to generate the higher-order Legendre polynomials is given by

L_{n+1}(x) = (1/(n+1)) [(2n + 1) x L_n(x) − n L_{n−1}(x)].    (4)

As an example, a 2-dimensional input pattern X = [x_1, x_2]^T is enhanced to a 7-dimensional pattern by the Legendre functional expansion as X_e = [1, L_1(x_1), L_2(x_1), L_3(x_1), L_1(x_2), L_2(x_2), L_3(x_2)]^T.

C. Learning of LeNN

The learning process involves updating the weights of the LeNN in order to minimize a given cost function. A gradient descent algorithm is used for learning, in which the gradient of the cost function with respect to the weights is determined and the weights are incremented by a fraction of the negative gradient at each iteration. The well-known BP algorithm is used to update the weights of the LeNN. Consider an LeNN with a single output node. The goal of the learning algorithm is to minimize E_k, the cost function at the kth instant, given by

E_k = (1/2)[y_k − d_k]² = (1/2) e_k²,    (5)

where d_k is the desired output at the kth instant and e_k (= d_k − y_k) denotes the error term. In each iteration, an input pattern is applied, the output of the LeNN is computed and the error e_k is obtained. The error value is used in the BP algorithm to minimize the cost function until it reaches a pre-defined minimum value.

The weights are updated as follows:

W_{k+1} = W_k + ΔW_k = W_k + (−α ∂E_k/∂W_k),    (6)

where α is a learning parameter which is set between 0 and 1. The gradient of the cost function (5) is given by

∂E_k/∂W = e_k ∂y_k/∂W.    (7)

The update rule for the weight w_{ji} is given by

w_{ji,k+1} = w_{ji,k} + α e_{j,k} ∂y_{j,k}/∂w_{ji}.    (8)

If the nonlinear tanh(·) function is used at the output node, the update rule becomes

w_{ji,k+1} = w_{ji,k} + α e_{j,k} δ_{ji,k},    (9)

where δ_{ji,k} = (1 − y_{j,k}²) φ_i(X) and φ_i(X) = L_i(X). A momentum term β is added to the update rule to improve convergence, as given below:

w_{ji,k+1} = w_{ji,k} + α Δw_{ji,k} + β Δw_{ji,k−1},    (10)

where the momentum factor β is set between 0 and 1.
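The expansion and learning rules above can be summarized in the following minimal sketch (not part of the original paper). The expansion order, the tanh output node and the exact form of the momentum update are assumptions consistent with Eqs. (4)-(10), not the authors' implementation.

# Minimal sketch of the Legendre functional expansion (recurrence of Eq. (4))
# and the single-layer LeNN weight update of Eqs. (5)-(10).
import numpy as np

def legendre_expand(x, order=3):
    """Expand a pattern x (values in [-1, 1]) into [1, L1(x1)..Lorder(x1), L1(x2)..]."""
    feats = [1.0]
    for xi in np.atleast_1d(x):
        L_prev, L_curr = 1.0, xi                      # L0 and L1
        feats.append(L_curr)
        for n in range(1, order):
            L_next = ((2 * n + 1) * xi * L_curr - n * L_prev) / (n + 1)   # Eq. (4)
            feats.append(L_next)
            L_prev, L_curr = L_curr, L_next
    return np.array(feats)

class LeNN:
    def __init__(self, n_feats, alpha=0.1, beta=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(-0.5, 0.5, n_feats)      # single layer of weights
        self.alpha, self.beta = alpha, beta           # learning rate, momentum
        self.dw_prev = np.zeros(n_feats)

    def output(self, phi):
        return np.tanh(self.w @ phi)                  # tanh output node

    def train_step(self, phi, d):
        y = self.output(phi)
        e = d - y                                     # e_k = d_k - y_k
        dw = self.alpha * e * (1.0 - y ** 2) * phi    # Eq. (9): tanh derivative times phi
        self.w += dw + self.beta * self.dw_prev       # Eq. (10): momentum term added
        self.dw_prev = dw
        return 0.5 * e ** 2                           # cost E_k of Eq. (5)

For a 2-dimensional pattern and order 3, legendre_expand returns the 7-dimensional vector X_e quoted above.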
D. Computational Complexity

Let us consider an L-layer MLP with n_l nodes (excluding the threshold unit) in layer l, l = 0, 1, ..., L, where n_0 and n_L are the numbers of nodes in the input and output layers, respectively. An L-layer ANN architecture may be represented by {n_0 − n_1 − ... − n_{L−1} − n_L}. Three basic computations, i.e., addition, multiplication and computation of tanh(·), are involved in updating the weights of the ANN. The computations in the network are due to the following requirements: (i) forward calculations to find the activation values of all the nodes of the entire network; (ii) back-error propagation for the calculation of the squared-error derivatives; and (iii) updating of the weights of the entire network.

The total number of weights to be updated in one iteration in an MLP is given by Σ_{l=0}^{L−1} (n_l + 1) n_{l+1}, whereas in the case of the LeNN it is only (n_0 + 1) n_1. Since there is no hidden layer in the LeNN, its computational requirement is drastically reduced compared to that of an MLP. A comparison of the computational requirements in one iteration of training using the BP algorithm, for the two ANNs, is provided in Table I. In this Table, the numbers of multiplications and additions required to evaluate the Legendre polynomials are not included.

TABLE I
COMPARISON OF COMPUTATIONAL COMPLEXITY

Number of         MLP                            LeNN
Weights           S_1                            n_1 N_0
Additions         3S_2 + 3n_L − n_0 n_1          2 n_1 N_0 + n_1
Multiplications   4S_2 + 3S_3 − n_0 n_1 + 2n_L   3 n_1 N_0 + n_0
tanh(·)           S_3                            n_1

where S_1 = Σ_{l=0}^{L−1} (n_l + 1) n_{l+1}, S_2 = Σ_{l=0}^{L−1} n_l n_{l+1}, S_3 = Σ_{l=1}^{L} n_l, and N_0 = n_0 + 1.

IV. SIMULATION STUDIES

Extensive simulation studies were carried out with several examples of nonlinear dynamic systems. We compared the performance of the proposed LeNN with that of an MLP for the nonlinear dynamic systems reported by Narendra and Parthasarathy [1], using the same MLP architecture {1-20-10-1} as used by them.

During the training phase, a uniformly distributed random signal over the interval [−1, 1] was applied to the plant and the ANN model. White Gaussian noise was added to the input of the plant. As is usually done in adaptive algorithms, the learning parameter α and the momentum factor β in both ANNs were chosen after several trials to obtain the best results. In a similar manner, the functional expansion of the LeNN was carried out. The adaptation continued for 50000 iterations, during which the series-parallel identification scheme was used. Thereafter, adaptation was discontinued. Using the adapted weights, the ANNs were applied for identification of the plants. During the test phase, the effectiveness of the ANN models was analyzed by presenting a sinusoidal signal given by:

u(k) = sin(2πk/250),                              0 < k ≤ 250,
     = 0.8 sin(2πk/250) + 0.2 sin(2πk/25),        k > 250.    (11)

Note that this input was not seen by the ANN models during training, as the ANNs were trained using only random signals. Performance comparison between the MLP and the LeNN was carried out in terms of the output estimated by the ANN models, the actual output of the plant and the modeling error. A standard quantitative measure for performance evaluation is the normalized mean square error (NMSE), defined as [26]:

NMSE = (1/(σ² N_D)) Σ_{k=1}^{N_D} [y(k) − ŷ(k)]²,    (12)

where y(k) and ŷ(k) represent the plant and ANN model outputs at the kth discrete time, respectively, and σ² denotes the variance of the plant output sequence over the test duration N_D. It may be noted that, in the results for all the examples provided below, the ANN model was trained with random signals, whereas testing of the ANNs was carried out by applying the sinusoidal signal (11) to the plant and the model. The results are shown for 600 discrete samples, i.e., N_D = 600.

A. Example 1

We consider a system described by the difference equation of Model 1. The plant is assumed to be of second order and is described by the following difference equation:

y_p(k+1) = 0.3 y_p(k) + 0.6 y_p(k−1) + g[u(k)],    (13)

where the nonlinear function g is unknown, but α_0 = 0.3 and α_1 = 0.6 are assumed to be known. The unknown function g is given by g(u) = 0.6 sin(πu) + 0.3 sin(3πu) + 0.1 sin(5πu). To identify the plant, a series-parallel model was considered, which is governed by the following difference equation:

ŷ_p(k+1) = 0.3 y_p(k) + 0.6 y_p(k−1) + N[u(k)].    (14)

The MLP used for the purpose of identification has a structure of {1−20−10−1}. For the LeNN, the input was expanded to 14 terms using Legendre polynomials. Both α and β in (10) were chosen to be 0.5 in the two ANNs, and a white Gaussian noise of −10 dB was added to the input of the plant. The results of the identification with the sinusoidal signal (11) are shown in Fig. 4. It may be seen from this figure that the identification of the plant is satisfactory for both ANNs. However, the estimation error of the LeNN is found to be less than that of the MLP. The NMSE for the MLP and the LeNN models are found to be −16.69 dB and −25.04 dB, respectively.

B. Example 2

We consider a plant described by the difference equation of Model 2:

y_p(k+1) = f[y_p(k), y_p(k−1)] + u(k).    (15)

It is known a priori that the output of the plant depends only on the past two values of the output and the input of the plant. The unknown function f is given by f(y_1, y_2) = y_1 y_2 (y_1 + 2.5)(y_1 − 1.0)/(1.0 + y_1² + y_2²). The series-parallel scheme used to identify the plant is described by:

ŷ_p(k+1) = N[y_p(k), y_p(k−1)] + u(k).    (16)

An MLP of {2−20−10−1} structure was used. In the LeNN, the 2-dimensional input vector was expanded by the Legendre polynomials to 12 terms.
Fig. 4. Identification of nonlinear plant (Example 1) with test sinusoidal signal and additive noise of −10dB: (a) MLP and (b) LeNN.
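As an illustration of the simulation setup used in Fig. 4 and the other examples, the following minimal sketch (not part of the original paper) generates the test input of Eq. (11), simulates the Example 1 plant of Eq. (13), and evaluates the NMSE of Eq. (12). The random seed, the zero initial conditions and the mapping of the stated −10 dB noise level to a noise standard deviation are illustrative assumptions.

# Minimal sketch of the test signal (11), the NMSE measure (12) and the
# Example 1 plant (13) with the stated nonlinearity g(u).
import numpy as np

def test_input(ND=600):
    k = np.arange(1, ND + 1)
    return np.where(k <= 250,
                    np.sin(2 * np.pi * k / 250),
                    0.8 * np.sin(2 * np.pi * k / 250) + 0.2 * np.sin(2 * np.pi * k / 25))

def nmse_db(y, y_hat):
    """NMSE of Eq. (12), reported in dB as in the paper's tables of results."""
    return 10.0 * np.log10(np.mean((y - y_hat) ** 2) / np.var(y))

def example1_plant(u, noise_std=0.3, seed=0):
    """Second-order plant of Eq. (13); white Gaussian noise added at the plant input.
    noise_std is an assumed stand-in for the -10 dB noise level quoted in the text."""
    rng = np.random.default_rng(seed)
    un = u + noise_std * rng.standard_normal(u.shape)        # noisy plant input
    g = 0.6 * np.sin(np.pi * un) + 0.3 * np.sin(3 * np.pi * un) + 0.1 * np.sin(5 * np.pi * un)
    y = np.zeros_like(u)
    for k in range(2, len(u)):
        y[k] = 0.3 * y[k - 1] + 0.6 * y[k - 2] + g[k - 1]    # Eq. (13), shifted to index k
    return y

Given a trained model's prediction y_hat over the same 600 test samples, nmse_db(example1_plant(test_input()), y_hat) would give a figure comparable to the NMSE values quoted for the examples.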


Fig. 5. Identification of nonlinear plant (Example 2) with test sinusoidal signal and additive noise of −20dB: (a) MLP and (b) LeNN.

A Gaussian noise of −20 dB was added to the input of the plant. For the MLP, the values of α and β were set to 0.05 and 0.10, respectively. In the case of the LeNN, the values of α and β were chosen as 0.009 and 0.01, respectively. After completion of training, the sinusoidal signal (11) was applied to the plant and the ANN models. The results of the identification are shown in Fig. 5. The values of NMSE for the MLP and the LeNN are found to be −18.95 dB and −18.80 dB, respectively. It may be seen that the performance of both ANN models is similar and satisfactory.

C. Example 3

Here, the plant is of Model 3, and is described by the following difference equation:

y_p(k+1) = f[y_p(k)] + g[u(k)],    (17)

where the unknown functions f and g are given by f(y) = y(y + 0.3)/(1.0 + y²) and g(u) = u(u + 0.8)(u − 0.5). The series-parallel model for identification is given by

ŷ_p(k+1) = N_1[y_p(k)] + N_2[u(k)],    (18)

where N_1 and N_2 are the two ANNs used to approximate the two nonlinear functions f and g, respectively.

In the case of the MLP, both N_1 and N_2 were represented by {1−20−10−1} structures, whereas in the case of the LeNN they were represented by a {14−1} structure. To improve the learning process, the output of the plant was scaled down by a scale factor (SF) before applying it to the ANN model.
Fig. 6. Identification of nonlinear plant (Example 3) with test sinusoidal signal and additive noise of −20dB: (a) MLP and (b) LeNN.

Fig. 7. Identification of nonlinear plant (Example 4) with test sinusoidal signal and additive noise of −20dB: (a) MLP and (b) LeNN.

The SF was chosen as 2.0. The learning parameters α and β were chosen as 0.50 and 0.2, respectively, for the MLP, whereas in the case of the LeNN, α and β were selected as 0.25 and 0.95, respectively. A Gaussian noise of −20 dB was added to the input of the plant. The results of the identification are depicted in Fig. 6. The NMSE values are found to be −19.45 dB and −19.87 dB for the MLP and LeNN models, respectively. It may be seen that the LeNN is able to estimate the plant response similarly to the MLP.

D. Example 4

The plant model selected here is the most general of all the examples chosen. It belongs to Model 4 and is described by the following difference equation:

y_p(k+1) = f[y_p(k), y_p(k−1), y_p(k−2), u(k), u(k−1)],    (19)

where the unknown nonlinear function f is given by f[a_1, a_2, a_3, a_4, a_5] = (a_1 a_2 a_3 a_5 (a_3 − 1.0) + a_4)/(1.0 + a_2² + a_3²). The series-parallel model for identification of the plant is given by

ŷ_p(k+1) = N[y_p(k), y_p(k−1), y_p(k−2), u(k), u(k−1)].    (20)

In the case of the MLP and the LeNN, N is represented by {5−20−10−1} and {10−1} structures, respectively. The inputs, the y_p's and u's, were expanded by using Legendre polynomials to 10 terms and used in the LeNN for identification of the plant.
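For concreteness, the following minimal sketch (not from the paper) simulates the Example 4 plant of Eq. (19) and builds the 5-dimensional regressor fed to the series-parallel model of Eq. (20); the driving input and zero initial conditions are illustrative assumptions, and each regressor would subsequently be Legendre-expanded as in Section III-B before being presented to the LeNN.

# Minimal sketch of the Example 4 plant (19) and the series-parallel regressor (20).
import numpy as np

def f4(a1, a2, a3, a4, a5):
    return (a1 * a2 * a3 * a5 * (a3 - 1.0) + a4) / (1.0 + a2 ** 2 + a3 ** 2)

def example4_plant(u):
    """Generate y_p(k+1) = f[y_p(k), y_p(k-1), y_p(k-2), u(k), u(k-1)] from zero initial state."""
    y = np.zeros(len(u))
    for k in range(2, len(u) - 1):
        y[k + 1] = f4(y[k], y[k - 1], y[k - 2], u[k], u[k - 1])
    return y

def regressor(y, u, k):
    """Series-parallel model input at instant k; plant outputs (not model outputs) are fed back."""
    return np.array([y[k], y[k - 1], y[k - 2], u[k], u[k - 1]])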
A Gaussian noise of −20 dB was added to the input of the plant, and an SF of 0.7 was selected. The learning parameter α and the momentum factor β for the MLP model were chosen as 0.01 and 0.10, respectively, whereas in the case of the LeNN, α and β were selected as 0.02 and 0.10, respectively. The outputs of the plant and the ANN models, along with their corresponding errors, are shown in Fig. 7. The NMSE for the MLP and LeNN models were found to be −15.56 dB and −15.53 dB, respectively. It may be observed that the performance of the LeNN is similar to that of the MLP model. From the above simulation results (Figs. 4-7), it may be seen that the outputs of the ANN models agree with those of the plant models satisfactorily. The performance of the MLP and the LeNN is found to be similar. The prime advantage of using the LeNN is that its computational requirement is much less than that of the MLP.

V. CONCLUSIONS

We have proposed a novel single-layer NN structure, named the Legendre neural network, for identification of nonlinear dynamic systems. In the proposed LeNN, the input functional expansion is carried out using Legendre polynomials. In the four models of nonlinear dynamic systems considered in this study, the LeNN is found to be effective in identification of all the systems. The prime advantage of the proposed NN is its reduced computational complexity without compromising performance. Simulation results indicate that the performance of the proposed network is as good as that of an MLP network in the presence of additive noise. Because of the computational advantage of the LeNN, it may be used for other signal and image processing applications.

REFERENCES

[1] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Trans. Neural Networks, vol. 1, no. 1, pp. 4–27, March 1990.
[2] P. J. Antsaklis, "Neural networks for control systems," IEEE Trans. Neural Networks, vol. 1, no. 2, pp. 242–244, June 1990.
[3] A. G. Parlos, K. T. Chong, and A. F. Atiya, "Application of the recurrent multilayer perceptron in modeling of complex process dynamics," IEEE Trans. Neural Networks, vol. 5, pp. 255–266, March 1994.
[4] N. Sadegh, "A perceptron network for functional identification and control of nonlinear systems," IEEE Trans. Neural Networks, vol. 4, no. 6, pp. 982–988, Nov. 1993.
[5] S. S. Ge, T. H. Lee, and C. H. Harris, Adaptive Neural Network Control of Robotic Manipulators, World Scientific Series in Robotics and Intelligent Systems, vol. 19, 1998.
[6] S. Haykin, Neural Networks, 2nd ed. Upper Saddle River, NJ: Prentice Hall, 1999.
[7] Y.-H. Pao, Adaptive Pattern Recognition and Neural Networks. Reading, MA: Addison-Wesley, 1989.
[8] Y.-H. Pao and S. M. Philips, "The functional link net and learning optimal control," Neurocomputing, vol. 9, pp. 149–164, 1995.
[9] J. C. Patra, R. N. Pal, B. N. Chatterji, and G. Panda, "Identification of nonlinear dynamic systems using functional link artificial neural networks," IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 2, pp. 254–262, Apr. 1999.
[10] S. Dehuri and S.-B. Cho, "A comprehensive survey on functional link neural networks and an adaptive PSO-BP learning for CFLNN," Neural Computing and Applications, DOI 10.1007/s00521-009-0288-5, July 2009.
[11] T. T. Lee and J. T. Teng, "The Chebyshev polynomial-based unified model neural network for function approximation," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 28, no. 6, pp. 925–935, Dec. 1998.
[12] A. Namatame and N. Uema, "Pattern classification with Chebyshev neural networks," Neural Networks, vol. 3, pp. 23–31, March 1992.
[13] J. C. Patra and A. C. Kot, "Nonlinear dynamic system identification using Chebyshev functional link artificial neural network," IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 32, pp. 505–511, Aug. 2002.
[14] H. Zhao and J. Zhang, "Nonlinear dynamic system identification using pipelined functional link artificial recurrent neural network," Neurocomputing, vol. 72, pp. 3046–3054, 2009.
[15] S. Purwar, I. N. Kar, and A. N. Jha, "On-line system identification of complex systems using Chebyshev neural networks," Applied Soft Computing, vol. 7, pp. 364–372, 2007.
[16] ——, "Adaptive output feedback tracking control of robot manipulators using position measurements only," Expert Systems with Applications, vol. 34, pp. 2789–2798, 2008.
[17] J. C. Patra, W. B. Poh, N. S. Chaudhari, and A. Das, "Nonlinear channel equalization with QAM signal using Chebyshev artificial neural network," in Proc. IEEE Int. Joint Conf. Neural Networks (IJCNN 2005), Montreal, Canada, Aug. 2005, pp. 3214–3219.
[18] H. Zhao and J. Zhang, "Functional link neural network cascaded with Chebyshev orthogonal polynomial for nonlinear channel equalization," Signal Processing, vol. 88, pp. 1946–1957, 2008.
[19] W. D. Weng, C. S. Yang, and R. C. Lin, "A channel equalizer using reduced decision feedback Chebyshev functional link artificial neural networks," Information Sciences, vol. 177, pp. 2642–2654, 2007.
[20] J. C. Patra, M. Juhola, and P. K. Meher, "Intelligent sensors using computationally efficient Chebyshev neural networks," IET Sci. Meas. Technol., vol. 2, no. 2, pp. 68–75, 2008.
[21] C.-H. Chen, C.-J. Lin, and C.-T. Lin, "A functional-link-based neurofuzzy network for nonlinear system control," IEEE Trans. Fuzzy Systems, vol. 16, no. 5, pp. 1362–1378, Oct. 2008.
[22] Y.-R. Huang, Y. Kang, M.-H. Chu, and Y.-P. Chang, "Modeling belt-servomechanism by Chebyshev functional recurrent neuro-fuzzy network," Journal of Advanced Mechanical Design, Systems, and Manufacturing, vol. 2, no. 5, pp. 949–960, 2008.
[23] S. S. Yang and C.-S. Tseng, "An orthogonal neural network for function approximation," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 26, no. 5, pp. 779–785, Oct. 1996.
[24] J. C. Patra, W. C. Chin, P. K. Meher, and G. Chakraborty, "Legendre-FLANN-based nonlinear channel equalization in wireless communication systems," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics (SMC 2008), Singapore, Oct. 2008, pp. 1826–1831.
[25] J. C. Patra, P. K. Meher, and G. Chakraborty, "Nonlinear channel equalization for wireless communication systems using Legendre neural networks," Signal Processing, vol. 89, pp. 2251–2262, 2009.
[26] N. A. Gershenfeld and A. S. Weigend, "The future of time series: Learning and understanding," in Time Series Prediction: Forecasting the Future and Understanding the Past, A. S. Weigend and N. A. Gershenfeld, Eds. Reading, MA: Addison-Wesley, 1993, pp. 1–70.
