Solutions of Partial Differential Equations Using Genetic Algorithms
Yaseen M. Alrajhi
Al-Mustansiriyah University, College of Science, Department of Mathematics.
E-mail address: yargmmn@yahoo.com
INTRODUCTION
Applying mathematics to a problem of the real world mostly means, first, modeling the
problem mathematically, possibly with strong restrictions, idealizations, or simplifications [1], then
solving the mathematical problem, and finally drawing conclusions about the real problem
based on the solutions of the mathematical problem. For about sixty years, however, a shift of paradigms
has taken place: in some sense, the opposite way has come into fashion. The point is that the
world did well even in times when nothing about mathematical modeling was known.
More specifically, there is an enormous number of highly sophisticated processes and
mechanisms in our world which have always attracted the interest of researchers due to their
admirable perfection. Imitating such principles mathematically and using them to solve a
broader class of problems has turned out to be extremely helpful in various disciplines. The
class of such methods that will be the main object of study throughout this whole
project is Genetic Algorithms (GAs).
Generally speaking, genetic algorithms are simulations of evolution of whatever kind. In most
cases, however, genetic algorithms are nothing other than probabilistic optimization methods
which are based on the principles of evolution. This idea first appeared in 1967 in J. D. Bagley's
thesis [2]. The theory and applicability were then strongly influenced by J. H. Holland, who
can be considered the pioneer of genetic algorithms [6, 7]. Since then, this field has
witnessed a tremendous development. The purpose of this project is to give a comprehensive
overview of this class of methods and their applications in optimization, program induction,
and machine learning. A genetic algorithm uses this idea of selection of the fittest to find
optimal solutions to problems. We will examine how to use a genetic algorithm to solve partial
differential equations (PDEs) and also how to approximate solutions of PDEs with no exact
solution.
The remainder of this paper is organized as follows: Section 2 describes evolutionary
computation; Section 3 presents the basics of genetic algorithms; Section 4 describes grammatical
evolution; Section 5 gives the technique of the accelerated method; Section 6 presents applications of the
algorithm; Section 7 compares the method with others; Section 8 discusses convergence; and Section 9
concludes.
The Evolutionary Computation
In evolution strategies, each individual is described by a genotype which includes, among other
things, a parameter vector containing all necessary information about the properties of that
individual. Before the intrinsic evolutionary process takes place, the population is initialized
arbitrarily; evolution, i.e., replacement of the old generation by a new generation, proceeds until
a certain termination criterion is fulfilled. The major difference between evolution strategies and
genetic algorithms lies in the representation of the genotype and in the way the operators are
used (mutation, selection, and possibly recombination). In contrast to GAs, where the main role
of the mutation operator is simply to avoid stagnation, mutation is the primary operator of
evolution strategies.
Genetic programming (GP), an extension of the genetic algorithm, is a domain-independent,
biologically inspired method that is able to create computer programs from a high-level
problem statement.
In fact, virtually all problems in artificial intelligence, machine learning, adaptive systems, and
automated learning can be recast as a search for a computer program; genetic programming
provides a way to carry out this search in the space of computer programs [10].
Similar to GAs, GP works by imitating aspects of natural evolution, but whereas GAs are
intended to find arrays of characters or numbers, the goal of a GP process is to search for
computer programs (or, for example, formulas) solving the optimization problem at hand. As
in every evolutionary process, new individuals (in GP’s case, new programs) are created. They
are tested, and the fitter ones in the population succeed in creating children of their own
whereas unfit ones tend to disappear from the population.
The Basics of Genetic Algorithms
Genetic Algorithms are inspired by Charles Darwin's principle of natural selection [1, 4]. The
basic algorithm starts with a ‘population’ of random parameter vectors or ‘individuals’ and
uses these to evolve a particular individual that solves or partially solves some optimization
problem. ‘Evolution’ is implemented by using an artificial selection mechanism at each
generation and using the selected individuals to produce the next generation. New individuals
can be produced from old individuals by a variety of means including random perturbation
(asexual reproduction or ‘mutation’) or by combining parts from two or more individuals
(parents) to make a new individual (hermaphroditic reproduction or ‘crossover’).
The process of constructing new individuals from the previous generation is called
‘reproduction’, and mutation and crossover are called reproduction ‘operators’. An individual’s
actual encoding (i.e., the integers and reals, or just ones and zeros) is sometimes called its
'genetic' encoding, and the form of the encoding is often described as consisting of
'chromosomes' or, sometimes, 'genes'. The term 'representation' in GAs describes a higher-
level concept than the actual encoding of ones and zeros; the 'representation' is the meaning
attached to each of the parameter values in the parameter vector. The only domain-specific
information available to the algorithm is implicitly built into the ‘fitness function’. The
representation only acquires the meaning we want if we build that assumption into the fitness
function. The fitness function takes a single parameter vector (an ‘individual’) as its argument
and returns (usually) a single ‘fitness value’, so called because it measures the ‘fitness’ of that
individual in terms of its ability to solve the optimization problem at hand.
Selection occurs solely on the basis of these fitness values. This allows the neat partitioning of
the genetic algorithm into three separate components:
1. The (initially random) parameter vectors
2. The (problem specific) fitness function
3. The evolution mechanism (consisting of selection and reproduction)
In GAs the evolution mechanism is often generalized to such an extent that the only change
required to apply the algorithm to a new problem is to define a new fitness function. A number
of crossover operators may be available, but many of these are identical in principle, differing
only in representation. A common choice is 1-point crossover, which chooses a point
randomly along the parameter vector and concatenates the part of the vector to the left of this
point in one parent's encoding with the part to the right in the other parent's encoding. This
can be generalized to n-point crossover by choosing n points and taking each section from
alternate parents. It is important to simulate a sufficiently large population (on the order of
10^2 to 10^3) and to allow evolution to proceed for enough generations (roughly 50 to 2000);
however, the success of a genetic algorithm depends mainly on how appropriate the
reproduction operators are for the particular representation, and how informative the fitness
function is.
An ‘appropriate’ reproduction operator is one which produces new individuals in such a way
that there is a reasonable probability that the fitness of the new individual will be significantly
higher than that of its parents. An ‘informative’ fitness function is one that gives monotonically
increasing fitness values to individuals as they get closer to solving the problem under
consideration. The combination of these two properties opens the way to the successful
evolution of a solution to the optimization problem at hand. Much of the theoretical work in
the field of GAs is into finding the most appropriate reproduction operators and representations.
The range of behaviors generable by a GA (or GP) is limited explicitly by the fitness function
and the meaning of the parameters it accepts. In GAs the meaning of the parameters is usually
quite limited and the number of parameters fixed. In Genetic Programming, the representation
of each individual is actually a program in some limited programming language which must be
executed to determine the fitness of the individual. In this case the fitness function contains an
interpreter that must decode the instructions of each individual to determine how well that
individual solves the given problem.
Concerning its internal functioning, a genetic algorithm is an iterative procedure which usually
operates on a population of constant size and is basically executed in the following way [2]: an
initial population is evaluated, and selection, crossover and mutation are then applied
repeatedly to produce new generations.
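As an informal illustration of this loop, the sketch below (in Python; the parameter names, the integer range and the lower-is-better fitness convention are assumptions for illustration, not the paper's exact implementation) shows how initialization, selection, crossover, mutation and termination fit together:

import random

def genetic_algorithm(fitness, chrom_len, pop_size=200, generations=500,
                      selection_rate=0.1, mutation_rate=0.05, threshold=1e-7):
    # fitness maps an integer chromosome to a non-negative error (lower is better)
    pop = [[random.randint(0, 255) for _ in range(chrom_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                          # best chromosomes first
        if fitness(pop[0]) <= threshold:               # termination control
            break
        keep = max(2, int(selection_rate * pop_size))  # replicated unchanged
        children = []
        while len(children) < pop_size - keep:         # crossover refills the rest
            p1, p2 = random.sample(pop[:keep], 2)
            cut = random.randint(1, chrom_len - 1)     # one-point crossover
            children.append(p1[:cut] + p2[cut:])
        pop = pop[:keep] + children
        for chrom in pop[keep:]:                       # mutate only the new individuals
            for i in range(chrom_len):
                if random.random() <= mutation_rate:
                    chrom[i] = random.randint(0, 255)
    return min(pop, key=fitness)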
Backus–Naur Form (BNF)
BNF is a notation for expressing the grammar of a language in the form of production rules
[3, 16]. BNF grammars consist of terminals, which are items that can appear in the language,
e.g., +, -, etc., and non-terminals, which can be expanded into one or more terminals and
non-terminals. A grammar can be represented by the tuple {N, T, P, S}, where N is the set of
non-terminals, T the set of terminals, P a set of production rules that maps the elements of N
to T, and S a start symbol that is a member of N. When there are several productions that can
be applied to one particular non-terminal, the alternatives are separated by the '|' symbol. A
small illustrative example is given below; the grammar actually used by the method is shown
in Table 1.
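For instance, a toy BNF grammar for arithmetic expressions in one variable (an illustration added here, not the grammar of the method itself) could be written as

<expr> ::= <expr> <op> <expr> | ( <expr> ) | x | 1
<op>   ::= + | - | *

with N = {<expr>, <op>}, T = {x, 1, +, -, *, (, )}, S = <expr>, and P the two rules above.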
Grammatical Evolution
Grammatical evolution is an evolutionary algorithm that can produce code in any programming
language [5, 16]. The algorithm requires as inputs the BNF grammar definition of the target
language and the appropriate fitness function. Chromosomes in grammatical evolution, in
contrast to classical genetic programming [2], are not expressed as parse trees, but as vectors
of integers. Each integer denotes a production rule from the BNF grammar. The algorithm starts
from the start symbol of the grammar and gradually creates the program string by replacing
non-terminal symbols with the right-hand side of the selected production rule. The selection is
performed in two steps [5]:
- read the next element V of the chromosome;
- select the production rule according to the scheme rule = V mod NR,
where NR is the number of rules for the specific non-terminal symbol. The process of replacing
non-terminal symbols with the right-hand side of production rules continues until either a full
program has been generated or the end of the chromosome has been reached. In the latter case
we can either reject the entire chromosome or start over (a wrapping event) from the first
element of the chromosome. In our approach we allow at most two wrapping events to occur.
In our method we used the grammar shown in Table 1. The numbers in parentheses denote the
sequence number of the corresponding production rule, to be used in the selection procedure
described above.
Table 1: The grammar of the proposed method
S::=<expr>
<expr> ::= <expr> <op> <expr> (0)
| ( <expr> ) (1)
| <func> ( <expr> ) (2)
|<digit> (3)
|x (4)
|y (5)
|z (6)
Further details about grammatical evolution can be found in [12, 13, 14, 15].
The symbol S in the grammar denotes the start symbol of the grammar. For example, suppose
we have the chromosome g = [7 9 4 14 28 10 12 2 17 15 6 11 10 24 11]. In Table 2 we show how a
valid function is produced from g. The resulting function is f(x,y) = cos(x) + sin(y).
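To make the decoding procedure concrete, the following Python sketch applies the rule = V mod NR scheme over a toy grammar (the grammar, the data layout and the sample chromosome are illustrative assumptions, so the integer values differ from those of the chromosome g above):

GRAMMAR = {
    "<expr>":  ["<expr><op><expr>", "(<expr>)", "<func>(<expr>)", "<digit>", "x", "y"],
    "<op>":    ["+", "-", "*", "/"],
    "<func>":  ["sin", "cos", "exp", "log"],
    "<digit>": [str(d) for d in range(10)],
}

def decode(chromosome, start="<expr>", max_wraps=2):
    expr, pos, wraps = start, 0, 0
    while "<" in expr:                                   # non-terminals remain
        if pos == len(chromosome):                       # end of chromosome reached
            wraps += 1
            if wraps > max_wraps:                        # reject after two wrapping events
                return None
            pos = 0                                      # wrapping event: start over
        nt = expr[expr.index("<"):expr.index(">") + 1]   # leftmost non-terminal
        rules = GRAMMAR[nt]
        expr = expr.replace(nt, rules[chromosome[pos] % len(rules)], 1)  # rule = V mod NR
        pos += 1
    return expr

print(decode([0, 2, 1, 4, 0, 2, 0, 5]))   # -> cos(x)+sin(y) under this toy grammar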
Technique of the algorithm
The algorithm has the following phases:
1. Initialization.
2. Fitness evaluation.
3. Genetic operations.
4. Termination control.
Initialization
In the initialization phase the values for the mutation rate and the selection rate are set. The
selection rate denotes the fraction of chromosomes that will pass unchanged to the next
generation (replication). The mutation rate controls the average number of changes inside a
chromosome. Every chromosome in the population is initialized at random: the initialization
of every chromosome is performed by randomly selecting an integer for every element of the
corresponding vector.
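For illustration, this step could be coded as follows (a sketch; the population size, chromosome length and the integer range [0, 255] used later in the genetic-operators section are the assumed parameters):

import numpy as np

def init_population(pop_size, chrom_len, rng=np.random.default_rng()):
    # each chromosome is a vector of random integers in [0, 255]
    return rng.integers(0, 256, size=(pop_size, chrom_len))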
Fitness evaluation
We express the PDE in the following general form:
\[
f\left(x, y, \frac{\partial u}{\partial x}(x,y), \frac{\partial u}{\partial y}(x,y),
\frac{\partial^2 u}{\partial x^2}(x,y), \frac{\partial^2 u}{\partial y^2}(x,y)\right) = 0,
\quad x \in [x_0, x_1], \; y \in [y_0, y_1]
\]
The steps for the fitness evaluation of the population are the following:
1. Choose N² equidistant points in the box [x0, x1] × [y0, y1], Nx equidistant points on
the boundary at x = x0 and at x = x1, and Ny equidistant points on the boundary at y = y0
and at y = y1.
2. For every chromosome i:
i- Construct the corresponding model M_i(x, y), expressed in the grammar described
earlier.
ii- Calculate the quantity
\[
E(M_i) = \sum_{j=0}^{N^2} \left( f\left(x_j, y_j, \frac{\partial M_i}{\partial x}(x_j,y_j),
\frac{\partial M_i}{\partial y}(x_j,y_j), \frac{\partial^2 M_i}{\partial x^2}(x_j,y_j),
\frac{\partial^2 M_i}{\partial y^2}(x_j,y_j)\right) \right)^2
\]
iii- Calculate the associated penalties P_k(M_i). The penalty function depends on the
boundary conditions and has the form:
\[
P_1(M_i) = \sum_{j=1}^{N_x} \left( M_i(x_0, y_j) - f_0(y_j) \right)^2, \qquad
P_2(M_i) = \sum_{j=1}^{N_x} \left( M_i(x_1, y_j) - f_1(y_j) \right)^2,
\]
\[
P_3(M_i) = \sum_{j=1}^{N_y} \left( M_i(x_j, y_0) - g_0(x_j) \right)^2, \qquad
P_4(M_i) = \sum_{j=1}^{N_y} \left( M_i(x_j, y_1) - g_1(x_j) \right)^2.
\]
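As an illustration only (a sketch, not the paper's implementation), and assuming, as is standard, that the fitness of a chromosome is the residual term E(M_i) plus the boundary penalties, the evaluation for a Laplace problem such as Example 1 below could be coded as follows, with the derivatives of the trial model approximated here by central differences:

import numpy as np

def fitness(M, N=5, Nx=50, Ny=50, h=1e-4):
    # residual term E(M) on an N x N grid of the box [0, 0.5] x [0, 0.5]
    x = np.linspace(0.0, 0.5, N)
    y = np.linspace(0.0, 0.5, N)
    X, Y = np.meshgrid(x, y)
    Mxx = (M(X + h, Y) - 2 * M(X, Y) + M(X - h, Y)) / h**2
    Myy = (M(X, Y + h) - 2 * M(X, Y) + M(X, Y - h)) / h**2
    E = np.sum((Mxx + Myy) ** 2)              # here f = u_xx + u_yy
    # boundary penalties for u(0,y)=0, u(0.5,y)=200y, u(x,0)=0, u(x,0.5)=200x
    yb = np.linspace(0.0, 0.5, Nx)
    xb = np.linspace(0.0, 0.5, Ny)
    P = (np.sum((M(0.0, yb) - 0.0) ** 2) +
         np.sum((M(0.5, yb) - 200 * yb) ** 2) +
         np.sum((M(xb, 0.0) - 0.0) ** 2) +
         np.sum((M(xb, 0.5) - 200 * xb) ** 2))
    return E + P                              # assumed total fitness

print(fitness(lambda x, y: 400 * x * y))      # the exact solution gives (near) zero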
Genetic operators
The genetic operators that are applied to the genetic population are the initialization, the
crossover and the mutation.
The initialization is applied only once on the first generation. For every element of each
chromosome a random integer in the range [0..255] is selected.
The crossover is applied in every generation in order to create new chromosomes from the old
ones; these will replace the worst individuals in the population. In this operation, for each
couple of new chromosomes two parents are selected; we cut these parent chromosomes at a
randomly chosen point and exchange the right-hand-side sub-chromosomes, as shown in
Fig. 1.
– Parent 1: 2 20 14 | 5 25 18
– Parent 2: 8 13 17 | 28 3 30
– Offspring 1: 2 20 14 28 3 30
– Offspring 2: 8 13 17 5 25 18
Fig. 1: Crossover
The parents are selected via tournament selection, i.e. :
- First, we create a group of K >= 2 randomly selected individuals from the current
population.
- The individual with the best fitness in the group is selected, the others are discarded.
The final genetic operator used is mutation: for every element in a chromosome a random
number in the range [0, 1] is drawn. If this number is less than or equal to the mutation rate,
the corresponding element is changed randomly; otherwise it remains intact.
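The three operators can be sketched as follows (the parameter names and the lower-is-better fitness convention are illustrative assumptions):

import random

def tournament(pop, fitness, K=2):
    group = random.sample(pop, K)              # K randomly selected individuals
    return min(group, key=fitness)             # keep the best, discard the others

def crossover(parent1, parent2):
    cut = random.randint(1, len(parent1) - 1)  # one randomly chosen cut point
    return (parent1[:cut] + parent2[cut:],     # exchange right-hand sub-chromosomes
            parent2[:cut] + parent1[cut:])

def mutate(chrom, mutation_rate):
    return [random.randint(0, 255) if random.random() <= mutation_rate else v
            for v in chrom]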
In every generation the following steps are performed:
1. The chromosomes are sorted with respect to their fitness value, in a way that the best
chromosome is placed at the beginning of the population and the worst at the end.
2. c = (1 - s) * g new chromosomes are produced by the crossover operation, where s is the
replication rate of the model and g is the total number of individuals in the population.
The new individuals will replace the worst ones in the population at the end of the
crossover.
3. The mutation operation is applied to every chromosome excluding those which have
been selected for replication in the next generation.
Termination control
The genetic operators are applied to the population creating new generations, until a maximum
number of generations is reached or the best chromosome in the population has fitness better
than a preset threshold.
TECHNIQUES OF THE ACCELERATED METHOD
To make the method reach the exact solution of the partial differential equation faster, we
proceed as follows (a sketch of this seeding step is given after the list):
1- Insert the boundary conditions of the problem as part of the chromosomes in the
population of the problem; the algorithm then gives the exact solution within a few generations.
For example, y3 = [28 5 2 7 5 2 5 15 2] represents a boundary condition of the problem.
2- Insert a part of the exact solution (or a particular solution) as part of a chromosome in the
population; we find that the algorithm gives the exact solution in a few generations.
3- Insert the vector of the exact solution (if it exists) as a chromosome in the population of the
problem; the algorithm then gives the exact solution in the first generation. For example,
cos(x)cos(y) = [7 2 9 4 14 2 21 12 31 4] is the solution of Example 2.
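A sketch of this seeding step (the function and parameter names are illustrative; the two seed vectors are the ones quoted above, padded with random integers, which does not change the program they decode to):

import random

def seeded_population(pop_size, chrom_len, seeds):
    pop = []
    for s in seeds:                            # plant the seed chromosomes first
        pop.append(list(s) + [random.randint(0, 255)
                              for _ in range(chrom_len - len(s))])
    while len(pop) < pop_size:                 # fill the rest of the population at random
        pop.append([random.randint(0, 255) for _ in range(chrom_len)])
    return pop

seeds = [
    [28, 5, 2, 7, 5, 2, 5, 15, 2],             # encodes a boundary condition (item 1)
    [7, 2, 9, 4, 14, 2, 21, 12, 31, 4],        # encodes cos(x)cos(y), the solution of Example 2
]
population = seeded_population(pop_size=200, chrom_len=50, seeds=seeds)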
The value of N for the PDEs was set to 5 and Nx = Ny = 50, depending on the problem. Also, for
some problems we present graphs of the intermediate trial solutions. In all experiments we used
Matlab R2010a, and the function randi was used to generate the uniformly distributed random
integers from which the population was built. In each experiment we inserted the boundary
conditions as part of a chromosome in the population, which makes the method reach the exact
solution of the problems quickly.
Example 1:
\[
\frac{\partial^2 u}{\partial x^2}(x,y) + \frac{\partial^2 u}{\partial y^2}(x,y) = 0,
\quad (x,y) \in R = \{(x,y): 0 < x < 0.5,\ 0 < y < 0.5\}
\]
with the boundary conditions u(0, y) = 0, u(x, 0) = 0, u(x, 0.5) = 200x, u(0.5, y) = 200y.
The exact solution u(x,y) = 400xy was recovered at the 10th generation. At generation 1 the trial
solution was GP2(x, y) = 200xy with fitness value 8.3607×10^4. At the 2nd generation the
trial solution was GP3(x, y) = exp(y) + 350xy-1 with fitness value 3.9236×10^4, as shown in
Fig. 2. The error (exact solution minus trial solution) is shown in Table 3.
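As a quick check (added here for clarity), the recovered solution can be verified directly:
\[
u(x,y) = 400xy \;\Rightarrow\; \frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 u}{\partial y^2} = 0,
\]
and on the boundary u(0,y) = 0, u(x,0) = 0, u(x,0.5) = 200x, u(0.5,y) = 200y, so the equation and all four boundary conditions are satisfied exactly.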
Example 2:
\[
\frac{\partial^2 u}{\partial x^2}(x,y) + \frac{\partial^2 u}{\partial y^2}(x,y) = -\left(\cos(x+y) + \cos(x-y)\right)
\]
for (x,y) in the set R = {(x,y): 0 < x < π, 0 < y < π/2},
with the boundary conditions u(0, y) = cos(y), u(x, 0) = cos(x), u(x, π/2) = 0, u(π, y) = -cos(y).
The exact solution u(x,y) = cos(x)cos(y) was recovered at the 5th generation, with fitness value
0.7883×10^-30, as shown in Fig. 3.
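Again as a check (added here), with u(x,y) = cos(x)cos(y),
\[
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = -2\cos(x)\cos(y) = -\big(\cos(x+y) + \cos(x-y)\big)
\]
by the product-to-sum identity, and u(0,y) = cos(y), u(π,y) = -cos(y), u(x,0) = cos(x), u(x,π/2) = 0.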
Example 3:
\[
\frac{\partial u}{\partial t}(x,t) - \frac{\partial^2 u}{\partial x^2}(x,t) = 0, \quad 0 < x < 1,\ t > 0,
\]
with boundary conditions u(0, t) = u(1, t) = 0, t > 0, and initial condition u(x, 0) = sin(πx), 0 ≤ x ≤ 1.
The exact solution u(x,t) = e^(-π²t) sin(πx) was recovered at the 50th generation. At generation
1 the trial solution was GP1(x, t) = sin(x)/exp(t) with fitness value 12.3667, as shown in
Fig. 4.
Fig. 4: Exact and trial solutions of Example 3
Example 4:
\[
\frac{\partial u}{\partial t}(x,t) - \frac{\partial^2 u}{\partial x^2}(x,t) = 2, \quad 0 < x < 1,\ t > 0,
\]
with boundary conditions u(0, t) = u(1, t) = 0, t > 0, and initial condition u(x, 0) = sin(πx) + x(1 - x), 0 ≤ x ≤ 1.
The exact solution u(x,t) = e^(-π²t) sin(πx) + x(1 - x) was recovered at generation 58. At
generation 1 the trial solution was GP1(x, t) = x + sin(x)/exp(t) - x² with fitness value 12.3667,
as shown in Fig. 5. The error (exact solution minus trial solution) is shown in Table 4.
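A direct check (added here): with u(x,t) = e^(-π²t) sin(πx) + x(1 - x),
\[
\frac{\partial u}{\partial t} = -\pi^2 e^{-\pi^2 t}\sin(\pi x), \qquad
\frac{\partial^2 u}{\partial x^2} = -\pi^2 e^{-\pi^2 t}\sin(\pi x) - 2,
\]
so u_t - u_xx = 2, while u(0,t) = u(1,t) = 0 and u(x,0) = sin(πx) + x(1-x).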
Example 5:
\[
\frac{\partial^2 u}{\partial t^2}(x,t) - 4\frac{\partial^2 u}{\partial x^2}(x,t) = 0, \quad 0 < x < 1,\ t > 0,
\]
subject to the conditions u(0, t) = u(1, t) = 0, t > 0, and u(x, 0) = sin(πx), ∂u/∂t(x, 0) = 0, 0 ≤ x ≤ 1.
The exact solution u(x,t) = sin(πx)cos(2πt) was recovered at generation 30. At generation 1
the trial solution was GP10(x, t) = cos(4t)sin(2x) with fitness value 51.4329, as shown in
Fig. 6. The error (exact solution minus trial solution) is shown in Table 4.
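As a check (added here): with u(x,t) = sin(πx)cos(2πt),
\[
\frac{\partial^2 u}{\partial t^2} = -4\pi^2 \sin(\pi x)\cos(2\pi t), \qquad
\frac{\partial^2 u}{\partial x^2} = -\pi^2 \sin(\pi x)\cos(2\pi t),
\]
so u_tt - 4u_xx = 0, with u(x,0) = sin(πx) and ∂u/∂t(x,0) = 0.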
Example 6:
\[
\frac{\partial^2 u}{\partial t^2}(x,t) - \frac{1}{16\pi^2}\frac{\partial^2 u}{\partial x^2}(x,t) = 0,
\quad 0 < x < 0.5,\ t > 0,
\]
subject to the conditions u(0, t) = u(0.5, t) = 0, t > 0, and u(x, 0) = 0, ∂u/∂t(x, 0) = sin(4πx), 0 ≤ x ≤ 0.5.
The exact solution u(x,t) = sin(t)sin(4πx) was recovered at generation 31. At generation 1 the
trial solution was GP1(x, t) = sin(5x)sin(t) with fitness value 1.7290, as shown in Fig. 7.
Comparison of the Method
Comparing the present method with other grammatical evolutionary methods, we found that this
method is faster than the others because of the acceleration technique. In Table 5 below we
compare the current method with the study of [5], where Min, Max and Avg mean the minimum,
maximum and average number of generations needed to find the exact solution of the PDEs.
CONCLUSION
In this paper a new accelerated GA method was introduced and applied to solving PDEs. It is
noted that this method has general utility for applications. GA is an evolutionary algorithm that
can evolve 'rulesets'. In this study we found that inserting the boundary conditions into the
population makes the algorithm approximate the exact solution quickly. Our method in this
initial study has included a number of simplifications; for example, we only considered a small
set of equations. The study could also be extended to other types of partial differential
equations.
REFERENCES
[1] A. Beham, M. Affenzeller, Genetic Algorithms and Genetic Programming: Modern
Concepts and Practical Applications, Berlin, Germany, 2009.
[2] J. D. Bagley, The Behavior of Adaptive Systems Which Employ Genetic and
Correlative Algorithms. PhD thesis, University of Michigan,1967.
[3] C. Ryan, M. O'Neill, and J. J. Collins, Grammatical Evolution: Solving
Trigonometric Identities, in Proceedings of Mendel 1998.
[4] D.E. Goldberg, Genetic algorithms in search, Optimization and Machine Learning,
Addison Wesley, 1989.
[5] I. G. Tsoulos and I. E. Lagaris, Solving Differential Equations with Genetic Programming,
P.O. Box 1186, Ioannina 45110, 2003.
[6] J. H. Holland , Adaptation in Natural and Artificial Systems, first MIT Press ed. The
MIT Press, Cambridge, MA, 1992. First edition: University of Michigan Press, 1975.
[7] J. H. Holland, K. J. Holyoak, R.E.Nisbett, and P. R. Thagard, Induction: Processes
of Inference, Learning, and Discovery. Computational Models of Cognition and
Perception. The MIT Press, Cambridge, MA, 1986.
[8] H.-P. Schwefel, Numerische Optimierung von Computer-Modellen mittels der
Evolutionsstrategie, Birkhäuser Verlag, Basel, Switzerland, 1994.
[9] I. Rechenberg, Evolutionsstrategie, Friedrich Frommann Verlag, 1973.
[10] J. R. Koza, Genetic Programming: On the programming of Computer by Means
of Natural Selection. MIT Press: Cambridge, MA, 1992.
[11] R. Kruse, J. Gebhardt, and F. Klawonn, Foundations of Fuzzy Systems, John Wiley
& Sons, New York, 1994.
[12] M. O'Neill and C. Ryan, Under the hood of grammatical evolution,1999.
[13] M. O'Neill and C. Ryan, Evolving Multi-Line Compilable C Programs, Springer-
Verlag, pp. 83-92, 1999.
[14] M. O'Neill and C. Ryan, Grammatical Evolution: Evolutionary Automatic
Programming in an Arbitrary Language, volume 4 of Genetic Programming, Kluwer
Academic Publishers, 2003.
[15] M. O'Neill and C. Ryan, Grammatical Evolution, IEEE Trans. Evolutionary
Computation, Vol. 5, pp. 349-358, 2001.
[16] P. Naur, "Revised report on the algorithmic language ALGOL 60," Commun. ACM,
vol. 6, no. 1, pp. 1-17, Jan. 1963.
[17] R. H. J. M. Otten and L. P. P. P. van Ginneken, The Annealing Algorithm,
Kluwer Academic Publishers, Boston, 1989.
[18] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations
in the Microstructure of Cognition, Volume I: Foundations, MIT Press,
Cambridge, MA, 1986.
[19] P. J. M. van Laarhoven and E. H. L. Aarts, Simulated Annealing: Theory and
Applications, Kluwer Academic Publishers, Dordrecht, 1987.
[20] P. Whigham, Grammatically-based Genetic Programming. In Proceeding of the
Workshop on Genetic Programming : From Theory to Real-World Applications, pages
33-41. Morgan Kaufmann Pub. 1995.
[21] H.-J. Zimmermann, Fuzzy Set Theory and its Applications, second ed., Kluwer
Academic Publishers, Boston, 1991.
[22] J. M. Zurada, Introduction to Artificial Neural Networks, West Publishing, St. Paul, 1992.