
Wesolowski & Suchacz: Journal of AOAC International Vol. 95, No. 3, 2012, p. 652

SPECIAL GUEST EDITOR SECTION

Artificial Neural Networks: Theoretical Background and Pharmaceutical Applications: A Review

Marek Wesolowski and Bogdan Suchacz
Medical University of Gdansk, Department of Analytical Chemistry, Al. Gen. J. Hallera 107, 80-416 Gdansk, Poland

Guest edited as a special report on "Chemometrics in Pharmaceutical Analysis" by Łukasz Komsta.
Corresponding author's e-mail: marwes@gumed.edu.pl
DOI: 10.5740/jaoacint.SGE_Wesolowski_ANN

In recent times, there has been a growing interest in artificial neural networks, which are a rough simulation of the information processing ability of the human brain, as modern and vastly sophisticated computational techniques. This interest has also been reflected in the pharmaceutical sciences. This paper presents a review of articles on the subject of the application of neural networks as effective tools assisting the solution of various problems in science and the pharmaceutical industry, especially those characterized by multivariate and nonlinear dependencies. After a short description of theoretical background and practical basics concerning the computations performed by means of neural networks, the most important pharmaceutical applications of neural networks, with suitable references, are demonstrated. The huge role played by neural networks in pharmaceutical analysis, pharmaceutical technology, and searching for the relationships between the chemical structure and the properties of newly synthesized compounds as candidates for drugs is discussed.

Modern techniques of instrumental analysis have been applied across a wide range of pharmaceutical sciences. The application of these techniques can be found in the evaluation of medicinal product quality, raw materials used in the technology of pharmaceutical dosage forms, research leading to the development of manufacturing technology of new drug formulations with the desired drug substance release profile, and process control of drug manufacture at an industrial scale in real time (1–5). They not only ensure a substantial reduction of analysis time, but also expand the range of studies, which can be carried out in entirely new areas, including the determination of important pharmaceutical properties of drug substances and formulated drug forms. It is known that a certain form of drug must contain not only the declared amount of an active pharmaceutical ingredient (API), but its form has to enable release of the drug compound in a defined period of time. Due to the facts mentioned above, analytical techniques play a very important role in the investigation of biological availability, which is based on the determination of the drug dose fraction given to a patient that, in the unchanged form, enters the circulating system, and on estimation of the time in which this process occurs.

Skilled application of modern, automatic, and computerized instrumental techniques of analysis can obtain enormous databases of measurement results in a relatively short time. The results not only describe the concentration of analyte in a complex matrix, which is, for example, a medicinal product or a biological material used in medical diagnostics, but also the physical, chemical, and biological properties of studied substances and their mixtures, formulated in different stages of the process related to discovering and manufacturing of new drugs. The high number of variables included in multidimensional databases is certainly the basis for an extensive description of studied phenomena, and is also the reason why their proper interpretation is complicated. Major difficulties in interpreting very complex processes arise, especially when relations among investigated variables are not linear.

For the reasons mentioned above, in the last 20 years, a significantly increased interest has been observed in many scientific fields, including pharmacy, in the application of statistical techniques to the analysis of huge databases containing measurement results (6–8). Chemometrics was introduced into common usage as a branch of analytical chemistry focused on the design of chemical experiments, optimization of measurement methods, and elaboration of measurement results and maximum extraction of useful information from them by means of mathematics, probability rules, mathematical statistics, decision theory, and computer techniques. Combining the elements of informatics and chemical analysis, contemporary computational techniques proved to be extremely useful in pharmacy. They assist in the investigations of new chemical compounds as potential new drugs, clinical studies of these compounds, optimization of drug forms, and evaluation of drug quality. Furthermore, the application of these techniques greatly contributes to the recognition of complex processes which concern drug substances in the human organism.

Among chemometric techniques, special importance has been acquired by artificial neural networks (ANNs; 9, 10). These techniques have been powerful and attractive tools that enable the processing of enormous multidimensional databases often obtained during environmental, medical, biological, and pharmaceutical studies. As learning systems by nature, they are extremely useful for classification by pattern recognition and for solving optimization problems. Their high effectiveness results from the fact that, based on earlier acquired knowledge, the network is able to deal with unknown, and even deformed, input data. Examples of the application of ANNs have been documented in many reviews covering various fields, e.g., accounting and finance (11), health and medicine (11, 12), pharmacy (13–18), analytical chemistry (19, 20), and environmental studies (21).

Taking all of the above into consideration, the objective of this review is the summary and evaluation of current achievements in the field of ANN applications to solve various, complex problems in drug technology and analyses. This subject was
previously considered in several reviews, the last of which was published in 2003 (13–18). However, the prior reviews describe mainly problems related to the optimization of the drug form and its manufacturing process. The issue of the application of ANNs in pharmaceutical analysis and the studies of the properties of substances used in pharmacy has not been discussed. Current publications show the crucial progress in calculation methodology resulting from a significant increase of the computational power of computers, and the new areas of the application of ANNs in pharmacy. For these reasons, it has been decided to undertake this issue once again.

Theoretical Background

In chemometrics, the most interesting and promising data analysis tools for processing of large data sets are ANNs, which are very sophisticated computational techniques that can simulate the neurological processing ability of the human brain (10). The networks comprise a large number of interconnected processing elements functioning in parallel to work out a specific problem. They can model extremely complex functions, and thus are able to quantify any nonlinear relationships among different factors, e.g., pharmaceutical responses, relying exclusively on the representative data obtained from a designed experiment (6, 22). The power and advantage of ANNs lies in their ability to learn existing relationships directly from the data being modeled. Once the network has acquired this knowledge through training, it can be applied to unknown data for the purpose of classification, prediction, time series analysis, etc. In contrast to traditional statistical methods, ANNs do not require implementing appropriate algorithms in order to identify existing relationships. While a statistical analysis relies on some assumptions about a model form, e.g., linearity, describing relations between variables, ANNs simply discover the structure of the data by a training process. It follows that ANNs do not learn new algorithms; they learn by example. In addition, ANNs are quite tolerant of deficient data and undoubtedly perform better in cases where the model form is indefinite, nonlinear, or has complex interrelations (8, 10, 22, 23). Since the field of neural networks comprises a variety of models with different structures, the next sections include the most important terms and descriptions regarding ANNs.

Multilayer Perceptron (MLP)

[Figure 1. Illustration of the artificial neuron.]
[Figure 2. The general architecture of feed-forward neural networks.]

MLP consists of multiple processing units called neurons (or nodes) placed in several different layers (24). The scheme of a neuron is shown in Figure 1. The neuron in one layer is connected to all neurons in the next layer, and because the information flows only in one direction, the MLP is also termed a feed-forward network. The diagram of a standard feed-forward artificial neural network is illustrated in Figure 2, where the inputs to the network are (x1, x2, ..., xm), the outputs are (out1, out2, ..., outk), wji is the weight vector of the jth hidden neuron coming from the ith input neuron, and wkj is the weight vector of the kth output neuron coming from the jth hidden neuron (10). In order to configure the most suitable MLP model, some steps must be followed. First, the number of layers must be established, although the solution to most problems requires using only three: the input, hidden, and output layers. Second, it must be decided how many nodes should be located in the hidden layer, as the number of neurons in the input and output layers is defined by the problem at hand. Furthermore, in the process of building the MLP, the choice of activation function, error criterion, and learning algorithm is particularly important. In the MLP, the activation function in the hidden and output layers is a symmetric sigmoid, e.g., logistic, hyperbolic, etc. For the hidden and output layers in the MLP, a neuron with activation function permanently set to 1 is added. Such a neuron is called a bias, which connects to the neurons through a weight referred to as the threshold. The purpose of the threshold is to determine whether specific conditions are fulfilled in order to correctly interpret the attained results (8, 22, 23).

In each neuron a linear combination of the weighted inputs (including the bias) is computed, added up using the aggregation function, and passed through the activation function (linear or nonlinear). Table 1 lists the aggregation and activation functions typically applied in ANNs.

(a) Back-propagation algorithm (BP).—A correct setting of the weights in the network is not known in advance, so initially a random value is given to them. In order to update the weights to proper values, BP is commonly applied (23, 25). Such a process is called training or learning. BP is a generalization of the least mean squares algorithm that modifies the weights to minimize the mean squared error (MSE) between the target values yk and the actual outputs in the hidden and output layers of the network according to Equation 1:

$$\mathrm{MSE} = \frac{1}{m}\sum_{j=1}^{m}\sum_{k=1}^{n}\left(out_{jk} - y_k\right)^2 \qquad (1)$$

where m is the total number of training cases, n is equal to the number of network outputs, and outjk is the value of the kth network output for the jth training case. Because in the training the output value must be compared with the target, the process is referred to as supervised (6).
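To make the feed-forward computation and Equation 1 concrete, the following minimal Python sketch (not taken from the original review; layer sizes, weights, and data are hypothetical) propagates a batch of input vectors through one hidden layer with a logistic sigmoid activation and bias terms, and evaluates the MSE against target values.

```python
import numpy as np

def logistic(net):
    """Logistic sigmoid activation, f(net) = 1 / (1 + exp(-net))."""
    return 1.0 / (1.0 + np.exp(-net))

def mlp_forward(X, W_hidden, b_hidden, W_output, b_output):
    """One-hidden-layer feed-forward pass: linear aggregation + sigmoid."""
    hidden = logistic(X @ W_hidden + b_hidden)      # hidden-layer outputs
    return logistic(hidden @ W_output + b_output)   # network outputs out_jk

def mse(outputs, targets):
    """Equation 1: mean squared error over m cases and n outputs."""
    m = outputs.shape[0]
    return np.sum((outputs - targets) ** 2) / m

# Hypothetical example: m = 4 cases, 3 inputs, 5 hidden neurons, 2 outputs
rng = np.random.default_rng(0)
X = rng.random((4, 3))
y = rng.random((4, 2))
W_h, b_h = rng.normal(size=(3, 5)), np.zeros(5)
W_o, b_o = rng.normal(size=(5, 2)), np.zeros(2)
print(mse(mlp_forward(X, W_h, b_h, W_o, b_o), y))
```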
Table 1. The aggregation and activation functions used in ANNs^a

Aggregation functions:
  Linear:  $net = \sum_{i=1}^{m} w_i x_i$
  Radial:  $net = \sum_{i=1}^{m} (w_i - x_i)^2$

Activation functions (range):
  Linear (−∞, +∞):          $f(net) = net$
  Logistic sigmoid (0, +1): $f(net) = \dfrac{1}{1 + e^{-net}}$
  Hyperbolic (−1, +1):      $f(net) = \dfrac{e^{net} - e^{-net}}{e^{net} + e^{-net}}$
  Exponential (0, +∞):      $f(net) = e^{-net}$
  Gaussian:                 $f(net) = \dfrac{1}{\sqrt{2\pi}\,s}\, e^{-net^{2}/(2s^{2})}$
  Softmax (0, +1):          $f(net) = \dfrac{e^{net}}{\sum_i e^{net_i}}$

^a net = summation of all inputs to the neuron; m = number of inputs to the neuron; xi = input to the ith neuron; wi = weight vector of the ith neuron; s = radial spread; e = base of the natural logarithms.
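For readers who want to experiment with the functions listed in Table 1, the short Python sketch below (illustrative only; the array values are hypothetical) implements both aggregation rules and the tabulated activation functions with NumPy.

```python
import numpy as np

# Aggregation functions from Table 1
def linear_aggregation(w, x):
    return np.dot(w, x)                      # net = sum_i w_i * x_i

def radial_aggregation(w, x):
    return np.sum((w - x) ** 2)              # net = sum_i (w_i - x_i)^2

# Activation functions from Table 1
identity    = lambda net: net                                    # (-inf, +inf)
logistic    = lambda net: 1.0 / (1.0 + np.exp(-net))             # (0, 1)
hyperbolic  = lambda net: np.tanh(net)                           # (-1, 1)
exponential = lambda net: np.exp(-net)                           # (0, +inf)
gaussian    = lambda net, s=1.0: np.exp(-net**2 / (2*s**2)) / (np.sqrt(2*np.pi)*s)
softmax     = lambda nets: np.exp(nets) / np.sum(np.exp(nets))   # over a layer

w, x = np.array([0.2, -0.5, 0.8]), np.array([1.0, 2.0, 3.0])
net = linear_aggregation(w, x)
print(logistic(net), hyperbolic(net), gaussian(radial_aggregation(w, x)))
```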

The activation function of the input layer is given by f(x) = x; therefore, the input layer operates as a flow-through unit. For the hidden layer, the output outj of unit j is an activation function of the net input netj to unit j. A similar situation occurs in the neurons of the output layer, but the output and the net input are denoted as outk and netk, respectively (10).

After the error has been calculated, the adaptation of the weights Δw at the tth iteration initiates in the output layer and proceeds backwards, toward the input layer, using Equation 2, where η is the learning rate, α the momentum, and j the index of the neuron in the current layer; a neuron in the upper layer is denoted by i, the output of unit i is outi, the local error gradient is δj, and the previous iteration is denoted as t − 1 (22):

$$\Delta w_{ji}(t) = \eta\,\delta_j\,out_i + \alpha\,\Delta w_{ji}(t-1) \qquad (2)$$

The values of the two constants, which are responsible for changing of the weights in the previous iteration, must be regulated. The learning rate controls the amount of the change in the weights, whereas the momentum determines the extent to which this previous change is taken into account. The adjustment of the weights is run iteratively in epochs by submitting the training dataset to the network until the calculated output values become equal to the expected values within a specified tolerance (6, 23).
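A minimal Python sketch of the delta-rule update in Equation 2 is given below (not part of the cited references); it assumes the local error gradients δj have already been computed by back-propagation, and all variable names and values are hypothetical.

```python
import numpy as np

def bp_weight_update(W, delta, out_prev, prev_dW, eta=0.1, alpha=0.9):
    """Equation 2: dW_ji(t) = eta * delta_j * out_i + alpha * dW_ji(t-1).

    W        : (n_prev, n_curr) weights between two layers
    delta    : (n_curr,)  local error gradients of the current layer
    out_prev : (n_prev,)  outputs of the preceding (upper) layer
    prev_dW  : (n_prev, n_curr) weight change from the previous iteration
    """
    dW = eta * np.outer(out_prev, delta) + alpha * prev_dW
    return W + dW, dW   # updated weights and the change to reuse next epoch

# Hypothetical one-step example
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 2))
W, dW = bp_weight_update(W, delta=np.array([0.05, -0.02]),
                         out_prev=np.array([0.4, 0.7, 0.1]),
                         prev_dW=np.zeros((3, 2)))
print(W)
```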
In order to accomplish the neural network training it is necessary to use the error function, which not only evaluates how close the predictions of the network are to the targets, but also greatly affects the performance of the training algorithms. On the basis of the error value, the degree of weight correction applied by the training algorithm in each epoch is determined. There are two error functions commonly used in neural networks (22). Sum-squared error (SSE), which is mostly used in regression problems, is the sum of squared differences between the expected and obtained values on every neuron in the output layer according to Equation 3, where yk indicates the target output, outjk is the actual output in the hidden and output layer, m is the number of cases in the training dataset, and n is the number of neurons in the output layer (10):

$$\mathrm{SSE} = \sum_{j=1}^{m}\sum_{k=1}^{n}\left(out_{jk} - y_k\right)^2 \qquad (3)$$

The cross-entropy error (CEE) function sums up the products of the target value and the logarithm of the error value computed on every neuron located in the output layer (Equation 4). This error function has two variants depending on the number of neurons in the output layer, specifically whether it has a single or a multiple output. For a single output, where two-class classification is carried out, the cross-entropy is combined with the logistic activation function, while in the latter case the softmax function is used (6, 22):

$$\mathrm{CEE} = -\sum_{k=1}^{m} y_k \ln\!\left(\frac{out_k}{y_k}\right) \qquad (4)$$

The entropy functions are more suitable for classification, because they relate to maximum likelihood decision-making, in addition to allowing outputs to be interpreted as probabilities (22, 26).
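The two error functions of Equations 3 and 4 can be written in a few lines of Python; the sketch below is illustrative only (the output and target arrays are hypothetical), and a small constant is clipped in to avoid taking the logarithm of zero.

```python
import numpy as np

def sse(outputs, targets):
    """Equation 3: sum of squared errors over all cases and output neurons."""
    return np.sum((outputs - targets) ** 2)

def cross_entropy(outputs, targets, eps=1e-12):
    """Equation 4: CEE = -sum_k y_k * ln(out_k / y_k), summed over cases."""
    outputs = np.clip(outputs, eps, None)
    targets = np.clip(targets, eps, None)
    return -np.sum(targets * np.log(outputs / targets))

# Hypothetical softmax outputs for three cases and two classes
outputs = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
targets = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
print(sse(outputs, targets), cross_entropy(outputs, targets))
```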
(b) The performance measures of ANNs.—The performance of the network is commonly determined by the value of the root mean square (RMS) error of the training and validation sets, which is calculated and observed during the training process. Every time the RMS error becomes constant, the training of the network is terminated, yielding a single figure that illustrates the overall error of the network. The calculation of the RMS error is performed according to Equation 5:

$$\mathrm{RMS} = \sqrt{\frac{\sum_{j=1}^{m}\sum_{k=1}^{n}\left(out_{jk} - y_k\right)^2}{m}} \qquad (5)$$

where outjk is the value generated in the hidden and output layer, yk is the expected value, m is the number of cases, and n is the number of values obtained in the output layer (27, 28).

An additional assessment of an ANN can be made by means of a confusion matrix, which gives a categorized list of misclassifications. It presents the number of cases belonging to various classes which are assigned by the model to the class being predicted. Therefore, the confusion matrix is intended to demonstrate the tendencies of the ANN when specific groups of samples are mistakenly classified to a particular class. The matrix is very valuable in solving problems where more than two output classes are handled (6).
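The sketch below (hypothetical labels and outputs, not from the cited studies) shows how the RMS error of Equation 5 and a simple confusion matrix can be computed in Python.

```python
import numpy as np

def rms_error(outputs, targets):
    """Equation 5: root mean square error over m cases."""
    m = outputs.shape[0]
    return np.sqrt(np.sum((outputs - targets) ** 2) / m)

def confusion_matrix(true_labels, predicted_labels, n_classes):
    """Rows: true class; columns: class predicted by the network."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        cm[t, p] += 1
    return cm

# Hypothetical three-class example
true = [0, 0, 1, 2, 2, 1]
pred = [0, 1, 1, 2, 0, 1]
print(confusion_matrix(true, pred, n_classes=3))
print(rms_error(np.array([[0.9], [0.1]]), np.array([[1.0], [0.0]])))
```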
(c) Conjugate gradient descent (CGD).—CGD is considered to be a recommended training algorithm for networks of any kind having a large number of weights and/or multiple output neurons. The algorithm, in contrast to back-propagation, calculates the average error surface gradient after all cases have been processed through the network. Then, the weights are adjusted once at the end of the iteration. Consequently, there is no need to shuffle training cases prior to executing the algorithm. Additionally, no selection of a learning rate or momentum coefficient is necessary, so using CGD is much easier than back-propagation. While being implemented, conjugate gradient descent creates a series of line searches across the error surface. When it finds the direction of the steepest descent, it projects a straight line in that direction so as to discover a minimum down this line. Afterward, more line searches are conducted, one in every iteration. The conjugate directions (the line-search directions) are selected in such a way that the directions that have already been minimized stay that way (22, 29, 30).

CGD calculates the error gradient as the sum of error gradients on each training case. The search direction is modified according to the Polak-Ribiere formula. Equation 6 shows that the conjugate direction dk+1 at stage k + 1 is calculated by means of a linear combination of the negative gradient −∇Ek+1 at stage k + 1 and the direction vector dk at the previous stage k. It is shown in Equation 7 that the new conjugate direction is somewhat compromised by the conjugate direction made in the previous stage, with a coefficient βk adjusted in such a way that the minimization reached by the preceding conjugate direction is preserved to the highest extent (6, 31):

$$d_{k+1} = -\nabla E_{k+1} + \beta_k\, d_k \qquad (6)$$

$$\beta_k = \frac{\nabla E_{k+1}^{T}\left(\nabla E_{k+1} - \nabla E_k\right)}{\nabla E_k^{T}\,\nabla E_k} \qquad (7)$$
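A minimal sketch of the direction update in Equations 6 and 7 is shown below, assuming the error gradients have already been obtained from a full pass over the training data; it illustrates only the Polak-Ribiere direction, not the complete line-search procedure, and the gradient values are hypothetical.

```python
import numpy as np

def polak_ribiere_direction(grad_new, grad_old, d_old):
    """Equations 6-7: new conjugate search direction for CGD."""
    beta = grad_new @ (grad_new - grad_old) / (grad_old @ grad_old)  # Eq. 7
    return -grad_new + beta * d_old                                  # Eq. 6

# Hypothetical gradients of the error surface at two consecutive stages
grad_old = np.array([0.8, -0.3, 0.1])
grad_new = np.array([0.5, -0.1, 0.05])
d_old = -grad_old                       # first direction is steepest descent
d_new = polak_ribiere_direction(grad_new, grad_old, d_old)
print(d_new)
```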
(d) Levenberg-Marquardt algorithm (LMA).—The LMA is the fastest nonlinear optimization algorithm used for weight adjustment in ANNs. Unfortunately, it is not free of some crucial limitations. The first is that only networks of relatively small sizes and with a single output neuron can be trained using this algorithm. The second is that the only error function defining LMA is SSE; thus it is commonly used in networks designed for regression problems. Moreover, it is not able to train radial neurons (6).

The basis of LMA operation is the assumption that the function modeled by a neural network is linear and that therefore its minimum can be reached in one step. Taking into consideration that such an assumption is only roughly true, and only in the close vicinity of a minimum, the LMA must compromise between the linear and a gradient-descent model (22, 30).

The formula for the adjustment of the neural network weights trained by LMA is as follows (30):

$$\Delta w = -\left(J^{T}J + \lambda I\right)^{-1} J^{T} E \qquad (8)$$

where J is the matrix of partial derivatives of the case errors with respect to the weights, I denotes the identity matrix, E is the vector of case errors, and λ is the damping factor, which controls the relative impact of the linear and the gradient-descent approach at each iteration. LMA constantly shifts between these two approaches by adjusting the value of λ (29).
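The single weight-adjustment step of Equation 8 can be sketched in Python as follows (illustrative only; the Jacobian and error values are hypothetical, and the damping schedule for λ is omitted).

```python
import numpy as np

def lma_weight_update(J, errors, lam):
    """Equation 8: dw = -(J^T J + lambda * I)^(-1) J^T E."""
    n_weights = J.shape[1]
    A = J.T @ J + lam * np.eye(n_weights)
    return -np.linalg.solve(A, J.T @ errors)

# Hypothetical Jacobian for 5 training cases and 3 weights
rng = np.random.default_rng(2)
J = rng.normal(size=(5, 3))          # d(case error) / d(weight)
E = rng.normal(size=5)               # case errors
w = np.zeros(3)
w = w + lma_weight_update(J, E, lam=0.01)
print(w)
```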

Radial Basis Function (RBF) Networks

An RBF network is, like the MLP, a feed-forward network with the analogous architecture shown in Figure 2. However, the significant differences between these two ANNs are related to the inner structures of the units in the hidden and output layers. There is also no bias connected to the hidden layer, and the threshold is treated as a deviation. Moreover, these networks deal with classification problems in different ways. In order to separate dissimilar classes, MLPs use hyperplanes, while RBFs use hyperspheres (22).

An RBF network contains radial neurons in the hidden layer, which model a Gaussian response surface. Given that these functions are nonlinear, there is no need to put more than one hidden layer into the architecture of an RBF network. In view of the fact that a linear combination of signals coming from the radial hidden layer is sufficient to model any nonlinear function, in a typical RBF network the output neurons contain a linear activation function (6, 23).

Since the RBF network is capable of modeling any nonlinear function with only one hidden layer, there is no problem in deciding about the number of layers. Moreover, the linear activation function in the output layer can be optimized by traditional linear modeling techniques. These are quick and not affected by problems such as local minima, so RBF networks can be trained extremely quickly in comparison to MLPs. However, prior to the linear optimization the number of radial neurons must be established, followed by setting of their centers and deviations. Regrettably, the algorithms to accomplish this are inclined to discover suboptimal combinations. In addition, the shape of the RBF network response surface requires a lot of neurons to effectively find the solution (22, 32).

In an RBF network, neurons respond nonlinearly to the distance of points from the center represented by the radial unit. A radial neuron is defined by its center point and radius. The center of a radial node operates as the weights, while the radius value acts as the threshold. It must be remembered that the weights and thresholds in an RBF network are entirely different from those in MLP neurons. The radial weights define a point, whereas the radial threshold is a deviation (22).

The training of RBF networks proceeds in two separate stages. First is the setting of the centers and deviations of the radial neurons; after that, the linear output layer is optimized using the pseudoinverse (singular value decomposition) algorithm. Assigning of the centers must be performed in such a way as to reflect the natural clustering of the data. The two most common methods applied are subsampling and the k-means algorithm. In subsampling, training points are selected at random and copied to the radial units. The k-means algorithm attempts to find an optimal set of points to be positioned at the centroids of clusters of the training data (22, 23, 32).
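The two-stage training just described can be sketched in a few lines of Python. The example below is hypothetical (random data, subsampled centers, a fixed spread) and uses the pseudoinverse to optimize the linear output layer, as mentioned above; it is a sketch, not a complete RBF implementation.

```python
import numpy as np

def rbf_design_matrix(X, centers, spread):
    """Gaussian radial responses exp(-||x - c||^2 / (2 s^2)) for each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spread ** 2))

rng = np.random.default_rng(3)
X = rng.random((30, 2))                       # hypothetical training inputs
y = np.sin(X[:, 0]) + X[:, 1]                 # hypothetical target values

# Stage 1: set the centers by subsampling (random copies of training points)
centers = X[rng.choice(len(X), size=6, replace=False)]
Phi = rbf_design_matrix(X, centers, spread=0.5)

# Stage 2: optimize the linear output layer with the pseudoinverse (SVD)
weights = np.linalg.pinv(Phi) @ y
print(np.sqrt(np.mean((Phi @ weights - y) ** 2)))   # training RMS error
```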
Self-Organizing Maps (SOMs)

[Figure 3. The scheme of an SOM network.]

SOMs, known also as Kohonen networks, are composed of only two layers: the input layer and an output layer of radial neurons forming a two-dimensional map. A diagram of an SOM is illustrated in Figure 3. Accordingly, the construction of SOMs is determined only by the topology of the neighborhood and the size of the output matrix onto which the samples are to be mapped. Unlike the MLP trained by BP, the SOM operates using unsupervised learning, in which the network weights are entirely adjusted in response to the input pattern. There is no need to submit the inputs and outputs to the network at the same time. The network makes the neurons compete among themselves to determine which one is to be stimulated. Such a process is called competitive learning, which leads to clustering of data and relating similar classes to each other (23, 33).

The result of the competition between neurons can be decided according to two aspects. One is based on the largest output for a given input, and the other is identifying the neuron with the weight vector most similar to the input vector. The neuron best satisfying the selected criterion becomes "the winner." The criterion can be demonstrated by Equation 9, where outw is the output of the winning neuron, xi denotes the training vector, and wji indicates the weight vector between the ith input and the jth neuron (6, 33):

$$out_w \leftarrow \min_j\left[\sum_{i=1}^{m}\left(x_i - w_{ji}\right)^2\right] \qquad (9)$$

After the winning neuron has been chosen, its weights are adjusted so that its response gets closer to the desired one. The weights of neighboring neurons are modified as well. Such adjustments are mostly scaled down in relation to the distance from the winning neuron (23, 25).

The topological arrangement performed by SOMs requires adding the concept of a neighborhood to the algorithm. The neighborhood is a collection of neurons that surround the winning neuron. In the beginning the neighborhood covers a great number of neurons, probably the whole topological map. In the stages to follow the neighborhood is reduced to zero, so that it takes up only the winning neuron. The Kohonen algorithm modifies not only the activated neuron, but all neurons in the actual neighborhood. Consequently, the SOM develops an introductory topological categorization in which similar cases stimulate adjacent neurons on the topological map. Throughout the iterations of the training, the dimension of the neighborhood and the learning rate are progressively reduced, with the intention that more delicate dissimilarities between the areas of the map are made possible (23, 25, 26).

The correction of the neuron's weight vectors in response to the input pattern is usually carried out according to the Kohonen learning rule (33):

$$w_{ji}(t+1) = w_{ji}(t) + \eta(t+1)\cdot S(j,c)\cdot\left(x_i - w_{ji}(t)\right) \qquad (10)$$

where wji(t + 1) is the weight vector of the jth output neuron in epoch (t + 1), wji(t) is the weight vector of the jth output neuron in the previous epoch (t), η(t + 1) is the learning rate in epoch (t + 1), S(j,c) is the neighborhood function of the jth output neuron in relation to the winning neuron, and xi − wji is the difference between the ith input and the weight vector of the jth output neuron.

The Kohonen training algorithm covers two distinct stages. In the first one, the values of the learning rate and the neighborhood are high, and the duration of the stage is short (50–300 epochs). In the second, the learning rate is low, and the neighborhood is reduced to zero or near-zero. After the network training has been accomplished, its topological map visualizes the data in order to facilitate exploration and recognition of the data structure (6, 23).
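One SOM training step, combining the winner selection of Equation 9 with the Kohonen rule of Equation 10, can be sketched as follows; the map size, Gaussian neighborhood function, and data are hypothetical choices made only for illustration.

```python
import numpy as np

def kohonen_step(weights, grid, x, eta, sigma):
    """One SOM training step: Equation 9 (winner) + Equation 10 (update).

    weights : (n_neurons, n_inputs) weight vectors of the map neurons
    grid    : (n_neurons, 2) positions of the neurons on the 2-D map
    """
    # Equation 9: winning neuron = smallest squared distance to the input
    winner = np.argmin(((x - weights) ** 2).sum(axis=1))
    # Neighborhood function S(j, c): Gaussian in map coordinates
    d2 = ((grid - grid[winner]) ** 2).sum(axis=1)
    S = np.exp(-d2 / (2.0 * sigma ** 2))
    # Equation 10: move each weight vector toward the input, scaled by S
    return weights + eta * S[:, None] * (x - weights)

# Hypothetical 3 x 3 map for 4-dimensional input data
rng = np.random.default_rng(4)
grid = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
weights = rng.random((9, 4))
weights = kohonen_step(weights, grid, x=rng.random(4), eta=0.5, sigma=1.5)
```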
Counter-Propagation Artificial Neural Networks (CP-ANNs)

CP-ANNs consist of two layers: the input (Kohonen) layer and the output layer (also called the Grossberg layer). The neurons in the Kohonen layer have as many weights as the number of classes to be modeled. Each neuron in the output layer simply generates the weight of the connection between itself and the ignited neuron in the Kohonen layer (10, 34, 35).
Table 2. Application of neural networks in quality assessment and quantification of APIs^a

Active pharmaceutical ingredients | Drug formulations | Analytical techniques | Neural network approach | Comparison of ANNs' predictive power | Ref.
Ranitidine hydrochloride | Two polymorphic forms, mixtures and tablets | XRD | MLP feed-forward ANN with BP | Conventional RSM | 43, 44
Ranitidine hydrochloride | Two crystal forms, bulk drug and tablets | DRIFT, XRD | MLP feed-forward ANN with BP | — | 45, 45
Mebendazole | Powder mixtures of polymorphs A–C | DRIFT | Compression of spectra by PCA, ANN | PLS | 47
D-mannitol (excipient) | Ternary powder mixtures of polymorphs | FT-Raman | Feed-forward CG-ANN | PLS | 48
Mebendazole | Polymorphic forms, raw material, suspensions | DRA-UV, ATR-FTIR | GRNN | — | 49
Terbutaline sulphate | Bulk drug | DRIFT | MLP feed-forward | RBF-ANN | 50
Atorvastatin calcium | Tablets | FT-Raman | CP-ANN | PLS, PCR | 51
Potassium sodium dehydroandroandrographolide succinate | Lyophilized powders for injection | DR-FT-NIR | Preprocessing of original spectra, ANN with BP algorithm | PLS | 52
Diclofenac sodium | Powders | NIR | O-PLS pretreated spectra, PC-ANN, MLP feed-forward BP-ANN | PLS | 53
Paracetamol and amantadine hydrochloride | Tablets and powder | NIR | Pretreatment of original spectra, BP-ANN | — | 54
Aspirin and phenacetin | Tablets | NIR | Pretreatment of original spectra, PC feed-forward BP-ANN | Conventional ANN | 55
Acetaminophen and phenobarbital | Tablets | Vis spectrophotometry | Reduction of the data by PCA, ANN with BP | Conventional ANN, PLS | 56
Dextropropoxyphene and dipyrone | Injections | UV-Vis spectrophotometry | Feed-forward BP-ANN | PLS, NLPLS | 57
Chlorpheniramine, naphazoline, and dexamethasone | Nasal solutions | UV-Vis spectrophotometry | Feed-forward BP-ANN | PLS | 58
Tyrosine, tryptophane, phenylalanine, and 3,4-dihydroxyphenylalanine | Mixtures | UV spectrophotometry | Compression of spectra by FA, feed-forward BP-ANN | ANN without FA | 59
Vitamins C, B6, and PP | Commercial pharmaceutical preparations | Differential pulse voltammetry | Reduction of the data by PCA, three-layer ANN with Levenberg-Marquardt algorithm | PLS | 60
Amiloride and methychlothiazide | Tablets | HPLC | Feed-forward BP-ANN | RSM | 61

^a NLPLS = Nonlinear partial least squares; FA = factor analysis.

The modeling performed by CP-ANN is based on a two-step training procedure (10, 36). The first step is unsupervised and is analogous to the mapping of the multidimensional input data onto the lower dimensional (typically two-dimensional) grid, using the Kohonen competitive learning rule described above (33). After the position of the winning neuron for the input vector has been identified, the weights of both the input and output layers of the CP-ANN are adjusted correspondingly, based on the Kohonen learning rule presented by Equation 10 (33).

The second step of the training is supervised; therefore, the target value is required for each input. The training of the network is carried out with a set of input-target pairs. The purpose of the network training is to adjust the weights of the neurons in such a manner that for each training input, the output of the network matches the target. Throughout the training procedure all input-output pairs are introduced to the network iteratively, and the weights of both layers (Kohonen and output layer) are corrected in proportion to the differences between the targets and the actual outputs (Equation 11; 36, 37):

$$out_{ji}(t+1) = out_{ji}(t) + \eta(t+1)\cdot S(j,c)\cdot\left(y_i - out_{ji}(t)\right) \qquad (11)$$

where outji(t + 1) is the weight vector of the jth output neuron in epoch (t + 1), outji(t) is the weight vector of the jth output neuron in the preceding epoch (t), η(t + 1) is the learning rate in the current epoch (t + 1), S(j,c) is the neighborhood function of the jth output neuron in relation to the winning neuron, and yi − outji is the difference between the ith target value and the weight vector of the jth output neuron.

After the completion of the network training, each neuron of the Kohonen layer can be assigned to a certain class on account of the output weights. Consequently, all samples located in that neuron are assigned to the corresponding class as well (10, 35).
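The two-step CP-ANN iteration can be sketched by extending the SOM step shown earlier with the supervised correction of Equation 11; the sketch below is hypothetical (map size, neighborhood function, and data invented for illustration) and is not a complete CP-ANN implementation.

```python
import numpy as np

def cp_ann_step(kohonen_w, output_w, grid, x, y, eta, sigma):
    """One CP-ANN training iteration: Kohonen step (Eq. 10) followed by the
    supervised correction of the output (Grossberg) layer (Eq. 11)."""
    # Unsupervised step: winner and neighborhood in the Kohonen layer
    winner = np.argmin(((x - kohonen_w) ** 2).sum(axis=1))
    d2 = ((grid - grid[winner]) ** 2).sum(axis=1)
    S = np.exp(-d2 / (2.0 * sigma ** 2))
    kohonen_w = kohonen_w + eta * S[:, None] * (x - kohonen_w)       # Eq. 10
    # Supervised step: move output-layer weights toward the target vector
    output_w = output_w + eta * S[:, None] * (y - output_w)          # Eq. 11
    return kohonen_w, output_w

# Hypothetical 2 x 2 map, 3 input variables, 2 target classes
rng = np.random.default_rng(5)
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
kw, ow = rng.random((4, 3)), rng.random((4, 2))
kw, ow = cp_ann_step(kw, ow, grid, x=rng.random(3),
                     y=np.array([1.0, 0.0]), eta=0.3, sigma=1.0)
```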
Generalized Regression Neural Networks (GRNNs)

GRNNs are the ANNs that estimate probability density functions (PDFs) from the analyzed data using kernel-based approximation. A GRNN is regarded as a so-called Bayesian network and operates under the assumption that the presence of a specific case indicates some probability density at that point. When cases form a cluster together, they indicate an area of high probability density. In kernel-based estimation, simple functions are placed at each case and combined to give the overall PDF (38).

GRNNs are applied to solving regression problems with a continuous target variable. The kernel functions are normally Gaussians situated at each case in the training dataset. The response surface (bell-shaped) has its peak exactly at a specific point in the input space for each training case. The training cases are copied into the network and used to estimate the response in relation to new points. For the estimation of the output, a weighted average of the outputs of the training cases is applied. The distance of a certain point from the evaluated point relates to the weighting. This means that adjacent points contribute to the estimate significantly (6, 39).

The architecture of GRNNs contains four layers: the input layer, the radial (pattern) layer, the summation layer, and the output layer. The number of input neurons equals the number of independent features. Neurons in the radial layer correspond to training vectors and are connected to the two neurons in the summation layer, which assist in estimating the weighted average of the radial layer. One of the neurons sums all the outputs of the radial layer; then the weighted sum is divided by the sum of the weighting factors to provide the weighted average. The sum of the weighting factors is calculated by a single specialized neuron in the summation layer. These two outputs are simply divided in the output neuron to yield the predicted value of the dependent characteristic (6, 39, 40).
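The GRNN prediction rule described above (a Gaussian kernel on every training case, a weighted sum, and a normalizing sum of weights) can be sketched compactly in Python; the data below are hypothetical and the pattern, summation, and output layers are collapsed into a few lines.

```python
import numpy as np

def grnn_predict(X_train, y_train, x_new, sigma=0.3):
    """GRNN estimate: Gaussian kernel placed on every training case,
    prediction = weighted average of the training targets."""
    d2 = ((X_train - x_new) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2.0 * sigma ** 2))       # radial (pattern) layer
    return (k @ y_train) / k.sum()             # summation / output layers

# Hypothetical one-dimensional regression problem
rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(40, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=40)
print(grnn_predict(X, y, x_new=np.array([0.25])))
```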
Genetic Algorithm (GA)

The identification of important variables can also be made by means of the GA, which investigates binary strings or real-number strings. The algorithm creates a population of such strings at random in order to apply a process equivalent to natural selection so as to decide on the strings of best quality. In the strings, the input variables that should be included in building up the neural network are indicated as 1. If an input variable is denoted as 0, it suggests that the variable should be removed from the model. The crossover (single- and two-point) and the mutation are the two fundamental operators of a GA, and the performance of a GA is highly dependent on them. In the single-point crossover method, when the intersection point is chosen, the part of the binary string from the beginning to this point is transferred from string A, and the rest is obtained from string B. Two-point crossover is based on selecting two intersection points. Binary strings from the beginning to the first point and from the second point to the end are copied from string A, and the part from the first to the second point is acquired from string B. In mutation, selected bits within a string that has been developed through crossover are inverted, that is, 0 becomes 1 and 1 turns into 0 (6, 41).

The selected strings are subsequently multiplied to create new populations, which consecutively become better and better. In the end, the most superior string of the final generation is selected. Even though the algorithm is time-consuming, it is particularly suited for the selection of the most important variables when they are in great numbers (40 and more), intercorrelated, or jointly required (42).
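The two crossover operators and the mutation operator described above can be written directly on binary strings; the Python sketch below is illustrative only (string lengths, mutation rate, and contents are hypothetical), with 1 marking a variable to keep and 0 a variable to remove.

```python
import numpy as np

rng = np.random.default_rng(7)

def single_point_crossover(a, b):
    """Copy string A up to a random intersection point, the rest from B."""
    p = rng.integers(1, len(a))
    return np.concatenate([a[:p], b[p:]])

def two_point_crossover(a, b):
    """Copy A outside two intersection points, the middle part from B."""
    p1, p2 = sorted(rng.choice(np.arange(1, len(a)), size=2, replace=False))
    return np.concatenate([a[:p1], b[p1:p2], a[p2:]])

def mutate(s, rate=0.05):
    """Invert randomly selected bits: 0 becomes 1 and 1 becomes 0."""
    flips = rng.random(len(s)) < rate
    return np.where(flips, 1 - s, s)

# Hypothetical strings: 1 = keep the input variable, 0 = remove it
a = rng.integers(0, 2, size=10)
b = rng.integers(0, 2, size=10)
child = mutate(single_point_crossover(a, b))
print(a, b, child, sep="\n")
```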
Application of ANNs in Pharmacy

ANNs are known as a powerful tool to simulate various nonlinear relationships and have been applied to numerous problems of considerable complexity. Generally, the various applications of ANNs can be summarized into classification or pattern recognition, prediction, and modeling (13). Supervised associating networks can be applied in the pharmaceutical field as an alternative to conventional response surface methodology, whereas unsupervised feature-extracting networks represent an alternative to principal component analysis (PCA). Nonadaptive unsupervised networks are able to reconstruct their patterns when presented with noisy samples and can be used for image recognition. Thus, neural networks have been widely used in different areas of great importance to pharmacy, ranging from the interpretation of analytical data, and drug and dosage form design, through biopharmacy to clinical pharmacy.

Neural Network in Pharmaceutical and Clinical Analysis

ANN methodology has been used in pharmaceutical analysis primarily as an effective tool for quantification of complex mixtures, including pharmaceutical preparations.
Table 3. Application of neural networks in pharmaceutical product development

Active pharmaceutical ingredients | Drug formulations | Considered problems | Neural network approach | Ref.
Acetaminophen | Tablets | The effect of binder type and concentration on the properties of tablets | GA-ANN | 121
Benzimidazole | Microparticles | The influence of process parameters and polymer and NaOH concentrations on the particle size of benzimidazole | Design Expert ver. 7.0.3^a | 122
Budesonide | Nanoemulsions | Identifying factors (surfactants, internal phase content, processing conditions) that influence the particle size and stability | INForm ver. 3.5^a | 123, 124
Carbodiimide hydrochloride | Hydrogels | The effect of experimental variables on the creation of an optimal formula | Computer program written by authors | 125
Chlorpheniramine maleate | Capsules | The design of a controlled-release hydrophilic matrix capsule containing blends of anionic polymers | BP-ANN | 126
Dapivirine | Gels, tablets | The effect of formulation variables on the properties of mucoadhesive gels and a freeze-dried system for vaginal delivery | BP-MLP | 127
Diclofenac sodium | Tablets | The effect of three formulation components on the dissolution rate of diclofenac sodium from sustained-release matrix tablets | BP-ANN | 128
Doxorubicin | Microspheres | The effect of drug loading level and concentration of NaCl and CaCl2 in the release media on the in vitro drug release kinetics | BP-ANN | 129
Insulin (bovine) | Pellets | To design an implant controlled-release system for protein drug delivery | BP-ANN | 130
Ketoprofen | Hydrogels | The influence of an absorption enhancer on the percutaneous absorption of ketoprofen from transdermal drug delivery | Computer program written by authors | 131, 132
Ketoprofen | Solid dispersions | Predicting dissolution of ketoprofen from solid dispersions and physical mixtures | BP-ANN | 133
Melatonin | Hydrogels | The effect of water, ethanol, and propylene glycol on the permeation of drug through skin from a transdermal delivery system | BP-ANN | 134
Nimodipine | Effervescent tablets | The development of effervescent controlled-release floating tablet formulations with solid dispersion of drug | Feed-forward BP-ANN | 135, 136
Omeprazole | Pellets | The effect of excipients and operating conditions on the possibility of enteric-coated pellet tableting | MLP, RBF, GRNN | 137
Paclitaxel | Emulsions | The effect of formulation components on the particle size, entrapment efficiency, and stability of emulsion | Computer program written by authors | 138
Prednisolone | Nanoparticles | The effect of process conditions and microreactor setup on the particle size | INForm ver. 3.5^a | 130
Rifampicin and isoniazid | Microemulsions | To develop a colloidal dosage form (oil phase content, surfactant combination) for the oral delivery of two drugs | MLP, RBF | 140
Sodium dodecyl sulfate | Gels | The effect of formulation and physiological variables on the diffusivity of drug from a female controlled drug delivery system | Three-layered ANN | 141
Sucrose | Granulates | The effect of different operating conditions on the properties of granules | GRNN | 142
Theophylline | Granulates | The influence of granulation variables on the granulate size and friability | Feed-forward ANN | 143
Theophylline | Pellets | The influence of different ratios of excipients on the dissolution profiles | CG-MLP | 144
Theophylline | Pellets | The effect of the amount of pectin-chitosan complex and coating weight gain on the in vitro release profile | Feed-forward ANN, five training algorithms | 145
Theophylline | Tablets | The development of a controlled-release tablet | Partitioned BP-ANN | 146
Trapidil | Tablets | The effect of formulation and process variables on the release order and the rate constant | Computer program written by authors | 147
Verapamil hydrochloride | Beads | The effect of process and formulation variables on the in vitro release profile | CAD/Chem ver. 5.1^a | 148
Un-named drug substance | Tablets | Effect of experimental design on the modeling of a film-coated formulation | BP-MLP | 149
Un-named drug substance | Tablets | The quantitative characteristics of the influence of each excipient's concentration on the properties of a solid dosage form | Generalized feed-forward MLP | 150, 151
Un-named drug substance | Tablets | The effect of polymer and pigment on a tablet coating formulation requiring minimization of crack velocity and maximization of film opacity | CAD/Chem^a | 152
Un-named drug substance | Tablets | Comparison of modeling abilities of ANN to the abilities of classical statistical methods using data from a study on a solid dosage form | Generalized feed-forward MLP | 153
Un-named drug substance | Tablets | To determine whether ANNs differing in the BP algorithm are capable of generating equivalent, highly predictive models | Three ANN packages | 154
Un-named drug substance | Tablets | Comparison of neurofuzzy logic and ANNs in modeling experimental data of an immediate-release tablet formulation | BP-MLP | 155
Un-named drug substance | Solid dispersions, microemulsions | To provide a single methodology which could be universal for application of ANNs to various dosage form modeling | Various ANN models | 156
Un-named drug substance | Pellets | The identification of relations between formulation characteristics and pellet properties | BP-MLP | 157
M-112 | Tablets | The influence of the dry granulation process and the settings of the tableting parameters on tablet capacity tendency | Feed-forward ANN, Levenberg-Marquardt algorithm | 158
Sympathomimetic drug | Tablets | The effect of formulation and tablet variables in the design of a controlled-release dosage form | CAD/Chem ver. 4.6^a | 159

^a Commercially available ANN software.
It has been applied in simultaneous determination of two active pharmaceutical ingredients, discrimination of herbal remedies and vegetable oils, and for medical diagnosis. Selected examples of the application of ANNs in pharmaceutical analysis are presented below.

(a) APIs.—Neural networks are used as supporting techniques that make quality assessment possible and enable quantitative analysis of pharmaceutical raw materials and medicinal products. They are used more and more frequently for the multivariate calibration of the datasets obtained by the analysis of drugs with the aid of modern analytical instruments, e.g., spectroscopic, electrochemical, and chromatographic. Multivariate calibration allows identification and determination of active compounds in complex mixtures, where the content of the constituents differs significantly. Representative examples of applications of ANNs in pharmaceutical analysis are compiled in Table 2.

For many years, linear calibration was the most commonly used technique, based on the modeling of a single variable at a time (43). However, the most important problem in the study of pharmaceutical raw materials and preparations is often nonlinearity; therefore, numerous constraints apply to the optimization problem. Multivariable experimental design can overcome the problems with interaction effects, and nonlinear estimation can be used to compute the relationship between several independent variables and a single dependent variable. In particular, ANN methodology is a powerful tool that enables modeling of nonlinear functions with large numbers of variables.

Purity, stability, and content of different crystalline forms of a drug are among the most important problems in pharmacy. The existence of different crystal forms has a significant impact on key properties of an API, such as shelf-life, vapor pressure, solubility, bioavailability, and crystal morphology and density. For those reasons, the goal of numerous works was to search for suitable instrumental techniques to characterize pharmaceutical solids, identify crystalline solid phases, and quantitatively analyze different crystalline forms in their mixtures (43–49). Results of these investigations showed that ANNs can be used for the identification, characterization, and quantification of the crystal forms of unknown drug samples based on the proper preprocessing of the X-ray powder diffraction patterns, diffuse reflectance FTIR (DRIFT) spectral patterns, FT-Raman spectroscopic data, and diffuse reflectance UV and attenuated total reflectance-FTIR spectral patterns.

The developed ANN models confirmed that the characteristic signals generated by the above-mentioned instrumental techniques are directly proportional to the measured amounts of drug crystal forms present in the samples (43–49). The high usefulness of the methods was proven by the analysis of enantiomeric purity and stability of drugs. Moreover, the elaborated procedures proved to be simple, direct, and nondestructive.

Besides the application of ANNs in assessment of purity, stability, and quantification of different crystalline forms of a drug, new easy and fast ways of acquiring data obtained with reliable instrumental techniques have allowed neural networks to be widely used for the quantification of medicinal products (50–61). As an example, they were applied to quantification of atorvastatin calcium in tablets using the FT-Raman spectral patterns (51), potassium sodium dehydroandroandrographolide succinate in lyophilized powders for injection (52), and diclofenac sodium in powder mixtures (53) using near-IR (NIR) spectral patterns.

Based on the information compiled in Table 2, it can be concluded that, compared to conventional analytical methods, spectroscopic techniques are the most frequently applied in simultaneous determination of two active constituents in multicomponent, commercial medicinal products (54–59). The great advantages of these methods, especially those using the IR range, over other analytical techniques are the simplicity of measurement, short time of analysis, and capacity for automation of the analytical procedure. NIR and Raman spectroscopy methods are also nondestructive techniques that require small samples, particularly for solid-state samples. In addition, there is no need to dissolve or extract the active pharmaceutical ingredients, so the use of toxic and corrosive reagents can be avoided (51–55). The efficacy of NIR and Raman spectral patterns and ANN methodology in the quantification of medicinal products has been well documented. However, it must be taken into consideration that ANNs suffer from three major drawbacks: the predictive properties of ANNs strongly depend on the learning parameters and the topology of the network; training time is lengthy; and ANN models are complex and difficult to interpret (54, 55).

Neural networks have also been applied as supporting techniques using as input data UV-Vis spectrophotometric (56–59), differential pulse voltammetric (60), and HPLC (61) patterns, or the scores of principal component analysis (PC-ANN; 56, 60), intended for simultaneous quantitative estimation of active ingredients in commercial pharmaceutical formulations (nasal solutions, injections, and tablets). The obtained results show reasonably good accuracy and precision compared with those achieved by the comparative methods. It has also been revealed that PC-ANN modeling simplifies the training procedure of the ANN, because the inclusion of only the significant principal components in the model decreases the contribution of experimental noise and other minor extraneous factors.
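As an illustration of the PC-ANN idea just described (spectra compressed by PCA, scores fed to a small feed-forward BP-ANN), the following Python sketch trains a one-hidden-layer regression network on simulated "spectra." It is not taken from any of the cited studies; the data, number of principal components, network size, and training settings are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical "spectra": 60 samples x 200 wavelengths, concentration target
concentration = rng.uniform(0.1, 1.0, size=60)
spectra = np.outer(concentration, rng.random(200)) + 0.01 * rng.normal(size=(60, 200))

# Step 1: compress the spectra by PCA (scores of the first 3 components)
X = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:3].T

# Step 2: feed the PCA scores to a small feed-forward BP-ANN (1 hidden layer)
W1, b1 = rng.normal(scale=0.5, size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)
y = concentration.reshape(-1, 1)
eta = 0.05
for epoch in range(2000):                      # plain gradient-descent BP
    h = np.tanh(scores @ W1 + b1)
    out = h @ W2 + b2                          # linear output for regression
    err = out - y
    dW2 = h.T @ err / len(y); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = scores.T @ dh / len(y); db1 = dh.mean(axis=0)
    W1 -= eta * dW1; b1 -= eta * db1; W2 -= eta * dW2; b2 -= eta * db2

print("training RMS:", np.sqrt(np.mean(err ** 2)))
```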
In conclusion, it can be stated that ANNs are good recognizers of patterns and robust classifiers, with the ability to generalize when making decisions based on imprecise input data (12). The advantage of neural networks over other chemometric methods, such as partial least squares (PLS) and principal component regression (PCR), in modeling for which nonlinear signal-response dependencies are present, has been well documented (51). Furthermore, ANNs have several advantages over statistical techniques, i.e., they have the ability to continuously adapt to new data through the use of less rigid assumptions about the underlying data distribution. They allow models to be built without knowing the actual modeling functions; from Kohonen-ANN and counter-propagation-ANN, useful information about input and output variables can be extracted, respectively (19).

(b) Herbal remedies and vegetable oils.—A high percentage of the world's population still depends on medicinal plants to treat many health problems. Moreover, many current drugs are derived from plants, e.g., morphine, which is a potent painkiller from the opium poppy, or digitalis, a heart remedy from foxglove. For these reasons, it is very important to adopt modern analytical methods for the purposes of plant drug QC.

In recent years, differential thermal analysis, thermogravimetry, stripping voltammetry, pyrolysis MS, and electrospray ionization-MS were used for research on medicinal plant raw materials (62–65). With the aid of different types of ANNs and training algorithms, databases received from these studies were used for selection of the thermal variables that are able to recognize herbal samples composed of various plant organs (62); to recognize the taxonomy of a plant and the anatomical part a sample originated from on the basis of some trace element
662  Wesolowski & Suchacz: Journal of AOAC International Vol. 95, No. 3, 2012

content (63); for discriminating plant seeds at the genus, species, predict pharmacokinetic parameters for independent testing of
and subspecies level (64); and for metabolic fingerprinting with compounds.
considerable potential for applications where high-throughput Neural networks and PLS have been used to predict the
screening is desired  (65). Additionally, probabilistic neural blood-to-plasma concentration ratio of drugs based on selected
networks were also utilized for the classification of 102 active molecular descriptors (72). Statistical analysis of the training
compounds from diverse medicinal plants with anticancer dataset evidently indicated the superiority of the ANN model over
activity against the human rhinopharyngocele cell line KB (66). the PLS regression. The advantage of the ANN was confirmed by
The models built in this work would be of potential help in the comparison of the predictive ability on the test and validation set.
design of novel and more potent anticancer agents. The use of this model may be an important tool in early drug
Soybean and rapeseed oils are the most popular vegetable discovery by providing a relevant pharmacokinetic parameter.
oils widely used in preparation of prescription drugs. Because ANNs were also used to predict the pharmacokinetics and
these oils may spoil naturally through rancidity or adulteration, pharmacodynamics of aminoglycoside antibiotics in severely ill
the quality of vegetable oils has been constantly monitored by patients (12, 73, 74). The results showed that ANNs were useful
a measure of the physicochemical properties of oils, such as not only for accurate prediction of the plasma concentration
density, refractive index, saponification value, and iodine and of aminoglycosides, but also for classification of patients
acid numbers (67–69). Recently, the application of thermal whose plasma concentration would be in the therapeutic
methods of analysis to confirm the degree of oil spoilage was concentration range. Additionally, numerous papers have been
observed (67, 68). Research based on the physicochemical and published indicating the application of ANNs for predicting
thermal properties of vegetable oil samples as variables revealed the pharmacokinetic parameters: area under concentration-time
that by combining neural network techniques with unsupervised curve (AUC), peak plasma concentration (cmax), time to reach
pattern-recognition methods, the classification of rapeseed and peak plasma concentration (tmax) and the assessment of their
soybean oils according to their type and quality can be performed variability in bioequivalence studies (75), as well as for diagnosis
more accurately. and therapy. For example, two unsupervised ANNs were applied
The fatty acid composition of 137 samples of different for the classification of the patients in three medical fields based
commercially marketed edible vegetable oils, including on their electroencephalograms and evoked potentials (76).
pumpkin, sunflower, peanut, olive, soybean, rapeseed, corn, and Interesting studies were also performed indicating the
some mixed oils, has also been used to develop and implement an automated method for classification of oil samples in routine food control laboratories (37). The evaluation of different chemometric methods (PCA, Kohonen neural network, CP-ANN) proved that counter-propagation ANN was a valuable model for the classification of different vegetable oils on the basis of their composition regarding seven fatty acids determined by GC. The proposed method is of significant value for the determination of unknown oil samples and could be implemented as a fast and effective method in routine analysis. Moreover, a search for reliable analytical methods for oil investigation showed that by use of 1H and 13C NMR resonance data as input variables, neural networks proved to be a useful tool for detecting the presence of hazelnut oil in olive oil at content levels higher than 8% (70). In order to decrease the effect of random variables in the experimental analyses, the NMR data were standardized prior to performing the ANN analyses, and a GA was applied to select the optimal set of variables (1H and 13C NMR resonances) for the neural network.

(c) Laboratory diagnostics.—ANNs have also been widely used in pharmacokinetics and pharmacodynamics research. They offer a quick and simple method for predicting and identifying significant covariates, and are a versatile computational tool exhibiting a clear advantage over conventional model-independent pharmacokinetic and pharmacodynamic analysis (12). ANNs have the potential to become a useful analytical tool for population pharmacokinetic data analysis.

For example, a study was made on the use of ANNs for the prediction of clearances, fraction bound to plasma proteins, and volume of distribution of a series of structurally diverse compounds (71). The research has shown that useful information can be derived from the chemical structure of a drug to generate descriptors encoding various properties of that drug. Using theoretically calculated descriptors for a set of structurally diverse compounds, ANN models successfully managed to predict these pharmacokinetic parameters.

Other studies have examined the effectiveness of neural networks for simultaneous determination of the composition of human urinary calculi (77, 78). This is a very important problem because more than half of the analyzed calculi from the patients from Macedonia were composed of whewellite (CaC2O4·H2O), weddellite (CaC2O4·2H2O), and uric acid or carbonate apatite, as single components or in binary or ternary mixtures. Feed-forward ANNs trained using the Levenberg-Marquardt algorithm were applied, and in order to make the training procedure faster and to get better results, PCA was performed on normalized IR spectra. The results for the synthetic mixtures were better than those obtained with factor-based methods (PLS, PCR), due to a better prediction capacity of neural networks. ANNs led to better results, especially for mixtures of components with highly overlapping spectra (whewellite, weddellite).
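
As an illustration of the calibration scheme just described, the sketch below compresses simulated, normalized spectra with PCA and trains a small feed-forward network to return the mass fractions of three mixture components. This is only a minimal sketch under stated assumptions, not the cited authors' implementation: the spectra and component bands are invented, and because scikit-learn's multilayer perceptron does not offer Levenberg-Marquardt training, the quasi-Newton "lbfgs" solver stands in for it.

```python
# Minimal sketch: PCA-compressed IR-like spectra -> feed-forward network that
# predicts mixture composition. All data are simulated, not measured spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wavenumbers = np.linspace(400, 4000, 600)

def band(center, width):
    return np.exp(-((wavenumbers - center) / width) ** 2)

# Hypothetical "pure-component" spectra for a three-component mixture
pure = np.vstack([band(1320, 60) + band(780, 40),    # e.g., whewellite-like
                  band(1330, 80) + band(610, 30),    # e.g., weddellite-like
                  band(1400, 50) + band(1030, 40)])  # e.g., uric acid-like

fractions = rng.dirichlet(np.ones(3), size=200)       # mixture compositions
spectra = fractions @ pure + rng.normal(0, 0.01, (200, pure.shape[1]))
spectra /= spectra.max(axis=1, keepdims=True)          # simple normalization

model = make_pipeline(PCA(n_components=10),
                      MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                                   max_iter=2000, random_state=0))
model.fit(spectra[:150], fractions[:150])
print(model.score(spectra[150:], fractions[150:]))     # R^2 on held-out mixtures
```

In practice the number of retained principal components and the size of the hidden layer would be tuned on a validation set rather than fixed as here.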

Application of ANNs in Quantitative Structure-Activity Relationship (QSAR) Studies

One of the basic prerequisites in drug design is the assumption that compounds possessing a similar structure exhibit analogous types of biological activity. Different in silico techniques are now applied for screening new chemical compounds in terms of biological activity (79, 80). Among these techniques, QSAR is the one used most often (79–108). QSARs represent predictive models deriving from the application of statistical tools correlating the biological activity of drug candidates with descriptors representative of molecular structure and/or property (79). The success of any QSAR model depends on the accuracy of the input data, selection of appropriate descriptors, and statistical tools.

(a) QSARs.—In QSAR investigations, quantitative variables are mostly used in the regression methods of analysis, while qualitative ones are used in the methods of pattern recognition, such as GAs, the method of k-nearest neighbors, and neural networks (80). Among these techniques, ANNs possess certain important properties that make them very functional in QSAR studies. In addition, neural networks can combine and incorporate both literature-based and experimental data in order to solve problems (13).

In the literature, there are numerous examples confirming the high value of ANNs in an earlier stage of drug development to identify the potential interaction of new compounds possessing like structure with particular drug receptors. Three-layered, feed-forward neural networks trained with BP have been the most frequently used type of ANNs in QSAR studies (95, 96). In addition, other neural networks have been used, e.g., the Bayesian-regularized ANN (98), the multilayer perceptron (105) with the Levenberg-Marquardt training algorithm, self-organizing maps used to settle structural similarities among the samples considered (98), and ANNs coupled with GAs (85–93, 102). The implementation of GAs enables the selection of variables while applying back-propagation, counter-propagation, and generalized regression neural networks. The original number of variables becomes significantly reduced, and the interpretation of a QSAR model can be executed more efficiently.
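
The variable-selection idea sketched in the preceding paragraph can be written down compactly. The following is a simplified, hypothetical example (selection and mutation only, no crossover; a synthetic descriptor matrix rather than real molecular data): a small genetic algorithm proposes descriptor subsets, and each subset is scored by the cross-validated fit of a feed-forward network.

```python
# Simplified GA wrapper for descriptor selection, scored by a cross-validated
# feed-forward network. Descriptors and the "activity" are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_compounds, n_descriptors = 120, 20
X = rng.normal(size=(n_compounds, n_descriptors))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(0, 0.2, n_compounds)

def fitness(mask):
    """Mean cross-validated R^2 of an MLP using only the selected descriptors."""
    if mask.sum() == 0:
        return -np.inf
    net = MLPRegressor(hidden_layer_sizes=(5,), solver="lbfgs",
                       max_iter=2000, random_state=0)
    return cross_val_score(net, X[:, mask.astype(bool)], y, cv=3).mean()

population = rng.integers(0, 2, size=(16, n_descriptors))   # random 0/1 masks
for generation in range(10):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-8:]]            # keep the best half
    children = parents[rng.integers(0, 8, 8)].copy()
    flip = rng.random(children.shape) < 0.1                   # point mutations
    children[flip] ^= 1
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("selected descriptors:", np.flatnonzero(best))
```

The same wrapper scheme applies unchanged whether the scored model is a back-propagation, counter-propagation, or generalized regression network; only the fitness function changes.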

In QSAR research, counter-propagation neural networks have also been used as an important tool (81–94). For instance, CP-ANNs have been successfully applied in QSAR studies of flavonoid protein tyrosine kinase inhibitors (81) and of flavonoid interaction with bilitranslocase (82). This algorithm has been used to develop QSAR models for a data set of 38 α1-adrenergic antagonists with respect to their selectivity for all three subtypes of α1-adrenoceptors (83), as well as for a data set of compounds of known chemical structure that were tested in order to discover the mechanism involved in the binding of estrogenic compounds to the receptor (84–87). The newest studies present quantitative (continuous) and qualitative (categorical) QSAR models for a data set of 805 noncongeneric organic compounds for prediction of their carcinogenic potency (94).

QSAR studies, as an important part of drug research, have been carried out on 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (95), tetrahydroimidazolo[4,5,1-jk][1,4]benzodiazepine (96, 97), cyclic urea (98), and 1-(3,3-diphenylpropyl)-piperidinyl amide and urea (99) derivatives as inhibitors of HIV-1 reverse transcriptase (95–97) or protease (98), or binding affinity to the chemokine receptor (99), using topological, structural, physicochemical, electronic, and spatial descriptors. Based on the results, it can be concluded that ANNs were able to establish a satisfactory relationship between the combined set of selected descriptors and the anti-HIV activity. Neural networks have also been shown to give better results than other models and to be particularly successful in their ability to identify nonlinear relationships.

The usefulness of neural networks in QSAR studies has been well documented (100–108). They have been utilized for selection of anticancer leads from a structurally heterogeneous series of compounds (100) and for modeling of antitumor activity of dibenzo[c,h][1,6]naphthyridin-6-one and its analogs (101). With the aid of different types of ANNs, QSAR has also been used to search for structure-activity relationships in numerous groups of newly synthesized derivatives (102–105). The antiparasitic (106), antibacterial (107), and antifungal (108) activities of newly synthesized compounds possessing similar structure were also studied by QSAR methods coupled with neural networks. Statistically significant descriptors were indicated in all cases.

(b) Quantitative structure-property relationships.—Predicting chromatographic behavior from the molecular structure of solutes is one of the main goals of quantitative structure-retention relationship (QSRR) studies (109–112). Neural networks were used to find molecular parameters related to the retention times and to predict the retention as a function of changes in mobile phase pH and composition, along with molecular descriptors of separated solutes (109). The results have shown the benefits of ANN application in predicting the behavior of a group of structurally diverse diuretics in RP-HPLC from mobile phase composition, physicochemical properties, and molecular descriptors of solutes, thus proving that ANNs can be used in creating a model for the prediction of the retention values of unanalyzed molecules. Further studies revealed that ANNs can be successfully used for modeling and prediction of migration indices of 53 benzene derivatives and heterocyclic compounds in microemulsion electrokinetic chromatography (110).

Studies were also performed to model the RP-HPLC separation of 18 selected amino acids (111). With the assistance of a GA to select important molecular descriptors and a supervised ANN, the best neural network model with five input descriptors was chosen, and the significance of the selected descriptors for amino acid separation was examined. QSRR models were also applied to evaluate the molecular interactions between a set of 52 structurally diverse drug compounds and human α1-acid glycoprotein (112). The proposed final model had a 36-5-1 architecture, while the correlation coefficients for the learning, validating, and testing sets were equal to 0.975, 0.950, and 0.972, respectively.

Quantitative structure-property relationship (QSPR) modeling is another methodology based on the assumption that molecular structure is responsible for the observed behavior of a compound. As a consequence, QSPR methods correlate structural or property descriptors of compounds with their chemical or biological activities (113–117). For example, to take advantage of neural networks, the aqueous solubility of series of structurally related compounds has been assessed (113, 114). Studies on transdermal delivery of insulin revealed that a hybrid algorithm that combines differential evolution algorithms and ANNs provides a reasonably good predictive model for insulin permeability in the presence of chemical penetration enhancers (115). An ANN model has also been developed for the estimation of maximum steady-state flux through a polydimethylsiloxane membrane for a set of selected descriptors of 245 drugs (116). ANNs were also used to understand the relationship between the structure of 35 newly synthesized cyclohexanol derivatives (promoting agents) and the in vivo percutaneous absorption of ketoprofen in rats (117). The results suggested that ANNs could be used for clarifying the mechanism of action of these promoting agents using the structural descriptors.
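
For QSPR and QSRR models of the kind surveyed above, the workflow is largely the same regardless of the property being modeled: standardized descriptors feed a small network, and performance is reported separately for training, validation, and test compounds (compare the 36-5-1 model and the three correlation coefficients quoted for ref. 112). The sketch below reproduces that workflow on synthetic data; the descriptor values and the modeled property are placeholders, not data from any cited study.

```python
# Generic QSPR/QSRR workflow on synthetic data: scale descriptors, fit a small
# feed-forward network, and report correlation coefficients for separate
# training, validation, and test sets (hypothetical descriptors and property).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(150, 12))                     # 12 hypothetical descriptors
y = X[:, 0] - 0.7 * X[:, 4] + 0.3 * X[:, 9] ** 2 + rng.normal(0, 0.1, 150)

idx = rng.permutation(150)
train, valid, test = idx[:90], idx[90:120], idx[120:]

scaler = StandardScaler().fit(X[train])
net = MLPRegressor(hidden_layer_sizes=(5,), solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(scaler.transform(X[train]), y[train])

for name, subset in (("training", train), ("validation", valid), ("test", test)):
    r = np.corrcoef(y[subset], net.predict(scaler.transform(X[subset])))[0, 1]
    print(f"{name} r = {r:.3f}")
```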

Some other modeling methods mentioned in the literature combine ANN models with a search for the quantitative relationships between molecular structure and chemical or biological activities for compounds possessing like structure. CP-ANNs were used to develop quantitative structure-selectivity relationships for a set of artificial metalloenzymes (118). A multilayer perceptron architecture trained with a Levenberg-Marquardt algorithm has been developed for obtaining quantitative structure-binding relationship data with high accuracy for barbiturates as guests complexing to α- and β-cyclodextrins (119). A quantitative structure-toxicity relationship study was applied to a series of 54 benzodiazepine derivatives in order to find out the influence of structural descriptors on their toxicity (120). The correlations with toxicity values were examined by conventional three-layered neural networks.

Neural Networks in Pharmaceutical Product Development

A pharmaceutical formulation is composed of several formulation factors and process variables. Several responses relating to effectiveness, usefulness, stability, and safety must be optimized simultaneously (17). One of the difficulties in the quantitative approach to formulation design is understanding the relationship between causal factors and individual pharmaceutical responses. Another difficulty is that a formulation desirable for one property is not necessarily desirable for other characteristics. This is called a multiobjective optimization problem. Consequently, expertise and experience are required to design acceptable pharmaceutical formulations.

(a) Medicinal products.—Neural networks are a popular computational methodology that can be used as a multiobjective simultaneous optimization technique. The data in Table 3 show that ANNs have been widely applied for selecting acceptable pharmaceutical formulations (121–159). This can be achieved by two distinct approaches. The first one is to understand the effects of the formulation factors and process variables on the performance of formulations (15); it aids a formulator in understanding how the formulation components and process parameters affect the drug delivery system. The second approach is to develop the formulation through ANN modeling. This approach can also be applied to fundamental investigations of the effects of formulation and process variables on the delivery system.
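
The two approaches above can be combined in practice: a network is first fitted as a forward model from causal factors to responses, and the fitted model is then searched for factor settings that satisfy several responses at once. The sketch below illustrates this with entirely synthetic data; the factor names, response names, and target values are hypothetical and are not taken from any of the cited formulation studies.

```python
# Schematic multiobjective formulation optimization: fit an ANN mapping
# hypothetical formulation factors to two responses, then search the factor
# space for settings that satisfy both response targets (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# factors: polymer fraction, lubricant fraction, compression force (scaled 0-1)
factors = rng.random((60, 3))
release_1h = 80 - 50 * factors[:, 0] + 10 * factors[:, 2] + rng.normal(0, 2, 60)
hardness = 40 * factors[:, 2] + 15 * factors[:, 0] + rng.normal(0, 1.5, 60)
responses = np.column_stack([release_1h, hardness])

net = MLPRegressor(hidden_layer_sizes=(6,), solver="lbfgs",
                   max_iter=3000, random_state=0).fit(factors, responses)

# Exhaustive search of a coarse factor grid for a multiobjective optimum:
# release near 50% at 1 h and hardness of at least 45 (arbitrary targets).
grid = np.array(np.meshgrid(*[np.linspace(0, 1, 21)] * 3)).reshape(3, -1).T
pred = net.predict(grid)
penalty = (pred[:, 0] - 50.0) ** 2 + np.maximum(45.0 - pred[:, 1], 0.0) ** 2
best = grid[np.argmin(penalty)]
print("suggested factor settings:", best.round(2))
```

In published work the exhaustive grid is commonly replaced by a genetic algorithm or other search routine coupled to the trained network, as noted later in this section.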

In a classical process for the prediction of the best medicinal formulations, composite experimental design can be applied for selecting rational model formulations, which are composed of several formulation factors and process variables (17). Compared with a classical statistical analysis based on one-factor-at-a-time experiments, ANNs can greatly reduce the number of experiments needed for the preparation of model formulations. Response variables of these model formulations are predicted quantitatively from the combination of causal factors, even though the theoretical relationships between causal factors and response variables are not clear. Accordingly, neural networks confirm their effectiveness in cases where the functional dependence between the inputs and outputs is not linear.

The distinct features of ANNs as a very convenient methodology for dosage form technology are as follows (15): neural networks can handle multiple independent and dependent variables simultaneously in one model, e.g., BP; the functional relationship between the independent and dependent variables need not be known a priori, since an ANN model can learn the latent relationships between the causal factors and the response; an ANN model is effective in modeling nonlinear relationships between the dependent and independent variables using an approach similar to a "black box"; and a neural network model has prediction and formulation optimization capabilities, and can be updated with new data. The ANN models can be used to predict the response for new experimental conditions after the models have been trained. The trained ANN model can also be used to optimize controlled-release formulations when searching algorithms such as genetic algorithms are integrated in the model.

Application of ANNs in pharmaceutical technology can be grouped into two main classes: the optimization of dosage form composition, and the optimization of the preparation technology of a particular dosage form (156). The examples in Table 3 confirm that neural networks provide a very useful tool for development of the most modern drug delivery systems, such as microemulsions (140), gels (127, 141), hydrogels (131, 132, 134), capsules (126), pellets (130), and tablets (127, 128, 146, 147, 159). Controlled-release drug delivery systems offer great advantages over conventional dosage forms. These include a dramatic decrease in dosing frequency and improved patient compliance, minimized in vivo fluctuation of drug concentrations within a desired range, localized drug delivery, and reduced side effects (15).

A number of controlled-release drug delivery systems, such as oral (135, 136, 140), transdermal (131, 132, 134), injectable (138), implantable (130), and intrauterine devices (123), have been developed with the support of ANN methods. However, because of the complexities of the formulations that are required to maintain the desired in vivo drug release rates, the application of neural networks for development of controlled-release drug delivery systems offers a considerable challenge.

(b) Ingredients of medicinal products.—Neural networks have also been applied in determining the physical and chemical properties of APIs and excipients that are used in pharmaceutical technology. Since the aqueous solubility of active ingredients is one of the most important factors in establishing their biological activity, robust methods have been described for estimating the aqueous solubility of a set of 734 organic compounds from different structural classes based on multiple linear regression (MLR) and ANN models (160). The results show that a practical solubility prediction model can be constructed with ANN modeling in particular. Similar investigations were also carried out in order to calculate the solubility of drug substances in water-cosolvent mixtures (161). The results for different numerical analyses using the ANN model were compared with those obtained from the most accurate MLR model, and revealed that the ANN model outperformed the regression model.

There have been other applications of ANNs apart from solubility prediction. For example, neural networks were used for the investigation of the effect of micromeritic properties on the flow rate through circular orifices of three pharmaceutical excipients: lactose, starch, and dicalcium phosphate dihydrate (162). By means of ANNs, an appropriate nonlinear model was developed describing powder flow rate with the four input variables of the greatest predictive capacity.

The utility of ANNs as a preformulation tool to determine the physicochemical properties of amorphous polymers, such as the hydration characteristics, glass transition temperatures, and rheological properties, was also investigated (163). These studies indicate that ANNs accurately predicted the water uptake, glass transition temperatures, and viscosities of different amorphous polymers and their physical blends with a low error of prediction. In order to model the torque measurements of various microcrystalline celluloses, an RBF network was utilized (164). The combination of ANNs and a data clustering technique known as discrete incremental clustering offers the opportunity for clustering microcrystalline celluloses into discrete groups that possess equivalent or comparable performance, thus providing a basis for suggesting interchangeability among microcrystalline celluloses within the same group.
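
Because RBF networks recur throughout this technology literature, a compact reference implementation may be helpful. The sketch below is a generic Gaussian RBF network (k-means centres, a global width heuristic, and a linear least-squares readout) fitted to simulated data; it is not the model or the torque data of the cited microcrystalline cellulose study.

```python
# Generic Gaussian RBF network: k-means centres, fixed width, linear readout.
# Inputs and response are simulated placeholders.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.random((100, 2))                      # e.g., water content, mixing time
y = np.sin(6 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 100)

centers = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X).cluster_centers_
width = cdist(centers, centers).mean()         # simple global width heuristic

def rbf_features(points):
    return np.exp(-cdist(points, centers) ** 2 / (2 * width ** 2))

weights, *_ = np.linalg.lstsq(rbf_features(X), y, rcond=None)

X_new = rng.random((5, 2))
print(rbf_features(X_new) @ weights)           # predicted response values
```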

Other studies have presented the application of neural network software in the quantitative development of in vitro-in vivo correlations. For example, ANN methodology has been used for generating models able to predict the relative lung bioavailability and clinical effect of salbutamol when delivered from dry powder inhalers to healthy volunteers and asthmatic patients (165). GRNNs were able to generalize complex relations between the output and input parameters and could account for the differences in drug release kinetics observed under various conditions in vitro, hence offering a potentially reliable and robust estimate of medicinal product in vivo behavior (166).

ANN techniques were also used to predict the phase behavior of quaternary microemulsion-forming systems consisting of oil, water, surfactants, and cosurfactants (167–170). For this purpose, feed-forward BP-ANN (167, 170), genetic neural network (168), and GRNN (169) methodologies have been applied. The neural networks have been shown to be highly successful in predicting the phase behavior of these systems, and the results obtained have proven valuable for identifying the structural requirements for pharmaceutically acceptable cosurfactants. The conclusion reached is that ANNs can provide suitable means for developing microemulsion-based drug delivery systems.

Conclusions

This review of articles confirms the high usefulness of various models of artificial neural networks in resolving complex problems often encountered in different areas of the pharmaceutical sciences. It is indisputable that ANNs are extremely helpful tools supporting the process of designing the most modern drug delivery systems, such as microemulsions, gels, hydrogels, capsules, pellets, and tablets, which provide controlled release of the API. With the use of neural networks, the optimization of the selection and content of particular excipients, as well as adjustment of the conditions of the manufacturing process of certain drug formulations having a desired physicochemical property, can be achieved.

The increased interest in neural networks applied in pharmaceutical analysis must also be taken into consideration. Analytical laboratories are becoming better equipped, with devices digitally recording spectrograms, voltammograms, chromatograms, or thermograms. This is why advanced, nonlinear multivariate computational methods should be used to investigate the data obtained. In such cases, neural networks have proved to be very handy, due to their capability of developing multivariate calibration algorithms. Such calibration enables direct quantification of drug substances in complex matrixes, e.g., medicinal products. This creates the possibility of using ANNs as a supporting element in the process control of drug manufacture in real-time process analytical technologies, guaranteeing the proper course of production and obtaining a final product of suitable quality.

References

(1) Skoog, D.A., West, D.M., Holler, F.J., & Crouch, S.R. (2004) Fundamentals of Analytical Chemistry, 8th Ed., Brooks & Cole Publishers, Belmont, CA, pp 2–16
(2) McMahon, G. (2007) Analytical Instrumentation: A Guide to Laboratory, Portable and Miniaturized Instruments, John Wiley & Sons, Ltd, Chichester, UK, pp 1–6
(3) Ahuja, S., & Rasmussen, H. (Eds) (2007) HPLC Method Development for Pharmaceuticals, Elsevier, Academic Press, Amsterdam, The Netherlands, pp 1–11
(4) Šašić, S., & Ozaki, Y. (2010) Raman, Infrared, and Near-Infrared Chemical Imaging, John Wiley & Sons, Inc., Hoboken, NJ, pp 167–226
(5) Gabbott, P. (Ed) (2008) Principles and Applications of Thermal Analysis, Blackwell Publishers, Oxford, UK, pp 76–81
(6) Hill, T., & Lewicki, P. (2007) Statistics, Methods and Applications: A Comprehensive Reference for Science, Industry, and Data Mining, StatSoft, Inc., Tulsa, OK, pp 349–363
(7) Cserháti, T. (2008) Multivariate Methods in Chromatography: A Practical Guide, John Wiley & Sons, Ltd, Chichester, UK, pp 1–7
(8) Otto, M. (2007) Chemometrics: Statistics and Computer Application in Analytical Chemistry, 2nd Ed., Wiley-VCH, Weinheim, Germany, pp 119–224
(9) Zupan, J., & Gasteiger, J. (1993) Neural Networks for Chemists: An Introduction, VCH, Weinheim, Germany, pp 151–294
(10) Zupan, J., & Gasteiger, J. (1999) Neural Networks in Chemistry and Drug Design, 2nd Ed., Wiley-VCH, Weinheim, Germany, pp 3–8
(11) Paliwal, M., & Kumar, U.A. (2009) Expert Syst. Appl. 36, 2–17. http://dx.doi.org/10.1016/j.eswa.2007.10.005
(12) Yamamura, S. (2003) Adv. Drug Deliv. Rev. 55, 1233–1251. http://dx.doi.org/10.1016/S0169-409X(03)00121-2
(13) Agatonovic-Kustrin, S., & Beresford, R. (2000) J. Pharm. Biomed. Anal. 22, 717–727. http://dx.doi.org/10.1016/S0731-7085(99)00272-1
(14) Yamashita, F., & Takayama, K. (2003) Adv. Drug Deliv. Rev. 55, 1117. http://dx.doi.org/10.1016/S0169-409X(03)00118-2
(15) Sun, Y., Peng, Y., Chen, Y., & Shukla, A.J. (2003) Adv. Drug Deliv. Rev. 55, 1201–1215. http://dx.doi.org/10.1016/S0169-409X(03)00119-4
(16) Takayama, K., Fujikawa, M., & Nagai, T. (1999) Pharm. Res. 16, 1–6
(17) Takayama, K., Fujikawa, M., Obata, Y., & Morishita, M. (2003) Adv. Drug Deliv. Rev. 55, 1217–1231. http://dx.doi.org/10.1016/S0169-409X(03)00120-0
(18) Ichikawa, H. (2003) Adv. Drug Deliv. Rev. 55, 1119–1147. http://dx.doi.org/10.1016/S0169-409X(03)00115-7
(19) Zupan, J., Novič, M., & Ruisánchez, I. (1997) Chemometr. Intell. Lab. Syst. 38, 1–23. http://dx.doi.org/10.1016/S0169-7439(97)00030-0
(20) Cirovic, D.A. (1997) Trends Anal. Chem. 16, 148–155. http://dx.doi.org/10.1016/S0165-9936(97)00007-1
(21) Gardner, M.W., & Dorling, S.R. (1998) Atm. Environ. 32, 2627–2636. http://dx.doi.org/10.1016/S1352-2310(97)00447-0
(22) Bishop, C. (1995) Neural Networks for Pattern Recognition, Oxford University Press, Oxford, UK, pp 116–292
(23) Haykin, S. (1999) Neural Networks: A Comprehensive Foundation, 2nd Ed., Prentice Hall, Upper Saddle River, NJ, pp 156–317, 443–483
(24) Rosenblatt, F. (1961) Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Spartan Books, Washington, DC, pp 311–320
(25) Patterson, D. (1996) Artificial Neural Networks, Prentice Hall, Singapore, pp 20–28, 367–395
(26) Fausett, L. (1994) Fundamentals of Neural Networks, Prentice Hall, New York, NY, pp 169–186, 289–304
(27) Ripley, B.D. (1996) Pattern Recognition and Neural Networks, Cambridge University Press, Cambridge, UK, pp 143–179
(28) Dreyfus, G. (2005) Neural Networks: Methodology and Applications, Springer-Verlag, Berlin, Germany, pp 85–103
(29) Shepherd, A.J. (1997) Second-Order Methods for Neural Networks, Springer, New York, NY, pp 38–129. http://dx.doi.org/10.1007/978-1-4471-0953-2
(30) Priddy, K.L., & Keller, P.E. (2005) Artificial Neural Networks: An Introduction, SPIE, Bellingham, WA, pp 107–125
(31) Jensen, F. (2007) Introduction to Computational Chemistry, 2nd Ed., John Wiley & Sons, Inc., Chichester, UK, pp 380–389
(32) Buhmann, M.D. (2003) Radial Basis Functions: Theory and Implementations, Cambridge University Press, Cambridge, UK, pp 2–45. http://dx.doi.org/10.1017/CBO9780511543241
(33) Kohonen, T. (2001) Self-Organizing Maps, 3rd Ed., Springer-Verlag, Berlin, Germany, pp 127–176. http://dx.doi.org/10.1007/978-3-642-56927-2
(34) Hecht-Nielsen, R. (1987) Appl. Opt. 26, 4979–4984. http://dx.doi.org/10.1364/AO.26.004979
(35) Zupan, J., Novič, M., & Gasteiger, J. (1995) Chemometr. Intell. Lab. Syst. 27, 175–187
(36) Kuzmanovski, I., & Novič, M. (2008) Chemometr. Intell. Lab. Syst. 90, 84–91. http://dx.doi.org/10.1016/j.chemolab.2007.07.003
(37) Brodnjak-Vončina, D., Kodba, Z.C., & Novič, M. (2005) Chemometr. Intell. Lab. Syst. 75, 31–43. http://dx.doi.org/10.1016/j.chemolab.2004.04.011
(38) Speckt, D.F. (1991) IEEE Trans. Neural Networks 2, 568–576. http://dx.doi.org/10.1109/72.97934
(39) Chtioui, Y., Panigrahi, S., & Francl, L. (1999) Chemometr. Intell. Lab. Syst. 48, 47–58. http://dx.doi.org/10.1016/S0169-7439(99)00006-4
(40) Zaknich, A. (2003) Neural Networks for Intelligent Signal Processing, World Scientific Publishing Co., Ltd, Singapore, pp 190–204
(41) Haupt, R.L., & Haupt, S.E. (2004) Practical Genetic Algorithms, 2nd Ed., John Wiley & Sons, Inc., Hoboken, NJ, pp 27–66
(42) Mitchell, M. (1998) An Introduction to Genetic Algorithms, The MIT Press, Cambridge, MA, pp 2–26
(43) Agatonovic-Kustrin, S., Wu, V., Rades, T., Saville, D., & Tucker, I.G. (1999) Int. J. Pharm. 184, 107–114. http://dx.doi.org/10.1016/S0378-5173(99)00104-0
(44) Agatonovic-Kustrin, S., Vu, V., Rades, T., Saville, D., & Tucker, I.G. (2000) J. Pharm. Biomed. Anal. 22, 985–992. http://dx.doi.org/10.1016/S0731-7085(00)00256-9
(45) Agatonovic-Kustrin, S., Tucker, I.G., & Schmierer, D. (1999) Pharm. Res. 16, 1477–1482. http://dx.doi.org/10.1023/A:1018975730945
(46) Agatonovic-Kustrin, S., Rades, T., Wu, V., Saville, D., & Tucker, I.G. (2001) J. Pharm. Biomed. Anal. 25, 741–750. http://dx.doi.org/10.1016/S0731-7085(01)00375-2
(47) Kachrimanis, K., Rontogianni, M., & Malamataris, S. (2010) J. Pharm. Biomed. Anal. 51, 512–520. http://dx.doi.org/10.1016/j.jpba.2009.09.001
(48) Braun, D.E., Maas, S.G., Zencrici, N., Langes, Ch., Urbanetz, N.A., & Griesser, U.J. (2010) Int. J. Pharm. 385, 29–36. http://dx.doi.org/10.1016/j.ijpharm.2009.10.019
(49) Agatonovic-Kustrin, S., Glass, B.D., Mangan, M., & Smithson, J. (2008) Int. J. Pharm. 361, 245–250. http://dx.doi.org/10.1016/j.ijpharm.2008.04.039
(50) Agatonovic-Kustrin, S., & Alany, R. (2001) Anal. Chim. Acta 449, 157–165. http://dx.doi.org/10.1016/S0003-2670(01)01234-X
(51) Mazurek, S., & Szostak, R. (2009) J. Pharm. Biomed. Anal. 49, 168–172. http://dx.doi.org/10.1016/j.jpba.2008.10.015
(52) Li, Y., Liu, Sh., & Wang, L. (2011) J. Pharm. Biomed. Anal. 55, 216–219. http://dx.doi.org/10.1016/j.jpba.2010.12.028
(53) Wang, B., Liu, G., Dou, Y., Liang, L., Zhang, H., & Ren, Y. (2009) J. Pharm. Biomed. Anal. 50, 158–163. http://dx.doi.org/10.1016/j.jpba.2009.04.014
(54) Dou, Y., Sun, Y., Ren, Y., Ju, P., & Ren, Y. (2005) J. Pharm. Biomed. Anal. 37, 543–549. http://dx.doi.org/10.1016/j.jpba.2004.11.017
(55) Dou, Y., Qu, N., Wang, B., Chi, Y.Z., & Ren, Y.L. (2007) Eur. J. Pharm. Sci. 32, 193–199. http://dx.doi.org/10.1016/j.ejps.2007.07.002
(56) Ni, Y., Liu, Ch., & Kokot, S. (2000) Anal. Chim. Acta 419, 185–196. http://dx.doi.org/10.1016/S0003-2670(00)00978-8
(57) Cámara, M.S., Ferroni, F.M., De Zan, M., & Goicoechea, H.C. (2003) Anal. Bioanal. Chem. 376, 838–843. http://dx.doi.org/10.1007/s00216-003-1977-z
(58) Goicoechea, H.C., Collado, M.S., Satuf, M.L., & Olivieri, A.C. (2002) Anal. Bioanal. Chem. 374, 460–465. http://dx.doi.org/10.1007/s00216-002-1435-3
(59) Zhong-Xiao, P., De-Jing, P., Pei-Yan, S., Mao-Sen, Z., Zuberbuhler, A.D., & Jung, B. (1997) Spectrochim. Acta A 53, 1629–1632. http://dx.doi.org/10.1016/S1386-1425(97)00099-1
(60) Barthus, R.C., Mazo, L.H., & Poppi, R.J. (2005) J. Pharm. Biomed. Anal. 38, 94–99. http://dx.doi.org/10.1016/j.jpba.2004.12.017
(61) Agatonovic-Kustrin, S., Zecewic, M., Zivanovic, Lj., & Tucker, I.G. (1998) J. Pharm. Biomed. Anal. 17, 69–76. http://dx.doi.org/10.1016/S0731-7085(97)00170-2
(62) Wesolowski, M., Suchacz, B., & Konieczynski, P. (2003) Comb. Chem. High Throughput Screen. 6, 811–820
(63) Suchacz, B., & Wesolowski, M. (2006) Talanta 69, 37–42. http://dx.doi.org/10.1016/j.talanta.2005.08.026
(64) Goodacre, R., Pygall, J., & Kell, D.B. (1996) Chemometr. Intell. Lab. Syst. 34, 69–83. http://dx.doi.org/10.1016/0169-7439(96)00021-4
(65) Goodacre, R., York, E.V., Heald, J.K., & Scott, I.M. (2003) Phytochemistry 62, 859–863. http://dx.doi.org/10.1016/S0031-9422(02)00718-5
(66) Xue, C.X., Zhang, X.Y., Liu, M.C., Hu, Z.D., & Fan, B.T. (2005) J. Pharm. Biomed. Anal. 38, 497–507. http://dx.doi.org/10.1016/j.jpba.2005.01.035
(67) Wesolowski, M., & Suchacz, B. (2001) Fresenius' J. Anal. Chem. 371, 323–330. http://dx.doi.org/10.1007/s002160100921
(68) Wesolowski, M., & Suchacz, B. (2002) J. Therm. Anal. Calorim. 68, 893–899. http://dx.doi.org/10.1023/A:1016134304708
(69) Zhang, G., Ni, Y., Churchill, J., & Kokot, S. (2006) Talanta 70, 293–300. http://dx.doi.org/10.1016/j.talanta.2006.02.037
(70) García-Gonzáles, D.L., Mannina, L., D'Imperio, M., Segre, A.L., & Aparicio, R. (2004) Eur. Food Res. Technol. 219, 545–548. http://dx.doi.org/10.1007/s00217-004-0996-0
(71) Turner, J.V., Maddalena, D.J., & Cutler, D.J. (2004) Int. J. Pharm. 270, 209–219. http://dx.doi.org/10.1016/j.ijpharm.2003.10.011
(72) Paixão, P., Gouveia, L.F., & Morais, J.A.G. (2009) Eur. J. Pharm. Sci. 36, 544–554. http://dx.doi.org/10.1016/j.ejps.2008.12.011
(73) Yamamura, S., Kawada, K., Takehira, R., Nishizawa, K., Katayama, Sh., Hirano, M., & Momose, Y. (2004) Biomed. Pharmacother. 58, 239–244. http://dx.doi.org/10.1016/j.biopha.2003.12.012
(74) Yamamura, S., Kawada, K., Takehira, R., Nishizawa, K., Katayama, Sh., Hirano, M., & Momose, Y. (2008) Biomed. Pharmacother. 62, 53–58. http://dx.doi.org/10.1016/j.biopha.2007.11.004
(75) Opara, J., Primožič, S., & Cvelbar, P. (1999) Pharm. Res. 16, 944–948. http://dx.doi.org/10.1023/A:1018857108713
(76) Papadourakis, G., Vourkas, M., Micheloyannis, S., & Jervis, B. (1996) Math. Comput. Simulat. 40, 623–635. http://dx.doi.org/10.1016/0378-4754(96)00011-0
(77) Kuzmanovski, I., Zografski, Z., Trpkovska, M., Šoptrajanov, B., & Stefov, V. (2001) Fresenius' J. Anal. Chem. 370, 919–923. http://dx.doi.org/10.1007/s002160100887
(78) Kuzmanovski, I., Trpkovska, M., Šoptrajanov, B., & Stefov, V. (2003) Anal. Chim. Acta 491, 211–218. http://dx.doi.org/10.1016/S0003-2670(03)00787-6
(79) Roy, K., & Roy, P.P. (2009) Eur. J. Med. Chem. 44, 2913–2922. http://dx.doi.org/10.1016/j.ejmech.2008.12.004
(80) Kovalishin, V.V., Tetko, I.V., Luik, A.I., Artemenko, A.G., & Kuz'min, V.E. (2001) Pharm. Chem. J. 35, 78–84. http://dx.doi.org/10.1023/A:1010420904703
(81) Novič, M., Nikolovska-Coleska, Z., & Solmajer, T. (1997) J. Chem. Inf. Comput. Sci. 37, 990–998. http://dx.doi.org/10.1021/ci970222p
(82) Karawajczyk, A., Drgan, V., Medic, N., Oboh, G., Passamonti, S., & Novič, M. (2007) Biochem. Pharmacol. 73, 308–320. http://dx.doi.org/10.1016/j.bcp.2006.09.024
(83) Eric, S., Solmajer, T., Zupan, J., Novič, M., Oblak, M., & Agbaba, D. (2004) Il Farmaco 59, 389–395. http://dx.doi.org/10.1016/j.farmac.2003.12.009
(84) Maran, E., Novič, M., Barbieri, P., & Zupan, J. (2004) SAR QSAR Environ. Res. 15, 469–480. http://dx.doi.org/10.1080/10629360412331297461
(85) Marini, F., Roncaglioni, A., & Novič, M. (2005) J. Chem. Inf. Model. 45, 1507–1519. http://dx.doi.org/10.1021/ci0501645
(86) Boriani, E., Spreafico, M., Benfenati, E., & Novič, M. (2007) Mol. Divers. 11, 153–169. http://dx.doi.org/10.1007/s11030-008-9069-9
(87) Spreafico, M., Boriani, E., Benfenati, E., & Novič, M. (2007) Mol. Divers. 11, 171–181. http://dx.doi.org/10.1007/s11030-008-9070-3
(88) Mlinsek, G., Novič, M., Hodoscek, M., & Solmajer, T. (2001) J. Chem. Inf. Comput. Sci. 41, 1286–1294. http://dx.doi.org/10.1021/ci000162e
(89) Župerl, Š., Mlinsek, G., Solmajer, T., Zupan, J., & Novič, M. (2007) J. Chemometr. 21, 346–356. http://dx.doi.org/10.1002/cem.1046
(90) Mazzatorta, P., Smiesko, M., Lo Piparo, L., & Benfenati, E. (2005) J. Chem. Inf. Model. 45, 1767–1774. http://dx.doi.org/10.1021/ci050247l
(91) Arakawa, M., Hasegawa, K., & Funatsu, K. (2006) Chemometr. Intell. Lab. Syst. 83, 91–98. http://dx.doi.org/10.1016/j.chemolab.2006.01.009
(92) Ji, L., Wang, X.D., Luo, S., Qin, L., Yang, X.S., Liu, S.S., & Wang, L.S. (2008) Sci. China, Ser. B. Chem. 51, 677–683
(93) Kuzmanovski, I., Novič, M., & Trpkovska, M. (2009) Anal. Chim. Acta 642, 142–147. http://dx.doi.org/10.1016/j.aca.2009.01.041
(94) Fjodorova, N., Vračko, M., Tušar, M., Jezierska, A., Novič, M., Kűhne, R., & Schűűrmann, G. (2010) Mol. Divers. 14, 581–594. http://dx.doi.org/10.1007/s11030-009-9190-4
(95) Douali, L., Villemin, D., Zyad, A., & Cherqaoui, D. (2004) Mol. Divers. 8, 1–8. http://dx.doi.org/10.1023/B:MODI.0000006753.11500.37
(96) Mandal, A.S., & Roy, K. (2009) Eur. J. Med. Chem. 44, 1509–1524. http://dx.doi.org/10.1016/j.ejmech.2008.07.020
(97) Goodarzi, M., & Freitas, M.P. (2010) Eur. J. Med. Chem. 45, 1352–1358. http://dx.doi.org/10.1016/j.ejmech.2009.12.028
(98) Fernández, M., & Caballero, J. (2006) Bioorg. Med. Chem. 14, 280–294. http://dx.doi.org/10.1016/j.bmc.2005.08.022
(99) Shi, W., Zhang, X., & Shen, Q. (2010) Eur. J. Med. Chem. 45, 49–54. http://dx.doi.org/10.1016/j.ejmech.2009.09.022
(100) Gonzáles-Díaz, H., Bonet, I., Terán, C., De Clercq, E., Bello, R., García, M.M., Santana, L., & Uriarte, E. (2007) Eur. J. Med. Chem. 42, 580–585. http://dx.doi.org/10.1016/j.ejmech.2006.11.016
(101) Yu, Y., Su, R., Wang, L., & Qi, W. (2010) Med. Chem. Res. 19, 1233–1244. http://dx.doi.org/10.1007/s00044-009-9266-9
(102) Goodarzi, M., Freitas, M.P., & Ghasemi, N. (2010) Eur. J. Med. Chem. 45, 3911–3915. http://dx.doi.org/10.1016/j.ejmech.2010.05.045
(103) Katritzky, A.R., Pacureanu, L.M., Slavov, S., Dobchev, D.A., & Karelson, M. (2006) Bioorg. Med. Chem. 14, 7490–7500. http://dx.doi.org/10.1016/j.bmc.2006.07.022
(104) Agatonovic-Kustrin, S., Turner, J.V., & Glass, B.D. (2008) J. Pharm. Biomed. Anal. 48, 369–375. http://dx.doi.org/10.1016/j.jpba.2008.04.008
(105) Jalali-Heravi, M., Asadollahi-Baboli, M., & Shahbazikhah, P. (2008) Eur. J. Pharm. Sci. 43, 548–556
(106) Prado-Prado, F.J., García-Mera, X., & Gonzáles-Díaz, H. (2010) Bioorg. Med. Chem. 18, 2225–2231. http://dx.doi.org/10.1016/j.bmc.2010.01.068
(107) Worachartcheewan, A., Nantasenamat, Ch., Naenna, T., Isarankura-Na-Ayudhya, Ch., & Prachayasittikul, V. (2008) Eur. J. Med. Chem. 44, 1664–1673. http://dx.doi.org/10.1016/j.ejmech.2008.09.028
(108) Hasegawa, K., Deushi, T., Yaegashi, O., Miyashita, Y., & Sasaki, S. (1995) Eur. J. Med. Chem. 30, 569–574. http://dx.doi.org/10.1016/0223-5234(96)88271-7
(109) Agatonovic-Kustrin, S., Zecewic, M., & Zivanovic, Lj. (1999) J. Pharm. Biomed. Anal. 21, 95–103. http://dx.doi.org/10.1016/S0731-7085(99)00133-8
(110) Fatemi, M.H. (2003) J. Chromatogr. A 1002, 221–229. http://dx.doi.org/10.1016/S0021-9673(03)00687-3
(111) Tham, S.Y., & Agatonovic-Kustrin, S. (2002) J. Pharm. Biomed. Anal. 28, 581–590. http://dx.doi.org/10.1016/S0731-7085(01)00690-2
(112) Buciński, A., Wnuk, M., Goryński, K., Giza, A., Kochańczyk, J., Nowaczyk, A., Bączek, T., & Nasal, A. (2009) J. Pharm. Biomed. Anal. 50, 591–596. http://dx.doi.org/10.1016/j.jpba.2008.11.005
(113) Tantishaiyakul, V. (2005) J. Pharm. Biomed. Anal. 37, 411–415. http://dx.doi.org/10.1016/j.jpba.2004.11.005
(114) Louis, B., Agrawal, V.K., & Khadikar, P.V. (2010) Eur. J. Med. Chem. 45, 4018–4025. http://dx.doi.org/10.1016/j.ejmech.2010.05.059
(115) Yerramsetty, K.M., Neely, B.J., Madihally, S.V., & Gasem, K.A.M. (2010) Int. J. Pharm. 388, 13–23. http://dx.doi.org/10.1016/j.ijpharm.2009.12.028
(116) Agatonovic-Kustrin, S., Beresford, R., & Yusof, A.P.M. (2001) J. Pharm. Biomed. Anal. 26, 241–254. http://dx.doi.org/10.1016/S0731-7085(01)00421-6
(117) Obata, Y., Li, Ch.J., Fujikawa, M., Takayama, K., Sato, H., Higashiyama, K., Isowa, K., & Nagai, T. (2001) Eur. J. Pharm. 212, 223–231
(118) Mazurek, S., Ward, T.R., & Novič, M. (2007) Mol. Divers. 11, 141–152. http://dx.doi.org/10.1007/s11030-008-9068-x
(119) Loukas, Y.L. (2001) Int. J. Pharm. 226, 207–211. http://dx.doi.org/10.1016/S0378-5173(01)00779-7
(120) Funar-Timofei, S., Ionescu, D., & Suzuki, T. (2010) Toxicol. in Vitro 24, 184–200. http://dx.doi.org/10.1016/j.tiv.2009.09.009
(121) Turkoglu, M., Aydin, I., Murray, M., & Sakr, A. (1999) Eur. J. Pharm. Biopharm. 48, 239–245. http://dx.doi.org/10.1016/S0939-6411(99)00054-5
(122) Leonardi, D., Salomón, C.J., Lamas, M.C., & Olivieri, A.C. (2009) Int. J. Pharm. 367, 140–147. http://dx.doi.org/10.1016/j.ijpharm.2008.09.036
(123) Amani, A., York, P., Chrystyn, H., Clark, B.J., & Do, D.Q. (2008) Eur. J. Pharm. Sci. 35, 42–51. http://dx.doi.org/10.1016/j.ejps.2008.06.002
(124) Amani, A., York, P., Chrystyn, H., & Clark, B.J. (2010) Pharm. Res. 27, 37–45. http://dx.doi.org/10.1007/s11095-009-0004-2
(125) Onuki, Y., Hoshi, M., Okabe, H., Fujikawa, M., Morishita, M., & Takayama, K. (2005) J. Controlled Release 108, 331–340. http://dx.doi.org/10.1016/j.jconrel.2005.08.022
(126) Hussain, A.S., Yu, X., & Johnson, R.D. (1991) Pharm. Res. 8, 1248–1252. http://dx.doi.org/10.1023/A:1015843527138
(127) Woolfson, A.D., Umrethia, M.L., Kett, V.L., & Malcolm, R.K. (2010) Int. J. Pharm. 388, 136–143. http://dx.doi.org/10.1016/j.ijpharm.2009.12.042
(128) Zupančič Božič, D., Vrečer, F., & Kozjek, F. (1997) Eur. J. Pharm. Sci. 5, 163–169. http://dx.doi.org/10.1016/S0928-0987(97)00273-X
(129) Li, Y., Rauth, A.M., & Wu, X.Y. (2005) Eur. J. Pharm. Sci. 24, 401–410. http://dx.doi.org/10.1016/j.ejps.2004.12.005
(130) Surini, S., Akiyama, H., Morishita, M., Nagai, T., & Kakayama, K. (2003) J. Controlled Release 90, 291–301. http://dx.doi.org/10.1016/S0168-3659(03)00196-2
(131) Takahara, J., Takayama, K., Isowa, K., & Nagai, T. (1997) Int. J. Pharm. 158, 203–210. http://dx.doi.org/10.1016/S0378-5173(97)00260-3
(132) Takayama, K., Takahara, J., Fujikawa, M., Ichikawa, H., & Nagai, T. (1999) J. Controlled Release 62, 161–170. http://dx.doi.org/10.1016/S0168-3659(99)00033-4
(133) Mendyk, A., & Jachowicz, R. (2005) Expert Syst. Appl. 28, 285–294. http://dx.doi.org/10.1016/j.eswa.2004.10.007
(134) Kandimalla, K.K., Kanikkannan, N., & Singh, M. (1999) J. Controlled Release 61, 71–82. http://dx.doi.org/10.1016/S0168-3659(99)00107-8
(135) Barmpalexis, P., Kanaze, F.I., Kachrimanis, K., & Georgarakis, E. (2010) Eur. J. Pharm. Biopharm. 74, 316–323. http://dx.doi.org/10.1016/j.ejpb.2009.09.011
(136) Barmpalexis, P., Kachrimanis, K., & Georgarakis, E. (2011) Eur. J. Pharm. Biopharm. 77, 122–131. http://dx.doi.org/10.1016/j.ejpb.2010.09.017
(137) Tűrkoğlu, M., Varol, H., & Çelikok, M. (2004) Eur. J. Pharm. Biopharm. 57, 279–286. http://dx.doi.org/10.1016/j.ejpb.2003.10.008
(138) Fan, T., Takayama, K., Hattori, Y., & Maitani, Y. (2004) Pharm. Res. 21, 1692–1697. http://dx.doi.org/10.1023/B:PHAM.0000041467.28884.16
(139) Ali, H.S.M., Blagden, N., York, P., Amani, A., & Brook, T. (2009) Eur. J. Pharm. Sci. 37, 514–522. http://dx.doi.org/10.1016/j.ejps.2009.04.007
(140) Agatonovic-Kustrin, S., Glass, B.D., Wisch, M.H., & Alany, R.G. (2003) Pharm. Res. 20, 1760–1765. http://dx.doi.org/10.1023/B:PHAM.0000003372.56993.39
(141) Lee, Y., Khemka, A., Yoo, J.W., & Lee, Ch.H. (2008) Int. J. Pharm. 351, 119–126. http://dx.doi.org/10.1016/j.ijpharm.2007.09.032
(142) Behzadi, S.S., Klocker, J., Hűttlin, H., Wolschann, P., & Viernstein, H. (2005) Int. J. Pharm. 291, 139–148. http://dx.doi.org/10.1016/j.ijpharm.2004.07.051
(143) Murtoniemi, E., Yliruusi, J., Kinnunen, P., Merkku, P., & Leiviskä, K. (1994) Int. J. Pharm. 108, 155–164. http://dx.doi.org/10.1016/0378-5173(94)90327-1
(144) Peh, K.K., Lim, Ch.P., Quek, S.S., & Khoh, K.H. (2000) Pharm. Res. 17, 1384–1388. http://dx.doi.org/10.1023/A:1007578321803
(145) Ghaffari, A., Abdollahi, H., Khoshayand, M.R., Bozchalooi, I.S., Dadgar, A., & Rafiee-Tehrani, M. (2006) Int. J. Pharm. 327, 126–138. http://dx.doi.org/10.1016/j.ijpharm.2006.07.056
(146) Takayama, K., Morva, A., Fujikawa, M., Hattori, Y., Obata, Y., & Nagai, T. (2000) J. Controlled Release 68, 175–186. http://dx.doi.org/10.1016/S0168-3659(00)00248-0
(147) Takahara, J., Takayama, K., & Nagai, T. (1997) J. Controlled Release 49, 11–20. http://dx.doi.org/10.1016/S0168-3659(97)00030-8
(148) Vaithiyalingam, S., & Khan, M.A. (2002) Int. J. Pharm. 234, 179–193. http://dx.doi.org/10.1016/S0378-5173(01)00959-0
(149) Plumb, A.P., Rowe, R.C., York, P., & Doherty, Ch. (2002) Eur. J. Pharm. Sci. 16, 281–288. http://dx.doi.org/10.1016/S0928-0987(02)00112-4
(150) Bourquin, J., Schmidli, H., van Hoogevest, P., & Leuenberger, H. (1998) Eur. J. Pharm. Sci. 7, 5–16. http://dx.doi.org/10.1016/S0928-0987(97)10028-8
(151) Bourquin, J., Schmidli, H., van Hoogevest, P., & Leuenberger, H. (1998) Eur. J. Pharm. Sci. 7, 17–28. http://dx.doi.org/10.1016/S0928-0987(97)10027-6
(152) Plumb, A.P., Rowe, R.C., York, P., & Doherty, Ch. (2003) Eur. J. Pharm. Sci. 18, 259–266. http://dx.doi.org/10.1016/S0928-0987(03)00016-2
(153) Bourquin, J., Schmidli, H., van Hoogevest, P., & Leuenberger, H. (1998) Eur. J. Pharm. Sci. 6, 287–300. http://dx.doi.org/10.1016/S0928-0987(97)10025-2
(154) Plumb, A.P., Rowe, R.C., York, P., & Brown, M. (2005) Eur. J. Pharm. Sci. 25, 395–405. http://dx.doi.org/10.1016/j.ejps.2005.04.010
(155) Shao, Q., Rowe, R.C., & York, P. (2007) Eur. J. Pharm. Sci. 28, 394–404. http://dx.doi.org/10.1016/j.ejps.2006.04.007
(156) Mendyk, A., & Jachowicz, R. (2007) Expert Syst. Appl. 32, 1124–1131. http://dx.doi.org/10.1016/j.eswa.2006.02.019
(157) Mendyk, A., Kleinebudde, P., Thommes, M., Yoo, A., Szlęk, J., & Jachowicz, R. (2010) Eur. J. Pharm. Sci. 41, 421–429. http://dx.doi.org/10.1016/j.ejps.2010.07.010
(158) Belič, A., Škrjanc, I., Zupančič Božič, D., Karba, R., & Vrečer, F. (2009) Eur. J. Pharm. Biopharm. 73, 172–178. http://dx.doi.org/10.1016/j.ejpb.2009.05.005
(159) Chen, Y., McCall, T.W., Baichwal, A.R., & Meyer, M.C. (1999) J. Controlled Release 59, 33–41. http://dx.doi.org/10.1016/S0168-3659(98)00171-0
(160) Huuskonen, J., Rantanen, J., & Livingstone, D. (2000) Eur. J. Med. Chem. 35, 1081–1088. http://dx.doi.org/10.1016/S0223-5234(00)01186-7
(161) Jouyban, A., Majidi, M.R., Jalilzadeh, H., & Asadpour-Zeynali, K. (2004) Il Farmaco 59, 505–512. http://dx.doi.org/10.1016/j.farmac.2004.02.005
(162) Kachrimans, K., Karamyan, V., & Malamataris, S. (2003) Int. J. Pharm. 250, 13–23. http://dx.doi.org/10.1016/S0378-5173(02)00528-8
(163) Ebube, N.K., Owusu-Ababio, G., & Adeyeye, Ch.M. (2000) Int. J. Pharm. 196, 27–35. http://dx.doi.org/10.1016/S0378-5173(99)00405-6
(164) Soh, J.L.P., Chen, F., Liew, C.V., Shi, D., & Heng, P.W.S. (2004) Pharm. Res. 21, 2360–2368. http://dx.doi.org/10.1007/s11095-004-7690-6
(165) Matas, M., Shao, Q., Richardson, C.H., & Chrystyn, H. (2008) Eur. J. Pharm. Sci. 33, 80–90. http://dx.doi.org/10.1016/j.ejps.2007.10.001
(166) Parojčič, J., Ibrič, S., Djurič, Z., Jovanovič, M., & Corrigan, O.I. (2007) Eur. J. Pharm. Sci. 30, 264–272. http://dx.doi.org/10.1016/j.ejps.2006.11.010
(167) Richardson, C.J., Mbanefo, A., Aboofazeli, R., Lawrence, M.J., & Barlow, D.J. (1997) J. Colloid Interface Sci. 187, 296–303. http://dx.doi.org/10.1006/jcis.1996.4678
(168) Agatonovic-Kustrin, S., & Alany, R.G. (2001) Pharm. Res. 18, 1049–1055. http://dx.doi.org/10.1023/A:1010913017092
(169) Djekic, L., Ibric, S., & Primorac, M. (2008) Int. J. Pharm. 361, 41–46. http://dx.doi.org/10.1016/j.ijpharm.2008.05.002
(170) Alany, R.G., Agatonovic-Kustrin, S., Rades, T., & Tucker, I.G. (1999) J. Pharm. Biomed. Anal. 19, 443–452. http://dx.doi.org/10.1016/S0731-7085(98)00232-5