
ACCURACY DETECTION OF NETWORK INTRUSION DETECTION SYSTEM USING NEURAL NETWORK CLASSIFIER ON THE KDD DATASET

S. Devaraju1, M. Thenmozhi2, S. Jawahar3, Dheresh Soni4 and A. Somasundaram5

1,4 School of Computing Science and Engineering, VIT Bhopal University, Bhopal-Indore Highway, Kothrikalan, Sehore, Madhya Pradesh, India
2 Department of Artificial Intelligence and Data Science, Sri Eshwar College of Engineering, Coimbatore, Tamil Nadu, India
3 School of Sciences, Christ Deemed-to-be University, Delhi NCR, India
5 Department of Computer Science and Applications, Sri Krishna Arts and Science College, Coimbatore, Tamil Nadu, India

1 Corresponding Author Email: devamcet@gmail.com

Abstract – Security has recently become a vital component of all industrial and organizational information systems. As an effective technique for dealing with network threats, intrusion detection systems employ several classifiers to detect various types of attacks. This research compares the performance of intrusion detection with several neural network classifiers. Five types of classifiers are employed in this proposed study: Feed Forward Neural Network (FFNN), Elman Neural Network (ENN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN), and Radial Basis Neural Network (RBNN). Feature reduction approaches are applied to the KDD dataset, and the results are compared against those obtained on the full-featured KDD dataset.

Keywords - Intrusion detection, Neural networks, FFNN, ENN, GRNN, PNN, RBNN, KDD Cup, MATLAB.

1. INTRODUCTION

In a Network Intrusion Detection System (NIDS), there have been few intruders in recent years, so the user may easily control known or unknown threats. In recent years, security has emerged as the most critical concern in terms of securing data or information. Because intruders introduce new types of intrusion, the user is unable to govern his computer system or network.

Intrusion detection attacks are divided into two types: signature-based and anomaly-based. The signature-based intrusion detection system identifies intrusions by comparing their parameters to the signatures already in the database; it is an intrusion if the detected attack matches a signature. Signature-based intrusions are referred to as known attacks, since the intrusion is discovered by matching against signature log files. The log file stores a list of known attacks detected on the computer system or network. Unknown attacks are anomaly-based intrusion detection attacks, which are detected on the network because they deviate from conventional behaviour.

Network intrusion detection systems distinguish between network-based and host-based threats; attacks may also be misuse-based or anomaly-based [1]. Network-based attacks exploit the interconnectedness of computer systems: when systems communicate with one another, the attack is transmitted from one computer system to another via routers and switches. Host-based attacks are detected only on a single computer system and are simple to prevent; these attacks are mostly carried out through externally linked devices such as pen drives, CDs, VCDs, and floppy discs. Web-based attacks are conceivable when systems are connected via the internet, and such attacks can be distributed across multiple computers.

The neural network classifiers Feed Forward Neural Network (FFNN), Elman Neural Network (ENN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN), and Radial Basis Neural Network (RBNN) are proposed in this system to detect signature-based infiltration. MATLAB is used to address this problem, utilizing several strategies for enhancing performance on the KDD dataset. The performance on the full-featured dataset is compared and analyzed against that on the reduced dataset. Figure 1 shows the stages of the proposed system:

1. Data Collection (KDD dataset)
2. Preprocessing
3. Normalization
4. Training and Testing
5. Classification using Neural Network

Fig. 1: Classification of Intrusion Detection

The remainder of this paper is organized as follows: Section 2 reviews related work on intrusion detection. Section 3 describes the KDD Cup dataset. Section 4 discusses the proposed methodologies. Section 5 presents our experimental data and discussion, and Section 6 concludes.

2. RELATED WORK

The field of intrusion detection systems in network security has been evolving for the past 30 years. Many approaches and strategies have been developed, and many systems have been impacted by various intrusions. Data mining, neural networks, and statistical algorithms are among the techniques used to detect intrusions [2]. The main strategies and tactics are explained in the related work below.

The two challenges of accuracy and efficiency are handled by Conditional Random Fields and a Layered Approach. Using Conditional Random Fields and a layered technique, this method displays great attack detection accuracy and efficiency, and employs the KDD intrusion detection dataset [1].

The Rough Set Neural Network Algorithm is used to decrease the amount of computing resources needed to identify an attack. The KDD dataset is used to test the data and obtain more reliable results [3][4].

Anomaly detection is determined using Multivariate Statistical Analysis techniques. The statistical methods are used to compare the system's performance [7][8].

Data mining techniques such as decision trees are used to detect attacks. The KDD 99 dataset is used for training and testing. This model performs better at detecting novel types of anomalies [9][10].

The Hidden Markov Model is used to implement and determine anomaly intrusion detection based on system calls [11][12]. Anomaly detection and analysis is also based on approaches that can accurately detect and classify various anomaly behaviors (network scanning and DDoS attacks) in network data using an analysis method based on the Correlation Coefficient Matrix [13][14], and on Hidden Markov Model variants for system-call-based anomaly intrusion detection [15][16][17].

Using statistical classification techniques, the Hierarchical Gaussian Mixture Model detects network-based attacks as anomalies. The well-known KDD99 dataset is used to evaluate this model, and six categorization strategies are utilized to validate its feasibility and effectiveness. This approach is utilized in Intrusion Detection Systems [18][19] to reduce false alarms and improve attack accuracy.

The Genetic Algorithm is used to detect network intrusions. During the encoding of the problem, the Genetic Algorithm takes into account both temporal and spatial information about network connections, which makes it more useful for detecting network anomalies [20][21][22]. To lower the computational intensity, several feature reduction approaches such as Independent Component Analysis, Linear Discriminant Analysis, and Principal Component Analysis are applied. The KDD Cup 99 dataset is utilized to minimize computation time and increase system accuracy [23][24][25].

The various strategies are examined using various criteria. The suggested system takes into account the shortcomings of the present systems and proposes to fix their issues on the KDD dataset using neural network classifiers.

3. KDD DATASET DESCRIPTION

The KDD dataset was used to evaluate anomaly detection algorithms. The KDD training dataset contains roughly 4,900,000 single connection vectors, each with 41 features and labelled as either normal or attack, with only one unique attack type per record. The simulated attacks fall into one of four categories [5]. There are a total of 24 training attack types in the datasets, with an additional 14 types appearing in the test data only.

3.1 Data Collection

back, buffer_overflow, ftp_write, guess_passwd, imap, ipsweep, land, loadmodule, multihop, neptune, nmap, normal, perl, phf, pod, portsweep, rootkit, satan, smurf, spy, teardrop, warezclient, and warezmaster are the record labels in the KDD dataset. The attacks can be classified into four types [5][13]. Table 1 shows the attacks organized by category:

Table 1: List of attacks - category wise

DoS        | R2L          | U2R             | Probe
-----------+--------------+-----------------+----------
back       | ftp_write    | buffer_overflow | ipsweep
land       | guess_passwd | loadmodule      | nmap
neptune    | imap         | perl            | portsweep
pod        | multihop     | rootkit         | satan
smurf      | phf          |                 |
teardrop   | spy          |                 |
           | warezclient  |                 |
           | warezmaster  |                 |

Denial of Service (DoS) attacks deny legitimate system requests, for example by flooding. User-to-Root (U2R) attacks gain unauthorized access to local super user (root) privileges, such as various buffer overflow attacks. Remote-to-Local (R2L) attacks gain unauthorized access from a remote system, such as by password guessing. Probing covers surveillance and other probing, such as port scanning [7].

The sets are labelled A, B, C, D, and E, in that order. Set 'A' obtains data from the DoS class, set 'B' from the U2R class, set 'C' from the R2L class, set 'D' from the Probe class, and set 'E' from the Normal class. The following data sets can be used to train and evaluate the data from the KDD dataset.

Table 2: Training and Testing Data Set (10% dataset, used for both the 41-feature and 13-feature classification)

Class    Records
DoS      391458
U2R      52
R2L      1126
Probe    4107
Normal   97278
Total    494021

The attacks in the KDD dataset are detected using both the 41-feature dataset and the 13-feature dataset. The webpage [5] lists the 41 features.

3.2 Preprocessing

The neural network's input data must be in the range [0, 1] or [-1, 1]. As a result, data preparation and normalization are necessary, and the KDD-format data has been preprocessed. Each KDD record comprises 41 attributes, each of which is continuous, discrete, or symbolic, with widely varying ranges.

Each symbol is assigned an integer code for conversion into numerical form. In the case of the protocol_type feature, for example, 0 is allocated to tcp, 1 to udp, and 2 to the icmp symbol, and so on. The attack names are initially assigned to one of the five classes: 'A' for DoS, 'B' for U2R, 'C' for R2L, 'D' for Probe, and 'E' for Normal. src_bytes [0, 1.3 billion] and dst_bytes [0, 1.3 billion] are two features with an extremely large integer range; these attributes are subjected to logarithmic scaling (base 10) to narrow the range to [0.0, 9.14]. All other features are Boolean, with values in [0.0, 1.0], so no scaling is required for them.
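As a rough illustration of this preprocessing, the following Python sketch encodes the symbolic protocol_type values and log-scales the byte counts; the sample records and the log10(1 + x) guard against zero byte counts are hypothetical details, not taken from the paper.

```python
import math

# Integer codes for the symbolic protocol_type feature, as described above
PROTOCOL_CODES = {"tcp": 0, "udp": 1, "icmp": 2}

def preprocess(protocol, src_bytes, dst_bytes):
    """Convert one (protocol_type, src_bytes, dst_bytes) triple to numeric form."""
    return (
        PROTOCOL_CODES[protocol],      # symbolic -> integer code
        math.log10(1 + src_bytes),     # base-10 log scaling of the huge byte ranges
        math.log10(1 + dst_bytes),
    )

# Hypothetical KDD-style sample records
for record in [("tcp", 215, 45076), ("udp", 105, 146), ("icmp", 1032, 0)]:
    print(preprocess(*record))
```
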

3.3 Normalization

A statistical analysis is performed on the values of each feature in the KDD dataset to identify an acceptable maximum value for each feature. Feature values are then normalized to the range [0, 1] using these maximum values and the following simple rule:

If (f > MaxF) then Nf = 1; otherwise Nf = f / MaxF

where F is the feature, f is the feature value, MaxF is the maximum acceptable value for F, and Nf is the normalized (scaled) value of F.
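A minimal sketch of this clipping normalization, assuming the per-feature maxima have already been estimated from the data (the example values are hypothetical):

```python
def normalize(f, max_f):
    """Scale a feature value into [0, 1]; values above the acceptable
    maximum MaxF are clipped to 1, as in the rule above."""
    return 1.0 if f > max_f else f / max_f

print(normalize(120, 600))   # 0.2
print(normalize(900, 600))   # 1.0 (clipped)
```
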
4. PROPOSED METHODOLOGIES

4.1 Radial Basis Neural Network (RBNN)

A Radial Basis Neural Network (RBNN) includes three layers: an input layer, a hidden layer, and an output layer. The neurons in the hidden layer have Gaussian transfer functions whose outputs are inversely proportional to the distance from the neuron's centre. Figure 2 shows the structure. The RBNN treats the problem as curve fitting in high-dimensional space, and RBF networks accordingly feature an input layer, a hidden layer, and a summation layer.

The RBF is applied to the distance to compute the weight (influence) of each neuron:

Weight = RBF(distance)    (1)

The training method determines the following parameters:

1. The number of neurons in the hidden layer.
2. The centre coordinates of each hidden-layer RBF function.
3. Each RBF function's radius (spread) in each dimension.
4. The weights applied to the RBF function outputs as they pass through the summation layer.

RBF approaches were utilized to train the networks: K-means clustering is used to locate cluster centers, which then serve as the centers of the RBF functions, or alternatively a random subset of the training points is used as the centers.
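The following Python sketch illustrates this recipe under simplifying assumptions (a single shared spread, centers from K-means, linear output weights fit by least squares); it is an illustrative reconstruction, not the paper's MATLAB implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def gaussian_layer(X, centers, spread):
    """Hidden-layer activations: Gaussian of the distance to each center (Eq. 1)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * spread ** 2))

def train_rbnn(X, y, n_centers=10, spread=1.0):
    """K-means picks the RBF centers; the output weights are solved directly."""
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    H = gaussian_layer(X, centers, spread)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, w

def predict_rbnn(X, centers, w, spread=1.0):
    return gaussian_layer(X, centers, spread) @ w
```
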
4.2 Generalized Regression Neural Network (GRNN)

Generalized Regression Neural Networks perform regression when the target variable is continuous. If a GRNN network is chosen, the DTREG tool automatically selects the appropriate network type based on the kind of target variable; Multilayer Perceptron Neural Networks and Cascade Correlation Neural Networks are also available in DTREG.

When compared to Multilayer Perceptron networks, GRNN networks offer the following advantages and disadvantages:

• Training a GRNN network is typically significantly faster than training a multilayer perceptron network.
• GRNN networks frequently outperform multilayer perceptron networks in terms of accuracy.
• GRNN networks are mostly unaffected by outliers.
• When classifying new cases, GRNN networks are slower than multilayer perceptron networks.
• GRNN networks demand greater memory space to store the model.

GRNN networks are composed of four layers:

1. Input layer – Each predictor variable has one neuron in the input layer. For categorical variables, N-1 neurons are employed, where N is the number of categories. The input neurons then feed the values to each of the neurons in the hidden layer.

2. Hidden layer – One neuron is assigned to each case in the training data set. Along with the target value, the neuron retains the values of the predictor variables for that case. The resulting value is sent to the pattern layer neurons.

3. Pattern layer / Summation layer – The pattern layer has only two neurons: one is the denominator summation unit, and the other is the numerator summation unit.

A sketch of the resulting prediction rule appears below.
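The numerator and denominator summation units described above amount to a kernel-weighted average of the training targets. A minimal sketch, assuming a single spread parameter shared by all hidden neurons:

```python
import numpy as np

def grnn_predict(x, X_train, y_train, spread=0.5):
    """GRNN output: the numerator unit sums kernel-weighted targets, the
    denominator unit sums the kernel weights; the output is their ratio."""
    d2 = np.sum((X_train - x) ** 2, axis=1)     # one hidden neuron per training case
    k = np.exp(-d2 / (2 * spread ** 2))
    return (k @ y_train) / k.sum()              # numerator / denominator
```
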
4.3 Feed Forward Neural Network (FFNN)

In an FFNN, signals can only travel from input to output. FFNNs are typically simple networks that associate inputs with outputs and are often employed in pattern recognition. Single-layer FFNN and multi-layer FFNN are the two types of FFNN.

The single-layer neural network is the first and most basic learning machine. The term "single layer" refers to having only two layers: an input layer and an output layer. Multi-layer feed forward networks have three layers: the input layer, the hidden layer, and the output layer.

In a multi-layer FFNN there are two phases. The forward phase is used to set the free parameters of the network and ends with the computation of an error signal

ei = di − yi    (2)

where di is the desired response and yi is the network's actual output in response to the input. During the backward phase, the error signal ei is propagated through the network, and adjustments are made to the network's free parameters in order to minimize the error ei in a statistical sense.
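A small sketch of the forward phase and the error signal of Eq. (2); the backward phase (the weight updates) is omitted, and the layer sizes are hypothetical:

```python
import numpy as np

def ffnn_forward(x, W1, b1, W2, b2):
    """Forward phase: input -> hidden (sigmoid) -> output."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return W2 @ h + b2

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(10, 13)), np.zeros(10)   # 13 inputs, 10 hidden units
W2, b2 = rng.normal(size=(5, 10)), np.zeros(5)     # 5 output classes

x, d = rng.normal(size=13), np.eye(5)[0]   # an input and its desired response
y = ffnn_forward(x, W1, b1, W2, b2)
e = d - y                                  # error signal e_i = d_i - y_i (Eq. 2)
```
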
4.4 Probabilistic Neural Network (PNN)

The PNN is a natural extension of the work on Bayes classifiers. More specifically, the PNN is viewed as a function that approximates the probability density of the underlying distribution. The PNN is made up of nodes organized into three layers following the input layer: the pattern layer, the summation layer, and the output layer.

A. Pattern Layer: There is one pattern node for each training pattern. For classification, each pattern node forms the product of the input vector and its weight vector, with the weights entering a node coming from a specific input node. The product then passes through the activation function:

exp[(xTwki − 1) / σ²]    (3)

B. Summation Layer: Each summation node receives the outputs of the pattern nodes belonging to a specific class:

Σi=1..Nk exp[(xTwki − 1) / σ²]    (4)

C. Output Layer: The classification decision is made by binary neurons at the output nodes; class k is preferred over class j when

Σi=1..Nk exp[(xTwki − 1) / σ²] > Σi=1..Nj exp[(xTwkj − 1) / σ²]    (5)

The smoothing factor σ, the deviation of the Gaussian functions, is the single parameter that must be chosen for training:

• too small a deviation results in a very spiky approximation that does not generalize well;
• too large a deviation smooths out details.

A compact sketch of this decision rule follows.
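The sketch below assumes normalized input vectors and one stored weight vector per training pattern (a simplified Specht-style PNN, not the paper's MATLAB code):

```python
import numpy as np

def pnn_classify(x, patterns, labels, sigma=0.5):
    """Classify x by summing Gaussian pattern-node activations per class (Eqs. 3-5)."""
    x = x / np.linalg.norm(x)
    scores = {}
    for w, c in zip(patterns, labels):
        w = w / np.linalg.norm(w)                  # stored training pattern
        act = np.exp((x @ w - 1.0) / sigma ** 2)   # pattern node (Eq. 3)
        scores[c] = scores.get(c, 0.0) + act       # summation node (Eq. 4)
    return max(scores, key=scores.get)             # output decision (Eq. 5)
```
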
4.5 Elman Neural Network (ENN)

Elman networks are feedforward networks with layer-recurrent links that have tap delays. A three-layer network is employed, with a set of "context units" included in the input layer. There are weighted connections from the hidden layer to these context units. At each step the input is propagated in a standard feed-forward manner, and then a learning rule is applied. Because the fixed back-connections propagate over the connections before the learning rule is applied, the context units always keep a copy of the previous values of the hidden units. As a result, the network can maintain a state, allowing it to perform tasks such as sequence prediction that are beyond the capabilities of a standard feedforward network.

The input and context units both activate the hidden units, which then feed forward to activate the output units; the hidden units also reactivate the context units. This is known as forward activation. Depending on the task, this time cycle may or may not include a learning phase. If it does, the output is compared to a teacher input, and backpropagation of error is employed to change connection strengths incrementally. The recurrent connections are fixed at 1.0 and cannot be changed. The preceding sequence is repeated at step t+1; this time, the context units contain values that exactly match the hidden unit values at time t.

The following happens when the function train is used to train an Elman network. At every epoch:

1. The whole input sequence is fed to the network, and its outputs are calculated and compared with the target sequence to yield an error sequence.
2. The error is backpropagated at each step to find the gradients of the errors for each weight and bias. Because the contributions of weights and biases to the errors via the delayed recurrent connection are disregarded, this gradient is an approximation.
3. The gradient is then used to update the weights with the user-selected backprop training function. The function traingdx is suggested.

A minimal sketch of the forward step follows this list.
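This sketch shows one Elman forward step and how the context units carry a copy of the previous hidden state; the dimensions are hypothetical and training is omitted:

```python
import numpy as np

def elman_step(x, context, W_in, W_ctx, W_out):
    """Hidden units are driven by the input and the context units; the hidden
    state is then copied back to the context (recurrent weight fixed at 1.0)."""
    hidden = np.tanh(W_in @ x + W_ctx @ context)
    output = W_out @ hidden
    return output, hidden           # 'hidden' becomes the next context

rng = np.random.default_rng(0)      # 13 inputs, 8 hidden/context units, 5 classes
W_in, W_ctx, W_out = rng.normal(size=(8, 13)), rng.normal(size=(8, 8)), rng.normal(size=(5, 8))
context = np.zeros(8)
for x in rng.normal(size=(4, 13)):  # a short input sequence
    y, context = elman_step(x, context, W_in, W_ctx, W_out)
```
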

5. RESULTS AND DISCUSSION

Based on the KDD dataset, intrusion detection algorithms are utilized to detect intrusions. This dataset includes 41 features covering various types of attacks. Using the Probabilistic Neural Network, accuracy was enhanced to 96.23% by decreasing the 41 features to 13. These datasets can be processed using the MATLAB software [6], and when compared with the other four neural network classifiers, the Probabilistic Neural Network has the highest accuracy [2].

Attack Detection Rate (ADR): the ratio of the overall attacks detected by the system to the total attacks present in the dataset.

ADR = (True Positive + True Negative) / (True Positive + False Positive + False Negative + True Negative) × 100    (6)

False Positive Rate (FPR): the ratio of the total number of misclassified instances to the total number of normal instances.

FPR = (Total Number of Instances Misclassified / Total Normal Instances) × 100    (7)
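A small sketch of these two metrics, assuming the counts have been taken from a confusion matrix (the example numbers are hypothetical, not the paper's results):

```python
def attack_detection_rate(tp, tn, fp, fn):
    """ADR (Eq. 6): correctly classified records over all records, in percent."""
    return (tp + tn) / (tp + fp + fn + tn) * 100

def false_positive_rate(misclassified, total_normal):
    """FPR (Eq. 7): misclassified instances over normal instances, in percent."""
    return misclassified / total_normal * 100

print(attack_detection_rate(tp=390000, tn=91000, fp=2000, fn=4000))
print(false_positive_rate(misclassified=1300, total_normal=97278))
```
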
5.1 Total 41 Features Dataset

Table 3 shows the five classes, the five neural network classifiers utilized, and the efficiency obtained on the full 41-feature dataset.

Table 3: Results for 41 Features Dataset

Classes/Networks   DoS      U2R   R2L   Probe   Normal   Efficiency (%)   FPR (%)
FFNN               376281   43    719   3846    89722    95.26            2.47
ENN                372814   45    719   3641    88798    94.33            2.89
GRNN               365818   39    703   3587    85327    92.20            4.02
PNN                390964   48    943   3876    91148    98.57            1.3
RBNN               370946   47    861   3789    89421    94.14            1.87

Time taken: 127 s (reported once for the whole 41-feature experiment).


Based on these findings, a graphical representation is provided in the chart below. Figure 3 depicts the detection rate results for 41 features.

[Bar chart of efficiency (%) by network classifier: FFNN 95.26, ENN 94.33, GRNN 92.20, PNN 98.57, RBNN 94.14]

Fig. 3: Percentage of accuracy analysis for 41 features

The categorization of the KDD Cup '99 data set was performed here using the 41-feature dataset. Table 3 shows the accuracy percentages for the five neural networks: the accuracy of the feed forward neural network is 95.26%, the Elman neural network 94.33%, the generalized regression neural network 92.20%, the probabilistic neural network 98.57%, and the radial basis network 94.14%. Figure 4 depicts the FPR results for 41 features.

[Bar chart of FPR (%) by network classifier: FFNN 2.47, ENN 2.89, GRNN 4.02, PNN 1.3, RBNN 1.87]

Fig. 4: Percentage of FPR analysis for 41 features

Table 3 also shows the FPR percentages for the five neural networks: the FPR of the feed forward neural network is 2.47%, the Elman neural network 2.89%, the generalized regression neural network 4.02%, the probabilistic neural network 1.3%, and the radial basis network 1.87%. The time taken for processing is minimal.

5.2 Total 13 Features Dataset

One of the most extensively used dimensionality reduction techniques for data analysis and compression is Principal Component Analysis (PCA). PCA is a valuable analysis tool because patterns can be difficult to uncover in high-dimensional data; once patterns in the data have been identified, the data can be compressed to reduce the number of dimensions without significant information loss [6]. Given the data, if each datum contains N characteristics, such as x11 x12 ... x1N, x21 x22 ... x2N, the data set can be represented by a matrix Xnm.

The average observation is defined as

μ = (1/n) Σi=1..n xi    (8)

The deviation from the average is defined as

Φi = Xi − μ    (9)
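As an illustrative sketch (not the paper's MATLAB code), the standard PCA projection built from Eqs. (8) and (9) looks as follows; note that the paper uses PCA to select 13 of the original features, whereas this sketch shows the usual projection onto principal components:

```python
import numpy as np

def pca_reduce(X, k=13):
    """Project data onto the top-k principal components.
    X is an (n_samples, n_features) array; returns the reduced data."""
    mu = X.mean(axis=0)                     # average observation (Eq. 8)
    Phi = X - mu                            # deviation from the average (Eq. 9)
    cov = np.cov(Phi, rowvar=False)         # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh suits the symmetric covariance
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Phi @ top

X = np.random.rand(1000, 41)                # hypothetical stand-in data
print(pca_reduce(X, k=13).shape)            # (1000, 13)
```
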
Table 4 displays the 13 features [13] selected using Principal Component Analysis.

Table 4: Reduced 13 Features Dataset

0 - duration (continuous)            7 - num_failed_logins (continuous)
1 - flag (symbolic)                  8 - logged_in (continuous)
2 - src_bytes (continuous)           9 - dst_host_serror_rate (continuous)
3 - dst_bytes (continuous)          10 - dst_host_srv_serror_rate (continuous)
4 - land (symbolic)                 11 - dst_host_rerror_rate (continuous)
5 - wrong_fragment (continuous)     12 - dst_host_srv_rerror_rate (continuous)
6 - urgent (continuous)

After picking the 13 features, the reduced dataset is used to train and test the neural network classifiers. The classification results for the 13-feature dataset are shown in Table 5.

Table 5: Results for 13 Features Dataset

Classes/Networks   DoS      U2R   R2L    Probe   Normal   Efficiency (%)   FPR (%)
FFNN               387663   45    782    3923    90987    97.85            1.78
ENN                380814   45    719    3641    88798    95.95            2.09
GRNN               379526   42    839    3773    90421    96.07            2.67
PNN                390913   49    974    3911    93214    99.00            0.92
RBNN               373442   45    8591   3547    85477    95.36            1.49

Time taken: 103 s (reported once for the whole 13-feature experiment).

Based on these findings, a pictorial representation is provided in the chart below. Figure 5 depicts the detection rate results for 13 features.

[Bar chart of efficiency (%) by network classifier: FFNN 97.85, ENN 95.95, GRNN 96.07, PNN 99.00, RBNN 95.36]

Fig. 5: Percentage of accuracy analysis for 13 features

is used in this study for experiments. It is observed that the
The categorization of the KDD dataset has been reduced feature is outperforming better than full features.
completed here. Table 5 shows the accuracy percentages Detection rate is somewhat increased for all five neural
of five neural networks. Here, the accuracy of feed network classifiers based on the comparison between full
forward neural networks is 97.85%, that of elman neural feature detection rate and reduced feature detection rate.
networks is 95.95%, that of generalized regression neural Similarly, the FPR is outperforming better than the full
networks is 96.07%, that of probabilistic neural networks features. Also reduce the time take in the reduced features.
is 99%, and that of radial basic networks is 95.36%. Based on this comparison, the reduced features are
outperforming better than full features in terms of improving
detection rate, reduces the FPR and minimizes the time
taken.

6. SUMMARY

In this article, five different types of neural network classifiers are compared in terms of detection rate, FPR, and time taken. The benchmark KDD dataset is used for the experiments. It is observed that the reduced feature set outperforms the full feature set: the detection rate increases somewhat for all five neural network classifiers, the FPR is lower than with the full feature set, and the time taken is also reduced. Based on this comparison, the reduced features outperform the full features in terms of improving the detection rate, reducing the FPR, and minimizing the time taken.

7. CONCLUSION

This research proposes a novel approach for detecting network intrusions using five classifiers. The study shows that the Probabilistic Neural Network outperforms the Feed Forward Neural Network, Elman Neural Network, Generalized Regression Neural Network, and Radial Basis Neural Network in terms of accuracy. Feature reduction strategies are used to improve the outcomes: Principal Component Analysis, implemented in MATLAB, is used to decrease the characteristics of the KDD dataset, selecting 13 features from the data set of 41 features. The reduced features are fed into the various classifiers, and the results are compared.

The results reveal that 13 features are more efficient than 41 features, with shorter training and testing timeframes. When comparing these five classifiers, PNN outperforms FFNN, ENN, GRNN, and RBNN. The PCA-reduced KDD dataset yields encouraging results. As a result, it is suggested that feature reduction strategies be pursued in future research to enhance efficiency and lower the false alarm rate.

REFERENCES

1. Gupta K. K., et al., (2008). Layered approach using conditional random fields for intrusion detection, IEEE Transactions on Dependable and Secure Computing, vol 7, iss 1, pp.35-49.
2. Devaraju S. & Ramakrishnan S., (2013). Performance Comparison of Intrusion Detection System using Various Techniques – A Review, ICTACT Journal on Communication Technology, vol 4, iss 3, pp.802-812.
3. Devaraju S. & Ramakrishnan S., (2011). Performance Analysis of Intrusion Detection System Using Various Neural Network Classifiers, IEEE International Conference on Recent Trends in Information Technology (ICRTIT 2011), pp.3-5.
4. Gang Wang, et al., (2010). A new approach to intrusion detection using Artificial Neural Networks and fuzzy clustering, Elsevier Expert Systems with Applications, vol 37, pp.6225-6232.
5. KDD Intrusion Detection Data, http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html, 2010.
6. MATLAB (MATrix Laboratory) tutorials, http://terpconnect.umd.edu/~nsw/ench250/matlab.htm
7. Devaraju S. & Ramakrishnan S., (2014). Performance Comparison for Intrusion Detection System using Neural Network with KDD Dataset, ICTACT Journal on Soft Computing, vol 4, iss 3, pp.743-752.
8. Devaraju S. & Ramakrishnan S., (2013). Detection of Accuracy for Intrusion Detection System using Neural Network Classifier, International Journal of Emerging Technology and Advanced Engineering, vol 3, iss 1, pp.338-345.
9. Nadiammai G. V. & Hemalatha M., (2014). Effective Approach toward Intrusion Detection System using Data Mining Techniques, Elsevier Egyptian Informatics Journal, vol 15, pp.37-50.
10. Minjie Wang & Anqing Zhao, (2012). Investigations of Intrusion Detection Based on Data Mining, Springer Recent Advances in Computer Science and Information Engineering, Lecture Notes in Electrical Engineering, vol 124, pp.275-279.
11. Shingo Mabu, et al., (2011). An Intrusion-Detection Model Based on Fuzzy Class-Association-Rule Mining Using Genetic Network Programming, IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol 41, iss 1, pp.130-139.
12. Adel Sabry Eesa, et al., (2015). A novel feature-selection approach based on the cuttlefish optimization algorithm for intrusion detection systems, Elsevier Expert Systems with Applications, vol 42, pp.2670-2679.
13. Devaraju S. & Ramakrishnan S., (2015). Detection of Attacks for IDS using Association Rule Mining Algorithm, IETE Journal of Research, vol 61, iss 6, pp.624-633.
14. Wei-Chao Lin, et al., (2015). CANN: An intrusion detection system based on combining cluster centers and nearest neighbors, Elsevier Knowledge-Based Systems, vol 78, pp.13-21.
15. Ramakrishnan S. & Devaraju S., (2017). Attack's Feature Selection-Based Network Intrusion Detection System Using Fuzzy Control Language, Springer International Journal of Fuzzy Systems, vol 19, iss 2, pp.316-328.
16. Devaraju S., (2019). Evaluation of Efficiency for Intrusion Detection System Using Gini Index C5 Algorithm, International Journal of Engineering and Advanced Technology (IJEAT), vol 8, iss 6, pp.2196-2200.
17. Suseela T., et al., (2005). Hierarchical Kohonenen Net for Anomaly Detection in Network Security, IEEE Transactions on Systems, Man and Cybernetics, vol 35, iss 2, pp.302-312.
18. Devaraju S. & SaravanaPrakash D., (2019). Developing Efficient Web-Based XML Tool, International Journal of Recent Technology and Engineering (IJRTE), vol 8, iss 3, pp.8580-8584.
19. Devaraju S. & SaravanaPrakash D., (2019). Total Benefit Administration for Industry Environment, TEST Engineering and Management, vol 81, pp.4594-4599.
20. Shih-Wei Lin, et al., (2012). An intelligent algorithm with feature selection and decision rules applied to anomaly intrusion detection, Elsevier Applied Soft Computing, vol 12, iss 10, pp.3285-3290.
21. Jawahar S., et al., (2020). Efficiently Mining Closed Sequence Patterns in DNA without Candidate Generation, International Journal of Life Science and Pharma Research (Special issue on Advancements in Applications of Microbiology and Bioinformatics in Pharmacology), SP-08, iss 8, pp.14-18.
22. Gaik-Yee Chan, et al., (2013). Discovering fuzzy association rule patterns and increasing sensitivity analysis of XML-related attacks, Elsevier Journal of Network and Computer Applications, vol 36, pp.829-842.
23. Devaraju S. & Ramakrishnan S., (2019). Association Rule-Mining-Based Intrusion Detection System with Entropy-Based Feature Selection: Intrusion Detection System (Chapter 1), Handbook of Research on Intelligent Data Processing and Information Security Systems, IGI Global, DOI: 10.4018/978-1-7998-1290-6, pp.1-24.
24. Devaraju S. & Ramakrishnan S., (2020). Fuzzy Rule-Based Layered Classifier and Entropy-Based Feature Selection for Intrusion Detection System (Chapter 15), Handbook of Research on Cyber Crime and Information Privacy (2 Volumes), IGI Global, DOI: 10.4018/978-1-7998-5728-0.
25. Devaraju S., et al., (2022). Entropy-Based Feature Selection for Network Intrusion Detection Systems, in Methods, Implementation, and Application of Cyber Security Intelligence and Analytics, IGI Global, pp.201-225.
