Full Title: ANFFractalNet: Adaptive Neuro Fuzzy FractalNet for Iris Recognition
Keywords: Iris recognition; Kuwahara Filter; Daugman Rubber sheet model; Adaptive Neuro Fuzzy Inference System; FractalCovNet
Nagarajan R
Abstract: Over the past few years, iris recognition has been a trending research topic owing to its broad security applications, from airports to homeland-security border control. Nevertheless, because of the high cost of acquisition equipment and several shortcomings of existing modules, iris recognition has not been applied in real-life, large-scale applications. Moreover, iris-region segmentation methods face issues such as invalid off-axis rotations and irregular reflections in the eye region. To address these issues, an iris-recognition-enabled Adaptive Neuro Fuzzy FractalNet (ANFFractalNet) is designed. In this investigation, a Kuwahara filter and Region of Interest (RoI) extraction are employed to pre-process the image. The Daugman Rubber sheet model is then used to segment the pre-processed image, after which feature extraction is performed to reduce the dimensionality of the data. Finally, iris recognition is performed using the proposed ANFFractalNet. The efficacy of ANFFractalNet is evaluated with the analytic metrics Accuracy, False Acceptance Rate (FAR) and False Rejection Rate (FRR), which attained effectual values of 91.594%, 0.537% and 2.482%, respectively.
*prabhur85r@gmail.com
Professor, Department of Electrical and Electronics Engineering
1. Introduction
The human iris is a dominant biometric pattern that can deliver maximum identification precision at a minimum false match rate. This is owing to its composite textural pattern, which is supposed to be distinctive for each eye, and to the constrained genetic penetrance of iris texture. The effectual achievement of iris identification is based on its attractive physical features, which are embedded in the enhancement of effectual feature descriptors, particularly the iris code established in Daugman's pioneering work and the many other descriptors that have subsequently been developed [1] [2]. Due to these properties, iris recognition is considered the most enhanced biometric recognition technology in the current world [3]. Iris identification exploits the iris pattern, which lies between the sclera and pupil and exhibits a high degree of randomness. Thus, iris detection is commonly utilized in several regions, since the iris pattern does not change with aging and is not easily damaged. When a visible-light camera is employed, iris detection can be carried out with a built-in camera device, which also has the benefit of attaining a three-channel image comprising color information. As a result, investigations are being performed on iris detection by attaining iris images from face images taken by the high-resolution visible-light cameras of smartphones [4]. Owing to the distinctiveness and stability of the iris pattern, it is extensively considered for its dependability in applications extending from privacy and access control to border security; the iris texture itself is supposed to be arbitrarily determined [5].
Thus, the iris pattern of every eye generally serves as a distinct biometric feature, even among twins. As one of the most safe and dependable biometric recognition models, iris identification has been extensively employed in the banking sector, mobile phones and border security control [6]. The benefits of iris identification have motivated many efforts to investigate precise and effectual iris feature extraction techniques under several environments [7]. Identification and authentication are widely processed based on biometric techniques, which are also employed to elevate security and privacy. Biometric measures refer to physical characteristic features that are employed to discriminate a person [2]. These metrics deploy behavioral and physical features for differentiating and detecting persons, and so biometric techniques are deployed as an effectual privacy solution wherein the features cannot be lost, faked or stolen [8]. The scientific community has examined various biometric techniques for human detection, namely hand geometry, face, palm print, voice, gait, fingerprint, retina, iris and so on. These kinds of entities are more reliable, even though they have particular constraints when compared to conventional privacy techniques [9]. Compared with other traditional biometric techniques, which include the identification of face and fingerprint, iris recognition is secure and more hygienic due to its contactless acquisition.
In recent years, the application of Deep Learning (DL) technology to the image detection domain has attained great interest. While iris detection based on Machine Learning (ML) has provided effectual outcomes in biometric authentication, some disadvantages must be considered. The first is the need for high-quality iris images for precise detection, since poor image quality may lead to recognition errors. DL-based methods have shown dominant results for various problems relevant to acquiring and matching biometrics [10]. In particular, DL approaches such as the Convolutional Neural Network (CNN) have attained significant achievements in many computer vision tasks. The CNN is one kind of artificial intelligence with the capability of learning and extracting features automatically like a human; whereas a traditional neural network is unable to investigate such features, the CNN can automatically learn the appropriate features from adequate training data [12]. Current achievements in iris detection have investigated the probability of using CNNs for iris image segmentation and processing [13]. Former investigations on iris identification specified that CNN-based techniques can effectually learn the inherent features of iris images and attain higher performance than the conventional iris matching technique specified by the iris code. Even though the CNN performs well in a great number of tasks, it still experiences some complexities while dealing with difficult and variant tasks like iris identification [14] [10]. The achievements of these premature efforts motivated the investigation of a new system for overcoming the complex issues faced in real-time iris detection [7].
An input iris image from the database is pre-processed employing the Kuwahara filter and RoI extraction. From the pre-processed image, segmentation is carried out with the Daugman Rubber sheet model. Additionally, features are extracted from the segmented image, where the extracted features are Local Directional Pattern Variance (LDPV), Fast Retina Key point (FREAK) and statistical features. Finally, iris recognition is performed using the proposed ANFFractalNet, which is amalgamated using the Adaptive Neuro Fuzzy Inference System (ANFIS) and FractalNet.
The remaining sections are outlined as follows: Part 2 explicates the investigation of other techniques. Part 3 elucidates the developed technique's methodology. Part 4 expresses the outputs gained from the evaluation of the new module. Part 5 concludes the technique with future work.
2. Motivation
The human iris is a vision organ that maintains the amount of light reaching the retina by varying the pupil size. Iris recognition refers to the automated process of identifying individuals in terms of their iris patterns. Several existing works on iris recognition are reviewed below.
2.1 Literature survey
Alinia Lat, R., et al. [15] introduced a light-weight MobileNet architecture with customized ArcFace and Triplet loss functions (MobileNet+ArcFace+Triplet) for iris identification. This approach attained maximal compactness within each class and maximal discrepancies among classes. However, it did not consider multimodal biometrics that employ various sources of information, did not eliminate noisy data, and misjudged the similarity among classes. Adnan, M., et al. [16] designed a MobileNet v2 model for iris landmark detection. This module predicted the iris localization repeatably, accurately and rapidly. However, this method attained minimal output, with limitations in response time. Nguyen, K., et al. [2] presented a fully complex-valued neural network for the iris
recognition task. This method efficiently extracted primarily diverse features from the iris texture. Yet, it was not appropriate for explicitly processing the phase of several domains, which utilizes complex filter routines for texture modeling. Saraf, T.O.Q., et al. [17] developed Variable Length Black Hole Optimization (VLBHO) for iris recognition to select variable-length features. Although this approach reduced the feature space with high precision, it was unable to learn complex non-linear mappings between input features and output classes.
Mostofa, M., et al. [18] established a Deep Convolutional Generative Adversarial Network (DCGAN) to enhance the precision of cross-spectral iris detection techniques. This method maximized pair-wise correlation through a contrastive loss for precise cross-spectral iris matching. However, this technique did not have enough database availability for learning the cross-resolution effect and cross-spectral mismatch. Balashanmugam, T., et al. [19] introduced AlexNet for iris regional characteristics and classification. This technique obtained minimal computational time with a high level of accuracy and showed uniform performance; however, it did not implement real-time applications with appropriate hardware. Liu, G., et al. [7] designed a condensed 2-channel CNN (2-ch CNN) for detecting and verifying the iris. Even though this method was robust to diverse image contaminations, other biometrics like fingerprint or palm identification were not determined. Wei, Y., et al. [20] presented a lightweight CNN for the recognition and verification of the iris. This method effectively extracted the inner and outer boundaries of the iris. Yet, an actual embedded system was not employed to acquire real-world iris images and optimize the iris positioning technique for attaining localization under visible lighting conditions.
2.2 Challenges
The previous works' limitations regarding iris recognition are investigated below.
- The MobileNet+ArcFace+Triplet approach [15] improved the margin among classes, which operates superior on the iris recognition task. However, it failed to eradicate noise while protecting the edges, and it did not allow effectual analysis of particular regions since it did not employ pre-processing stages.
- The fully complex-valued neural network [2] offered robustness against rotational dissimilarities for attaining precise matching. Nonetheless, it did not enhance accuracy, resilience, and flexibility using multimodal feature extraction techniques, which is significant for handling variations and noise.
- VLBHO [17] performed variable-length optimization and also provided precise outcomes based on convergence. Yet, the major limitation was the inability to learn complex non-linear mappings between input features (iris patterns) and output classes (individual identities).
- Factors such as rapid reflections make parts of the iris image difficult to understand, which affects the performance and applicability of existing methods in several practical scenarios.
3. Proposed ANFFractalNet for iris recognition
The iris is the internal tissue of the protected eye, behind the cornea and crystalline fluid, with a fabric-like ring of several colors around the pupil of the eye. Every iris comprises specific features in terms of lens, spots, wrinkles and stripes. The major intention of this investigation is to bridge the gaps caused by noise, occlusion, and image quality. Thus, in this research an iris recognition module is developed using ANFFractalNet. At first, an input iris image taken from the database [21] is fed to the pre-processing phase for removing irrelevant noise and external calamities utilizing the Kuwahara filter [22] and RoI extraction [23]. Afterwards, the filtered image is forwarded to the segmentation phase, which is carried out by employing the Daugman Rubber sheet Model [24] [25]. Thereafter, the segmented image is subjected to feature extraction. Here, the extracted features are Local Directional Pattern Variance (LDPV) [26], Fast Retina Key point (FREAK) [27] and statistical features [28] [29] [30] involving mean, variance, entropy, contrast, energy and homogeneity. Finally, iris recognition is performed employing the proposed ANFFractalNet, which integrates ANFIS [31] and FractalNet [32]. Figure 1 states the functional pattern of ANFFractalNet for iris recognition.
Figure 1. Functional pattern of ANFFractalNet for iris recognition: the input iris image passes through image pre-processing (Kuwahara filter, RoI extraction), image segmentation (Daugman Rubber sheet Model) and feature extraction (LDPV, FREAK, statistical features), and is then recognized by the proposed ANFFractalNet (ANFIS combined with FractalNet) to produce the recognized output.
3.1 Image acquisition
The database $R$ considered for iris recognition comprises $n$ images, which is computed as,
$R = \{R_1, R_2, \ldots, R_m, \ldots, R_n\}$  (1)
Here, $R_n$ signifies the total images and $R_m$ specifies the $m$-th iris image, which is taken for the entire process.
3.2 Image pre-processing using Kuwahara filter and RoI extraction
This method manipulates the unrefined image into a usable and meaningful format, which enhances the quality of the image. This phase is processed with an input of $R_m$, from which the inappropriate noise is removed.
Kuwahara filtering [22] ensures that smoothing does not impact the quality of edges and contours, which is very significant for the processing and evaluation of the image. This filter categorizes the filter window into four regions $Q_s$, where $o$ determines the filter dimension: when the filter window is $3 \times 3$, then $o = 1$ and the dimension of each region is $2 \times 2$. The mean $\mu_s$ and variance $\sigma_s^2$ of every region are expressed by,
$\mu_s = \frac{1}{(o+1)^2} \sum_{(i,j) \in Q_s} g(i,j)$  (2)
$\sigma_s^2 = \frac{1}{(o+1)^2} \sum_{(i,j) \in Q_s} \big( g(i,j) - \mu_s \big)^2$  (3)
Here, $s = 1, 2, 3, 4$ indicates the region index and $g(i,j)$ represents the gray value of the pixel at coordinates $(i,j)$. Comparing the variances of the four regions, the region with minimal dissimilarity is attained, and the gray value of the center pixel is set to the mean of that region. By increasing the dimension of the filter window, a superior filtered image is attained.
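To make Eqs. (2)-(3) concrete, the following is a minimal NumPy sketch of the Kuwahara filter (not the authors' implementation), assuming a 2-D grayscale image; the function name `kuwahara` and the parameter `o` follow the notation above.

```python
import numpy as np

def kuwahara(img: np.ndarray, o: int = 1) -> np.ndarray:
    """Minimal Kuwahara filter sketch: for each pixel, average the
    (o+1)x(o+1) quadrant of the (2o+1)x(2o+1) window that has the
    smallest variance (Eqs. (2)-(3))."""
    pad = np.pad(img.astype(np.float64), o, mode="reflect")
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    # Offsets of the four overlapping quadrants inside the window.
    corners = [(0, 0), (0, o), (o, 0), (o, o)]
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * o + 1, x:x + 2 * o + 1]
            regions = [win[cy:cy + o + 1, cx:cx + o + 1] for cy, cx in corners]
            variances = [r.var() for r in regions]
            # Center pixel takes the mean of the least-variant region.
            out[y, x] = regions[int(np.argmin(variances))].mean()
    return out
```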
The filtered image $K_m$ is forwarded to RoI extraction for extracting the interesting regions. The RoI [23] is computed utilizing pixel intensity values with a mask result. This process is exploited for acquiring the concerned regions while discarding tedious regions; the pixel intensity value is utilized as a density portion, in which the neighborhood of each pixel is examined. The output of this phase is denoted as $E_m$.
3.3 Image segmentation using Daugman Rubber sheet Model
This operation categorizes an image into a number of segmented regions, which frequently changes the representation of the image. Here, the input $E_m$ is fed to the segmentation phase in order to divide the image into segments wherein every pixel is mapped to an object or region.
The Daugman Rubber sheet technique [24] [25] is exploited for iris image segmentation. Here, the pupil's center point is assumed as the reference point, from which radial vectors are drawn through the iris region; the radial lines placed around the iris region define the angular resolution. As the pupil is non-concentric with the iris, a general expression is needed for rearranging points along a direction around the circle. This method is applied for segmenting the iris region, where the entire region is specified by the grey values of its pixels. This information is computed from the combination of the inner and outer boundaries' coordinates. The technique remaps every point in the iris region to a pair of polar coordinates $(q, \theta)$, where the angle $\theta$ lies in the range $[0, 360]$ degrees and $q$ lies in the unit interval $[0, 1]$.
Let $z(i, j)$ denote the iris image specified in Cartesian coordinates and $z(q, \theta)$ its illustration in polar coordinates, where $(i_s, j_s)$ and $(i_\ell, j_\ell)$ are the coordinates of the inner and outer boundaries. The polar samples are
$q_k = \frac{k}{Y+1}, \; k = 1, 2, \ldots, Y \quad \text{and} \quad \theta_l = l \, \frac{360}{Z}, \; l = 1, 2, \ldots, Z$
where $Y$ and $Z$ represent the sample rates in the radial and angular directions. The algorithmic steps considered for this model are demonstrated as,
Stage 1: Based on the boundary localization of the iris image $z(i, j)$, the factors $(i_s, j_s, q_s)$ and $(i_\ell, j_\ell, q_\ell)$ are achieved. Here, the subscripts $s$ and $\ell$ specify the inner and outer boundaries.
Stage 2: The distance between the iris and pupil centers is determined using,
$q = \sqrt{(i_s - i)^2 + (j_s - j)^2}$  (7)
Stage 3: The corresponding angle is determined by,
$\phi = \arctan\!\left( \frac{j_s - j}{i_s - i} \right)$  (8)
Stage 4: The pupil's center is chosen as the pole in polar coordinates $(q_k, \theta_l)$. For the iris outer boundary, the distance $W_l$ from the pole along direction $\theta_l$, $l = 1, 2, \ldots, Z$, is
$W_l = q \cos\!\big( 180^\circ - \theta_l - \phi \big) + \sqrt{q_\ell^2 - q^2 \sin^2\!\big( 180^\circ - \theta_l - \phi \big)}$  (9)
Stage 5: The grey value at every polar sample is attained from the grey values at the corresponding $(i, j)$ locations, determined by interpolating between the inner boundary and $W_l$,
$z(q_k, \theta_l) = z\!\left( \Big( 1 - \frac{k}{Y+1} \Big) q_s + \frac{k}{Y+1} W_l, \; \theta_l \right)$  (10)
The normalization of the iris region between the inner and outer boundaries into the rectangular polar block is represented in Figure 2.
Figure 2. Daugman Rubber sheet Model: the annular iris region between the inner (pupil) boundary and the outer boundary is unwrapped into a rectangular block over $q \in [0, 1]$ and $\theta \in [0^\circ, 360^\circ]$.
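As an illustration of Stages 1-5, the sketch below implements a common simplification of the rubber-sheet normalization, assuming circular inner and outer boundaries that are blended linearly in the spirit of Eq. (10); the helper name `rubber_sheet`, its `(x, y)` center tuples and the sample counts are illustrative, not the authors' code.

```python
import numpy as np

def rubber_sheet(img, pupil_c, pupil_r, iris_c, iris_r, Y=64, Z=360):
    """Hedged sketch of rubber-sheet normalization: remap the annulus
    between the pupil (inner) and iris (outer) circles to a Y x Z polar
    block over q in (0, 1) and theta in [0, 360) degrees. Centers are
    (x, y) tuples; the circular boundaries simplify Eq. (9)."""
    out = np.zeros((Y, Z), dtype=img.dtype)
    for l in range(Z):
        th = np.deg2rad(l * 360.0 / Z)
        # Boundary points along this direction (circles may be non-concentric).
        xi, yi = pupil_c[0] + pupil_r * np.cos(th), pupil_c[1] + pupil_r * np.sin(th)
        xo, yo = iris_c[0] + iris_r * np.cos(th), iris_c[1] + iris_r * np.sin(th)
        for k in range(Y):
            q = (k + 1) / (Y + 1)        # radial sample, as in Eq. (10)
            x = (1 - q) * xi + q * xo    # linear blend of the two boundaries
            y = (1 - q) * yi + q * yo
            out[k, l] = img[int(round(y)), int(round(x))]
    return out
```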
3.4 Feature extraction
This is a function for transforming the original data into numerical features and for reducing the amount of irrelevant data. Here, the segmented input $D_m$ is used to extract the suitable features mentioned below.
3.4.1 LDPv
In this step, the segmented image $D_m$ is used to extract the LDPv feature [26], which encodes the local structure distribution: low-contrast structures contribute differently from high-contrast ones in the LDPv histogram, which is determined as,
$T_{m1} = LDPv(\tau) = \sum_{a=1}^{A} \sum_{b=1}^{B} v\big(LDP(a, b)\big)\, \delta\big(LDP(a, b), \tau\big)$  (12)
$\delta\big(LDP(a, b), \tau\big) = \begin{cases} 1, & LDP(a, b) = \tau \\ 0, & \text{otherwise} \end{cases}$  (13)
$v\big(LDP(a, b)\big) = \frac{1}{8} \sum_{d=0}^{7} \big( c_d - \bar{c} \big)^2$  (14)
Here, $\bar{c}$ signifies the mean of the eight directional responses $c_d$ computed at position $(a, b)$, $v$ specifies the variance weight, $A \times B$ symbolizes the image dimension, $\tau$ represents the LDP code value, and the resulting histogram forms the textual feature $T_{m1}$.
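A hedged NumPy/SciPy sketch of the LDPv computation of Eqs. (12)-(14) follows, assuming Kirsch compass masks for the eight directional responses $c_d$ and three significant bits per LDP code; the names `KIRSCH` and `ldpv_histogram` are hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve

# Kirsch compass masks m0..m7 yielding the eight directional responses c_d.
KIRSCH = [np.array(k, dtype=np.float64) for k in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldpv_histogram(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Sketch of the LDPv histogram: LDP codes weighted by the variance
    of the eight directional responses (Eqs. (12)-(14))."""
    resp = np.stack([convolve(img.astype(np.float64), m) for m in KIRSCH])
    # LDP code: set the bits of the k strongest absolute responses.
    order = np.argsort(np.abs(resp), axis=0)
    code = np.zeros(img.shape, dtype=np.int64)
    for d in order[-k:]:
        code |= (1 << d)
    var = np.var(resp, axis=0)                  # v(LDP(a,b)), Eq. (14)
    hist = np.zeros(256)
    np.add.at(hist, code.ravel(), var.ravel())  # variance-weighted bins
    return hist
```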
3.4.2 FREAK
The input $D_m$ is subjected to FREAK [27], which is determined as a compact, robust and fast binary descriptor. By comparing pairs of smoothed image intensities over a retinal sampling pattern, a flow of binary strings is computed; additionally, selecting the pairs reduces the redundancy of the descriptor. It is formed as,
$T_{m2} = \sum_{0 \le \alpha < \Lambda} 2^{\alpha}\, T(P_\alpha)$  (15)
$T(P_\alpha) = \begin{cases} 1, & \text{if } \big( I(P_\alpha^{r_1}) - I(P_\alpha^{r_2}) \big) > 0 \\ 0, & \text{otherwise} \end{cases}$  (16)
where $I(P_\alpha^{r_1})$ implies the smoothed intensity of the first receptive field of the pair $P_\alpha$. Thus, the textual features are given by,
$T_m = \{T_{m1}, T_{m2}\}$  (17)
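Since FREAK is available in OpenCV's contrib modules, a usage sketch is shown below; the keypoint detector choice (FAST) and the file name are assumptions, and the opencv-contrib-python package is required for cv2.xfeatures2d.

```python
import cv2

# Hedged usage sketch: FREAK is a pure descriptor, so keypoints come from
# a separate detector (FAST here); the input path is a placeholder.
img = cv2.imread("normalized_iris.png", cv2.IMREAD_GRAYSCALE)
detector = cv2.FastFeatureDetector_create()
keypoints = detector.detect(img, None)
freak = cv2.xfeatures2d.FREAK_create()
keypoints, descriptors = freak.compute(img, keypoints)
# Each descriptor row is a 64-byte binary string built from pairwise
# smoothed-intensity comparisons, in the spirit of Eqs. (15)-(16).
print(None if descriptors is None else descriptors.shape)
```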
3.4.3 Statistical features
The statistical features are computed from the gray-level co-occurrence representation $D_m(p, q)$ of the segmented image as follows.
a) Mean
$f_1 = \sum_p \sum_q p \, D_m(p, q)$  (18)
b) Variance
It [28] defines the dissimilarity of the grey-level image relative to the mean grey level, manifested as,
$f_2 = \sum_p \sum_q (p - f_1)^2 \, D_m(p, q)$  (19)
c) Entropy
It [29] computes the improbability linked with arbitrary variables; the entropy of the gray-scale image is employed to differentiate the input image's texture, determined by,
$f_3 = -\sum_p \sum_q D_m(p, q) \log\!\big( D_m(p, q) \big)$  (20)
d) Contrast
It [30] determines the local contrast of the input image, which is supposed to be low when the gray levels of each pixel pair are similar, computed by,
$f_4 = \sum_p \sum_q (p - q)^2 \, D_m(p, q)$  (21)
e) Energy
It [30] estimates the total of repeated pairs, which is supposed to be maximum when the gray levels of the pixel pairs repeat frequently, computed by,
$f_5 = \sum_p \sum_q D_m(p, q)^2$  (22)
f) Homogeneity
It [30] evaluates the local homogeneity of a pixel pair, which is supposed to be maximum when the gray levels of every pixel pair are equivalent, computed by,
$f_6 = \sum_p \sum_q \frac{D_m(p, q)}{1 + |p - q|}$  (23)
The resulting feature vector is
$F_m = \{f_1, f_2, \ldots, f_6\}$  (24)
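Under the assumption that $D_m(p, q)$ in Eqs. (18)-(23) denotes a normalized gray-level co-occurrence matrix, the six features can be sketched with scikit-image as follows; the helper name `statistical_features` is illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def statistical_features(img_u8: np.ndarray) -> np.ndarray:
    """Sketch of the six statistical features f1..f6 of Eqs. (18)-(23),
    computed from a normalized gray-level co-occurrence matrix."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    P = glcm[:, :, 0, 0]                         # normalized GLCM
    p_idx, q_idx = np.indices(P.shape)
    f1 = np.sum(p_idx * P)                       # mean, Eq. (18)
    f2 = np.sum((p_idx - f1) ** 2 * P)           # variance, Eq. (19)
    f3 = -np.sum(P[P > 0] * np.log2(P[P > 0]))   # entropy, Eq. (20)
    f4 = graycoprops(glcm, "contrast")[0, 0]     # Eq. (21)
    f5 = graycoprops(glcm, "energy")[0, 0] ** 2  # ASM = energy^2, Eq. (22)
    f6 = graycoprops(glcm, "homogeneity")[0, 0]  # Eq. (23)
    return np.array([f1, f2, f3, f4, f5, f6])
```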
3.5 Iris recognition using the proposed ANFFractalNet
Iris recognition is a constant and suitable biometric system that is broadly employed in privacy applications. Even though DL-based iris recognition techniques attain better accuracy, the aim here is also to control the system's complexity. Hence, the proposed ANFFractalNet is designed by integrating ANFIS and FractalNet. The pre-processed outcome $E_m$ and segmented outcome $D_m$ are subjected to the ANFIS module, which produces the output $B_{m1}$. The ANFFractalNet fusion layer is carried out by means of the previous model output $B_{m1}$ and $X_m$, which contains the textual features $T_m$ and the feature vector $F_m$, and results in $B_{m2}$. Moreover, the input $E_m$ and $B_{m2}$ are fed to the FractalNet module, which is processed to produce the final recognized output $B_{m3}$.
Figure 3. Structure of the proposed ANFFractalNet: the inputs $A_m$ and $E_m$ yield $B_{m1}$ (ANFIS) and, together with $X_m$, the fused result $B_{m2}$, which passes through fusion and regression to give the output $B_{m3}$.
ANFIS [31] is based on the integration of Artificial Neural Networks (ANN) and Fuzzy Inference Systems (FIS), and this module is applied to classify images into different class labels. This method has the ability of fast learning for modeling non-linear systems and their variation, and it is exploited for optimizing the parameters. The utilization of the designed technique for maximum-frequency data is effectual. The network determines the membership grades in its first layer; a Gaussian membership function is
$I_M = \exp\!\left( -\frac{(x - c_M)^2}{2 \sigma_M^2} \right)$  (25)
or, in generalized bell form,
$I_M = \frac{1}{1 + \left( \frac{x - c_M}{\sigma_M} \right)^2}$  (26)
where $c_M$ and $\sigma_M$ denote the center and width of the $M$-th membership function. The subsequent layers compute the rule firing strengths and normalize them, and the fourth layer evaluates the weighted rule consequents,
$O_t^4 = \bar{\lambda}_t \big( u_M x + v_N y + w \big)$  (30)
Here, $\bar{\lambda}_t$ signifies the standardized firing power from the previous layer, and $u_M$, $v_N$, $w$ are the consequent parameters. Layer 5, the final output of the ANFIS module, is calculated as,
$B_{m1} = \sum_t \bar{\lambda}_t f_t = \frac{\sum_t \lambda_t f_t}{\sum_t \lambda_t}$  (32)
Thus, the resultant of ANFIS is indicated as $B_{m1}$, and its general outline is elucidated in Figure 4.
Figure 4. General outline of ANFIS
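For clarity, a minimal NumPy sketch of a first-order Sugeno ANFIS forward pass with Gaussian memberships (Eq. (25)) is given below; the two-input, two-membership setup and all parameter shapes are assumptions for illustration, not the trained configuration used here.

```python
import numpy as np

def anfis_forward(x1, x2, centers, sigmas, consequents):
    """Hedged first-order Sugeno ANFIS forward pass (standard 5 layers).
    centers/sigmas: shape (2, M); consequents: shape (M*M, 3) = (u, v, w)."""
    # Layer 1: Gaussian membership grades for each input (Eq. (25)).
    mu1 = np.exp(-((x1 - centers[0]) ** 2) / (2 * sigmas[0] ** 2))
    mu2 = np.exp(-((x2 - centers[1]) ** 2) / (2 * sigmas[1] ** 2))
    # Layer 2: rule firing strengths (product T-norm over all rule pairs).
    lam = np.outer(mu1, mu2).ravel()
    # Layer 3: normalized firing strengths.
    lam_bar = lam / lam.sum()
    # Layer 4: rule consequents f_t = u*x1 + v*x2 + w (Eq. (30) style).
    f = consequents @ np.array([x1, x2, 1.0])
    # Layer 5: weighted sum, the final output B_m1 (Eq. (32)).
    return np.sum(lam_bar * f)

# Tiny usage example with M = 2 membership functions per input.
rng = np.random.default_rng(0)
out = anfis_forward(0.3, 0.7,
                    centers=np.array([[0.0, 1.0], [0.0, 1.0]]),
                    sigmas=np.ones((2, 2)),
                    consequents=rng.normal(size=(4, 3)))
```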
The fusion layer functions with the inputs $B_{m1}$ and $X_m = \{T_m, F_m\}$. Fusion and regression are conducted in this layer: the two networks are incorporated by the fusion technique, and the similarity between the targeted and recognized outputs is validated by the regression technique. The textual features and the feature vector enter the layer through the arithmetic models of Eqs. (33) and (34). Let $\chi_e$ represent the output at the $e$-th interval and $\chi_{e-1}$ the output at the $(e-1)$-th interval, related through the regression model of Eq. (35). Assume,
$\chi_e = B_{m1}$  (36)
$\chi_{e-1} = 1$  (37)
$\chi_{e+1} = B_{m2}$  (39)
Substituting Eqs. (36)-(39) into Eq. (35) gives the fused expression of Eq. (40); further substituting the ANFIS output of Eq. (32) together with Eqs. (33) and (34) into Eq. (40) yields the final fused output $B_{m2}$ of Eq. (41), expressed in terms of $B_{m1}$, $T_m$ and $F_m$.
FractalNet [32] is developed on the basis of the U-Net structure and is composed of paths for both expansion and contraction. The network contains two U-shaped models, where the features from the first contracting path are transmitted to the last expanding path using a fractal block. Consider $C_m$ as the input image of this network, which is forwarded to the first contracting path with a set of convolutional (conv) layers $U_1$ to $U_5$. After that, the resultant is subjected to the bottleneck layer $U_6$, whose output is passed to the expanding path formed by $U_7$ to $U_9$. Then, the first expanding path's output is fed to the second bottleneck layer $V_1$, whose resultant is given to the contracting path $V_2$ to $V_4$. Here, the third bottleneck layer of the module is symbolized as $V_5$. The final expanding path of this network is formed using the layers $V_6$ to $V_9$; it reaches the last expanding stage, which facilitates accurate localization by integrating the downsampling path with suitable information attained from the upsampling path. From the contracting and expanding paths, the perspective information and localization are achieved. Every layer in the first contracting path is combined with the expanding path's features, which are progressed by a conv block and then integrated with the features acquired from the second U-Net's downsampling path. The features from $U_5$ are incorporated with the upsampling features from $U_6$, processed by the conv block at $U_7$, and exhibited as input to $U_8$ and $V_4$. The $V_4$ features follow a similar process and are forwarded to $V_6$, which acts as the fractal block. This operates on features at several levels of convolution, and therefore it facilitates the module to precisely compute the recognition regions of the image. Once the final conv and pooling layer processing is conducted, the sigmoid activation function is applied for pixel-value classification and localization of the iris in the image. The resultant of this FractalNet is determined in the form of,
$B_{m3} = FC\big( E_m \oplus B_{m2} \big)$  (42)
Here, $FC$ denotes the fully connected layer; it is used since the input image is concatenated with the previous layer's output and subjected to this layer for recognition. By substituting Eq. (41) into Eq. (42), Eq. (43) expresses the recognized output $B_{m3}$ directly in terms of the pre-processed image $E_m$, the textual features $T_m$, the feature vector $F_m$ and the ANFIS output. Here, $\oplus$ symbolizes the concatenation of the pre-processed image and the previous layer's output.
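The dual-U wiring described above can be sketched in PyTorch as follows; the depths, channel counts and layer names are illustrative stand-ins for $U_1$-$U_9$ and $V_1$-$V_9$, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TwoUNetSketch(nn.Module):
    """Hedged sketch of the dual-U idea: a first contracting/expanding
    path whose features pass through a fractal-style merge block into a
    second U, ending in a sigmoid map (depths are illustrative)."""
    def __init__(self, c=16):
        super().__init__()
        self.down = nn.MaxPool2d(2)
        self.u1, self.u2 = block(1, c), block(c, 2 * c)       # first contracting path
        self.bott1 = block(2 * c, 2 * c)                      # first bottleneck
        self.up1 = nn.ConvTranspose2d(2 * c, c, 2, stride=2)  # first expanding path
        self.u7 = block(2 * c, c)                             # fuses skip features
        self.v2 = block(c, 2 * c)                             # second contracting path
        self.v6 = block(4 * c, 2 * c)                         # fractal-style merge block
        self.up2 = nn.ConvTranspose2d(2 * c, c, 2, stride=2)
        self.head = nn.Conv2d(c, 1, 1)

    def forward(self, x):
        s1 = self.u1(x)
        s2 = self.u2(self.down(s1))
        b = self.bott1(s2)
        e1 = self.u7(torch.cat([self.up1(b), s1], dim=1))  # skip from first U
        d2 = self.v2(self.down(e1))
        m = self.v6(torch.cat([d2, s2], dim=1))            # features at several levels
        return torch.sigmoid(self.head(self.up2(m)))       # pixel-wise recognition map
```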
The categorical cross-entropy loss function is employed for training, depicted as,
$loss = -\sum_{m=1}^{L} \hat{B}_{m3} \log\!\big( B_{m3} \big)$  (44)
Here, $B_{m3}$ indicates the recognized output, $\hat{B}_{m3}$ implies the ground-truth label, and $L$ signifies the number of training samples.
4. Results and discussion
Figure 6. Image results of the module ANFFractalNet: a) input image-1, b) input image-2, with the corresponding pre-processed and segmented results in panels c)-f).
The images of the UBIRIS database [21] used here were captured in 2004 in two different sessions. Its prime feature, as against previous public and free datasets namely CASIA and UPOL, is that it incorporates images with different noise parameters, therefore allowing the analysis of the robustness of iris detection techniques.
4.4 Evaluation metrics
4.4.1 Accuracy
It [33] estimates the total accurateness of the module by evaluating the total precise predictions, formulated by,
$Acc = \frac{O_P + O_N}{O_P + O_N + \bar{P} + \bar{N}}$  (45)
Here, $O_P$, $O_N$, $\bar{P}$ and $\bar{N}$ depict the true positives, true negatives, false positives and false negatives.
4.4.2 FAR
It [33] measures the probability of inaccurately accepting an unauthorized individual, formulated by,
$FAR = \frac{G}{H}$  (46)
Here, $G$ denotes the number of false acceptances and $H$ the total number of identification attempts.
4.4.3 FRR
It [33] measures the probability of incorrectly rejecting a genuine individual, computed as the ratio of false rejections to the total number of genuine attempts.
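The three metrics can be computed from confusion counts as sketched below; mapping $G$ and $H$ of Eq. (46) onto false positives over impostor attempts is an assumption, since the source does not define these counts.

```python
def recognition_metrics(tp, tn, fp, fn):
    """Sketch of the reported metrics: accuracy per Eq. (45); FAR/FRR as
    ratios of false accepts/rejects to impostor/genuine attempts (the
    exact G and H counts of Eq. (46) are assumptions here)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # Eq. (45)
    far = fp / (fp + tn) if (fp + tn) else 0.0  # false acceptance rate
    frr = fn / (fn + tp) if (fn + tp) else 0.0  # false rejection rate
    return accuracy, far, frr

# Hypothetical counts, purely for demonstration.
print(recognition_metrics(tp=915, tn=901, fp=5, fn=24))
```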
Here, the ANFFractalNet's performance is evaluated with respect to the training data percentage and the K-fold value, as demonstrated below.
Based on different values of training data, the ANFFractalNet is evaluated with the performance metrics for epochs 20 to 100, as mentioned in Figure 7. In this analysis, 90% training data is considered. The analysis regarding accuracy for ANFFractalNet is specified in Figure 7 a): the accuracy attained is 85.961%, 86.502%, 87.565%, 88.319% and 91.355% for epochs 20 to 100. Figure 7 b) articulates the FAR for epochs 20 to 100 as 0.641%, 0.616%, 0.595%, 0.582% and 0.547% for 90% training data. In Figure 7 c), the ANFFractalNet assessment with respect to FRR is represented: with 90% training data, the FRR observed is 9.905%, 8.234%, 6.861% and 4.319% for epochs 20 to 80, declining further at epoch 100.
Figure 7. Examination of ANFFractalNet on training data: a) Accuracy, b) FAR, c) FRR
With diverse values of K-fold, ANFFractalNet is computed with respect to the analytic metrics for epochs varying from 20 to 100, as specified in Figure 8. Here, a fixed K-fold value is assumed. The accuracy analysis is presented in Figure 8 a), where ANFFractalNet acquired 85.284%, 86.405%, 87.016%, 88.378% and 91.458% for epochs 20 to 100. In Figure 8 b), the evaluation with respect to FAR is represented: the FAR attained is 0.644%, 0.626%, 0.620%, 0.580% and 0.527% for epochs 20 to 100. The analysis of FRR is specified in Figure 8 c): for epochs 20 to 100, the FRR achieved is 9.077%, 7.628%, 6.633%, 3.433% and 1.995%.
Figure 8. Evaluation of ANFFractalNet on K-fold: a) Accuracy, b) FAR, c) FRR
The comparative evaluation of ANFFractalNet against existing techniques is expressed beneath.
Here, the training data is valued at 90%. Figure 9 a) demonstrates the accuracy of ANFFractalNet, which achieved a value of 91.594%. When compared with prior modules like MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO, the performance enhancement obtained by ANFFractalNet is 7.642%, 6.812%, 5.030% and 3.360%. Based on FAR, ANFFractalNet is examined in Figure 9 b): it acquired a FAR of 0.537%, whereas the traditional methods MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO attained 0.617%, 0.600%, 0.599% and 0.571%. In Figure 9 c), the FRR is analyzed: ANFFractalNet obtained 2.482%, while MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO accomplished 9.147%, 8.846%, 7.232% and 6.961%.
Figure 9. Estimation of ANFFractalNet on training data: a) Accuracy, b) FAR, c) FRR
Based on K-fold, Figure 10 a) demonstrates the accuracy analysis: the performance enhancements of ANFFractalNet over MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO are 6.019%, 4.825%, 4.203% and 3.465%. In Figure 10 b), the evaluation of FAR is depicted: MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO acquired 0.647%, 0.639%, 0.622% and 0.604%. Figure 10 c) illustrates the FRR: the improvements of ANFFractalNet over MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO are 8.505%, 7.339%, 5.384% and 3.788%.
Figure 10. Evaluation of ANFFractalNet on K-fold: a) Accuracy, b) FAR, c) FRR
Table 1 mentions the analytic values achieved from the comparative evaluation.
Table 1. Comparative discussion
Metrics | MobileNet+ArcFace+Triplet | MobileNet v2 | Fully complex-valued NN | VLBHO | Proposed ANFFractalNet
Accuracy (%) | 84.595 | 85.355 | 86.987 | 88.517 | 91.594
FAR (%) | 0.617 | 0.600 | 0.599 | 0.571 | 0.537
FRR (%) | 9.147 | 8.846 | 7.232 | 6.961 | 2.482
ANFFractalNet attained an accuracy of 91.594%, while the former techniques MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO achieved 84.595%, 85.355%, 86.987% and 88.517%. This exhibits that the ANFFractalNet method can recognize the regions where the module needs to be improved, such as adapting the threshold values. With respect to FAR, ANFFractalNet gained 0.537%, while MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO achieved 0.617%, 0.600%, 0.599% and 0.571%. This demonstrates that the module can assess the risk parameters linked with an iris identification system, such as identity theft or unauthorized access. The FRR of ANFFractalNet obtained 2.482%, while the preceding techniques attained 9.147%, 8.846%, 7.232% and 6.961%. This explicates that the system helps to predict errors stemming from sensor calibration issues or software bugs. Finally, this analysis proves that ANFFractalNet can balance the security requirements with the need for a smooth user experience.
5. Conclusion
Unlike various biometrics such as face and fingerprints, the diverse features of the iris originate from arbitrarily distributed patterns, which leads to its maximal dependability for personal recognition. However, a poor anti-noise capacity in image classification is easily impacted by insignificant disturbances. Regardless of essential enhancements in iris recognition, achieving effectual and robust performance under non-ideal conditions still presents performance issues and remains an ongoing exploration. Therefore, a novel technique named ANFFractalNet is designed for iris recognition. The input image is first pre-processed using the Kuwahara filter and RoI extraction. After that, the pre-processed image is segmented by employing the Daugman Rubber sheet Model. Then, the segmented image is passed to feature extraction, and recognition is finally performed by the proposed ANFFractalNet. The metrics accuracy, FAR and FRR acquired better values of 91.594%, 0.537% and 2.482%. In future work, a capsule (vector) feature learning network will be applied for dealing with the iris recognition concerns of heterogeneous irises, and other vector modules will be explored in subsequent iterations.
Declaration Statements:
Acknowledgement: The authors thank all those who supported this manuscript for their valuable and constructive suggestions during the planning and development of this research work.
Author Contribution: All authors have made substantial contributions to conception and
design, revising the manuscript, and the final approval of the version to be published. Also,
all authors agreed to be accountable for all aspects of the work in ensuring that questions
related to the accuracy or integrity of any part of the work are appropriately investigated and
resolved.
Data Availability: The data underlying this article are available in the UBIRIS dataset taken from "http://iris.di.ubi.pt/index_arquivos/Page374.html".
References
[1] Hu, Y., Sirlantzis, K. and Howells, G., “Optimal generation of iris codes for iris
recognition" IEEE Transactions on Information Forensics and Security, vol. 12, no, 1,
pp.157-171, 2016.
[2] Nguyen, K., Fookes, C., Sridharan, S. and Ross, A., "Complex-valued iris recognition
network", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no,
1, pp.182-196, 2022.
[3] Kranthi Kumar, K., Bharadwaj, R., Ch, S. and Sujana, S., "Effective deep learning
approach based on VGG-mini architecture for iris recognition", Annals of the Romanian Society for Cell Biology.
[4] Lee, M.B., Kang, J.K., Yoon, H.S. and Park, K.R., "Enhanced iris recognition method by generative adversarial network-based image reconstruction", IEEE Access, vol. 9, pp. 10120-10135, 2021.
[5] Bowyer, K.W., Hollingsworth, K. and Flynn, P.J., “Image understanding for iris
biometrics: A survey", Computer Vision and Image Understanding, vol. 110, no. 2, pp. 281-307, 2008.
[6] Sheela, S.V. and Vijaya, P.A., "Iris recognition methods-survey", International Journal
[8] Shaker, S.H., Al-Kalidi, F.Q. and Ogla, R., “Identification Based on Iris Detection
Technique", International Journal of Interactive Mobile Technologies, vol. 16, no, 24,
2022.
[9] Farouk, R.H., Mohsen, H. and El-Latif, Y.M.A., "A proposed biometric technique for
[10] He, S. and Li, X., "Enhance DeepIris Model for Iris Recognition Applications", IEEE
Access, 2024.
[11] Alwawi, B.K.O.C. and Althabhawee, A.F.Y., "Towards more accurate and efficient
2022.
[12] Ismail, N.A., Chai, C.W., Samma, H., Salam, M.S., Hasan, L., Wahab, N.H.A.,
Mohamed, F., Leng, W.Y. and Rohani, M.F., "Web-based university classroom
KSII Transactions on Internet and Information Systems (TIIS), vol. 16, no. 2, pp. 503-523, 2022.
[13] Nogay, H.S., Akinci, T.C. and Yilmaz, M., "Detection of invisible cracks in ceramic
[14] Makowski, S., Prasse, P., Reich, D.R., Krakowczyk, D., Jager, L.A. and Scheffer, T.,
[15] Alinia Lat, R., Danishvar, S., Heravi, H. and Danishvar, M., "Boosting iris recognition
[16] Adnan, M., Sardaraz, M., Tahir, M., Dar, M.N., Alduailij, M. and Alduailij, M., "A
robust framework for real-time iris landmarks detection using deep learning", Applied
[17] Saraf, T.O.Q., Fuad, N. and Taujuddin, N.S.A.M., "Feature encoding and selection for
Iris recognition based on variable length black hole optimization", Computers, vol. 11,
[18] Mostofa, M., Mohamadi, S., Dawson, J. and Nasrabadi, N.M., "Deep GAN-based cross-
[19] Balashanmugam, T., Sengottaiyan, K., Kulandairaj, M.S. and Dang, H., “An effective
model for the iris regional characteristics and classification using deep learning alex
[20] Wei, Y., Zhang, X., Zeng, A. and Huang, H., "Iris recognition method based on parallel
iris localization algorithm and deep learning iris verification", Sensors, vol. 22, no. 20, p. 7723, 2022.
[21] UBIRIS dataset, "http://iris.di.ubi.pt/index_arquivos/Page374.html", accessed on June 2024.
[22] Guo, P., Gong, X., Zhang, L., Li, X., He, W. and Xu, T., “An Image Denoising
Learning Technique For Automated Detection Of Skin Cancer Using Twco (Taylor
[24] Podder, P., Khan, T.Z., Khan, M.H., Rahman, M.M., Ahmed, R. and Rahman, M.S., “An
efficient iris segmentation model based on eyelids and eyelashes detection in iris
[25] Shamsi, M., Saad, P.B. and Rasouli, A., “A New Iris Recognition Technique Using
Daugman Method”.
[26] Kabir, M.H., Jabid, T. and Chae, O., “Local directional pattern variance (ldpv): a robust
feature descriptor for facial expression recognition”, Int. Arab J. Inf. Technol., vol. 9, no.
4, pp.382-391, 2012.
[27] Alahi, A., Ortiz, R. and Vandergheynst, P., “Freak: Fast retina keypoint”, In proceedings
of 2012 IEEE conference on computer vision and pattern recognition, pp. 510-517, Ieee,
June 2012.
[28] Lessa, V. and Marengoni, M., “Applying artificial neural network for the classification
[29] Perveen, N., Guptaand, S. and Verma, K., “Facial expression classification using
[30] Mahmood, F.H. and Abbas, W.A., “Texture features analysis using gray level co-
Mansour, R.F., “A novel metaheuristics with adaptive neuro-fuzzy inference system for
[32] Munusamy, H., Muthukumar, K.J., Gnanaprakasam, S., Shanmugakani, T.R. and Sekar,
A., “FractalCovNet architecture for COVID-19 chest X-ray image classification and CT-
scan image segmentation", Biocybernetics and Biomedical Engineering, vol. 41, no. 3,
pp.1025-1038, 2021.
Graphical Abstract
In this paper, an Adaptive Neuro Fuzzy FractalNet (ANFFractalNet) is proposed for iris
recognition. The processes of the proposed approach are as follows: image preprocessing, image
segmentation, feature extraction and iris recognition. Moreover, the developed ANFFractalNet is an integration of the Adaptive Neuro Fuzzy Inference System (ANFIS) and FractalNet.
[Graphical abstract figure: input iris image → image pre-processing (Kuwahara filter, RoI extraction) → image segmentation (Daugman Rubber sheet Model) → feature extraction (LDPV, FREAK, statistical features) → iris recognition (ANFIS + FractalNet → proposed ANFFractalNet) → recognized output.]