
Medical & Biological Engineering & Computing

ANFFractalNet: Adaptive Neuro Fuzzy FractalNet for Iris Recognition


--Manuscript Draft--

Manuscript Number:

Full Title: ANFFractalNet: Adaptive Neuro Fuzzy FractalNet for Iris Recognition

Article Type: Original article

Keywords: Iris recognition; Kuwahara Filter; Daugman Rubber sheet model; Adaptive Neuro Fuzzy Inference System; FractalCovNet

Corresponding Author: Prabhu R.
Gnanamani College of Technology
Namakkal, INDIA

Corresponding Author Secondary Information:

Corresponding Author's Institution: Gnanamani College of Technology

Corresponding Author's Secondary Institution:

First Author: Prabhu R.

First Author Secondary Information:

Order of Authors: Prabhu R.
Nagarajan R.

Order of Authors Secondary Information:

Funding Information:

Abstract: During the past few years, iris recognition has been a trending research topic owing to its broad security applications, from airports to homeland-security border control. Nevertheless, because of the high cost of equipment and several shortcomings of existing modules, iris recognition has not been applied in real life at large scale. Moreover, iris-region segmentation methods contend with issues such as invalid off-axis rotations and irregular reflections in the eye region. To address these issues, an iris-recognition-enabled Adaptive Neuro Fuzzy FractalNet (ANFFractalNet) is designed. In this investigation, a Kuwahara filter and Region of Interest (RoI) extraction are employed to pre-process the image. Moreover, the Daugman rubber sheet model is considered for segmenting the pre-processed image, and feature extraction is then performed to reduce the dimensionality of the data. Hence, in this framework, iris recognition is performed utilizing the module named ANFFractalNet. Furthermore, the efficacy of ANFFractalNet is evaluated with analytic metrics, namely Accuracy, False Acceptance Rate (FAR) and False Rejection Rate (FRR), which obtain effectual values of 91.594%, 0.537% and 2.482%.

Suggested Reviewers: Pramod B Deshmukh


VIT Bhopal University
bhausahebpramod@gmail.com
Expert in this Field

Bouya-Moko Brunel Elvire


Jiangsu University
brunelelvire@gmail.com
Expert in this Field


ANFFractalNet: Adaptive Neuro Fuzzy FractalNet for Iris Recognition

R. Prabhu¹* and R. Nagarajan²

¹*Assistant Professor, Department of Electronics and Communication Engineering,
Gnanamani College of Technology, Anna University,
Namakkal, Tamilnadu, India-637018
*prabhur85r@gmail.com

²Professor, Department of Electrical and Electronics Engineering,
Gnanamani College of Technology, Anna University,
Namakkal, Tamilnadu, India-637018

Abstract: During the past few years, iris recognition has been a trending research topic owing to its broad security applications, from airports to homeland-security border control. Nevertheless, because of the high cost of equipment and several shortcomings of existing modules, iris recognition has not been applied in real life at large scale. Moreover, iris-region segmentation methods contend with issues such as invalid off-axis rotations and irregular reflections in the eye region. To address these issues, an iris-recognition-enabled Adaptive Neuro Fuzzy FractalNet (ANFFractalNet) is designed. In this investigation, a Kuwahara filter and Region of Interest (RoI) extraction are employed to pre-process the image. Moreover, the Daugman rubber sheet model is considered for segmenting the pre-processed image, and feature extraction is then performed to reduce the dimensionality of the data. Hence, in this framework, iris recognition is performed utilizing the module named ANFFractalNet. Furthermore, the efficacy of ANFFractalNet is evaluated with analytic metrics, namely Accuracy, False Acceptance Rate (FAR) and False Rejection Rate (FRR), which obtain effectual values of 91.594%, 0.537% and 2.482%.


Keywords: Iris recognition, Kuwahara Filter, Daugman Rubber sheet model, Adaptive Neuro Fuzzy Inference System, FractalCovNet.

1. Introduction

The human iris is a dominant biometric pattern with the potential to deliver high identification precision at a minimal false match rate. This is owing to its composite textural pattern, which is believed to be distinctive for each eye, and to the constrained genetic penetrance of iris texture. The success of iris identification rests on these attractive physical features, which underpin the development of effectual feature descriptors, particularly the iris code established in Daugman's pioneering work and the many descriptors developed subsequently [1] [2]. Owing to its durability, long-term stability, significance, individuality and inimitability, iris identification is considered the most advanced biometric recognition technology in the current world [3]. Iris identification exploits the iris pattern, which offers a high degree of discrimination between the sclera and pupil. The iris is therefore widely used in several regions, since the iris pattern does not change with aging and is not easily damaged. When a visible-light camera is employed, iris detection can be carried out with a built-in camera device, which also has the benefit of producing a three-channel image containing color information. As a result, investigations are being performed on iris detection using iris images obtained from face images taken by the high-resolution visible-light cameras of smartphones [4]. Owing to the distinctiveness and flexibility of the iris pattern, it is widely valued for its dependability in applications extending from privacy and access control to border control and healthcare.


Iris texture patterns are believed to be randomly determined during fetal development of the eye and to be invariant to age [5]. Thus, the iris pattern of each eye generally serves as a distinct biometric feature, even between twins. As one of the safest and most dependable biometric recognition modalities, iris identification has been widely employed in the banking sector, mobile phones and border security control [6]. The benefits of iris identification have motivated many efforts to investigate precise and effectual iris feature extraction techniques under several environments [7]. Identification and authentication are widely processed with biometric techniques, which are also employed to elevate the security and privacy involved. Biometric measures refer to physical characteristic features that are employed to discriminate a person [2]. These metrics deploy behavioral and physical features for differentiating and detecting persons, so biometric techniques are deployed as an effectual privacy solution wherein the features cannot be lost, faked or stolen [8]. The scientific community has examined various biometric techniques for human detection, namely hand geometry, face, palm print, voice, gait, fingerprint, retina, iris and so on. These entities are more reliable than conventional privacy techniques, even though each has particular constraints [9]. Compared with other traditional biometric techniques, including face and fingerprint identification, iris recognition is secure and more hygienic owing to its limited exposure and non-contact behavior [7].

In recent years, the application of Deep Learning (DL) technology to the image detection domain has attracted great interest. While iris detection based on Machine Learning (ML) has provided effectual outcomes in biometric authentication, some disadvantages must be considered. The first is the need for high-quality iris images for precise detection, since poor image quality may lead to recognition errors. DL-based methods have shown dominant results for various problems related to matching and capturing biometrics [10]. DL approaches such as the Convolutional Neural Network (CNN) have attained significant achievements in many computer vision tasks. The CNN is a form of non-natural intelligence with the capability to learn and extract features automatically like a human, whereas traditional neural networks were unable to investigate features automatically [11] [8]. Handcrafted feature extraction is outperformed by the CNN through its ability to automatically learn appropriate features from adequate training data [12]. Recent achievements in iris detection have investigated the probability of applying CNNs to iris image segmentation and processing [13]. Former investigations of iris identification specified that CNN-based techniques can effectually learn the inherent features of iris images and attain higher performance than the conventional iris matching technique specified by the iris code. Even though the CNN performs well on a great number of tasks, it still experiences complexities when dealing with difficult and variant tasks like iris identification [14] [10]. The achievements of these early efforts motivated the investigation of new systems to overcome the complex issues faced in real-time iris detection [7].

The goal of this exploration is to establish an iris recognition technique based on ANFFractalNet. An input iris image from the database is pre-processed employing the Kuwahara filter and RoI extraction. The pre-processed image is then segmented with the Daugman rubber sheet model. Additionally, features are extracted from the segmented image, where the extracted features are Local Directional Pattern Variance (LDPv), Fast Retina Keypoint (FREAK) and statistical features. Finally, the iris is recognized with the proposed ANFFractalNet module, which amalgamates the Adaptive Neuro Fuzzy Inference System (ANFIS) and FractalNet.

• Proposed ANFFractalNet for iris recognition: A novel iris recognition framework named ANFFractalNet is introduced, in which recognition is accomplished by merging ANFIS and FractalNet.

The remaining sections are outlined as follows: Part 2 reviews related techniques, Part 3 elucidates the developed technique's methodology, Part 4 expresses the outputs gained from the evaluation of the new module, and Part 5 concludes the technique with future work.

2. Motivation

The human iris is part of the vision organ; it regulates the amount of light reaching the retina by varying the pupil size. Iris recognition refers to the automated process of identifying individuals in terms of their iris patterns. In order to recognize the iris, several existing works have been researched by investigators so as to develop a module that overcomes their problems.

2.1 Literature Survey

Alinia Lat, R., et al. [15] introduced a light-weight MobileNet architecture with customized ArcFace and Triplet loss functions (MobileNet+ArcFace+Triplet) for iris identification. This approach attained maximal compactness within classes and discrepancy among classes owing to the incorporated loss functions. Nevertheless, it neglected to design multimodal biometrics that employ various sources of information, did not eliminate noisy data, and did not address the similarity among classes. Adnan, M., et al. [16] designed the MobileNet v2 model for iris landmark detection. This module predicted the iris localization repeatably, accurately and rapidly. However, this method attained minimal output and its response time left room for improvement. Nguyen, K., et al. [2] presented a fully complex-valued neural network for the iris recognition task. This method efficiently extracted primarily diverse features from the iris texture. Yet, it was not appropriate for explicitly progressing the phase of several domains that utilize complex filter routines for texture modeling. Saraf, T.O.Q., et al. [17] developed Variable Length Black Hole Optimization (VLBHO) for iris recognition to select a variable-length set of features. Although this approach reduced the feature space with high precision, it was unable to deploy an effectual feature selection technique with variable-length functionality.

Mostofa, M., et al. [18] established a Deep Convolutional Generative Adversarial Network (DCGAN) to enhance the precision of cross-spectral iris detection techniques. This method maximized pair-wise correlation through a contrastive loss in prognosis for precise cross-spectral iris matching. However, this technique did not have sufficient database availability for learning the cross-resolution effect and cross-spectral mismatch. Balashanmugam, T., et al. [19] introduced AlexNet for iris regional characteristics and classification. This technique obtained minimal computational time with a high level of accuracy, and it showed a uniform distribution of images between positive and negative levels. However, it neglected to implement real-time applications with appropriate hardware. Liu, G., et al. [7] designed a condensed 2-channel CNN (2-ch CNN) for detecting and verifying the iris. Even though this method was robust to diverse image contaminations, other biometrics such as fingerprint or palm identification were not determined. Wei, Y., et al. [20] presented a lightweight CNN for recognition and verification of the iris. This method effectively extracted the inner and outer boundaries of the iris. Yet, an actual embedded system did not acquire real-world iris detection, nor an optimized iris positioning technique for attaining localization under visible lighting conditions.

2.2 Challenges

The limitations of previous works on iris recognition are investigated below.

• The MobileNet+ArcFace+Triplet model introduced in [15] was employed to improve the margin among classes, which operates well on the iris recognition task. However, it failed to eradicate noise while protecting the edges and to effectually analyze particular regions, since it did not employ pre-processing stages.

• In [16], the MobileNet v2 model was presented as a robust framework designed to increase the localization accuracy of iris recognition. Nevertheless, it experienced complexities in efficiently handling deformations and neglected to guarantee robustness against rotational dissimilarities for attaining precise matching, owing to the negligence of Daugman's rubber sheet model segmentation.

• The fully complex-valued network utilized automatic complex-valued feature learning to integrate advanced DL modules with expert knowledge in iris recognition. Nonetheless, despite providing increased flexibility, it did not enhance accuracy, resilience and adaptability using multimodal feature extraction techniques, which are significant for handling variations and noise [2].

• The VLBHO method developed in [17] enabled segment-based feature decomposition based on relevance, leading to more effectual memory and computational optimization, and also provided precise outcomes in terms of convergence. Yet, the major limitation was the inability to learn complex non-linear mappings between input features (iris patterns) and output classes (individual identities), since it was not incorporated with hybrid DL approaches.

• Conventional iris recognition techniques experience many real-time complexities that affect their performance and applicability in several practical scenarios: rapid processing of images; dealing with occlusions, namely eyeglasses, eyelids or reflections, which make part of the iris image difficult to interpret; and ensuring the integrity of iris recognition techniques against spoofing attacks. Hence, designing an efficient module for iris recognition still remains a challenging task.


3. Proposed ANFFractalNet for iris recognition

The iris is the internal tissue of the protected eye, behind the cornea and crystalline fluid, forming a fabric-like ring of several colors around the pupil. Every iris comprises specific features in terms of lens, spots, wrinkles and stripes. The major intention of this investigation is to bridge the shortcomings of prior iris recognition techniques, which involve noise, occlusion and image quality. Thus, in this research an iris recognition module is developed using ANFFractalNet. At first, an input iris image taken from the database [21] is fed to the pre-processing phase for removing irrelevant noise and external artifacts utilizing the Kuwahara filter [22] and RoI extraction [23]. Afterwards, the filtered image is forwarded to the segmentation phase, which is carried out by employing the Daugman rubber sheet model [24] [25]. Thereafter, the segmented image is subjected to feature extraction. Here, the extracted features are Local Directional Pattern Variance (LDPv) [26], Fast Retina Keypoint (FREAK) [27] and statistical features [28] [29] [30] involving mean, variance, entropy, contrast, energy and homogeneity. Finally, iris recognition is performed employing ANFFractalNet, which is introduced by combining ANFIS [31] and FractalNet [32]. Figure 1 states the functional pattern of ANFFractalNet for iris recognition.
[Figure 1 shows the overall pipeline: the input iris image is pre-processed with the Kuwahara filter and RoI extraction, segmented with the Daugman rubber sheet model, reduced to LDPv, FREAK and statistical features, and finally recognized by the proposed Adaptive Neuro Fuzzy FractalNet (ANFFractalNet), which combines ANFIS and FractalNet.]

Figure 1. Schematic outlook of ANFFractalNet for iris recognition

3.1 Image Acquisition

The database R considered for iris recognition comprises n images, computed as

$$R = \{R_1, R_2, \ldots, R_m, \ldots, R_n\} \qquad (1)$$

Here, $R_n$ signifies the total number of images and $R_m$ specifies the $m$-th iris image, which is taken for the entire process.
3.2 Image pre-processing using Kuwahara filter and RoI extraction

This stage manipulates the unrefined image into a usable and meaningful format, which serves both to eradicate redundant distortions and to improve certain qualities present in the image. This phase is processed with the input $R_m$, in which the inappropriate noise is eliminated by the Kuwahara filter and RoI extraction.

3.2.1 Kuwahara filter

Kuwahara filtering [22] smooths the image without degrading the quality of edges and contours, which is very significant for the processing and evaluation of the image. The filter divides the filter window into four regions $\omega_\beta$, $\beta \in \{1, 2, 3, 4\}$, with a window dimension of $3 \times 3$. When the window dimension is specified as $(2o+1) \times (2o+1)$, the dimension of each region is $(o+1) \times (o+1)$, where $o$ implies the window filter dimension. When the window filter is $3 \times 3$, then $o = 1$ and the region dimension is $2 \times 2$.

Consider that the mean of every region is $\mu_\beta$ and its variance is $\sigma_\beta^2$, which are expressed by

$$\mu_\beta = \frac{1}{(o+1)(o+1)} \sum_{(x,y)\,\in\,\omega_\beta} g(x,y) \qquad (2)$$

$$\sigma_\beta^2 = \frac{1}{(o+1)(o+1)} \sum_{(x,y)\,\in\,\omega_\beta} \big(g(x,y) - \mu_\beta\big)^2 \qquad (3)$$

Here, $\beta \in \{1, 2, 3, 4\}$, $(x, y)$ indicates the pixel coordinates, and $g(x, y)$ represents the gray value of the individual pixel.

Comparing the variances of the four regions, the region with the minimal dissimilarity is selected, and the gray value of the center pixel becomes the average of that region. By increasing the dimension of the window filter, superior filtering of the image is attained; the filtered image is implied by $K_m$.
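As a concrete illustration, the following is a minimal NumPy sketch of the classical 3×3 Kuwahara filter described above (four overlapping 2×2 regions per window, Eqs. (2)-(3)); the function name and the plain-loop style are illustrative rather than the paper's implementation.

```python
import numpy as np

def kuwahara_3x3(image: np.ndarray) -> np.ndarray:
    """3x3 Kuwahara filter: replace each interior pixel with the mean of
    whichever of its four 2x2 sub-regions has the lowest variance."""
    img = image.astype(np.float64)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # four overlapping (o+1)x(o+1) = 2x2 regions of the 3x3 window
            regions = [
                img[y - 1:y + 1, x - 1:x + 1],  # top-left
                img[y - 1:y + 1, x:x + 2],      # top-right
                img[y:y + 2, x - 1:x + 1],      # bottom-left
                img[y:y + 2, x:x + 2],          # bottom-right
            ]
            variances = [r.var() for r in regions]   # Eq. (3) per region
            out[y, x] = regions[int(np.argmin(variances))].mean()  # Eq. (2)
    return out
```

Edge pixels are simply copied here; a production implementation would vectorize the region statistics instead of looping.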

3.2.2 ROI extraction

The filtered image $K_m$ is forwarded to RoI extraction for extracting the interesting regions. The RoI [23] is computed utilizing pixel intensity values with a resulting mask. This process is exploited for acquiring the concerned regions while discarding tedious ones. The pixel intensity value is utilized as a density portion, in which the neighboring recognition of a pixel determines the value 1 or 0. The resultant is indicated as $E_m$.
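Since the paper does not spell out the exact masking rule, the sketch below assumes a simple intensity threshold to build the binary mask described above; the function name and the `threshold` parameter are illustrative.

```python
import numpy as np

def roi_extract(filtered: np.ndarray, threshold: float) -> np.ndarray:
    """Binary RoI masking: pixels at or above the threshold get mask value 1,
    others get 0, and the mask is applied multiplicatively to keep the RoI."""
    mask = (filtered >= threshold).astype(filtered.dtype)
    return filtered * mask
```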

3.3 Image segmentation using Daugman Rubber Sheet

This operation partitions an image into a number of segment regions, which frequently changes the representation of the image. Here, the input $E_m$ is fed to the segmentation phase in order to divide the image into segments wherein every pixel is mapped to an object; this is conducted with the Daugman rubber sheet model.

3.3.1 Daugman rubber sheet

This technique [24] [25] is exploited for iris image segmentation. Here, the pupil's center point is assumed as the reference point, from which radial vectors pass through the iris region. The radial lines placed across the iris region define the angular resolution. As the pupil is non-concentric with the iris, a general expression is needed for remapping points according to a direction around the circle. The method is applied for segmenting the iris region, where the entire region is specified by the grey values of its pixels. This information is computed from the combination of the inner and outer boundaries' coordinates. The technique remaps every point in the iris region to a pair of polar coordinates $(q, \theta)$, where $\theta$ is an angle in the range $[0^\circ, 360^\circ]$ and $q$ lies in the range $[0, 1]$.

Let $z(i, j)$ be an iris image specified in Cartesian coordinates and $z(q, \theta)$ its illustration in polar coordinates. With $(i_s, j_s)$ and $(i_\ell, j_\ell)$ expressed as the points of the inner and outer boundaries in Cartesian coordinates, the remapping is formulated by

$$z\big(i(q,\theta),\, j(q,\theta)\big) = z(q, \theta) \qquad (4)$$

$$i(q,\theta) = (1-q)\, i_s(\theta) + q\, i_\ell(\theta) \qquad (5)$$

$$j(q,\theta) = (1-q)\, j_s(\theta) + q\, j_\ell(\theta) \qquad (6)$$

Here, $q = \frac{k}{Y+1}$ with $k = 1, 2, \ldots, Y$, and $\theta = \frac{360^\circ\, l}{Z}$ with $l = 1, 2, \ldots, Z$, where $Y$ and $Z$ represent the sampling rates in the radial and angular directions. The algorithmic steps considered for this model are demonstrated as follows.

Stage 1: Based on the boundary localization of the iris image $z(i, j)$, the factors $(i_s, j_s, q_s)$ and $(i_\ell, j_\ell, q_\ell)$ are achieved. Here, the subscripts $s$ and $\ell$ specify the inner and outer boundaries.

Stage 2: The distance between the iris and pupil centers is determined using

$$q = \sqrt{(i_s - i_\ell)^2 + (j_s - j_\ell)^2} \qquad (7)$$

Stage 3: The connection direction angle is expressed by

$$\eta = \arctan\frac{j_s - j_\ell}{i_s - i_\ell} \qquad (8)$$

Stage 4: The pupil's center is chosen as the pole of the polar coordinates $q(\theta) = q_k$. For the iris outer boundary, with $\theta = \frac{\pi}{180}\, l$, $l = 1, 2, \ldots, Z$,

$$W(\theta) = q\cos(\pi - \eta - \theta) + \sqrt{q_\ell^2 - q^2 + q^2\cos^2(\pi - \eta - \theta)} \qquad (9)$$

Stage 5: The grey value of every pixel is attained from the grey value at location $(i, j)$, which is determined by

$$W_k = \left(1 - \frac{k}{Y+1}\right) q_s + \frac{k}{Y+1}\, W(\theta) \qquad (10)$$

$$i = P_k + W_k \cos\theta, \qquad j = Q_k + W_k \sin\theta \qquad (11)$$

The segmented image is symbolized as $D_m$. The illustration of the Daugman rubber sheet is represented in Figure 2.

[Figure 2 sketches the unwrapping: the annular region between the inner (pupil) and outer boundaries, sampled along q ∈ [0, 1] and θ ∈ [0°, 360°], becomes a rectangular strip.]

Figure 2. Demonstration of Daugman rubber sheet
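A minimal sketch of the normalization of Eqs. (4)-(6) follows, assuming the inner (pupil) and outer (iris) boundary circles have already been localized and lie fully inside the image; parameter names such as `pupil_xy` are illustrative, and the off-center correction of Eqs. (7)-(11) is omitted for brevity.

```python
import numpy as np

def rubber_sheet(eye: np.ndarray, pupil_xy, pupil_r, iris_xy, iris_r,
                 radial_res: int = 64, angular_res: int = 360) -> np.ndarray:
    """Unwrap the annular iris region into a fixed-size polar rectangle:
    each (q, theta) sample is interpolated linearly between the inner
    (pupil) and outer (iris) boundary points, per Eqs. (5)-(6)."""
    thetas = np.linspace(0.0, 2 * np.pi, angular_res, endpoint=False)
    qs = np.linspace(0.0, 1.0, radial_res)
    strip = np.zeros((radial_res, angular_res), dtype=eye.dtype)
    for col, theta in enumerate(thetas):
        # boundary points on the inner and outer circles for this angle
        xi = pupil_xy[0] + pupil_r * np.cos(theta)
        yi = pupil_xy[1] + pupil_r * np.sin(theta)
        xo = iris_xy[0] + iris_r * np.cos(theta)
        yo = iris_xy[1] + iris_r * np.sin(theta)
        for row, q in enumerate(qs):
            x = (1 - q) * xi + q * xo   # Eq. (5)
            y = (1 - q) * yi + q * yo   # Eq. (6)
            strip[row, col] = eye[int(round(float(y))), int(round(float(x)))]
    return strip
```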

3.4 Feature Extraction based on segmented image

Feature extraction transforms the original data into numerical features and reduces the amount of irrelevant data. Here, the input $D_m$ is used to extract the suitable features mentioned below.

3.4.1 LDPv

In this step, the segmented image $D_m$ is used to extract the LDPv feature [26], which encodes the local structure distribution: low-contrast structures contribute uniformly with high-contrast ones in the LDPv histogram. It is determined as

$$T_{m1} = LDPv(\tau) = \sum_{a=1}^{\chi}\sum_{b=1}^{\psi} \omega\big(LDP(a,b),\, \tau\big) \qquad (12)$$

$$\omega\big(LDP(a,b),\, \tau\big) = \begin{cases} \sigma\big(LDP(a,b)\big), & LDP(a,b) = \tau \\ 0, & \text{otherwise} \end{cases} \qquad (13)$$

$$\sigma\big(LDP(a,b)\big) = \frac{1}{8}\sum_{d=0}^{7} \big(c_d - \bar{c}\big)^2 \qquad (14)$$

Here, $\bar{c}$ signifies the mean of the directional responses $c_d$ computed at position $(a, b)$, $\sigma$ specifies the variance, $\chi \times \psi$ symbolizes the image dimension, $\tau$ represents the LDP code value, and the textural feature attained from LDPv is implied as $T_{m1}$.
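The sketch below illustrates Eqs. (12)-(14) with Kirsch compass masks, the usual choice of directional responses for LDP; the top-k coding rule and the 256-bin histogram are common conventions rather than details fixed by the paper.

```python
import numpy as np
from scipy.ndimage import convolve

# Kirsch edge masks for the eight compass directions
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldpv_histogram(image: np.ndarray, k: int = 3) -> np.ndarray:
    """LDPv per Eqs. (12)-(14): each pixel's LDP code marks the k strongest
    Kirsch responses, and the code's histogram bin accumulates the variance
    of the eight directional responses instead of a plain count."""
    responses = np.stack([convolve(image.astype(np.float64), m) for m in KIRSCH])
    order = np.argsort(np.abs(responses), axis=0)   # per-pixel direction ranking
    codes = np.zeros(image.shape, dtype=np.int64)
    for d in range(8 - k, 8):                       # set bits of the k largest
        codes |= (1 << order[d])
    sigma = responses.var(axis=0)   # Eq. (14): variance of c_d around their mean
    hist = np.zeros(256)
    np.add.at(hist, codes.ravel(), sigma.ravel())   # Eqs. (12)-(13) accumulation
    return hist
```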

3.4.2 FREAK

The input $D_m$ is subjected to FREAK [27], which is a compact and robust descriptor. A flow of binary strings is computed by comparing pairs of smoothed image intensities over the retinal sampling pattern. Additionally, selecting the pairs so as to reduce the descriptor dimension provides a highly structured pattern. It is indicated as

$$T_{m2} = \sum_{0 \le \eta < \nu} 2^{\eta}\, \Gamma(P_\eta) \qquad (15)$$

$$\Gamma(P) = \begin{cases} 1, & \text{if } \big(I(P^{r_1}) - I(P^{r_2})\big) > 0 \\ 0, & \text{otherwise} \end{cases} \qquad (16)$$

where $\nu$ indicates the descriptor dimension and $P$ a pair of receptive fields. Here, $I(P^{r_1})$ implies the smoothed intensity of the first receptive field of the pair $P$; the textural feature attained from FREAK is specified as $T_{m2}$.

Thus, the textural features are indicated as

$$T_m = \{T_{m1}, T_{m2}\} \qquad (17)$$

Then, statistical features are applied over $D_m$ for attaining the feature vector $F_m$.
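In practice, the FREAK descriptor of Eqs. (15)-(16) is available in OpenCV's contrib module; the configuration below, with an illustrative FAST keypoint detector, is one possible setup rather than the paper's exact pipeline.

```python
import cv2

def freak_descriptors(gray):
    """Compute FREAK binary descriptors over detected keypoints.
    Requires opencv-contrib-python for the xfeatures2d module."""
    detector = cv2.FastFeatureDetector_create(threshold=10)
    keypoints = detector.detect(gray, None)
    freak = cv2.xfeatures2d.FREAK_create()
    keypoints, descriptors = freak.compute(gray, keypoints)
    # descriptors: one 64-byte binary string per keypoint (512 pair tests,
    # each bit being the sign comparison of Eq. (16))
    return descriptors
```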


3.4.3 Statistical features

The features exploited in this phase are elucidated as follows.

a) Mean

It [28] specifies the data concentration of a distribution, which is specified as

$$f_1 = \frac{1}{\gamma\,\delta} \sum_{p=1}^{\gamma}\sum_{q=1}^{\delta} D_m(p, q) \qquad (18)$$

Here, $f_1$ defines the mean and $\gamma \times \delta$ implies the size of the image $D_m(p, q)$.

b) Variance

It [28] defines the dissimilarity of the grey-level values relative to the mean grey level, which is manifested as

$$f_2 = \frac{1}{\gamma\,\delta} \sum_{p=1}^{\gamma}\sum_{q=1}^{\delta} \big(D_m(p, q) - f_1\big)^2 \qquad (19)$$

Here, $f_2$ exploits the variance.

c) Entropy

It [29] computes the improbability linked with arbitrary variables; the entropy of the gray-scale image is employed to differentiate the input image's texture, determined by

$$f_3 = -\sum_{p}\sum_{q} D_m(p, q)\, \log_2 D_m(p, q) \qquad (20)$$

Here, $f_3$ expresses the entropy.

d) Contrast

It [30] determines the local contrast of the input image, which is supposed to be low when the gray levels of every pixel pair are equivalent, computed as

$$f_4 = \sum_{p}\sum_{q} (p - q)^2\, D_m(p, q) \qquad (21)$$

Here, $f_4$ signifies the contrast.

e) Energy

It [30] estimates the total repeated pairs, which is supposed to be maximal when the occurrence of repeated pixel pairs is maximal, formulated by

$$f_5 = \sum_{p}\sum_{q} D_m(p, q)^2 \qquad (22)$$

Here, $f_5$ explicates the energy.

f) Homogeneity

It [30] evaluates the local homogeneity of a pixel pair, which is supposed to be maximal when the gray levels of every pixel pair are equivalent, computed by

$$f_6 = \sum_{p}\sum_{q} \frac{D_m(p, q)}{1 + |p - q|} \qquad (23)$$

Here, $f_6$ depicts the homogeneity.

Thus, the feature vector is manifested as

$$F_m = \{f_1, f_2, \ldots, f_6\} \qquad (24)$$
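One possible realization of Eqs. (18)-(24), assuming an 8-bit grey-level image and using scikit-image's grey-level co-occurrence matrix for the last three features; note that skimage's homogeneity uses $(p-q)^2$ in the denominator, a close variant of Eq. (23).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def statistical_features(seg: np.ndarray) -> np.ndarray:
    """Build the six-element vector F_m of Eq. (24) from a uint8 image."""
    f1 = seg.mean()                                   # Eq. (18) mean
    f2 = seg.var()                                    # Eq. (19) variance
    p = np.bincount(seg.ravel(), minlength=256) / seg.size
    p = p[p > 0]                                      # drop empty bins
    f3 = -np.sum(p * np.log2(p))                      # Eq. (20) over grey-level
                                                      # distribution
    glcm = graycomatrix(seg, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    f4 = graycoprops(glcm, 'contrast')[0, 0]          # Eq. (21)
    f5 = graycoprops(glcm, 'ASM')[0, 0]               # Eq. (22): sum of squares
    f6 = graycoprops(glcm, 'homogeneity')[0, 0]       # Eq. (23) variant
    return np.array([f1, f2, f3, f4, f5, f6])
```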

3.5 Iris recognition using ANFFractalNet

Iris recognition has advanced into a consistent and suitable biometric system that is broadly employed in privacy applications. Even though DL-based iris recognition techniques attain better accuracy, the aim here is also to manage the system's complexity. Hence, the iris recognition technique is designed employing ANFFractalNet.

In Figure 3, the ANFFractalNet is designed, which is composed of the ANFIS model, the ANFFractalNet layer and FractalNet. First, the input $A_m$, comprising the pre-processed outcome $E_m$ and the segmented outcome $D_m$, is subjected to the ANFIS module, which yields the output $B_{m1}$. The ANFFractalNet layer then operates on the previous model output $B_{m1}$ and on $X_m$, which contains the textural features $T_m$ and the feature vector $F_m$, and produces $B_{m2}$. Moreover, the input $E_m$ together with $B_{m2}$ is fed to the FractalNet module, which is processed to provide the final outcome $B_{m3}$.

[Figure 3 shows the three stages in series: the ANFIS model maps the input A_m to B_m1; the ANFFractalNet layer fuses B_m1 with X_m through fusion and regression to give B_m2; FractalNet then maps E_m and B_m2 to the recognized output B_m3.]

Figure 3. Overall structure of ANFFractalNet

3.5.1 ANFIS model

ANFIS [31] is based on the integration of Artificial Neural Networks (ANN) and Fuzzy Inference Systems (FIS); this module is applied for classifying images into different class labels. The method has the ability of fast learning to capture non-linear systems and variations, and it is exploited for optimizing parameters. The designed technique is effectual for high-frequency data; the network resolves complex, non-linear issues and provides low error. It is governed by two first-order Sugeno rules:

• Rule 1: If $M$ is $I_1$ and $N$ is $J_1$, then $t_1 = u_1 M + v_1 N + w_1$.

• Rule 2: If $M$ is $I_2$ and $N$ is $J_2$, then $t_2 = u_2 M + v_2 N + w_2$.

Here, $u_1, u_2, v_1, v_2, w_1, w_2$ refer to the linear factors and $I_1, I_2, J_1, J_2$ define the membership functions of the inputs $M, N$. The structure is composed of five layers, which are determined mathematically as follows.

Layer 1 is determined as

$$\mu_I(M) = \exp\left(-\left(\frac{M - \rho}{2\varsigma}\right)^2\right) \qquad (25)$$

$$\mu_I(M) = \frac{1}{1 + \left(\frac{M - \rho}{\varsigma}\right)^2} \qquad (26)$$

where $\rho$ and $\varsigma$ denote the center and width parameters of the membership function, and

$$O_{1,\phi} = \mu_{I_\phi}(E_m), \quad \phi = 1, 2 \qquad (27)$$

$$O_{1,\phi} = \mu_{J_{\phi-2}}(D_m), \quad \phi = 3, 4 \qquad (28)$$

Layer 2 is computed as

$$O_{2,\phi} = \lambda_\phi = \mu_{I_\phi}(M)\, \mu_{J_\phi}(N), \quad \phi = 1, 2 \qquad (29)$$

Layer 3 is expressed as

$$O_{3,\phi} = \bar{\lambda}_\phi = \frac{\lambda_\phi}{\lambda_1 + \lambda_2} \qquad (30)$$

Layer 4 is computed as

$$O_{4,\phi} = \bar{\lambda}_\phi\, t_\phi = \bar{\lambda}_\phi \big(u_\phi M + v_\phi N + w_\phi\big) \qquad (31)$$

Here, $\bar{\lambda}_\phi$ signifies the standardized firing strength from the previous layer and $u_\phi M + v_\phi N + w_\phi$ specifies the variable in the node.

Layer 5, the final output of the ANFIS module, is calculated as

$$B_{m1} = \sum_\phi \bar{\lambda}_\phi\, t_\phi = \frac{\sum_\phi \lambda_\phi\, t_\phi}{\sum_\phi \lambda_\phi} \qquad (32)$$

Thus, the resultant of ANFIS is indicated as $B_{m1}$ and its general outline is elucidated in Figure 4.

[Figure 4 sketches the five-layer ANFIS topology: input memberships (Layer 1), rule firing strengths (Layer 2), normalization (Layer 3), weighted consequents (Layer 4) and the summed output (Layer 5).]

Figure 4. General outline of ANFIS
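To make the five layers concrete, here is a minimal forward pass for the two-rule first-order Sugeno system above, assuming the Gaussian memberships of Eq. (25); all parameter values in the usage lines are illustrative.

```python
import numpy as np

def anfis_forward(M, N, premise, consequent):
    """First-order Sugeno ANFIS forward pass for two rules.
    premise[r] = ((rho, sigma) for I_r, (rho, sigma) for J_r), Eq. (25);
    consequent[r] = (u, v, w) of the linear rule output, Eq. (31)."""
    lam = []
    for (rho_m, sig_m), (rho_n, sig_n) in premise:
        mu_I = np.exp(-((M - rho_m) / (2 * sig_m)) ** 2)   # layer 1
        mu_J = np.exp(-((N - rho_n) / (2 * sig_n)) ** 2)
        lam.append(mu_I * mu_J)                            # layer 2, Eq. (29)
    lam = np.array(lam)
    lam_bar = lam / lam.sum()                              # layer 3, Eq. (30)
    t = np.array([u * M + v * N + w for u, v, w in consequent])
    return float(np.sum(lam_bar * t))                      # layers 4-5, Eq. (32)

# usage: two rules, each with (I, J) Gaussian premises and linear consequents
premise = [((0.2, 0.5), (0.4, 0.5)), ((0.8, 0.5), (0.6, 0.5))]
consequent = [(1.0, 0.5, 0.1), (-0.3, 1.2, 0.0)]
print(anfis_forward(0.35, 0.7, premise, consequent))
```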

3.5.2 ANFFractalNet layer

This layer is functioned with the inputs $B_{m1}$ and $X_m$. Moreover, fusion and regression are conducted in this layer: the two networks are incorporated by the fusion technique, and the similarity of the targeted and recognized outputs is validated by the regression technique. The arithmetic calculation of the ANFFractalNet layer is computed as follows:

$$\Pi = \sum_{\epsilon=1}^{2} \omega_\epsilon\, T_{m\epsilon} \qquad (33)$$

$$\Pi_1 = \sum_{\epsilon=1}^{g} \omega_\epsilon\, F_m(\epsilon) \qquad (34)$$

Here, $\omega$ symbolizes the weights, $T_m$ implies the textural features, $F_m$ indicates the feature vector, $\Pi$ represents the output at the $e$-th interval and $\Pi_1$ defines the output at the $(e-1)$-th interval. By utilizing fractional calculus, the update is expressed as

$$\xi(e+1) = \xi(e) + \frac{1}{2}\,\xi(e-1) + \frac{1}{6}\,\xi(e-2) \qquad (35)$$

Assume

$$\xi(e) = \Pi \qquad (36)$$

$$\xi(e-1) = \Pi_1 \qquad (37)$$

$$\xi(e-2) = B_{m1} \qquad (38)$$

$$\xi(e+1) = B_{m2} \qquad (39)$$

Substituting Eqs. (36), (37), (38) and (39) in Eq. (35), the expression becomes

$$B_{m2} = \Pi + \frac{1}{2}\,\Pi_1 + \frac{1}{6}\,B_{m1} \qquad (40)$$

Substituting Eqs. (32), (33) and (34) in Eq. (40), the resultant is determined as

$$B_{m2} = \sum_{\epsilon=1}^{2} \omega_\epsilon\, T_{m\epsilon} + \frac{1}{2} \sum_{\epsilon=1}^{g} \omega_\epsilon\, F_m(\epsilon) + \frac{1}{6}\, \frac{\sum_\phi \lambda_\phi\, t_\phi}{\sum_\phi \lambda_\phi} \qquad (41)$$

Therefore, the output of the ANFFractalNet layer is implied as $B_{m2}$.
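A minimal numerical sketch of the fusion of Eqs. (33)-(40) follows, under the reading that $\Pi$ and $\Pi_1$ are weighted sums of the textural features and the statistical feature vector; all weights and inputs shown are illustrative.

```python
import numpy as np

def weighted_sum(weights, features):
    """Weighted feature aggregation used for the Pi terms of Eqs. (33)-(34)."""
    return float(np.dot(np.asarray(weights), np.asarray(features)))

def anffractalnet_layer(t_m, f_m, b_m1, w_t, w_f):
    """Fusion step of Eq. (40): combine the weighted textural response (Pi),
    the weighted feature-vector response (Pi_1) and the ANFIS output B_m1
    with the fixed 1, 1/2 and 1/6 coefficients of the series in Eq. (35)."""
    pi = weighted_sum(w_t, t_m)      # Eq. (33): the two textural features T_m
    pi_1 = weighted_sum(w_f, f_m)    # Eq. (34): the six statistics F_m
    return pi + 0.5 * pi_1 + (1.0 / 6.0) * b_m1   # Eq. (40)

# usage with illustrative numbers: two textural responses, six statistics
print(anffractalnet_layer([0.7, 0.3], [0.1] * 6, 0.42,
                          w_t=[0.6, 0.4], w_f=[1 / 6] * 6))
```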

3.5.3 FractalNet model

FractalNet [32] is developed on the basis of the U-Net structure and is composed of two paths for expansion and contraction. The network contains two U-shaped models, where features from the first contracting path are transmitted to the last expanding path using a fractal block.

Consider $C_m$ as an input image for this network, which is forwarded to the first contracting path with a set of convolutional (conv) layers $U_1$ to $U_5$. After that, the resultant is subjected to the bottleneck layer $U_6$, whose resultant is allowed to the expanding path illustrated from $U_7$ to $U_9$. Then, the first expanding path's output is fed to the second bottleneck layer $V_1$, whose resultant is given to the contracting path $V_2$ to $V_4$. Here, the third bottleneck layer of the module is symbolized as $V_5$. The final expanding path of this network is formed using the layers $V_6$ to $V_9$, which provides the recognized output for the given $C_m$.

From the first contracting layers, the features are forwarded through two sets of conv layers before reaching the last expanding path, which facilitates accurate localization integrated with the extraction of appropriate information. U-Net incorporates the localization information from the downsampling path with suitable information attained from the upsampling path; from the contracting and expanding paths, perspective information and localization are achieved. Every layer in the first contracting path is combined with the expanding path's features, progressed by a conv block and then integrated with features acquired from the second U-Net's downsampling path. The features from $U_5$ are incorporated with the upsampling features from $U_6$, operated on by the conv block at $U_7$, and exhibited as input to $U_8$ and $V_4$. The features of $V_4$ follow a similar process and are forwarded to $V_6$, which acts as the fractal block. This operates on features at several levels of convolution and therefore facilitates the module to precisely compute the recognition regions of the image. Once the final conv and pooling layer processing is conducted, the sigmoid activation function is applied for pixel-value classification and localization of the iris in the image. The resultant of this FractalNet is determined in the form of

$$B_{m3} = FC\big(E_m \oplus B_{m2}\big) \qquad (42)$$

Here, $FC$ denotes the fully connected layer; it is used since the input image is concatenated and subjected to this layer for recognition. By substituting Eq. (41) in Eq. (42), the resultant becomes

$$B_{m3} = FC\left(E_m \oplus \left[\sum_{\epsilon=1}^{2} \omega_\epsilon\, T_{m\epsilon} + \frac{1}{2} \sum_{\epsilon=1}^{g} \omega_\epsilon\, F_m(\epsilon) + \frac{1}{6}\, \frac{\sum_\phi \lambda_\phi\, t_\phi}{\sum_\phi \lambda_\phi}\right]\right) \qquad (43)$$

Here, $\oplus$ symbolizes the concatenation of the pre-processed image and the previous layer's output. The categorical cross-entropy loss function is employed for training, which is depicted as

$$loss = -\sum_{m=1}^{L} B_{m3}^{*}\, \log\big(B_{m3}\big) \qquad (44)$$

Here, $B_{m3}$ indicates the recognized output, $B_{m3}^{*}$ implies the ground-truth label, and $L$ signifies the total number of classes. Figure 5 illustrates the demonstration of FractalNet.

Figure 5. Demonstration of FractalNet
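For reference, the training loss of Eq. (44) can be sketched as follows; the clipping constant is a standard numerical safeguard, not a detail from the paper.

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps: float = 1e-12) -> float:
    """Eq. (44): -sum over classes of (ground truth) * log(prediction),
    with clipping so log() stays finite for zero-probability classes."""
    y_pred = np.clip(np.asarray(y_pred, dtype=np.float64), eps, 1.0)
    return float(-np.sum(np.asarray(y_true) * np.log(y_pred)))

# usage: one-hot ground truth over L classes vs. predicted probabilities
print(categorical_cross_entropy([0, 1, 0], [0.1, 0.8, 0.1]))  # ~0.223
```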

4. Results and Discussion

The outcomes of ANFFractalNet, achieved by comparison against existing methods on the analytic metrics, together with the implementation details, are expressed in the sections beneath.

4.1 Experimental Setup

ANFFractalNet is implemented in Python.


4.2 Experimental Outcomes

The image results of ANFFractalNet are demonstrated in Figure 6, which presents two example inputs together with their pre-processed and segmented counterparts.

Figure 6. Image results of the ANFFractalNet module: a) input image-1, b) input image-2, c) preprocessed image-1, d) preprocessed image-2, e) segmented image-1, f) segmented image-2
4.3 Database Description


The UBIRIS database [21] has 1877 images gathered from 241 individuals during September 2004 in two different sessions. Its prime feature stems from the fact that, unlike previous public and free datasets, namely CASIA and UPOL, it incorporates images with different noise parameters, thereby allowing the analysis of the robustness of iris detection techniques.

4.4 Performance Measures

The evaluation metrics of ANFFractalNet are delineated as follows.

4.4.1 Accuracy

It [33] estimates the total accurateness of the module by evaluating the correct recognitions over the entire set of recognition values, which is computed as

$$Acc = \frac{O_P + O_N}{O_P + O_N + \bar{P} + \bar{N}} \qquad (45)$$

Here, $O_P$, $O_N$, $\bar{P}$, $\bar{N}$ depict true positives, true negatives, false positives and false negatives.

4.4.2 FAR

It [33] measures the probability that an unauthorized individual is inaccurately accepted, which is formulated by

$$FAR = \frac{G}{H} \qquad (46)$$

Here, $G$ enumerates the overall false acceptances, and $H$ explicates the identification attempts.

4.4.3 FRR

It [33] represents the probability of inaccurate rejections of an identity that is a match, which is illustrated as

$$FRR = \frac{W}{H} \qquad (47)$$

Here, $W$ refers to the overall false rejections.
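The three metrics of Eqs. (45)-(47) reduce to simple count ratios; the sketch below, with illustrative counts, shows the computation.

```python
def recognition_metrics(tp, tn, fp, fn, false_accepts, false_rejects, attempts):
    """Accuracy (Eq. 45), FAR (Eq. 46) and FRR (Eq. 47) from raw counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # Eq. (45)
    far = false_accepts / attempts               # Eq. (46): G / H
    frr = false_rejects / attempts               # Eq. (47): W / H
    return accuracy, far, frr

# usage with illustrative counts over 1000 identification attempts
print(recognition_metrics(890, 25, 35, 50, 5, 25, 1000))
```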

4.5 Performance Analysis

Here, the performance of ANFFractalNet is demonstrated in accordance with the training data and with K-fold.

4.5.1 Evaluation of ANFFractalNet based on training data

Based on different values of training data, ANFFractalNet is evaluated in accordance with the performance metrics for epochs 20 to 100, as mentioned in Figure 7. In this analysis, 90% training data is considered. The analysis with regard to accuracy for ANFFractalNet is specified in Figure 7 a): the accuracy attains 85.961%, 86.502%, 87.565%, 88.319% and 91.355% for epochs 20 to 100. Figure 7 b) articulates the ANFFractalNet examination in respect of FAR: the FAR obtained for epochs 20 to 100 is 0.641%, 0.616%, 0.595%, 0.582% and 0.547% for 90% of training data. In Figure 7 c), the assessment with respect to FRR is represented: with 90% of training data, the FRR is observed as 9.905%, 8.234%, 6.861%, 4.319% and 1.399% for epochs 20 to 100.

Figure 7. Examination of ANFFractalNet on training data, a) Accuracy, b) FAR, c) FRR

4.5.2 Evaluation of ANFFractalNet on K-fold

With diverse values of K-fold, ANFFractalNet is computed with respect to the analytic metrics, varying the epochs from 20 to 100, as specified in Figure 8. Here, the value of K-fold is assumed as 9. Figure 8 a) demonstrates the analysis based on accuracy: the accuracy acquires 85.284%, 86.405%, 87.016%, 88.378% and 91.458% for epochs 20 to 100. In Figure 8 b), the evaluation with respect to FAR is represented: the FAR attains 0.644%, 0.626%, 0.620%, 0.580% and 0.527% for epochs 20 to 100. The analysis of FRR is specified in Figure 8 c): for epochs 20 to 100, the FRR achieves 9.077%, 7.628%, 6.633%, 3.433% and 1.995%.

Figure 8. Evaluation of ANFFractalNet on K-fold, a) Accuracy, b) FAR, c) FRR

4.6 Comparative techniques

MobileNet+ArcFace+Triplet [15], MobileNet v2 [16], the fully complex-valued neural network [2] and VLBHO [17] are the comparative approaches to the proposed ANFFractalNet.

4.7 Comparative Analysis

The evaluation of ANFFractalNet is performed in terms of training data and K-fold, as expressed beneath.

4.7.1 Evaluation of ANFFractalNet on training data

Figure 9 elucidates the ANFFractalNet evaluation utilizing various values of training data; here, the training data is valued at 90%. Figure 9 a) demonstrates the accuracy: ANFFractalNet achieves 91.594%, and when compared with prior modules like MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO, the performance enhancement obtained by ANFFractalNet is 7.642%, 6.812%, 5.030% and 3.360%. Based on FAR, ANFFractalNet is examined in Figure 9 b): its FAR is 0.537%, while the traditional methods, such as MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO, yield 0.617%, 0.600%, 0.599% and 0.571%. In Figure 9 c), the estimation with FRR is depicted: the FRR of ANFFractalNet is observed as 2.482%, while the conventional approaches, namely MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO, accomplish 9.147%, 8.846%, 7.232% and 6.961%.

Figure 9. Estimation of ANFFractalNet on training data, a) Accuracy, b) FAR, c) FRR

4.7.2 Evaluation of ANFFractalNet on K-fold

Figure 10 enumerates the ANFFractalNet assessment using various values of K-fold; here, K-fold is taken as 9. The accuracy is illustrated in Figure 10 a): ANFFractalNet obtains 91.189%, and the performance enhancement achieved over the preceding techniques, like MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO, is 6.019%, 4.825%, 4.203% and 3.465%. In Figure 10 b), the evaluation relating to FAR is demonstrated: ANFFractalNet acquires 0.567%, while the former methods, such as MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO, acquire 0.647%, 0.639%, 0.622% and 0.604%. Figure 10 c) illustrates the FRR: ANFFractalNet reaches 1.291%, whereas the traditional methodologies, namely MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO, attain 8.505%, 7.339%, 5.384% and 3.788%.

Figure 10. Evaluation of ANFFractalNet on K-fold, a) Accuracy, b) FAR, c) FRR

4.8 Comparative Discussion

Table 1 mentions the analytic values achieved from the comparative evaluation based on the performance metrics. For this analysis, MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO are used as existing methods. Here, the ANFFractalNet accuracy attains 91.594%, while the former modules, MobileNet+ArcFace+Triplet, MobileNet v2, the fully complex-valued neural network and VLBHO, achieve 84.595%, 85.355%, 86.987% and 88.517%. This exhibits that the ANFFractalNet method can recognize the regions where the module needs to be improved, such as adapting the threshold values. ANFFractalNet's FAR gains 0.537%, while the other techniques achieve 0.617%, 0.600%, 0.599% and 0.571%; this demonstrates that the module can assess the risk parameters linked with an iris identification system, such as identity theft or unauthorized access. The FRR of ANFFractalNet obtains 2.482%, while the preceding techniques attain 9.147%, 8.846%, 7.232% and 6.961%; this explicates that the system helps to predict errors arising from sensor calibration issues or software bugs. Finally, this analysis proves that ANFFractalNet can balance the security requirements with the need for a smooth user experience.

Table 1. Comparative Discussion

| Alteration | Metric | MobileNet+ArcFace+Triplet | MobileNet v2 | Fully complex-valued neural network | VLBHO | Proposed ANFFractalNet |
|---|---|---|---|---|---|---|
| Training data = 90% | Accuracy (%) | 84.595 | 85.355 | 86.987 | 88.517 | 91.594 |
| | FAR (%) | 0.617 | 0.600 | 0.599 | 0.571 | 0.537 |
| | FRR (%) | 9.147 | 8.846 | 7.232 | 6.961 | 2.482 |
| K-fold = 9 | Accuracy (%) | 85.700 | 86.789 | 87.356 | 88.029 | 91.189 |
| | FAR (%) | 0.647 | 0.639 | 0.622 | 0.604 | 0.567 |
| | FRR (%) | 8.505 | 7.339 | 5.384 | 3.788 | 1.291 |

5. Conclusion

Unlike various biometrics such as the face and fingerprints, the distinctive character of the iris originates from randomly distributed features, which leads to its maximal dependability for personal recognition. However, poor anti-noise capacity in image classification means recognition is easily impacted by insignificant disturbances. Regardless of essential enhancements in iris recognition, effectual and robust operation under non-ideal conditions still faces performance issues and remains an ongoing exploration. Therefore, a novel technique for iris recognition named ANFFractalNet is designed in this research. Firstly, an input iris image is pre-processed utilizing the Kuwahara filter and RoI extraction. After that, the pre-processed image is segmented by employing the Daugman rubber sheet model. Then, the segmented image is passed to the feature extraction process. Lastly, iris recognition is accomplished by ANFFractalNet, which combines two modules, ANFIS and FractalNet. The efficaciousness of ANFFractalNet is measured with performance metrics like Accuracy, FAR and FRR, acquiring better values of 91.594%, 0.537% and 2.482%. In future work, a capsule (vector) feature learning network will be applied to deal with the iris recognition concerns of heterogeneous irises, and other vector modules will be explored in subsequent iterations.

Declaration Statements:

Funding: This research did not receive any specific funding

Conflict of Interest: The authors declare no conflict of interest

Acknowledgements: I would like to express my very great appreciation to the co-authors of

this manuscript for their valuable and constructive suggestions during the planning and

development of this research work.

Informed consent: Not Applicable

Ethical approval: Not Applicable

Author Contribution: All authors have made substantial contributions to conception and

design, revising the manuscript, and the final approval of the version to be published. Also,

all authors agreed to be accountable for all aspects of the work in ensuring that questions
related to the accuracy or integrity of any part of the work are appropriately investigated and

resolved.

Data Availability Statement:

In case of benchmark data:

The data underlying this article are available in UBIRIS dataset taken from

“http://iris.di.ubi.pt/index_arquivos/Page374.html”.

References

[1] Hu, Y., Sirlantzis, K. and Howells, G., "Optimal generation of iris codes for iris recognition", IEEE Transactions on Information Forensics and Security, vol. 12, no. 1, pp. 157-171, 2016.

[2] Nguyen, K., Fookes, C., Sridharan, S. and Ross, A., "Complex-valued iris recognition network", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 1, pp. 182-196, 2022.

[3] Kranthi Kumar, K., Bharadwaj, R., Ch, S. and Sujana, S., "Effective deep learning approach based on VGG-mini architecture for iris recognition", Annals of the Romanian Society for Cell Biology, pp. 4718-4726, 2021.

[4] Lee, M.B., Kang, J.K., Yoon, H.S. and Park, K.R., "Enhanced iris recognition method by generative adversarial network-based image reconstruction", IEEE Access, vol. 9, pp. 10120-10135, 2021.

[5] Bowyer, K.W., Hollingsworth, K. and Flynn, P.J., "Image understanding for iris biometrics: A survey", Computer Vision and Image Understanding, vol. 110, no. 2, pp. 281-307, 2008.

[6] Sheela, S.V. and Vijaya, P.A., "Iris recognition methods - survey", International Journal of Computer Applications, vol. 3, no. 5, pp. 19-25, 2010.

[7] Liu, G., Zhou, W., Tian, L., Liu, W., Liu, Y. and Xu, H., "An efficient and accurate iris recognition algorithm based on a novel condensed 2-ch deep convolutional neural network", Sensors, vol. 21, no. 11, p. 3721, 2021.

[8] Shaker, S.H., Al-Kalidi, F.Q. and Ogla, R., "Identification Based on Iris Detection Technique", International Journal of Interactive Mobile Technologies, vol. 16, no. 24, 2022.

[9] Farouk, R.H., Mohsen, H. and El-Latif, Y.M.A., "A proposed biometric technique for improving iris recognition", International Journal of Computational Intelligence Systems, vol. 15, no. 1, p. 79, 2022.

[10] He, S. and Li, X., "Enhance DeepIris Model for Iris Recognition Applications", IEEE Access, 2024.

[11] Alwawi, B.K.O.C. and Althabhawee, A.F.Y., "Towards more accurate and efficient human iris recognition model using deep learning technology", TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 20, no. 4, pp. 817-824, 2022.

[12] Ismail, N.A., Chai, C.W., Samma, H., Salam, M.S., Hasan, L., Wahab, N.H.A., Mohamed, F., Leng, W.Y. and Rohani, M.F., "Web-based university classroom attendance system based on deep learning face recognition", KSII Transactions on Internet and Information Systems (TIIS), vol. 16, no. 2, pp. 503-523, 2022.

[13] Nogay, H.S., Akinci, T.C. and Yilmaz, M., "Detection of invisible cracks in ceramic materials using by pre-trained deep convolutional neural network", Neural Computing and Applications, vol. 34, no. 2, pp. 1423-1432, 2022.

[14] Makowski, S., Prasse, P., Reich, D.R., Krakowczyk, D., Jager, L.A. and Scheffer, T., "DeepEyedentificationLive: Oculomotoric biometric identification and presentation-attack detection using deep neural networks", IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 4, pp. 506-518, 2021.

[15] Alinia Lat, R., Danishvar, S., Heravi, H. and Danishvar, M., "Boosting iris recognition by margin-based loss functions", Algorithms, vol. 15, no. 4, p. 118, 2022.

[16] Adnan, M., Sardaraz, M., Tahir, M., Dar, M.N., Alduailij, M. and Alduailij, M., "A robust framework for real-time iris landmarks detection using deep learning", Applied Sciences, vol. 12, no. 11, p. 5700, 2022.

[17] Saraf, T.O.Q., Fuad, N. and Taujuddin, N.S.A.M., "Feature encoding and selection for iris recognition based on variable length black hole optimization", Computers, vol. 11, no. 9, p. 140, 2022.

[18] Mostofa, M., Mohamadi, S., Dawson, J. and Nasrabadi, N.M., "Deep GAN-based cross-spectral cross-resolution iris recognition", IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 4, pp. 443-463, 2021.

[19] Balashanmugam, T., Sengottaiyan, K., Kulandairaj, M.S. and Dang, H., "An effective model for the iris regional characteristics and classification using deep learning alex network", IET Image Processing, vol. 17, no. 1, pp. 227-238, 2023.

[20] Wei, Y., Zhang, X., Zeng, A. and Huang, H., "Iris recognition method based on parallel iris localization algorithm and deep learning iris verification", Sensors, vol. 22, no. 20, p. 7723, 2022.

[21] UBIRIS dataset, available at "http://iris.di.ubi.pt/index_arquivos/Page374.html", accessed June 2024.

[22] Guo, P., Gong, X., Zhang, L., Li, X., He, W. and Xu, T., "An Image Denoising Algorithm based on Kuwahara Filter", In Proceedings of the 2018 Chinese Automation Congress (CAC), pp. 2307-2311, IEEE, November 2018.

[23] Shirke, M.P.P., Patil, P.R. and Potgantwar, A.D., "A Hybrid Optimization Driven Deep Learning Technique for Automated Detection of Skin Cancer Using TWCO (Taylor Water Cycle Optimization) Approach", 2022.

[24] Podder, P., Khan, T.Z., Khan, M.H., Rahman, M.M., Ahmed, R. and Rahman, M.S., "An efficient iris segmentation model based on eyelids and eyelashes detection in iris recognition system", In Proceedings of the 2015 International Conference on Computer Communication and Informatics (ICCCI), pp. 1-7, IEEE, January 2015.

[25] Shamsi, M., Saad, P.B. and Rasouli, A., "A New Iris Recognition Technique Using Daugman Method".

[26] Kabir, M.H., Jabid, T. and Chae, O., "Local directional pattern variance (LDPv): a robust feature descriptor for facial expression recognition", The International Arab Journal of Information Technology, vol. 9, no. 4, pp. 382-391, 2012.

[27] Alahi, A., Ortiz, R. and Vandergheynst, P., "FREAK: Fast retina keypoint", In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 510-517, IEEE, June 2012.

[28] Lessa, V. and Marengoni, M., "Applying artificial neural network for the classification of breast cancer using infrared thermographic images", In Computer Vision and Graphics: International Conference, ICCVG 2016, Warsaw, Poland, September 19-21, 2016, Proceedings, vol. 8, pp. 429-438, Springer International Publishing, 2016.

[29] Perveen, N., Gupta, S. and Verma, K., "Facial expression classification using statistical, spatial features and neural network", International Journal of Advances in Engineering & Technology, vol. 4, no. 1, p. 424, 2012.

[30] Mahmood, F.H. and Abbas, W.A., "Texture features analysis using gray level co-occurrence matrix for abnormality detection in chest CT images", Iraqi Journal of Science, vol. 57, no. 1A, pp. 279-288, 2016.

[31] Ragab, M., Ashary, E.B., Aljedaibi, W.H., Alzahrani, I.R., Kumar, A., Gupta, D. and Mansour, R.F., "A novel metaheuristics with adaptive neuro-fuzzy inference system for decision making on autonomous unmanned aerial vehicle systems", ISA Transactions, vol. 132, pp. 16-23, 2023.

[32] Munusamy, H., Muthukumar, K.J., Gnanaprakasam, S., Shanmugakani, T.R. and Sekar, A., "FractalCovNet architecture for COVID-19 chest X-ray image classification and CT-scan image segmentation", Biocybernetics and Biomedical Engineering, vol. 41, no. 3, pp. 1025-1038, 2021.

[33] De Mel, V.L.B., "Survey of Evaluation Metrics in Facial Recognition Systems".



Graphical Abstract
In this paper, an Adaptive Neuro Fuzzy FractalNet (ANFFractalNet) is proposed for iris recognition. The processes of the proposed approach are as follows: image preprocessing, image segmentation, feature extraction and iris recognition. Moreover, the developed ANFFractalNet is an integration of the Adaptive Neuro Fuzzy Inference System (ANFIS) and FractalNet.

[The graphical abstract reproduces the pipeline diagram of Figure 1: pre-processing, segmentation, feature extraction and ANFFractalNet-based recognition.]