Yergesh A.K.
DEVELOPMENT OF AN ADVANCED BIOMETRIC AUTHENTICATION SYSTEM USING IRIS RECOGNITION BASED ON A CONVOLUTIONAL NEURAL NETWORK
Abstract:
Accurate identification of individuals is crucial for security purposes in various domains, such as information security and public safety. Biometric methods have emerged as the most promising and relevant approaches for personal identification. This study focuses on the development and evaluation of a neural network-based method for iris-based personal identification. The proposed approach leverages convolutional neural networks to perform iris image segmentation and generate effective feature representations. The paper provides a comprehensive description of the dataset used for training the segmentation algorithms, including access to the segmentation masks for the entire dataset. Additionally, a novel method is proposed for generating feature representations using pre-trained convolutional neural networks for iris classification. A comparative analysis is conducted to evaluate different approaches for feature representation, including both classical methods and neural network-based techniques. Furthermore, various classification methods, such as support vector machines, random forests, and k-nearest neighbors, are examined and compared. The experimental results demonstrate the high classification accuracy achieved by the proposed approach.
Keywords:
biometric authentication, iris recognition, neural networks, convolutional neural networks, feature representation, image segmentation, classification algorithms, support vector machines, random forests, k-nearest neighbors, Gabor filters, iris dataset, machine learning, security systems
DOI: 10.24412/2712-8849-2024-574-615-625
Introduction.
The task of personal identification based on biometric data is important and relevant in information security. The use of biometric data significantly increases the reliability of the identification process. In recent years, identification methods based on the face [1], hand geometry [2], fingerprints [3], and other biometric traits have become widespread.

Iris-based personal identification is one of the most promising biometric technologies. The prospects of using the iris as biometric material are explained by the following characteristics [4]:
- almost every person has an iris;
- the structure of the iris practically does not change over time;
- the iris is a unique identifier: the probability that the irises of two different people match is extremely small, approximately ~10⁻⁷⁸ [4].

In the classical approach to iris-based identification, the following stages are distinguished: segmentation [5], normalization [6], formation of a feature representation, and classification.

The segmentation stage consists in locating the iris region in the eye image. This region is an annular structure situated between the pupil and the sclera. The purpose of segmentation is to determine the boundaries of the iris accurately for subsequent processing and analysis. This stage covers various methods and algorithms that delineate the iris automatically or semi-automatically. It is an important step, since the quality of segmentation directly affects the accuracy of iris-based identification. Methods used in the literature include the Hough transform for circle localization [7], the Daugman integro-differential operator [7][8][9], and methods based on the analysis of boundary points.
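As an illustration of boundary localization, the following numpy-only sketch takes a simplified view of the integro-differential idea (not the operator as used in [8]): it scans candidate radii around a fixed centre and picks the radius with the sharpest jump in mean circular intensity. The synthetic image, fixed centre, and search range are purely illustrative assumptions.

```python
import numpy as np

def circle_mean(img, cx, cy, r, n=360):
    """Mean intensity sampled along a circle of radius r centred at (cx, cy)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def boundary_radius(img, cx, cy, r_min, r_max):
    """Radius with the sharpest jump in circular mean intensity:
    a simplified, smoothing-free integro-differential search."""
    means = np.array([circle_mean(img, cx, cy, r) for r in range(r_min, r_max)])
    return r_min + 1 + int(np.argmax(np.abs(np.diff(means))))

# Synthetic eye: a dark "pupil" disk of radius 20 on a bright background.
img = np.full((100, 100), 200.0)
yy, xx = np.ogrid[:100, :100]
img[(xx - 50) ** 2 + (yy - 50) ** 2 <= 20 ** 2] = 30.0

print(boundary_radius(img, 50, 50, 5, 40))  # close to the true radius of 20
```

A full operator would also search over candidate centres and smooth the radial derivative with a Gaussian before taking the maximum.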
However, these approaches do not always cope with noise caused by overlapping eyelashes, eyelids, or glare. For this reason, researchers are interested in developing segmentation methods that deliver higher-quality results.

In many studies, the normalization stage of the iris identification task is implemented using the Daugman normalization [7][8][9].

Various methods have been used to form the feature representation of the iris, including Gabor filters [10], log-Gabor filters [10], the discrete cosine/sine transform, the wavelet transform, and others. In some studies, the feature-formation stage is followed by a dimensionality-reduction step. These methods and algorithms help to extract the characteristic features of the iris, which are then used for classification and identification.

In modern research, the classification stage is implemented using various methods, among them the random forest [11], the k-nearest-neighbour classifier [12], the support vector machine, and neural network approaches [7]. These methods are popular in machine learning, process and classify iris data efficiently, and make it possible to achieve high identification accuracy.

Fig. 1. Schematic representation of the iris identification procedure.

This article describes a method for identifying a person by the iris in which neural networks are used at the segmentation and feature-representation stages: specially designed networks both locate the iris and build its characteristic description. This approach allows automatic and efficient processing of iris images and extraction of the information that is subsequently used for identification.
The use of neural networks at these stages can improve the accuracy and reliability of the identification process.

Methods.
The iris identification method requires an image of the human iris. Such an image can be obtained in various ways, for example with an infrared camera: infrared imaging captures details and features of the iris that might not be visible in conventional video. The resulting iris image serves as input for the identification method based on the analysis and classification of these characteristics. Thus, a camera operating in the IR range is one way to obtain the images the method needs.

At the image preprocessing stage, certain operations are performed. First, the RGB image is converted to grayscale. This unifies the images and satisfies the requirements of the segmentation algorithm, which usually works with a single image format. After the conversion, scaling is applied so that all images have the same size and aspect ratio, allowing the irises to be compared and analyzed in a single coordinate system.

It is worth noting that iris identification often relies on IR imaging; color images are therefore converted to grayscale due to the need to unify data processing and analysis. The grayscale representation also makes it easier to highlight and analyze the characteristics of the iris within the identification method.

The segmentation stage is a critical component of the iris identification system, since the accuracy of the entire pipeline depends on its results. This stage receives a preprocessed image as input. Its output is a binary mask indicating, for each pixel, whether it belongs to the iris.
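The preprocessing just described, grayscale conversion followed by scaling to a common size, can be sketched in a few lines of numpy. The BT.601 luminance weights, the nearest-neighbour resampling, and the target size are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted RGB -> grayscale (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, h, w):
    """Nearest-neighbour resize so every image shares one size
    and coordinate system before segmentation."""
    ys = (np.arange(h) * img.shape[0] / h).astype(int)
    xs = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[np.ix_(ys, xs)]

rgb = np.random.rand(120, 160, 3)                # hypothetical RGB frame
gray = resize_nearest(to_grayscale(rgb), 64, 64)
print(gray.shape)  # (64, 64)
```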
In the mask, a value of 1 marks pixels belonging to the iris, and a value of 0 marks pixels that do not. By producing such a mask, the segmentation algorithm determines the boundaries of the iris in the image, enabling subsequent processing and analysis of the iris for identification.

The segmentation step is followed by normalization, which transforms the localized iris region into a unified view. This ensures the same spatial arrangement of the characteristic features of the iris. The goal of normalization is to standardize the size, shape, and position of the iris in the image, which guarantees comparability between different iris images and improves further analysis and identification.

After normalization, the normalized iris image is too redundant for direct use in a classification algorithm, so a feature representation of the data must be formed. At this stage, characteristic features that uniquely describe each instance of a class are extracted from the normalized image. The result is a feature vector: a compact, informative representation of the iris ready for use in a classifier.

The resulting feature vector is fed to the classification algorithm, which uses it to decide which class a given instance belongs to, that is, to identify the individual. The algorithm can be based on various classification methods, such as the random forest, the k-nearest-neighbour classifier, the support vector machine, or a neural network. It analyzes the feature vector and makes a decision based on the trained models and the given classification criteria.

2-D Gabor filter.
The Gabor filter is a commonly employed tool in image processing for tasks such as texture analysis, edge detection, and feature extraction. It detects specific frequency and orientation components within an image by modulating a harmonic function with a Gaussian function, responding to the presence of a given frequency and direction in the vicinity of each image point. Applying the Gabor filter [10] highlights textural features and structures in the image.

In practice, a set of optimal Gabor filters is usually used to improve recognition quality. The optimal set consists of filters with different orientation values that show the best recognition quality, which makes it possible to detect and describe a wider range of textural features and structures, improving the results of analysis and classification.

Fig. 2. The original image (left) and the result of normalization after applying the transformation (right): a) Daugman transformation, b) cropping and scaling.

In the spatial domain, the Gabor filter is a harmonic function modulated by a Gaussian function: the harmonic function specifies the fundamental frequency and direction, while the Gaussian function defines the modulation window and controls the energy distribution around the fundamental frequency.
This combination of the harmonic and Gaussian functions allows the Gabor filter to detect and analyze various textural features and structures in the image.

Let λ be the wavelength of the modulated harmonic function, θ the orientation of the normal to the parallel stripes of the Gabor function, ψ the phase shift of the harmonic function, σ the standard deviation of the Gaussian function, and γ the compression factor characterizing the ellipticity of the Gabor function. Then the Gabor filter is constructed in the spatial domain according to the following formula [13]:

g(x, y; λ, θ, ψ, σ, γ) = exp(−(x′² + γ²y′²) / (2σ²)) · exp(i(2π x′/λ + ψ)),
where x′ = x cos θ + y sin θ and y′ = −x sin θ + y cos θ.

After generating a set of Gabor filters [10], the normalized iris image undergoes convolution with each filter in the set to obtain a feature representation of the data. The output is determined by the signs of the real and imaginary parts of the resulting complex response as follows [13]:
- if the real part is negative and the imaginary part is negative, the output is '00';
- if the real part is negative and the imaginary part is non-negative, the output is '01';
- if the real part is non-negative and the imaginary part is negative, the output is '10';
- if the real part is non-negative and the imaginary part is non-negative, the output is '11'.

Test and results.
For the experimental studies in this work, the open Kaggle Iris Database dataset was chosen. It contains 700 iris images taken with a near-infrared camera; each image is 28×28 pixels.

In this work, the iris segmentation stage was implemented using deep learning methods; in particular, convolutional neural networks were trained. Training these networks required pairs of original eye images (input instances) and corresponding segmentation mask images (true outputs).
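The classical feature-extraction path described above can be made concrete with a numpy-only sketch: Daugman-style rubber-sheet normalization of the iris annulus, construction of a complex 2-D Gabor kernel from the formula, and the quadrant phase code. All sizes, filter parameters, and the random input image are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rubber_sheet(img, cx, cy, r_pupil, r_iris, n_r=16, n_theta=64):
    """Daugman rubber-sheet model: sample the iris annulus on a polar
    grid so every iris becomes a fixed n_r x n_theta rectangle."""
    rows = []
    for rho in np.linspace(0, 1, n_r):
        r = r_pupil + rho * (r_iris - r_pupil)
        t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        xs = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
        rows.append(img[ys, xs])
    return np.array(rows)

def gabor_kernel(lam, theta, psi, sigma, gamma, size=15):
    """Complex 2-D Gabor filter: a harmonic modulated by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return gauss * np.exp(1j * (2 * np.pi * xr / lam + psi))

def quantize_phase(z):
    """Quadrant code from the signs of the real and imaginary parts."""
    re, im = z.real >= 0, z.imag >= 0
    return np.where(re, np.where(im, '11', '10'), np.where(im, '01', '00'))

eye = np.random.rand(100, 100)                     # hypothetical eye image
norm = rubber_sheet(eye, 50, 50, 15, 40)           # 16 x 64 normalized strip
k = gabor_kernel(lam=8, theta=0, psi=0, sigma=4, gamma=0.5)
print(norm.shape)                                  # (16, 64)
print(quantize_phase(np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j])))
```

In a full system the normalized strip would be convolved with a bank of such kernels at several orientations, and the concatenated quadrant codes would form the feature vector passed to the classifier.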
For this purpose, the entire dataset was segmented manually, and the prepared set of segmentation masks is publicly available at [14].

As part of the study, a series of experiments was carried out to accomplish the following tasks:
- determine the best way to normalize the data;
- choose the best method of forming the feature representation;
- determine the best size of the feature vectors fed to the machine learning algorithms described above;
- determine the best classification algorithm.

As the first method of forming a feature representation, a two-dimensional Gabor filter was used. After normalization by the Daugman method and formation of the feature representation, the number of principal components was swept and the classifiers were trained.

A neural network approach to forming the feature representation was also considered: a modified pre-trained convolutional neural network with the DenseNet121 architecture. To normalize the data in this approach, cropping followed by scaling was applied. Cropping isolated the iris region of interest in the image; scaling then brought the resulting region to the uniform size required as input to the convolutional neural network. Thus, in this neural network approach, a modified DenseNet121 network was used, and data normalization was performed by cropping and scaling.

The results of the conducted studies showed that the approach based on the DenseNet convolutional neural network for forming the feature representation demonstrates the best classification quality; here, normalization by cropping followed by scaling was applied. With the number of principal components and the choice of classifier tuned, a classification accuracy of 99.78% was achieved.
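The reported pipeline (CNN embeddings, then principal-component reduction, then a classifier) can be imitated end to end. In this sketch the DenseNet121 embeddings are replaced by synthetic class-separated Gaussians (purely hypothetical data), PCA is done via SVD, and a k-nearest-neighbour classifier, one of the classifiers compared in the text, stands in for the best-performing sigmoid-kernel SVM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: in the paper these come from a pre-trained
# DenseNet121; here they are synthetic class-separated Gaussians.
n_classes, per_class, dim = 5, 20, 128
X = np.vstack([rng.normal(loc=3.0 * c, scale=1.0, size=(per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

def pca_fit_transform(X, k):
    """Project onto the top-k principal components (via SVD).
    Fit on all data for brevity; a real pipeline fits on the training split."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def knn_predict(Xtr, ytr, Xte, k=3):
    """k-nearest-neighbour vote over squared Euclidean distances."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(ytr[row]).argmax() for row in idx])

Z = pca_fit_transform(X, 10)
train = np.arange(len(y)) % 2 == 0          # even rows train, odd rows test
pred = knn_predict(Z[train], y[train], Z[~train])
print((pred == y[~train]).mean())           # near-perfect on separable data
```

The sweep over the number of principal components and over classifiers mentioned in the text corresponds to varying `k` in `pca_fit_transform` and swapping out the final classifier.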
This value exceeds the classification results obtained with other combinations of feature-representation methods, numbers of principal components, and classifiers. Thus, within the framework of these studies, the DenseNet convolutional neural network combined with cropping-and-scaling normalization showed the highest classification accuracy.

Table 1. Classification accuracy (%) obtained using various modifications of the Gabor filter and neural networks as the method of forming the feature representation.

Conclusion.
This article studied a method of identifying a person by the iris of the eye using a neural network approach at the segmentation and feature-representation stages.

At the segmentation stage, a method based on convolutional neural networks was proposed to solve the iris segmentation problem. A database was formed and used to train the networks; it is publicly available. A comparative analysis with other works using the Kaggle Iris Database dataset showed that the proposed neural network approach achieves better segmentation accuracy.

Next, a comparative analysis of methods for forming the feature representation of the iris was carried out, including the classical approach based on various modifications of the Gabor filter and the neural network approach. The results showed that the proposed feature-representation approach, together with a support-vector-machine classifier with a sigmoid kernel, achieves the highest classification accuracy for 45 classes: 99.78%. This means that the proposed method is highly effective for identifying a person by the iris.
Journal issue: Вестник науки No. 5 (74), vol. 2
Citation:
Yergesh A.K. DEVELOPMENT OF AN ADVANCED BIOMETRIC AUTHENTICATION SYSTEM USING IRIS RECOGNITION BASED ON A CONVOLUTIONAL NEURAL NETWORK // Вестник науки No. 5 (74), vol. 2. pp. 615-625. 2024. ISSN 2712-8849 // Online resource: https://www.вестник-науки.рф/article/14422 (accessed: 05.11.2024)
Вестник науки, mass media registration ЭЛ № ФС 77 - 84401 © 2024. 16+