
JOURNAL OF SOUTHWEST JIAOTONG UNIVERSITY

Dec. 2019

ISSN: 0258-2724  DOI: 10.35741/issn.0258-2724.54.6.43

Research article

Computer and Information Science

COMPARISON BETWEEN PHYSIOLOGICAL AND BEHAVIORAL CHARACTERISTICS OF BIOMETRIC SYSTEM

Azal Habeeb a

a University of Thi-qar, Department of Computer, Nasiriyah, Iraq.

E-mail: azal.alamery2@gmail.com

Abstract

Biometrics is a technical means of distinguishing one person from another; it is one of the ways to establish a person's identity. The biometric system plays a vital role in data security. There are two types of biometric systems, i.e., physiological and behavioral biometrics. Physiological biometrics involves the fingerprint, iris, and face, while behavioral biometrics includes the signature, keystroke, and voice. This paper discussed the iris recognition technique, using the Canny edge detector and Hough transform to separate the iris region from the eye images. The voice recognition technique was discussed using the mel-frequency cepstral coefficient (MFCC) method. Finally, the paper compared iris recognition and voice recognition according to their properties and their performance.

Keywords: iris recognition, voice recognition, biometric, segmentation, mel-frequency cepstral coefficient (MFCC)


I. INTRODUCTION

Regarding personal identification, traditional methods rely on changeable parameters, such as passwords or magnetic cards. These parameters can easily be stolen by others, and the approach has many disadvantages, such as forgetting, losing, and cracking of the card. Because of those disadvantages, we choose the biometric system over the traditional human identification method. Concerning personal identification, biometric systems are more secure and safer than traditional methods.

Biometrics has recently been attracting more attention in the mass media. It deals with the identification of individuals based on their physiological or behavioral characteristics, and it is widely thought that biometrics can become an important component of identification technology. Biometrics can be divided into physiological and behavioral characteristics [1]. Physiological biometrics involves the iris, fingerprint, palm print, ear, face, etc., while behavioral biometrics includes gait, handwritten signature, keystroke, voice, and human walking [2]-[5]. This paper investigates the comparative features of physiological and behavioral biometrics. In this regard, iris recognition and voice recognition were chosen for the comparison.

II. RESEARCH METHODOLOGY

A. Iris Recognition

Iris detection is one of the most dependable methods for individual identification. In this study, iris recognition was chosen for the following reasons:

• Iris recognition is more accurate than other biometric security alternatives;

• The iris pattern is formed by about 10 months of age and stays stable throughout one's life; and

• People's acceptance of iris recognition and its products is determined by its ease of use [6], [7].

Iris recognition involves four stages:
1) gaining an eye image;
2) iris segmentation;
3) normalization;
4) feature extraction, encoding, and pattern matching.

Image acquisition is rather difficult, since the eye image must be rich in iris texture and the recognition result relies upon the image quality. The image must be taken by a good camera to produce a meaningful, detailed image. Figure 1 shows a proper iris image.

Figure 1. Human eye

The segmentation step in iris recognition separates the actual iris region from the eye image. The human eye can be modeled by two circles: one for the iris boundary and an interior one for the pupil [8]. The human iris is located in the area between the pupil and the sclera. Before segmentation, all unwanted data, i.e., sclera, lashes, and pupil, must be removed from the iris image. Furthermore, differences in brightness and changes in camera-to-face distance may degrade the recognition rate. Hence, the removal process is adopted to localize the iris and enhance the quality of the images for iris recognition (see Figure 2).

Figure 2. Removed impurities [5]

Then, the Canny edge detector and Hough transform should be used to extract the iris area.

1) Canny Edge Detector

Canny edge detection is a method for extracting useful structural information from various vision objects while dramatically reducing the amount of information to be processed. The main advantage of the Canny edge detector is its low error rate: the detector should catch as many real edges in the image as possible, the detected edge points should be accurately localized at the center of the edge, a given edge in the image should only be marked once, and, where possible, image noise should not create false edges [9]-[11].

The process of the Canny edge detector involves five steps:

1. Smooth the image with a Gaussian filter to remove noise. Since edge detection results are easily influenced by image noise, removing the noise first is fundamental to prevent false detections.

2. Calculate the gradient of the image. Horizontal, vertical, and diagonal edges can be detected using the Roberts, Prewitt, or Sobel operator.

3. Apply non-maximum suppression to get rid of spurious responses to edge detection. After the gradient step the edges are still thick and blurred; non-maximum suppression thins them by suppressing every gradient value (setting it to 0) except the local maxima. For every edge pixel, the algorithm compares the edge strength of the current pixel with the edge strength of its neighbors along the gradient direction; if the current pixel has the largest value, it is kept, otherwise it is suppressed.

4. Apply double thresholding to determine potential edges. Two threshold values, high and low, are selected. If an edge pixel's gradient value is higher than the high threshold, it is marked as a strong edge pixel; if it lies between the low and the high threshold, it is marked as a weak edge pixel; if it is smaller than the low threshold, it is suppressed. The threshold values are chosen empirically and depend on the content of the given input image.

5. Track edges by hysteresis. Final edges are determined by suppressing all weak edges that are not connected to a strong edge.
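
As an illustration of these five steps, a minimal Python/OpenCV sketch is given below (the paper itself used MATLAB; the file name, kernel size, and threshold values are illustrative assumptions, not values from the paper):

```python
import cv2

# Step 1: read the eye image in grayscale and smooth it with a Gaussian
# filter to suppress noise before edge detection.
eye = cv2.imread("eye.bmp", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(eye, (5, 5), 1.4)

# Steps 2-5: cv2.Canny computes the gradient (Sobel), applies non-maximum
# suppression, double thresholding, and edge tracking by hysteresis;
# only the low and high thresholds are exposed here.
edges = cv2.Canny(blurred, 50, 150)
cv2.imwrite("eye_edges.png", edges)
```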

2) Hough Transform

The Hough transform is used to locate the parameters of geometric objects, such as lines and circles. The circular Hough transform can be employed to find the radius and center coordinates of the pupil and iris regions [12]. Accordingly, any circle can be described by the following equation:

x_c² + y_c² − r² = 0 (1)

where x_c and y_c are the center coordinates, and r is the radius.
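
The sketch below shows how the circular Hough transform of Equation (1) might be applied with OpenCV's HoughCircles; the radius ranges and accumulator parameters are illustrative assumptions, not values from the paper:

```python
import cv2
import numpy as np

eye = cv2.imread("eye.bmp", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(eye, (5, 5), 1.4)

# Each detected circle is returned as (x_c, y_c, r): the center coordinates
# and radius of Equation (1).  param1 is the upper Canny threshold used
# internally; param2 is the accumulator threshold.
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150, param2=30, minRadius=20, maxRadius=120)
if circles is not None:
    for x_c, y_c, r in np.round(circles[0]).astype(int):
        print(f"circle: center=({x_c}, {y_c}), radius={r}")
```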

After segmentation, the next stage is to normalize this area so that it is ready for comparison. The normalization process converts the annular iris region into a fixed rectangular block, for which Daugman's rubber sheet model is used [13]. Figure 3 shows Daugman's rubber sheet model, where r represents the radius and θ represents the angle.

Figure 3. Daugman’s Rubber sheet model
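
A minimal NumPy sketch of this rubber-sheet remapping (formalized in Equations (2) and (3) below) is given here; it assumes the pupil and iris circles have already been located by the Hough transform, and the function name, block resolutions, and input conventions are illustrative assumptions:

```python
import numpy as np

def rubber_sheet(image, pupil, iris, radial_res=64, angular_res=256):
    """Remap the annular iris region into a radial_res x angular_res block."""
    (xp0, yp0, rp), (xi0, yi0, ri) = pupil, iris            # (x, y, radius)
    theta = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0.0, 1.0, radial_res)

    # Boundary points on the pupil (inner) and limbus/sclera (outer) circles.
    xp, yp = xp0 + rp * np.cos(theta), yp0 + rp * np.sin(theta)
    xs, ys = xi0 + ri * np.cos(theta), yi0 + ri * np.sin(theta)

    # Equation (3): interpolate linearly between the two boundaries.
    x = (1.0 - r)[:, None] * xp[None, :] + r[:, None] * xs[None, :]
    y = (1.0 - r)[:, None] * yp[None, :] + r[:, None] * ys[None, :]

    # Equation (2): sample I(x, y) to obtain the normalized block I(r, theta).
    rows = np.clip(np.rint(y).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.rint(x).astype(int), 0, image.shape[1] - 1)
    return image[rows, cols]
```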

Chawla and Oberoi state that the remapping of the iris area I(x, y) from Cartesian coordinates (x, y) to the dimensionless non-concentric polar coordinate system (r, θ) can be represented as [14]:

I(x(r, θ), y(r, θ)) → I(r, θ) (2)

where x(r, θ) and y(r, θ) are linear combinations of the set of pupillary boundary points (x_p(θ), y_p(θ)) and the set of limbus boundary points (x_s(θ), y_s(θ)) along the outer circumference of the iris, bordering the sclera:

x(r, θ) = (1 − r)·x_p(θ) + r·x_s(θ)
y(r, θ) = (1 − r)·y_p(θ) + r·y_s(θ) (3)

where I(x, y) is the iris area, x and y are the Cartesian coordinates, r and θ are the corresponding normalized polar coordinates, and (x_p, y_p) and (x_s, y_s) are the coordinates of the pupil and iris boundaries along the θ direction [14].

For feature extraction and encoding, the Gabor filter can be used to extract features from the iris area. Because the full Gabor expression is complex, it is split into two parts, an even filter and an odd filter:

G_e(x, y) = g(x, y)·cos(2πf(x·cosθ + y·sinθ))
G_o(x, y) = g(x, y)·sin(2πf(x·cosθ + y·sinθ))
g(x, y) = exp(−(x² + y²)/(2σ²)) (4)

where G(x, y) denotes the Gabor filter kernel, g(x, y) is an isotropic 2D Gaussian function, G_e(x, y) is the even filter, and G_o(x, y) is the odd filter.
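
A small NumPy sketch of the even and odd Gabor kernels of Equation (4) follows; the frequency f, orientation θ, σ, and kernel size are illustrative assumptions:

```python
import numpy as np

def gabor_even_odd(size=31, f=0.1, theta=0.0, sigma=4.0):
    """Return the even (cosine) and odd (sine) Gabor kernels of Equation (4)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))        # isotropic 2D Gaussian
    arg = 2.0 * np.pi * f * (x * np.cos(theta) + y * np.sin(theta))
    return g * np.cos(arg), g * np.sin(arg)               # G_e, G_o

Ge, Go = gabor_even_odd()
```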

Matching can be done via the Hamming distance. The Hamming distance measures the number of bits on which two iris codes disagree. If the Hamming distance between two images is 0, the two images are from the same person [15].
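
An illustrative Hamming-distance matcher for two equal-length binary iris codes is sketched below (the bit arrays are made-up examples, not real iris codes):

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of bits on which two iris codes disagree."""
    return np.count_nonzero(code_a != code_b) / code_a.size

a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(hamming_distance(a, b))   # 0.0 -> treated as the same person
```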

B. Voice Recognition

Speech or voice is classified as a behavioral characteristic, which can be used in biometric systems to recognize a person based on the voice stored in the enrollment phase. The speech recognition process starts by capturing the sound from the person via a microphone and involves the translation of spoken language into a sequence of words by a computer. This process is also called automatic speech recognition (ASR). An ASR system includes two parts, i.e., a feature extraction technique and a matching technique [16].

Feature extraction is the major process of the speech recognition system. Its role is to take the speech samples and convert the speech from an analog signal to a digital signal. In this regard, the mel-frequency cepstral coefficient (MFCC) is one of the feature extraction techniques [17].

MFCC is a powerful and accurate technique for speech feature extraction. It is based on the known variation of the human ear's critical bandwidths with frequency, which is approximately linear below 1000 Hz, and the main aim of the MFCC processor is to mimic the behavior of the human ear [18]. Figure 4 shows the block diagram of MFCC.

Figure 4. Block diagram of MFCC
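
A minimal MFCC extraction sketch is shown below; using librosa is an assumption on our part (the paper implemented MFCC in MATLAB), and the file name, sampling rate, and number of coefficients are illustrative:

```python
import librosa

# Load the enrolled voice sample and compute 13 MFCCs per frame.
signal, sr = librosa.load("speaker.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
print(mfcc.shape)   # (13, number_of_frames)
```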

The matching technique compares the voices of users with stored voice templates. There are many algorithms for this process; the first is dynamic time warping (DTW).

DTW is a method for measuring the similarity between two templates in the voice recognition process. The function of DTW is to calculate the minimum cumulative distance between the two sequences [19], [20]. Suppose two time series s and t of lengths n and m, respectively: s = (s1, s2, …, si, …, sn) and t = (t1, t2, …, tj, …, tm). An n-by-m matrix is constructed, where each element contains the distance d between the two points si and tj. The distance is calculated using the Euclidean distance in Equation (5):

d(si, tj) = (si − tj)² (5)

The cumulative distance is calculated using Equation (6):

D(i, j) = min[D(i − 1, j − 1), D(i − 1, j), D(i, j − 1)] + d(i, j) (6)
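
A straightforward NumPy implementation of the DTW recurrence in Equations (5) and (6) is sketched below; the two short example sequences are illustrative:

```python
import numpy as np

def dtw_distance(s, t):
    """Cumulative DTW distance between 1-D sequences s and t."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (s[i - 1] - t[j - 1]) ** 2                 # Equation (5)
            D[i, j] = d + min(D[i - 1, j - 1],             # Equation (6)
                              D[i - 1, j],
                              D[i, j - 1])
    return D[n, m]

print(dtw_distance(np.array([1.0, 2.0, 3.0]), np.array([1.0, 3.0, 4.0])))
```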

III. RESULTS

The iris pictures utilized in this paper were taken from the CASIA database, with a size of 320×280 pixels. MATLAB was used to implement both the iris recognition and the voice recognition code.

Figures 5 and 6 show the results of iris and voice recognition.

Figure 5. Iris recognition

Figure 6. Voice plot (original signal)

(Stages shown in the Figure 4 block diagram: pre-emphasis, frame blocking, Hamming window, fast Fourier transform, log energy, discrete cosine transform, MFCC features.)


A. Comparison between Voice and Iris Recognition

There are many methods for evaluating biometric systems. The following two are used here:
1) properties-based comparison [21] (Table 1);
2) performance-based comparison (Table 2).

For the performance-based comparison, it is necessary to evaluate the performance of the biometric system. The performance is measured by various metrics, such as the false rejection rate (FRR) and the false acceptance rate (FAR). FAR measures the probability of falsely accepting an impostor, while FRR measures the probability of falsely rejecting a genuine user. FAR can be calculated using Equation (7) [22]:

FAR = (No. of false acceptances) / (No. of identification attempts) (7)

FRR can be calculated by the following equation:

FRR = (No. of false rejections) / (No. of identification attempts) (8)
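
As a simple arithmetic illustration of Equations (7) and (8), a short Python sketch follows (the counts are hypothetical, not results from this paper):

```python
def far(false_acceptances, identification_attempts):
    """Equation (7): false acceptance rate."""
    return false_acceptances / identification_attempts

def frr(false_rejections, identification_attempts):
    """Equation (8): false rejection rate."""
    return false_rejections / identification_attempts

print(far(3, 200))   # 0.015 -> 1.5%
print(frr(5, 200))   # 0.025 -> 2.5%
```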

The equal error rate (EER) is the operating threshold at which FAR and FRR are equal; it is used to summarize the performance of the biometric system. Figure 7 shows the EER.

Figure 7. The equal error rate

If the EER is small, the accuracy of the biometric system is high; otherwise, the accuracy is low. Table 2 shows the performance of the voice and iris techniques.

Table 1

Comparison based on properties

Biometric system | Universality | Uniqueness | Collectability | Permanence | Performance | Acceptability | Circumvention
Iris | high | high | high | high | high | medium | low
Voice | medium | low | medium | low | low | high | high

Table 2

Voice and iris performance

Biometric system | FRR | FAR
Iris technique | 0.95% | 90%
Voice technique | 14% | 2%

Table 3

Comparison between the iris and voice

Biometric system | Advantages | Disadvantages
Iris | Very accurate; FAR is minimal; highly scalable, as the iris texture stays the same throughout life; small template size, so the recognition takes a few minutes; accuracy is not influenced by wearing glasses or lenses | Iris scanners are expensive; iris scanners can be tricked by a high-quality image; iris recognition requires cooperation from the user
Voice | Cheap; easy to use, with no new instruction needed; usage is comfortable | Influenced by noisy environments; accuracy is low; the voice changes if the user is sick; can be easily misled

Based on Tables 1 and 2, it can be concluded that there are advantages and disadvantages in both the iris and voice recognition techniques (see Table 3).

Figures 8 and 9 display the values of FAR and FRR:


Figure 8. False acceptance rate values

Figure 9. False rejection rate values

IV. CONCLUSIONS

This paper discussed the iris recognition technique, using the Canny edge detector and Hough transform to separate the iris region from the eye images. Daugman's rubber sheet model was used to convert the iris area from its annular form into a rectangular block, and the Hamming distance was used for the matching process. The voice recognition technique was then discussed. Voice recognition consists of two steps, i.e., a feature extraction technique and a matching technique: MFCC was used to extract the features of the voice sample, and DTW was used to compare the voices of users. Finally, the paper compared iris recognition and voice recognition according to their properties and their performance. The comparison concluded that iris recognition is better than voice recognition, as the iris offers more security since it is difficult to fake.

REFERENCES

[1] ANWAR, A.S., GHANY, K.K.A., & ELMAHDY, H. (2015) Human Ear Recognition Using Geometrical Features Extraction. Procedia Computer Science, 65, pp. 529-537. doi: 10.1016/j.procs.2015.09.126
[2] FREIRE, M.R., FIERREZ, J., & ORTEGA-GARCIA, J. (2008) Dynamic signature verification with template protection using helper data. Proceedings of the IEEE 2008 International Conference on Acoustics, Speech and Signal Processing, pp. 1713-1716. doi: 10.1109/icassp.2008.4517959
[3] ALONSO-FERNANDEZ, F., FAIRHURST, M.C., FIERREZ, J., & ORTEGA-GARCIA, J. (2007) Impact of Signature Legibility and Signature Type in Off-Line Signature Verification. Proceedings of the 2007 IEEE Biometrics Symposium, pp. 1-6. doi: 10.1109/bcc.2007.4430548
[4] OOI, S.-Y., TEOH, A.B.J., & ONG, T.-S. (2008) Compatibility of biometric strengthening with probabilistic neural network. Proceedings of the 2008 International Symposium on Biometrics and Security Technologies, article no. 4547647. doi: 10.1109/isbast.2008.4547647
[5] HARAKANNANAVAR, S.S., RENUKAMURTHY, P.C., & RAJA, K.B. (2019) Comprehensive Study of Biometric Authentication Systems, Challenges and Future Trends. International Journal of Advanced Networking and Applications, 10(4), pp. 3958-3968. doi: 10.35444/ijana.2019.10048
[6] TOYGAR, Ö., ALQARALLEH, E., & AFANEH, A. (2018) Person Identification Using Multimodal Biometrics under Different Challenges. In: ANBARJAFARI, G. and ESCALERA, S. (eds.) Human-Robot Interaction - Theory and Application. Chapter 5, pp. 81-96. doi: 10.5772/intechopen.71667
[7] OLATINWO, S.O., SHOEWU, O., & OMITOLA, O.O. (2013) Iris Recognition Technology: Implementation, Application, and Security Consideration. The Pacific Journal of Science and Technology, 14(2), pp. 228-233. Available from: https://pdfs.semanticscholar.org/9ee4/28b8477e75267e4694807e4d15d88fdfb808.pdf
[8] HARAKANNANAVAR, S.S., PRASHANTH, C., KANABUR, V., PURANIKMATH, V.I., & RAJA, K. (2019) An extensive study of issues, challenges and achievements in iris recognition. Asian Journal of Electrical Sciences, 8, pp. 25-35.
[9] ALKASSAR, S., WOO, W.L., DLAY, S.S., & CHAMBERS, J.A. (2017) Robust Sclera Recognition System with Novel Sclera Segmentation and Validation Techniques. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(3), pp. 474-486. doi: 10.1109/tsmc.2015.2505649
[10] RIZK, M.R.M., FARAG, H.H.A., & SAID, L.A.A. (2016) Neural Network Classification for Iris Recognition Using Both Particle Swarm Optimization and Gravitational Search Algorithm. Proceedings of the 2016 World Symposium on Computer Applications & Research (WSCAR), IEEE, pp. 12-17. doi: 10.1109/wscar.2016.10
[11] LI, Z. (2017) An iris recognition algorithm based on coarse and fine location. Proceedings of the 2017 IEEE 2nd International Conference on Big Data Analysis, pp. 744-747. doi: 10.1109/icbda.2017.8078735
[12] KAK, N., GUPTA, R., & MAHAJAN, S. (2010) Iris Recognition System. International Journal of Advanced Computer Science and Applications, 1(1), pp. 34-40.
[13] DAUGMAN, J. (2007) New methods in iris recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 37(5), pp. 1167-1175. doi: 10.1109/TSMCB.2007.903540
[14] CHAWLA, S., & OBEROI, A. (2011) A robust algorithm for iris segmentation and normalization using Hough transform. Global Journal of Business Management and Information Technology, 1(2), pp. 69-76.
[15] YU, L., ZHANG, D., WANG, K., & YANG, W. (2005) Coarse iris classification using box-counting to estimate fractal dimensions. Pattern Recognition, 38(11), pp. 1791-1798. doi: 10.1016/j.patcog.2005.03.015
[16] GUPTA, A., RAIBAGKAR, P., & PALSOKAR, A. (2017) Speech Recognition Using Correlation Technique. International Journal of Current Trends in Engineering & Research, 3(1), pp. 82-89.
[17] SALAH, M., NASIEF, M., & MANSOUR, H. (2008) Evaluation for the Best Combination of Feature Extraction and Matching Techniques used for Real-Time Arabic Speech Recognition. The Engineering and Scientific Research Journal of the Faculty of Engineering, Shoubra, 1(1), pp. 1-17.
[18] NARANG, S., & GUPTA, M.D. (2015) Speech feature extraction techniques: a review. International Journal of Computer Science and Mobile Computing, 4(3), pp. 107-114.
[19] MISHRA, N., SHRAWANKAR, U., & THAKARE, V.M. (2013) Automatic speech recognition using template model for man-machine interface. arXiv preprint arXiv:1305.2959. Available from: https://arxiv.org/ftp/arxiv/papers/1305/1305.2959.pdf
[20] SHAH, H.N., AB RASHID, M.Z., ABDOLLAH, M.F., KAMARUDIN, M.N., LIN, C.K., & KAMIS, Z. (2014) Biometric voice recognition in security system. Indian Journal of Science and Technology, 7(2), pp. 104-112.
[21] KAUR, G., & VERMA, D.C. (2014) Comparative analysis of biometric modalities. International Journal of Advanced Research in Computer Science and Software Engineering, 4(4), pp. 603-613. Available from: https://pdfs.semanticscholar.org/d3e9/f7ea4e6aa6025e040709cbb34a979e52bf59.pdf
[22] SABHANAYAGAM, T., VENKATESAN, V.P., & SENTHAMARAIKANNAN, K. (2018) A comprehensive survey on various biometric systems. International Journal of Applied Engineering Research, 13(5), pp. 2276-2297. Available from: http://www.ripublication.com/ijaer18/ijaerv13n5_28.pdf

