
1. Research Institute, UNION COMMUNITY Co., Ltd., Seoul, Korea (neural76@unioncomm.co.kr)



Keywords: Nose print, Pattern recognition, Minutiae matching, Object image analysis, Biometrics

1. Introduction

Traditionally, plastic or electronic ear tags with barcodes, neck chains, paint branding, or tattooing have been used to identify animals. More recently, RF technology, in which microchips and antennas are injected into ear tags or skin tissue (i.e., injectable transponders), has been used widely [1]. On the other hand, this method is less effective if the injected devices are deliberately damaged or tampered with. Furthermore, some studies have reported that the materials surrounding the implanted microchips and antennas can cause symptoms such as tumors or tissue necrosis; hence, this method does not guarantee complete safety. Research has been active since the release of studies showing that animal nose prints carry inherent biometric information that is unique to each animal, just like human fingerprints [2-4].

Therefore, this study first evaluated the techniques of previous studies. The muzzle pattern recognition algorithm presented by Santosh Kumar et al. (2017) acquired dog nose-print data using a non-contact filming method, similar to face recognition, and performed template matching using SURF and key-point algorithms [5-9]. Such methods suffer from distortion by external light sources, position changes depending on the camera location, and resolution changes depending on the camera distance. Another study (Enis Bilgin et al., 2011) formed a triangle from the three largest holes in the nose print, determined the angle values at its vertices, and counted circle-like shapes by applying the hole locations to the remaining data; this method relied on the distance values between the three largest nose holes [10]. Nevertheless, the error rate of this method is very high because it uses only distance information.

To solve these problems and achieve a high authentication rate, this study applied the idea behind animal nose printing in the late 20th century, in which the noses of cows or sheep were dipped in ink, and rubbings taken from the nose prints were compared. From this technique, this paper proposes an effective object recognition method for dog identification, in which a machine can determine and report the results of object recognition through digital image processing.

To test the reliability of the proposed method, dog nose prints were obtained as experimental data. A contact-type optical scanner was used to obtain uniform nose prints as input data because dog noses are moist when the prints are taken. When nose prints are obtained using a non-contact filming method, there is a high probability of false authentication because of glare caused by illumination, as well as resolution changes with distance, angle, and pose. From the nose-print data obtained, a template containing the feature information was generated using the proposed feature extraction algorithm. Based on the generated nose-print template, the performance of the proposed method was evaluated and analyzed by comparison with the experimental data images.

2. Proposed Nose Print Recognition Algorithm

2.1 Definition of Animal Nose-print Components

Animal nose prints consist of patterns of various sizes that make up the inside and outside of the nose, depending on the animal type. Because no academic terms have been defined for each component, new terms were defined by referring to the terminology of fingerprint-recognition technology [4,5]. The terms used in Fig. 1 are described in detail below:

- Island: a pattern with an independent area that comprises a nose print.

- Couple Island: a pattern with two coupled neighboring islands.

- River: an area that can distinguish the boundaries between two islands.

- Ocean: the area of the nostrils and philtrum, excluding the outer shape of the nose.

The island is the main component used in the actual extraction process, either in independent form (i.e., an island) or as a couple island, in which two islands are adjacent or attached. Other cases were excluded because they are regarded as foreign substances on the nose or abnormal patterns.

Fig. 1. Definitions of dog nose-print internal components.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig1.png

2.2 Obtaining and Preprocessing Images

An image-simplification step was performed to extract the center of gravity of each unique object from the nose-print image. The nose print was separated from the background of the input image through noise removal and image enhancement. Fig. 2 shows the result of isolating the nose-print islands from the background using this preprocessing operation.

As the first step in extracting features from the preprocessed nose-print image, an outline-extraction algorithm for the unique shapes of the nose print was applied, as shown in Fig. 3.

From the resulting outline image, each object was assigned the following attributes: its size, its center of gravity according to the object type, its vertices calculated through inner approximation of the nonlinear figure, and the connections from those vertices to the center point.
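To make this pipeline concrete, the following is a minimal Python/OpenCV sketch of the preprocessing and outline extraction described above; the Gaussian denoising, Otsu thresholding, and morphology kernel size are assumptions, as the paper does not specify its filters:

```python
import cv2

def extract_island_outlines(gray_img):
    """Separate nose-print islands from the background and trace their outlines."""
    # Noise removal / image enhancement (assumed: Gaussian smoothing).
    smoothed = cv2.GaussianBlur(gray_img, (5, 5), 0)
    # Background/object separation (assumed: Otsu's global threshold,
    # inverted so the dark ridge pattern becomes the foreground).
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Close small gaps so each island forms one connected region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Outline extraction: one closed contour per island.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```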

Fig. 2. Process of separating the background and object.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig2.png
Fig. 3. Nose-print outline-extraction-processing image.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig3.png

2.3 Generation of Feature Vectors

2.3.1 Finding the Center of Gravity of an Object

The feature template required for object recognition through the nose print was generated by connecting the inner approximation vertices and the center point based on each center of gravity. The center of gravity was obtained for each individually generated object shape by computing the first-order moments of each independent nose-print shape.

The spatial moments $m_{ji}$ were calculated as

$m_{ji}=\sum _{x,y}\left(\textit{array}\left(x,y\right)\cdot x^{j}\cdot y^{i}\right)$

The central moments $\mu_{ji}$ were computed as

$\mu_{ji}=\sum _{x,y}\left(\textit{array}\left(x,y\right)\cdot \left(x-\overline{x}\right)^{j}\cdot \left(y-\overline{y}\right)^{i}\right)$

where $\left(\overline{x},\overline{y}\right)$ is the center of mass:

$\overline{x}=\frac{m_{10}}{m_{00}},\,\,\,\,\overline{y}=\frac{m_{01}}{m_{00}}$

The normalized central moments $\nu_{ji}$ were calculated as

$\nu_{ji}=\frac{\mu_{ji}}{m_{00}^{\frac{\left(i+j\right)}{2}+1}}$

Fig. 4 shows the resulting image of the center of gravity of an object and its coordinates.
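These definitions match the spatial, central, and normalized central moments computed by OpenCV's cv2.moments(). As a minimal sketch (the helper name island_centroid is illustrative, not from the paper), the center of gravity of each island contour can be obtained as follows:

```python
import cv2

def island_centroid(contour):
    """Center of gravity (m10/m00, m01/m00) of one island contour."""
    m = cv2.moments(contour)
    if m["m00"] == 0:       # degenerate contour with zero area
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```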

Fig. 4. Independent island center of mass and coordinates.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig4.png

2.3.2 Classification of the Effective Objects

Because the shapes and sizes of nose-print patterns are diverse, it is essential to identify valid islands so that objects can be recognized by the same features. In this paper, valid islands were selected by first finding the average area of the detected islands with independent centers and then removing the islands with significantly smaller or larger areas than the average. If such abnormal areas are not removed, the false authentication rate increases during authentication. Such areas include islands that are much larger than average owing to noise or to abnormal pressing while the nose print is being obtained.
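A minimal sketch of this classification step follows; the paper does not report its area thresholds, so the low/high ratio bounds relative to the mean area are assumed values for illustration:

```python
import cv2

def filter_valid_islands(contours, low=0.3, high=3.0):
    """Keep islands whose area lies near the average island area.

    low/high are assumed ratio bounds, not values from the paper.
    """
    areas = [cv2.contourArea(c) for c in contours]
    mean_area = sum(areas) / len(areas)
    return [c for c, a in zip(contours, areas)
            if low * mean_area <= a <= high * mean_area]
```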

The area of the shape required for classifying valid islands was obtained by calculating the vertices through a polygonal approximation of the lines inside the figure consisting of closed curves, as shown in Fig. 5. The area of the entire shape was then found by summing the areas of the triangles formed from the calculated vertices. Suppose the coordinates of three points in the 2-D plane are $\left(x_{1},y_{1}\right)$, $\left(x_{2},y_{2}\right)$, and $\left(x_{3},y_{3}\right)$; then the area S of the triangle with these three coordinates as vertices is calculated using Heron's formula (1).

(1)
$$ \begin{array}{c} \mathrm{S}=\sqrt{s(s-a)(s-b)(s-c)}, \quad \mathrm{s}=\frac{a+b+c}{2} \\ \mathrm{a}=\sqrt{\left(x_{1}-x_{2}\right)^{2}+\left(y_{1}-y_{2}\right)^{2}} \\ \mathrm{~b}=\sqrt{\left(x_{2}-x_{3}\right)^{2}+\left(y_{2}-y_{3}\right)^{2}} \\ \mathrm{c}=\sqrt{\left(x_{3}-x_{1}\right)^{2}+\left(y_{3}-y_{1}\right)^{2}} \end{array} $$

When the lengths of the three sides of the triangle are a, b, and c, respectively, the area of the triangle can be calculated easily using the equations above.
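As a direct transcription of Eq. (1), a triangle-area helper might look as follows (the function name is illustrative):

```python
import math

def triangle_area(p1, p2, p3):
    """Triangle area from three 2-D points via Heron's formula, Eq. (1)."""
    a = math.dist(p1, p2)   # side lengths from the vertex coordinates
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    s = (a + b + c) / 2.0   # semi-perimeter
    # max(..., 0.0) guards against tiny negative values from rounding.
    return math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
```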

On the other hand, a step is needed to account for the direction of each vector because the nose-print shape is a mixture of convex and concave regions. Eq. (2) was therefore used to obtain the area of an n-polygon, because the sign of the cross product changes with direction in both convex and concave regions, as shown in Fig. 6. That is, the area of the n-polygon with vertices $P_{1}\left(x_{1},y_{1}\right)$, $P_{2}\left(x_{2},y_{2}\right)$, ..., $P_{n}\left(x_{n},y_{n}\right)$ was calculated using Eq. (2) regardless of whether the regions were convex or concave:

(2)
$$ \begin{aligned} \mathrm{S} &=\mid \sum_{i=2}^{n-1} \operatorname{sign}\left(\Delta P_{1} P_{i} P_{i+1}\right) \text { area }\left(\Delta P_{1} P_{i} P_{i+1}\right) \mid \\ &=\left|\sum_{i=2}^{n-1} \frac{\left(x_{i}-x_{1}\right)\left(y_{i+1}-y_{1}\right)-\left(x_{i+1}-x_{1}\right)\left(y_{i}-y_{1}\right)}{2}\right| \end{aligned} $$
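Eq. (2) is a signed triangle-fan (shoelace) computation; a minimal sketch, assuming the vertices are given in order as (x, y) pairs, is:

```python
def polygon_area(vertices):
    """Area of an n-polygon with ordered vertices P1..Pn, following Eq. (2).

    Each term is the signed (cross-product) area of triangle P1-Pi-Pi+1,
    so convex and concave regions cancel correctly; the absolute value
    of the sum is the enclosed area.
    """
    x1, y1 = vertices[0]
    signed = 0.0
    for (xi, yi), (xj, yj) in zip(vertices[1:-1], vertices[2:]):
        signed += (xi - x1) * (yj - y1) - (xj - x1) * (yi - y1)
    return abs(signed) / 2.0
```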

Fig. 7 shows the resulting image for calculating the valid centers of gravity after identifying the valid islands.

Fig. 5. Approximation processing image for components.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig5.png
Fig. 6. Example of the area of an n-polygon.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig6.png
Fig. 7. Resulting image of valid islands and center of gravity processing.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig7.png

2.3.3 Generation of Feature Vectors based on the Center of Gravity

Each object of the classified nose print has two features: the information on its center of gravity and its area. These two types of information alone are insufficient for object-recognition matching.

A correlation with the neighboring islands should be established for matching. This paper proposes generating feature vectors with direction and size information by producing feature lines that connect the center of gravity of a valid island with its inner approximation vertices. Fig. 8 shows the resulting image of the inner feature lines in eight directions from the center of gravity. Fig. 10 presents the flowchart of the algorithm proposed in this paper, which analyzes the shape of each unique pattern comprising a nose print, generates feature vectors for object recognition based on the analyzed information, and extracts the template using the feature lines.

Fig. 9 shows the feature vectors used in the final matching, with weights assigned depending on the distance of each final connection.
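A minimal sketch of this feature-line generation is given below; the polygonal-approximation tolerance (epsilon_ratio) and the distance-based weight function are illustrative assumptions, as the paper does not specify them:

```python
import math
import cv2

def island_feature_vectors(contour, epsilon_ratio=0.02):
    """Feature lines from an island's center of gravity to its
    inner-approximation vertices, stored as (angle, length, weight)."""
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # Vertex calculation via polygonal approximation of the outline.
    eps = epsilon_ratio * cv2.arcLength(contour, True)
    vertices = cv2.approxPolyDP(contour, eps, True).reshape(-1, 2)
    features = []
    for vx, vy in vertices:
        dx, dy = float(vx) - cx, float(vy) - cy
        length = math.hypot(dx, dy)       # size of the feature line
        angle = math.atan2(dy, dx)        # direction information
        weight = 1.0 / (1.0 + length)     # assumed distance-based weight
        features.append((angle, length, weight))
    return (cx, cy), features
```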

Fig. 8. Resulting image of all feature vectors.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig8.png
Fig. 9. Final result image using the proposed algorithm.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig9.png
Fig. 10. Flowchart of the proposed unique pattern extraction algorithm.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig10.png

3. Simulation

In this research, dog nose prints were collected to evaluate nose-print-based recognition rates. For the database (DB) used in the experiment, 3,000 samples were collected by obtaining 30 nose prints each from 100 dogs, including companion dogs, dogs at animal hospitals and dog cafes, and dogs at abandoned-dog shelters. A contact-type optical scanner with a 500-dpi (dots per inch) resolution and an image size of 1320×560 pixels was used to obtain the nose prints. Fig. 11 shows nose-print images captured with the nose-print recognition device used to obtain the actual prints. In the experimental method, each nose print obtained was first indexed and registered. Nose-print registration was performed using the proposed algorithm, and the extracted feature template was stored and used for matching. To evaluate the authentication performance based on the extracted feature templates, N:N authentication evaluation was performed by building a registered nose-print data group and an authentication data group.

Fig. 12 shows the error-rate and false acceptance rate (FAR) curves according to the similarity between the registered dog nose prints and other dogs' nose prints.

As indicated by the results in Fig. 12, authentication tests conducted with the similarity threshold set to 50% confirmed that no false acceptances occurred, i.e., no other dog's nose print was recognized as a registered print. Here, the FAR converged to 0 because only one template was used. Fig. 13 shows the experimental results over a total of 10,000 authentications.
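As an illustration of how the FAR can be read off such an N:N score matrix, the following sketch assumes matching scores scaled to the 0-9,999 range described below and excludes genuine (same-object) pairs, as in Fig. 13; the helper name and threshold convention are assumptions:

```python
import numpy as np

def far_at_threshold(score_matrix, threshold):
    """False acceptance rate from an N:N matching-score matrix.

    score_matrix[i][j] is the score between registered template i and
    authentication template j; diagonal (same-object) pairs are excluded.
    """
    scores = np.asarray(score_matrix, dtype=float)
    impostor = scores[~np.eye(scores.shape[0], dtype=bool)]
    return float(np.mean(impostor >= threshold))

# With scores on the 0..9,999 scale, a 50% similarity threshold is ~5,000:
# far = far_at_threshold(score_matrix, 5000)
```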

In Fig. 13, the x-axis represents the registered nose-print information, where Reg1, Reg2, ..., Reg100 index the registered objects; the y-axis represents the authentication objects, where Aut1, Aut2, ..., Aut100 index the authentication objects; and the z-axis represents the matching scores. Fig. 13 is a distribution chart of the 10,000 matching scores, ranging from 0 to 9,999 points, processed using the proposed algorithm, with the similarity to other dogs represented in the 3D chart. Cases where the registration and authentication objects were the same were excluded. These results confirm that the shape and size information extracted from the nose prints, and the feature information extracted from the correlation between the centers of gravity and the inner approximation vertices, are all critical elements for matching, highlighting the effectiveness of the proposed extraction algorithm.

Fig. 11. DB images of dogs’ unique nose prints (24 typical objects out of 100 objects).
../../Resources/ieie/IEIESPC.2021.10.2.109/fig11.png
Fig. 12. Distribution by similarity.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig12.png
Fig. 13. Matching score result chart between registration template and authentication template.
../../Resources/ieie/IEIESPC.2021.10.2.109/fig13.png

4. Conclusion

This paper proposed a feature extraction algorithm that can recognize objects through feature analysis of the unique patterns that comprise a nose print. The components of the nose print were newly defined, and images for nose-print extraction were generated by image preprocessing based on the defined elements. To generate feature vectors from the processed images, the centers of gravity of the islands and couple islands, as defined in this paper, were found. In addition, the valid data were classified to reduce unnecessary analysis when generating the final extraction template, improving both authentication rates and speed. As the final step of the extraction algorithm, a template was produced by generating feature vectors based on the centers of gravity and recording the feature vector values and feature points between neighboring islands. The generated template contains the center of gravity and direction vectors for each shape in the nose print, and this information can be used to perform an effective matching process. The matching results using the proposed algorithm showed that the authentication rates against the other dogs in the 100-dog nose-print database were promising, confirming that it could serve as a dog recognition algorithm. Based on these experimental results, future studies will develop this algorithm into a more reliable dog nose-print recognition system by collecting and utilizing nose-print data from the 345 breeds of companion dogs in the world.

REFERENCES

1. Noviyanto A., Arymurthy A. M., 2012, Automatic cattle identification based on nose photo using speed-up robust features approach, Proc. 3rd Eur. Conf. Comput. Sci., Vol. 110, pp. 110-114.
2. Awad A. I., Zawbaa H. M., Mahmoud H. A., Nabi E. H. H. A., Fayed R. H., Hassanien A. E., 2013, A Robust Cattle Identification Scheme Using Nose Print Images, 2013 Federated Conf. on Computer Science and Information Systems (FedCSIS), Krakow, Poland, pp. 529-534.
3. Zhou J., Chen F., Gu J. W., 2009, A novel algorithm for detecting singular points from fingerprint image, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 31, No. 7, pp. 1239-1250.
4. Olsen M. A., Dusio M., Busch C., 2015, Fingerprint Skin Moisture Impact on Biometric Performance, 3rd Int. Workshop on Biometrics and Forensics (IWBF), Gjovik, Norway.
5. Lee Minjeong, Park Jonggeun, Jeong Jechang, 2015, An Improved System of Dog Identification Based on Muzzle Pattern, The Korean Society of Broadcast Engineers Conference, pp. 199-202.
6. Awad A. I., Zawbaa H. M., Mahmoud H. A., Nabi E. H. H. A., Fayed R. H., Hassanien A. E., 2013, A Robust Cattle Identification Scheme Using Muzzle Print Images, 2013 Federated Conf. on Computer Science and Information Systems (FedCSIS), pp. 529-534.
7. Noviyanto A., Arymurthy A. M., 2013, Beef cattle identification based on muzzle pattern using a matching refinement technique in the SIFT method, Comput. Electron. Agric., pp. 77-84.
8. Kumar S., Chandrakar S., Panigrahi A., Singh S. K., 2017, Muzzle Point Pattern Recognition System Using Image Pre-Processing Techniques, Fourth Int. Conf. on Image Information Processing (ICIIP), pp. 127-132.
9. Kumar S., Singh S. K., Singh R. S., Singh A. K., Tiwari S., 2017, Real-time recognition of cattle using animal biometrics, J. Real-Time Image Process., pp. 505-526.
10. Bilgin E., Ceylan M., Yalcin H., 2011, A Digital Image Processing Based Bio-Identification Application from Planum Nasale of Kangal Dogs, IEEE 19th Signal Processing and Communications Applications Conference (SIU), pp. 275-278.

Author

Young-Hyun Baek
../../Resources/ieie/IEIESPC.2021.10.2.109/au1.png

Young-Hyun Baek is the Chief Technology Officer (CTO) of the UNION COMMUNITY R&D Center. He received his B.S. and M.S. degrees in Electronic Engineering from Wonkwang University, Korea, in 2002 and 2004, respectively, and his Ph.D. in Electronic Engineering from Wonkwang University in 2007. Dr. Baek was an Assistant Professor in the Division of Electronic & Control Engineering at Wonkwang University. He has served, or is currently serving, as a reviewer and Technical Program Committee member for many important journals, conferences, symposiums, and workshops in the biometrics, image processing, and optical device areas. His research interests include AI, deep learning, bio-image data, biometric security systems, and fake-biometric detection technology. He is a member of the IEEE, IEEK, and TTA, and of the KISA technical pool.