At present, three types of methods are generally used to improve character recognition performance: the first is to find a better classification algorithm; the second is to combine several classifiers so that they complement one another, each classifying according to different aspects of the features, as in the literature; the third is to extract features with stronger descriptive power and combine them with auxiliary features for classification, as in the literature.
This paper applies the support vector machine (SVM) method to license plate character recognition, which belongs to the first type of method. SVM automatically finds the support vectors that best distinguish the classes, and the resulting classifier maximizes the margin between classes, thereby separating the categories correctly. It is well suited to finite-sample, nonlinear, and high-dimensional pattern recognition problems, and it offers strong adaptability and high efficiency.
2 Introduction to the Support Vector Machine
The support vector machine (SVM) is a classification technique proposed by Vapnik and his research team for two-class classification problems, and it is a new and promising one. The basic idea of the SVM is to construct an optimal hyperplane in the sample space or feature space that maximizes the distance between the hyperplane and the sample sets of different classes, thereby maximizing generalization ability. For a detailed description of the algorithm, please refer to the literature.
The SVM method is based on Vapnik's structural risk minimization principle, which maximizes the generalization ability of the learning machine so that a decision rule obtained from a limited number of training samples still achieves a small error on independent test sets. Thus only a limited number of samples need to participate in training, and the resulting classifier is still guaranteed a small error. In license plate character recognition, only a limited number of samples are available for training compared with the samples to be predicted; the SVM method keeps the recognition error small under these conditions and greatly reduces training time.
For data classification problems, the mechanism of a typical neural network can be roughly described as follows: the system randomly generates a hyperplane and moves it until the training points belonging to different classes lie on different sides of the plane. This mechanism means that the final separating plane obtained by a neural network is not the optimal hyperplane, but only a locally suboptimal one. The SVM instead transforms the search for the optimal hyperplane into a quadratic optimization problem under inequality constraints. This is a convex quadratic program with a unique solution, which guarantees that the extremum found is the global optimum.
The SVM maps the input data through a nonlinear function into a feature space of high (even infinite) dimension and performs linear classification there to construct an optimal separating hyperplane. When solving the optimization problem and evaluating the discriminant function, however, the nonlinear mapping need not be computed explicitly; only the kernel function is required, which avoids the curse of dimensionality in the feature space.
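As a minimal illustration of this point, the sketch below computes the radial basis function kernel (the kernel used later in Section 4) directly from two input vectors; the value equals the inner product of their images in the implicit feature space, which is never constructed explicitly. The parameter value sigma2=100.0 is only a placeholder default.

```python
import numpy as np

def rbf_kernel(x, xi, sigma2=100.0):
    """RBF kernel K(xi, x) = exp(-||x - xi||^2 / sigma^2).

    Returns the implicit feature-space inner product without ever
    computing the (infinite-dimensional) feature map itself.
    """
    diff = x - xi
    return float(np.exp(-np.dot(diff, diff) / sigma2))

x = np.array([1.0, 2.0, 3.0])
# A vector's kernel with itself is always 1 (zero distance).
print(rbf_kernel(x, x))          # 1.0
# The value decays as the inputs move apart.
print(rbf_kernel(x, x + 5.0) < rbf_kernel(x, x + 1.0))  # True
```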
In license plate character recognition, each sample is a character image composed of many pixels, and is therefore high-dimensional. Through the kernel function computation, the SVM avoids the network-structure design problems that high-dimensional sample spaces cause for neural networks and makes the training model independent of the input dimension. Moreover, the entire image of each character is used as the input sample, so no feature extraction is required, saving recognition time.
3 Construction of the License Plate Character Classifier
China's standard license plate format is X1X2·X3X4X5X6X7, where X1 is the abbreviation of a province, municipality, or autonomous region, X2 is an English letter, X3 and X4 are English letters or Arabic numerals, and X5X6X7 are Arabic numerals; the value range of X2 differs for different X1. A small dot separates X2 and X3.
According to this arrangement of license plate characters, and in order to improve the overall recognition rate, four classifiers are designed: a Chinese character classifier, a digit classifier, an English letter classifier, and a digit+letter classifier. The classifier corresponding to each character's position in the plate is selected for recognition, and the individual results are then combined by position to obtain the recognition result for the whole plate. The four classifiers are shown in Figure 1.
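The position-to-classifier dispatch described above can be sketched as a small lookup function. The classifier names are hypothetical labels for the four classifiers in Figure 1, chosen here only for illustration:

```python
def classifier_for_position(pos):
    """Map a character's position (1-7) in a standard Chinese plate
    X1X2.X3X4X5X6X7 to the classifier that should recognize it."""
    if pos == 1:
        return "chinese"        # X1: province abbreviation
    elif pos == 2:
        return "letter"         # X2: English letter
    elif pos in (3, 4):
        return "letter_digit"   # X3, X4: letter or digit
    elif pos in (5, 6, 7):
        return "digit"          # X5-X7: Arabic numerals
    raise ValueError("position must be 1-7")

# A plate is recognized character by character, then reassembled:
print([classifier_for_position(p) for p in range(1, 8)])
```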
The Chinese character set contains more than 50 characters, 31 of which are abbreviations of provinces, municipalities, and autonomous regions. All English letters are capitals; the letter "I" is excluded and the letter "O" is merged with the digit "0", so the letter set consists of 24 capital letters. The digits are the Arabic numerals 0 to 9.
The support vector machine was proposed for two-class classification, but license plate character recognition is a multi-class problem, so the two-class method must be extended to multiple classes. This paper uses the one-against-one method: an SVM sub-classifier is built for each pair of distinct classes, so a k-class problem yields k(k-1)/2 sub-classifiers. When constructing the sub-classifier for classes i and j, the samples belonging to classes i and j are selected from the sample set as training data, with the data of class i labeled positive and the data of class j labeled negative. At test time, a test sample is evaluated by all k(k-1)/2 sub-classifiers, the votes for each class are accumulated, and the class with the highest vote count is taken as the class of the test sample.
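The pairwise voting scheme can be sketched as follows. To keep the example self-contained, each pairwise decision here is a toy nearest-class-mean rule rather than a trained binary SVM; only the k(k-1)/2 pairing and vote accumulation are the point of the sketch, and the class means are made-up illustrative data:

```python
from itertools import combinations
import numpy as np

def one_vs_one_predict(x, class_means):
    """One-against-one voting: one binary decision per class pair,
    a vote for each pairwise winner, argmax of accumulated votes.
    (Real sub-classifiers would be binary SVMs; here each pairwise
    decision simply picks the nearer class mean.)"""
    classes = sorted(class_means)
    votes = {c: 0 for c in classes}
    for i, j in combinations(classes, 2):   # k(k-1)/2 pairs
        di = np.linalg.norm(x - class_means[i])
        dj = np.linalg.norm(x - class_means[j])
        votes[i if di <= dj else j] += 1
    return max(classes, key=lambda c: votes[c])

# Three easily-confused character classes with illustrative means.
means = {"0": np.array([0.0, 0.0]),
         "8": np.array([1.0, 1.0]),
         "B": np.array([2.0, 0.0])}
print(one_vs_one_predict(np.array([0.9, 1.1]), means))  # 8
```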
4 Selection of the Best Parameter Model
In this paper, 768×576-pixel license plate images collected from an actual checkpoint system are processed by license plate location and character segmentation. Each segmented character is binarized, with the pixels of the character strokes set to 1 and the background pixels set to 0; each character is then normalized to 13×24 pixels and numbered 1 to 7 according to its position in the plate.
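The binarize-then-normalize preprocessing can be sketched as below. The paper does not specify the binarization threshold or the resampling method, so the fixed threshold of 128 and the nearest-neighbour resampling here are assumptions for illustration only:

```python
import numpy as np

def preprocess_char(img, thresh=128, out_w=13, out_h=24):
    """Binarize a grayscale character image (stroke = 1, background = 0)
    and normalize it to out_w x out_h pixels by nearest-neighbour
    resampling. Threshold and resampling choice are assumptions."""
    binary = (img >= thresh).astype(np.uint8)
    h, w = binary.shape
    rows = np.arange(out_h) * h // out_h   # source row per output row
    cols = np.arange(out_w) * w // out_w   # source col per output col
    return binary[np.ix_(rows, cols)]

# A random grayscale "character" standing in for a segmented image.
char = np.random.randint(0, 256, size=(40, 20))
sample = preprocess_char(char)
print(sample.shape)   # (24, 13)
print(sample.size)    # 312 pixels per normalized character
```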
A total of 132 license plate images were selected, including plates captured at night, backlit, with severe character wear, tilted, or with other plates attached nearby. Of these, 129 plates were located correctly, a plate location rate of 97.73%; 120 images had all characters segmented correctly, a complete character segmentation rate of 93.02%.
In this paper, each character is used as one sample, with dimension 312 (13×24), and the samples are divided into four types according to their positions. Each type is split into two parts: 60% of the samples are used to train the model and the remaining 40% for testing. The kernel function is the radial basis function K(xi, x) = exp(-||x - xi||² / σ²). The four types of classifiers are trained separately, and the optimal parameter model of each is selected to form the four best classifiers for overall license plate character recognition.
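This training setup can be sketched with scikit-learn, whose `SVC` gamma parameter corresponds to 1/σ² in the kernel above. Since the paper's character data set is not available, the sketch substitutes synthetic 312-dimensional binary vectors for two hypothetical character classes:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the character data: two "character classes"
# of 312-dimensional binary vectors (13x24 pixels), with different
# stroke densities so they are separable.
rng = np.random.default_rng(0)
X0 = (rng.random((50, 312)) < 0.2).astype(float)
X1 = (rng.random((50, 312)) < 0.8).astype(float)
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# 60% of the samples for training, 40% for testing, as in the paper.
Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.6, random_state=0)

# RBF kernel K(xi, x) = exp(-||x - xi||^2 / sigma^2); scikit-learn's
# gamma is 1/sigma^2. (10, 100) is the (C, sigma^2) pair from Step 1.
sigma2 = 100.0
clf = SVC(kernel="rbf", C=10.0, gamma=1.0 / sigma2)
clf.fit(Xtr, ytr)
print(clf.score(Xte, yte))
```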
To find the optimal classifier parameters (C, σ²), this paper uses the bilinear method, applying the following steps to each classifier model:
Step 1: Determine the optimal parameter C by recognition accuracy. First assume C = 10 and take σ² = 0.1, 1, 10, 100, 1000 to find the σ² with the highest recognition accuracy; then fix that σ² and vary C, taking the C with the highest recognition accuracy as the optimal parameter C.
For all four types of classifiers, the (C, σ²) corresponding to the highest recognition accuracy was (10, 100), so the best C was determined to be 10.
Step 2: Determine the best parameter pair (C, σ²). Fix the best C and take σ² = 1, 10, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000; the (C, σ²) with the highest recognition accuracy is taken as the best parameters of the classifier model.
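The two-step bilinear search above can be sketched as follows. The `accuracy` function stands for training and testing a classifier at given (C, σ²); the smooth surface used here is purely a made-up illustration whose peak sits near C = 10, σ² = 200 so the search has something to find:

```python
import math

def bilinear_search(accuracy, coarse_sigma2, c_values, fine_sigma2,
                    c_init=10.0):
    """Bilinear parameter search.
    Step 1a: fix C = c_init and pick the best sigma^2 on a coarse grid.
    Step 1b: fix that sigma^2 and pick the best C.
    Step 2:  fix that C and rescan sigma^2 on a finer grid."""
    s2 = max(coarse_sigma2, key=lambda s: accuracy(c_init, s))
    c = max(c_values, key=lambda cv: accuracy(cv, s2))
    s2 = max(fine_sigma2, key=lambda s: accuracy(c, s))
    return c, s2

# Hypothetical accuracy surface for illustration only; real values
# would come from training and testing the classifier at each point.
def acc(c, s2):
    return math.exp(-((math.log10(c) - 1.0) ** 2
                      + (math.log10(s2) - math.log10(200)) ** 2))

best = bilinear_search(
    acc,
    coarse_sigma2=[0.1, 1, 10, 100, 1000],
    c_values=[0.1, 1, 10, 100, 1000],
    fine_sigma2=[1, 10, 100, 200, 300, 400, 500,
                 600, 700, 800, 900, 1000])
print(best)  # (10, 200)
```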
Observation shows that as σ² decreases below 100, the recognition accuracy of all four classifier models gradually decreases; as σ² increases above 100, the accuracy first rises and then falls, forming a "peak". The model parameters corresponding to the peak are taken as the best parameters. The four best classifiers are shown in Table 1.
Experimental analysis also shows that the classifier has a certain bias: when one class has many training samples, the probability of predicting that class for an unknown sample is higher. For example, if the character "Zhe" appears frequently in the training samples, the Chinese character classifier is more likely to predict "Zhe"; since "Zhe" is in fact also frequent among the samples to be predicted, this bias invisibly raises the measured recognition accuracy.
5 Experiments and Results
The combined classifier formed from the four best classifiers above is used for overall recognition of all license plate characters. The recognition results are shown in Table 2.
In practical applications, a plate meets the requirements when at least five of its characters are recognized correctly. The license plate character recognition results of this paper and of related literature are shown in Table 3.
Observation and analysis show that the main factor affecting the recognition result is the misrecognition of similar characters, such as "D" and "0" or "B" and "8". Chinese characters have many strokes, and the binarization operation easily blurs them, leading to misrecognition of Chinese characters.
6 Conclusion
This paper introduces the SVM method into license plate character recognition. Based on a detailed analysis of the arrangement of license plate characters, four different types of SVM character classifiers are constructed; each character is recognized by the classifier corresponding to its position in the plate, and the recognition results are combined to obtain the number of the whole plate.
The SVM method uses a kernel function to handle high-dimensional sample recognition: no network structure design is needed, no feature extraction is required, and only a limited number of samples need participate in training, which saves recognition time. These properties fit the requirements of license plate character recognition well. This paper uses the one-against-one method to extend SVM from two-class to multi-class recognition and obtains satisfactory results; however, the one-against-one method requires sufficient training samples, and samples of every class must participate in training.
The experimental results show that the method is practical. Further reducing the misrecognition of similar characters and of Chinese characters is the direction of future work; the key is to strengthen image preprocessing and to improve the character segmentation and binarization methods so that character strokes become clearer.