BibTeX entry:

@inproceedings{ijcnn91:Kov,
  key       = {ijcnn91:Kov},
  keywords  = {ocr, classifiers, algorithms, nearest neighbors},
  author    = {Zs. M. V.-Kovacs and R. Guerrieri},
  title     = {A Generalization Technique for Nearest-Neighbor Classifiers},
  booktitle = {Proc. of IJCNN'91, Singapore},
  volume    = {2},
  pages     = {1782--1788},
  month     = {Nov},
  year      = {1991},
  abstract  = {Classifiers based on the k-nearest neighbors (k-NN) approach have recently received significant attention because of their simple implementation and absence of training. However, these valuable properties have been somewhat outweighed by the computational cost required to classify a new sample and by the lack of adequate techniques to improve their generalization capabilities once a training set has been given. This paper addresses these issues and presents a new algorithm which provides nearly optimal generalization capabilities to k-NN classifiers. Its computational cost is a weak function of the dimensionality of the training-set space and is not tied to any specific formulation of the metric used in the classifier. The asymptotic computational cost of the proposed algorithm is a low-order polynomial in the number of elements of the training set. We also prove that, given a suitable performance measure, each step of the generalization algorithm monotonically improves the performance of the classifier. Finally, experimental results from the classification of handwritten digits show that the training set provided to the classifier can be reduced by more than one order of magnitude, dramatically reducing the CPU time and memory required to classify new samples.}
}
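The abstract does not reproduce the authors' algorithm, so the following is only a minimal sketch of the general idea it describes (shrinking the training set of a nearest-neighbor classifier while preserving its decisions). It uses a Hart-style condensing pass as a stand-in, which is not the method of V.-Kovacs and Guerrieri; the function name condense and the synthetic two-Gaussian data are illustrative assumptions.

```python
# Sketch only: Hart-style 1-NN condensing as a stand-in for the paper's
# generalization technique. Keeps a sample as a prototype only if the
# current prototype set misclassifies it.
import numpy as np

def condense(X, y, seed=0):
    """Return indices of a reduced prototype set for 1-NN classification."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    keep = [order[0]]                      # seed with one arbitrary sample
    changed = True
    while changed:                         # repeat until no sample is added
        changed = False
        for i in order:
            P = X[keep]
            # 1-NN prediction of sample i from the current prototypes
            nearest = np.argmin(np.linalg.norm(P - X[i], axis=1))
            if y[keep][nearest] != y[i]:   # misclassified -> keep as prototype
                keep.append(i)
                changed = True
    return np.array(keep)

if __name__ == "__main__":
    # Two well-separated Gaussian classes; most interior points are redundant.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2))])
    y = np.array([0] * 500 + [1] * 500)
    kept = condense(X, y)
    print(f"training set reduced from {len(X)} to {len(kept)} prototypes")
```

On data like this, the kept prototypes concentrate near the class boundary, which is the sense in which a reduced set can cut classification cost by an order of magnitude or more, as the abstract reports for handwritten digits.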