A simple form of cooperation between the k-Nearest Neighbors approach to classification and the neural-like property of adaptation is explored. A tunable, high-level k-Nearest Neighbors decision rule is defined which subsumes most previous generalizations of the common majority rule. A learning procedure is developed which applies to this rule and exploits the statistical features that can be induced from the training set. The overall approach is tested on a problem of handwritten character recognition. Performance measurements show that adaptivity in the decision rule can greatly improve the recognition ability of standard k-NN classifiers.
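To make the idea of a tunable generalization of the majority rule concrete, the following is a minimal illustrative sketch (not the paper's actual rule): a distance-weighted k-NN vote with a hypothetical exponent parameter `alpha` that could be adapted from the training set, where `alpha = 0` recovers the plain majority rule.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, alpha=1.0):
    """Distance-weighted k-NN vote with tunable exponent alpha.

    alpha = 0 gives each of the k neighbors equal weight (plain
    majority rule); larger alpha concentrates the vote on the
    nearest neighbors. A learning procedure could tune alpha on
    a validation split (illustrative assumption, not the paper's
    specific adaptation scheme).
    """
    # Euclidean distances from the query point to all training samples
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                 # indices of the k nearest
    w = 1.0 / (d[idx] + 1e-9) ** alpha      # tunable neighbor weights
    votes = {}
    for label, weight in zip(y_train[idx], w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)        # class with the largest vote

# Toy 1-D example: two clusters of labeled points
X_train = np.array([[0.0], [0.1], [1.0], [1.1], [1.2]])
y_train = np.array([0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.05])))  # near the 0-cluster
print(knn_predict(X_train, y_train, np.array([1.05])))  # near the 1-cluster
```

Tuning `alpha` (and k itself) against held-out data is one simple way a decision rule of this family can be made adaptive rather than fixed a priori.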