Active Learning by Using Function Adapted Kernels

24 August 2014

This paper presents an efficient active-transductive approach to classification. A common strategy in active learning is to query points near the class boundary in order to refine it; for certain data distributions, however, this strategy has been shown to lead to uninformative sampling. Other approaches rely on data exploration or on global risk minimization. Exploration-based methods typically require tuning the tradeoff between exploration and refinement, and incur a computational cost for locating unexplored data clusters; risk minimization typically carries a heavy computational load and requires optimal sampling of the data distribution. We present an algorithm designed to overcome these shortcomings through an active-transductive learning approach that switches naturally between refinement and exploration, at a significantly lower computational cost. The classification algorithm uses a random walk on the data graph that combines information from the current hypothesis with the data distribution. Our experiments demonstrate improved results on various data sets compared with current methods.
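The abstract does not spell out the algorithm itself, so the following is only a minimal sketch of the generic ingredients it names: label propagation by a random walk on a data graph, combined with an uncertainty-based query rule. All function names (`rbf_affinity`, `random_walk_scores`, `query_next`) and parameter choices here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    """Pairwise RBF affinities W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-transitions
    return W

def random_walk_scores(W, labeled_idx, labels, n_steps=10):
    """Propagate class scores from labeled points via a random walk on the graph."""
    P = W / W.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    classes = np.unique(labels)
    F = np.zeros((W.shape[0], classes.size))
    for k, c in enumerate(classes):
        F[labeled_idx[labels == c], k] = 1.0
    for _ in range(n_steps):
        F = P @ F
        # Clamp labeled points back to their known labels after each step.
        F[labeled_idx] = 0.0
        for k, c in enumerate(classes):
            F[labeled_idx[labels == c], k] = 1.0
    return F

def query_next(F, labeled_idx):
    """Margin-based uncertainty: query the point with the smallest top-two score gap."""
    top_two = np.sort(F, axis=1)[:, -2:]
    margin = top_two[:, 1] - top_two[:, 0]
    margin[labeled_idx] = np.inf  # never re-query labeled points
    return int(np.argmin(margin))
```

In this sketch the random walk carries both kinds of information the abstract mentions: the graph affinities encode the data distribution, while the clamped labeled points encode the current hypothesis; the margin rule then selects the most ambiguous point to label next.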