Learning with many experts: Model selection and sparsity


Experts classifying data are often imprecise. Recently, several models have been proposed to train classifiers using the noisy labels generated by these experts. How should one choose between these models? In such situations, the true labels are unavailable, so one cannot perform model selection using the standard versions of methods such as empirical risk minimization and cross-validation. To enable model selection, we present a surrogate loss and provide theoretical guarantees of its consistency. Next, we discuss how this loss can be used to tune a penalty that induces sparsity in the parameters of a traditional class of models. Sparsity yields more parsimonious models and can avoid overfitting. Nevertheless, it has seldom been discussed in the context of noisy labels, owing to the difficulty of model selection and, therefore, of choosing tuning parameters. We apply these techniques to several simulated and real datasets.
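The abstract's actual surrogate loss and guarantees are not reproduced here, but the overall pipeline it describes can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in: experts are simulated as independent label-flippers, their votes are aggregated by majority, and sparsity is induced by an L1 penalty fit with proximal gradient descent (ISTA). The function names (`fit_l1_logistic`, `soft_threshold`) and all parameter values are illustrative assumptions, not the paper's method.

```python
# Hypothetical illustration (NOT the paper's surrogate loss): learning a
# sparse classifier from noisy expert labels.  Experts flip the true label
# independently with probability 0.2; their votes are aggregated by
# majority; an L1-penalized logistic regression is then fit on the noisy
# labels via proximal gradient descent (ISTA).
import numpy as np

rng = np.random.default_rng(0)

n, d, n_experts = 500, 20, 5
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -1.5, 1.0]              # only 3 informative features
y = (X @ true_w + rng.normal(size=n) > 0).astype(int)

# Each expert mislabels each point independently with probability 0.2.
votes = np.array([np.where(rng.random(n) < 0.2, 1 - y, y)
                  for _ in range(n_experts)])
y_noisy = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fit_l1_logistic(X, y, lam, lr=0.1, steps=2000):
    """ISTA on L1-penalized logistic loss; produces exact zeros."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))          # predicted probabilities
        grad = X.T @ (p - y) / len(y)               # smooth-part gradient
        w = soft_threshold(w - lr * grad, lr * lam)  # gradient + prox step
    return w

w_hat = fit_l1_logistic(X, y_noisy, lam=0.05)
n_nonzero = int(np.sum(np.abs(w_hat) > 1e-8))
print("nonzero coefficients:", n_nonzero, "of", d)
```

In the paper's setting, the penalty level `lam` would be tuned by minimizing the proposed surrogate loss over candidate values, since the true labels needed for ordinary cross-validation are unavailable; here it is simply fixed for illustration.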

In Statistical Analysis and Data Mining
Rafael B. Stern
Professor of Statistics

I am an Assistant Professor at the University of São Paulo. I have a B.A. in Statistics from the University of São Paulo, a B.A. in Law from Pontifícia Universidade Católica in São Paulo, and a Ph.D. in Statistics from Carnegie Mellon University. I am currently a member of the Scientific Council of the Brazilian Association of Jurimetrics, an associate investigator at NeuroMat and a member of the Order of Attorneys of Brazil.