Active and passive learning of linear separators under log-concave distributions

Maria-Florina Balcan, Phil Long
Proceedings of the 26th Annual Conference on Learning Theory, PMLR 30:288-316, 2013.

Abstract

We prove that active learning provides an exponential improvement over PAC (passive) learning of homogeneous linear separators under nearly log-concave distributions. Building on this, we provide a computationally efficient PAC algorithm with optimal (up to a constant factor) sample complexity for such problems. This resolves an open question of (Long, 1995, 2003; Bshouty et al., 2009) concerning the sample complexity of efficient PAC algorithms under the uniform distribution in the unit ball. Moreover, it provides the first bound for a polynomial-time PAC algorithm that is tight for an interesting infinite class of hypothesis functions under a general class of data distributions, providing significant progress towards a long-standing open question of (Ehrenfeucht et al., 1989; Blumer et al., 1989). We also provide new bounds for active and passive learning in the case that the data might not be linearly separable, both in the agnostic case and under the Tsybakov low-noise condition. To derive our results, we provide new structural results for (nearly) log-concave distributions, which may be of independent interest.
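For context, the central notions in the abstract can be stated in standard form. The following is a sketch of the textbook definition of log-concavity and the complexity bounds as usually quoted in this literature; it is background, not an excerpt from the paper.

% Background definitions (standard forms; stated here as assumptions,
% not quoted from Balcan and Long, 2013).

% A density f on R^d is log-concave when log f is concave, equivalently:
\[
  f\bigl(\lambda x + (1-\lambda) y\bigr) \;\ge\; f(x)^{\lambda}\, f(y)^{1-\lambda}
  \qquad \text{for all } x, y \in \mathbb{R}^d,\ \lambda \in [0,1].
\]

% Passive PAC learning of homogeneous linear separators in R^d requires
%   \Theta\bigl( (d + \log(1/\delta)) / \epsilon \bigr)
% labeled examples (the lower bound is due to Ehrenfeucht et al., 1989),
% whereas active learning under (nearly) log-concave distributions requests
% on the order of d \log(1/\epsilon) labels --- exponentially fewer as a
% function of 1/\epsilon, which is the improvement the abstract refers to.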

Cite this Paper


BibTeX
@InProceedings{pmlr-v30-Balcan13,
  title     = {Active and passive learning of linear separators under log-concave distributions},
  author    = {Balcan, Maria-Florina and Long, Phil},
  booktitle = {Proceedings of the 26th Annual Conference on Learning Theory},
  pages     = {288--316},
  year      = {2013},
  editor    = {Shalev-Shwartz, Shai and Steinwart, Ingo},
  volume    = {30},
  series    = {Proceedings of Machine Learning Research},
  address   = {Princeton, NJ, USA},
  month     = {12--14 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v30/Balcan13.pdf},
  url       = {https://proceedings.mlr.press/v30/Balcan13.html}
}
APA
Balcan, M.-F., & Long, P. (2013). Active and passive learning of linear separators under log-concave distributions. Proceedings of the 26th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 30:288-316. Available from https://proceedings.mlr.press/v30/Balcan13.html.
