Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas

Vitaly Feldman, Homin K. Lee, Rocco A. Servedio
Proceedings of the 24th Annual Conference on Learning Theory, PMLR 19:273-292, 2011.

Abstract

Much work has been done on learning various classes of “simple” monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort by showing that polynomial-time algorithms cannot learn shallow monotone Boolean formulas under the uniform distribution in the well-studied Statistical Query (SQ) model. We introduce a new approach to understanding the learnability of “simple” monotone functions that is based on a recent characterization of Strong SQ learnability by Simon (2007). Using the characterization, we first show that depth-3 monotone formulas of size $n^{o(1)}$ cannot be learned by any polynomial-time SQ algorithm to accuracy $1 - 1/(\log n)^{\Omega(1)}$. We then build on this result to show that depth-4 monotone formulas of size $n^{o(1)}$ cannot be learned even to a certain $\frac 1 2 + o(1)$ accuracy in polynomial time. This improved hardness is achieved using a general technique that we introduce for amplifying the hardness of “mildly hard” learning problems in either the PAC or SQ framework. This hardness amplification for learning builds on the ideas in the work of O’Donnell (2004) on hardness amplification for approximating functions using small circuits, and is applicable to a number of other contexts. Finally, we demonstrate that our approach can also be used to reduce the well-known open problem of learning juntas to learning depth-3 monotone formulas.
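
As a rough illustration of the composition-based amplification idea, the following is a minimal, compilable LaTeX sketch in the spirit of O’Donnell (2004). It is a sketch only, not the paper’s actual construction: the combining function $C$, the number of blocks $k$, and the hardness parameter $\delta$ are illustrative assumptions.

% Minimal sketch of composition-based hardness amplification.
% C, k, and delta below are illustrative assumptions, not the
% specific construction used in the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Suppose $f\colon\{0,1\}^{n}\to\{0,1\}$ is only \emph{mildly} hard to
learn under the uniform distribution, say no polynomial-time learner
achieves accuracy $1-\delta$ with $\delta = 1/(\log n)^{\Omega(1)}$.
Choose a combining function $C\colon\{0,1\}^{k}\to\{0,1\}$ and define,
on $k$ disjoint blocks of variables $x^{(1)},\dots,x^{(k)}\in\{0,1\}^{n}$,
\[
  F\bigl(x^{(1)},\dots,x^{(k)}\bigr)
  \;=\;
  C\bigl(f(x^{(1)}),\dots,f(x^{(k)})\bigr).
\]
The amplification argument shows that, for a suitably chosen $C$ and
$k=\mathrm{poly}(n)$, no polynomial-time learner achieves accuracy
better than $\tfrac{1}{2}+o(1)$ on $F$.  If $f$ and $C$ are both
computed by small monotone formulas, then so is $F$; this is the sense
in which a depth-3 lower bound at accuracy $1-1/(\log n)^{\Omega(1)}$
can be turned into a depth-4 lower bound at accuracy $\tfrac{1}{2}+o(1)$.
\end{document}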

Cite this Paper


BibTeX
@InProceedings{pmlr-v19-feldman11a,
  title     = {Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas},
  author    = {Feldman, Vitaly and Lee, Homin K. and Servedio, Rocco A.},
  booktitle = {Proceedings of the 24th Annual Conference on Learning Theory},
  pages     = {273--292},
  year      = {2011},
  editor    = {Kakade, Sham M. and von Luxburg, Ulrike},
  volume    = {19},
  series    = {Proceedings of Machine Learning Research},
  address   = {Budapest, Hungary},
  month     = {09--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v19/feldman11a/feldman11a.pdf},
  url       = {https://proceedings.mlr.press/v19/feldman11a.html},
  abstract  = {Much work has been done on learning various classes of “simple” monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort by showing that polynomial-time algorithms cannot learn shallow monotone Boolean formulas under the uniform distribution in the well-studied Statistical Query (SQ) model. We introduce a new approach to understanding the learnability of “simple” monotone functions that is based on a recent characterization of Strong SQ learnability by Simon (2007). Using the characterization, we first show that depth-3 monotone formulas of size $n^{o(1)}$ cannot be learned by any polynomial-time SQ algorithm to accuracy $1 - 1/(\log n)^{\Omega(1)}$. We then build on this result to show that depth-4 monotone formulas of size $n^{o(1)}$ cannot be learned even to a certain $\frac 1 2 + o(1)$ accuracy in polynomial time. This improved hardness is achieved using a general technique that we introduce for amplifying the hardness of “mildly hard” learning problems in either the PAC or SQ framework. This hardness amplification for learning builds on the ideas in the work of O’Donnell (2004) on hardness amplification for approximating functions using small circuits, and is applicable to a number of other contexts. Finally, we demonstrate that our approach can also be used to reduce the well-known open problem of learning juntas to learning depth-3 monotone formulas.}
}
Endnote
%0 Conference Paper
%T Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas
%A Vitaly Feldman
%A Homin K. Lee
%A Rocco A. Servedio
%B Proceedings of the 24th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2011
%E Sham M. Kakade
%E Ulrike von Luxburg
%F pmlr-v19-feldman11a
%I PMLR
%P 273--292
%U https://proceedings.mlr.press/v19/feldman11a.html
%V 19
%X Much work has been done on learning various classes of “simple” monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort by showing that polynomial-time algorithms cannot learn shallow monotone Boolean formulas under the uniform distribution in the well-studied Statistical Query (SQ) model. We introduce a new approach to understanding the learnability of “simple” monotone functions that is based on a recent characterization of Strong SQ learnability by Simon (2007). Using the characterization, we first show that depth-3 monotone formulas of size $n^{o(1)}$ cannot be learned by any polynomial-time SQ algorithm to accuracy $1 - 1/(\log n)^{\Omega(1)}$. We then build on this result to show that depth-4 monotone formulas of size $n^{o(1)}$ cannot be learned even to a certain $\frac 1 2 + o(1)$ accuracy in polynomial time. This improved hardness is achieved using a general technique that we introduce for amplifying the hardness of “mildly hard” learning problems in either the PAC or SQ framework. This hardness amplification for learning builds on the ideas in the work of O’Donnell (2004) on hardness amplification for approximating functions using small circuits, and is applicable to a number of other contexts. Finally, we demonstrate that our approach can also be used to reduce the well-known open problem of learning juntas to learning depth-3 monotone formulas.
RIS
TY  - CPAPER
TI  - Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas
AU  - Vitaly Feldman
AU  - Homin K. Lee
AU  - Rocco A. Servedio
BT  - Proceedings of the 24th Annual Conference on Learning Theory
DA  - 2011/12/21
ED  - Sham M. Kakade
ED  - Ulrike von Luxburg
ID  - pmlr-v19-feldman11a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 19
SP  - 273
EP  - 292
L1  - http://proceedings.mlr.press/v19/feldman11a/feldman11a.pdf
UR  - https://proceedings.mlr.press/v19/feldman11a.html
AB  - Much work has been done on learning various classes of “simple” monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort by showing that polynomial-time algorithms cannot learn shallow monotone Boolean formulas under the uniform distribution in the well-studied Statistical Query (SQ) model. We introduce a new approach to understanding the learnability of “simple” monotone functions that is based on a recent characterization of Strong SQ learnability by Simon (2007). Using the characterization, we first show that depth-3 monotone formulas of size $n^{o(1)}$ cannot be learned by any polynomial-time SQ algorithm to accuracy $1 - 1/(\log n)^{\Omega(1)}$. We then build on this result to show that depth-4 monotone formulas of size $n^{o(1)}$ cannot be learned even to a certain $\frac 1 2 + o(1)$ accuracy in polynomial time. This improved hardness is achieved using a general technique that we introduce for amplifying the hardness of “mildly hard” learning problems in either the PAC or SQ framework. This hardness amplification for learning builds on the ideas in the work of O’Donnell (2004) on hardness amplification for approximating functions using small circuits, and is applicable to a number of other contexts. Finally, we demonstrate that our approach can also be used to reduce the well-known open problem of learning juntas to learning depth-3 monotone formulas.
ER  -
APA
Feldman, V., Lee, H.K. & Servedio, R.A. (2011). Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas. Proceedings of the 24th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 19:273-292. Available from https://proceedings.mlr.press/v19/feldman11a.html.
