## Attribute-Efficient and Non-adaptive Learning of Parities and DNF Expressions

**Vitaly Feldman**; 8(51):1431−1460, 2007.

### Abstract

We consider the problems of attribute-efficient PAC learning of two
well-studied concept classes: parity functions and DNF expressions
over {0,1}^{n}. We show that attribute-efficient learning of parities
with respect to the uniform distribution is equivalent to decoding
high-rate random linear codes from a low number of errors, a
long-standing open problem in coding theory. This is the first
evidence that attribute-efficient learning of a natural PAC learnable
concept class can be computationally hard.
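As a quick illustration of the objects involved: a parity function over {0,1}^n is the XOR of the bits indexed by a fixed (unknown) subset S of the attributes, and attribute-efficient learning asks for complexity scaling with |S| and log n rather than with n. A minimal sketch, with hypothetical names:

```python
# A parity function over {0,1}^n: XOR of the bits of x indexed by the
# fixed relevant set S. Attribute-efficient learning aims to identify S
# with cost polynomial in |S| and log n, not in n.

def parity(x, S):
    """Return the XOR of the bits of x at the positions in S."""
    return sum(x[i] for i in S) % 2

x = [1, 0, 1, 1, 0, 1]
S = {0, 2, 5}          # the (unknown) relevant attributes
print(parity(x, S))    # 1 XOR 1 XOR 1 = 1
```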

An algorithm is said to use membership queries (MQs)
*non-adaptively* if the points at which the algorithm asks MQs do not
depend on the target concept. Using a simple non-adaptive parity
learning algorithm and a modification of Levin's algorithm for
locating a weakly-correlated parity due to Bshouty et al. (1999), we give the
first non-adaptive and attribute-efficient algorithm for learning DNF
with respect to the uniform distribution. Our algorithm runs in time
*Õ*(*ns*^{4}/ε) and uses *Õ*(*s*^{4}
· log^{2}*n*/ε) non-adaptive MQs, where *s* is the number of
terms in the shortest DNF representation of the target concept. The
algorithm improves on the best previous algorithm for learning DNF
(of Bshouty et al., 1999) and can also be easily modified to tolerate
random persistent classification noise in MQs.
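To make the non-adaptivity condition concrete, here is a toy noiseless parity learner (an illustration only, not the paper's attribute-efficient algorithm): its n query points, the unit vectors, are fixed before any answer is seen, so they cannot depend on the target concept.

```python
# Non-adaptive MQ learning of a parity (noiseless toy illustration):
# all query points are committed to up front, independent of the target,
# and the answers are only read afterwards.

def parity(x, S):
    return sum(x[i] for i in S) % 2

def learn_parity_nonadaptive(n, oracle):
    # Fixed query set: the n unit vectors e_1, ..., e_n.
    queries = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    answers = [oracle(q) for q in queries]
    # parity(e_i, S) = 1 exactly when attribute i is relevant.
    return {i for i, a in enumerate(answers) if a == 1}

n, S = 6, {0, 2, 5}
learned = learn_parity_nonadaptive(n, lambda x: parity(x, S))
print(learned == S)   # True
```

This toy version uses n queries; the point of attribute efficiency is to get by with roughly poly(s) · polylog(n) queries instead.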

© JMLR 2007.