## Value Regularization and Fenchel Duality

**Ryan M. Rifkin, Ross A. Lippert**; 8(17):441−479, 2007.

### Abstract

Regularization is an approach to function learning that balances fit
and smoothness. In practice, we search for a function *f* with a
finite representation *f* = Σ_{i} *c*_{i} φ_{i}(·). In most treatments, the
*c*_{i} are the primary objects of study. We consider
*value regularization*, constructing optimization problems in which the predicted values at the training points are the primary variables, and therefore the central objects of study. Although this is a simple change, it has profound consequences. From convex conjugacy and the theory of Fenchel duality, we derive separate optimality conditions for the regularization and loss portions of the learning problem; this technique yields clean and short derivations of standard algorithms. This framework is ideally suited to studying many other phenomena at the intersection of learning theory and optimization. We obtain a value-based variant of the representer theorem, which underscores the transductive nature of regularization in reproducing kernel Hilbert spaces. We unify and extend previous results on learning kernel functions, with very simple proofs. We analyze the use of unregularized bias terms in optimization problems, and low-rank approximations to kernel matrices, obtaining new results in these areas. In summary, the combination of value regularization and Fenchel duality is a valuable tool for studying the optimization problems in machine learning.
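The shift from coefficients to values can be sketched concretely for kernel ridge regression (squared loss). This is not the authors' code; it is a minimal NumPy illustration, with an arbitrary Gaussian kernel, that the coefficient-form problem min_c ||Kc − t||² + λ c<sup>T</sup>Kc and the value-form problem min_y ||y − t||² + λ y<sup>T</sup>K<sup>−1</sup>y yield the same predicted values at the training points:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))   # toy inputs
t = rng.normal(size=20)        # toy targets
lam = 0.1                      # regularization strength

# Gaussian kernel matrix (an arbitrary choice for illustration).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-0.5 * sq_dists)

# Coefficient form: stationarity of ||Kc - t||^2 + lam * c^T K c
# gives c = (K + lam I)^{-1} t; predicted values are y = K c.
c = np.linalg.solve(K + lam * np.eye(20), t)
y_coef = K @ c

# Value form: stationarity of ||y - t||^2 + lam * y^T K^{-1} y
# gives (I + lam K^{-1}) y = t, i.e. y = K (K + lam I)^{-1} t.
y_val = K @ np.linalg.solve(K + lam * np.eye(20), t)

print(np.allclose(y_coef, y_val))  # the two formulations agree
```

Treating the vector y of predicted values as the optimization variable, with y<sup>T</sup>K<sup>−1</sup>y as the regularizer, is the simple change of perspective the abstract refers to.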

© JMLR 2007.