## Infinite-σ Limits For Tikhonov Regularization

**Ross A. Lippert, Ryan M. Rifkin**; 7(30):855–876, 2006.

### Abstract

We consider the problem of Tikhonov regularization with a general
convex loss function: this formalism includes
support vector machines and regularized least squares. For a family of
kernels that includes the Gaussian, parameterized by a "bandwidth"
parameter σ, we characterize the limiting solution as σ
→ ∞. In particular, we show that if we
set the regularization parameter λ = λ̃σ^{-2p}, the regularization term of the Tikhonov
problem tends to an indicator function on polynomials of degree
⌊*p*⌋ (with residual regularization in the case where *p*
∈ ℤ). The proof rests on two key ideas: *epi-convergence*, a
notion of functional convergence under which limits of minimizers
converge to minimizers of limits, and a *value-based formulation
of learning*, where we work with regularization on the function output
values (*y*) as opposed to the function expansion coefficients in the
RKHS. Our result generalizes and unifies previous results in this
area.
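The limiting behavior can be illustrated numerically. The following sketch is not code from the paper; it is a hypothetical example of Gaussian-kernel regularized least squares in which, with λ = λ̃σ^{-2p} and p = 2.5, the fitted function should approach the ordinary least-squares polynomial of degree ⌊p⌋ = 2 as σ grows.

```python
import numpy as np

# Hypothetical illustration (not from the paper): Gaussian-kernel
# regularized least squares, k(x, z) = exp(-(x - z)^2 / (2 sigma^2)),
# with lambda = lambda_tilde * sigma**(-2*p).

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20)
y = x**2 + 0.1 * rng.standard_normal(x.size)  # roughly quadratic data

def rls_fit(x, y, sigma, lam):
    """Fitted values of Gaussian-kernel RLS at the training points."""
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma**2))
    c = np.linalg.solve(K + lam * np.eye(x.size), y)
    return K @ c

p, lam_tilde = 2.5, 1.0
poly = np.polynomial.Polynomial.fit(x, y, deg=2)(x)  # degree-2 LS fit

errs = []
for sigma in (1.0, 10.0, 100.0):
    f = rls_fit(x, y, sigma, lam_tilde * sigma ** (-2.0 * p))
    errs.append(float(np.max(np.abs(f - poly))))
# errs should shrink as sigma grows, reflecting the sigma -> infinity
# limit: the RLS solution approaches the degree-floor(p) polynomial fit.
```

Here the deviation from the degree-2 polynomial least-squares fit should be markedly smaller at σ = 100 than at σ = 1, consistent with the stated limit.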

© JMLR 2006.