A Stochastic Quasi-Newton Method for Online Convex Optimization

Nicol N. Schraudolph, Jin Yu, Simon Günter; JMLR W&P 2:436-443, 2007.

Abstract

We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS, and extending it to nonconvex optimization problems.
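As an illustration of the general idea, here is a minimal sketch of a memory-limited stochastic quasi-Newton loop on a synthetic least-squares problem. It is not the paper's algorithm: the authors' online (L)BFGS includes further modifications not reproduced here, and the data, hyperparameters (`m`, `lam`, `eta`, batch size), and damping term below are illustrative assumptions. The sketch shows the standard L-BFGS two-loop recursion combined with one trick commonly used to make curvature estimates consistent under stochastic sampling: evaluating both gradients of a curvature pair on the same minibatch.

```python
import numpy as np

def two_loop(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: approximates H^{-1} @ grad."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:  # usual initial-Hessian scaling heuristic
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

# Synthetic online least-squares problem (illustrative, not from the paper).
rng = np.random.default_rng(0)
n, d = 500, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_data = X @ w_true

def loss(w):
    return 0.5 * np.mean((X @ w - y_data) ** 2)

def minibatch_grad(w, idx):
    Xi = X[idx]
    return Xi.T @ (Xi @ w - y_data[idx]) / len(idx)

# Hypothetical hyperparameters, chosen for this toy problem only.
m, lam, eta = 10, 1e-2, 0.1
w = np.zeros(d)
s_list, y_list = [], []
for t in range(200):
    idx = rng.choice(n, size=20, replace=False)
    g = minibatch_grad(w, idx)
    w_new = w - eta * two_loop(g, s_list, y_list)
    # Evaluate both gradients on the SAME minibatch so the curvature
    # pair (s, y) is consistent despite stochastic sampling.
    g_new = minibatch_grad(w_new, idx)
    s = w_new - w
    yv = g_new - g + lam * s   # damping keeps y @ s > 0 (curvature condition)
    s_list.append(s)
    y_list.append(yv)
    if len(s_list) > m:        # memory-limited: keep only the last m pairs
        s_list.pop(0)
        y_list.pop(0)
    w = w_new
```

Because storage is capped at `m` vector pairs, cost per step is O(m·d) rather than O(d²), which is what makes the memory-limited variant scalable to very high-dimensional problems such as CRF parameter estimation.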




Page last modified on Sat Oct 27 18:32:47 BST 2007.

webmaster@jmlr.org — Copyright © JMLR 2000. All rights reserved.