
When Does Gradient Descent with Logistic Loss Find Interpolating Two-Layer Networks?

Niladri S. Chatterji, Philip M. Long, Peter L. Bartlett; 22(159):1−48, 2021.

Abstract

We study the training of finite-width two-layer smoothed ReLU networks for binary classification using the logistic loss. We show that gradient descent drives the training loss to zero if the initial loss is small enough. When the data satisfies certain cluster and separation conditions and the network is wide enough, we show that one step of gradient descent reduces the loss sufficiently that the first result applies.
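As a rough illustration of the setting described in the abstract (not the paper's exact construction), the sketch below trains a finite-width two-layer network with a smoothed ReLU activation for binary classification using the logistic loss and full-batch gradient descent. The quadratic "Huberized" smoothing, the toy Gaussian-blob data, the fixed outer-layer weights, the width, and the step size are all illustrative assumptions and do not reproduce the paper's conditions.

    # Minimal sketch: two-layer smoothed-ReLU network, logistic loss, gradient descent.
    # Illustrative assumptions: quadratic (Huberized) smoothing of ReLU, toy clustered
    # data, fixed outer-layer weights, arbitrary width and step size.
    import numpy as np

    rng = np.random.default_rng(0)

    def smoothed_relu(z, h=0.1):
        # 0 for z <= 0, quadratic on (0, h), linear beyond h: a C^1 smoothing of ReLU.
        return np.where(z <= 0, 0.0, np.where(z < h, z**2 / (2 * h), z - h / 2))

    def smoothed_relu_grad(z, h=0.1):
        return np.where(z <= 0, 0.0, np.where(z < h, z / h, 1.0))

    def forward(W, a, X, h=0.1):
        # Two-layer network f(x) = sum_j a_j * sigma(w_j . x); only W is trained here.
        return smoothed_relu(X @ W.T, h) @ a

    def logistic_loss(W, a, X, y, h=0.1):
        # Mean logistic loss log(1 + exp(-y f(x))), computed stably via logaddexp.
        margins = y * forward(W, a, X, h)
        return np.mean(np.logaddexp(0.0, -margins))

    def grad_W(W, a, X, y, h=0.1):
        pre = X @ W.T                        # (n, m) pre-activations
        margins = y * (smoothed_relu(pre, h) @ a)
        coef = -y / (1.0 + np.exp(margins))  # d loss / d f(x_i), shape (n,)
        # Chain rule: dL/dW_j = mean_i coef_i * a_j * sigma'(w_j . x_i) * x_i
        return (coef[:, None] * smoothed_relu_grad(pre, h) * a[None, :]).T @ X / len(y)

    # Toy clustered, well-separated data (two Gaussian blobs), m hidden units.
    n, d, m = 200, 5, 512
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * 2.0 + 0.1 * rng.standard_normal((n, d))
    W = rng.standard_normal((m, d)) / np.sqrt(d)
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)   # fixed outer layer

    eta = 0.5
    for t in range(500):
        W -= eta * grad_W(W, a, X, y)

    print("final training loss:", logistic_loss(W, a, X, y))

On such well-clustered, well-separated data the training loss decreases toward zero, which is the qualitative behavior the paper analyzes; the theorem's quantitative conditions on width, initialization, and data are not checked here.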

© JMLR 2021.