Approximation Hardness for A Class of Sparse Optimization Problems

Yichen Chen, Yinyu Ye, Mengdi Wang; 20(38):1−27, 2019.

Abstract

In this paper, we consider three typical optimization problems with a convex loss function and a nonconvex sparse penalty or constraint. For the sparse penalized problem, we prove that finding an $\mathcal{O}(n^{c_1} d^{c_2})$-optimal solution to an $n \times d$ problem is strongly NP-hard for any $c_1, c_2 \in [0,1)$ such that $c_1 + c_2 < 1$. For two constrained versions of the sparse optimization problem, we show that it is intractable to approximately compute a solution path associated with increasing values of some tuning parameter. The hardness results apply to a broad class of loss functions and sparse penalties. They suggest that one cannot even approximately solve these three problems in polynomial time, unless P = NP.
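
For concreteness, a minimal sketch of the sparse penalized problem the abstract refers to is given below. The notation is assumed rather than quoted from the paper: $A \in \mathbb{R}^{n \times d}$ is a data matrix with rows $a_i$, $b \in \mathbb{R}^n$ a response vector, $\ell$ a convex loss, $p$ a nonconvex sparse penalty, and $\lambda > 0$ a tuning parameter; the exact conditions on $\ell$ and $p$ are stated in the paper itself.

$$ \min_{x \in \mathbb{R}^d} F(x), \qquad F(x) \;=\; \sum_{i=1}^{n} \ell\bigl(a_i^{\top} x,\, b_i\bigr) \;+\; \lambda \sum_{j=1}^{d} p\bigl(\lvert x_j \rvert\bigr). $$

In this notation, a point $\bar{x}$ is $\epsilon$-optimal if $F(\bar{x}) \le \min_{x} F(x) + \epsilon$; the abstract's hardness claim is that even the additive accuracy $\epsilon = \mathcal{O}(n^{c_1} d^{c_2})$ with $c_1 + c_2 < 1$ cannot be guaranteed in polynomial time unless P = NP.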

© JMLR 2019.