Convex envelopes of complexity controlling penalties: the case against premature envelopment

Vladimir Jojic, Suchi Saria, Daphne Koller
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:399-406, 2011.

Abstract

Convex envelopes of the cardinality and rank functions, the $\ell_1$ norm and the nuclear norm, have gained immense popularity due to their sparsity-inducing properties. This gave rise to a natural approach to building objectives with sparse optima, whereby such a convex penalty is added to another objective. This heuristic approach to objective building does not always work, however. For example, adding an $\ell_1$ penalty to the KL-divergence fails to induce any sparsity, since the $\ell_1$ norm of any vector in the simplex is constant. A convex envelope of the KL-divergence and a cardinality penalty taken jointly, on the other hand, does trade off sparsity against KL-divergence. We consider two composite penalties, the elastic net and the fused lasso, each of which combines multiple desiderata. In both cases, we show that the hard objective whose relaxation yields these penalties can be approximated more tightly. Further, by construction, no convex approximation can improve on the ones we derive. Constructing a joint envelope across different parts of the objective thus provides a means of trading off tightness against computational cost.
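Two of the abstract's claims reduce to standard convex-analysis facts, sketched below with illustrative notation (the simplex $\Delta_n$, the weight $\lambda$, and the one-dimensional example are ours, not the paper's). First, the $\ell_1$ norm is constant on the probability simplex:

$$x \in \Delta_n = \Big\{x \in \mathbb{R}^n : x_i \ge 0, \ \textstyle\sum_i x_i = 1\Big\} \quad\Longrightarrow\quad \|x\|_1 = \sum_i |x_i| = \sum_i x_i = 1,$$

so adding $\lambda \|x\|_1$ to a KL-divergence objective over $\Delta_n$ merely shifts its value by the constant $\lambda$ and leaves the minimizer unchanged. Second, the convex envelope of a function $f$ over a convex set is its biconjugate,

$$f^{**}(x) = \sup_y \big(\langle x, y \rangle - f^*(y)\big), \qquad f^*(y) = \sup_x \big(\langle x, y \rangle - f(x)\big),$$

the pointwise largest convex function lying below $f$; an envelope taken jointly over the full objective therefore cannot be beaten by any other convex lower approximation. A one-dimensional illustration of the gap this creates: on the box $|x| \le 1$ with $0 < \lambda \le 1$, the joint envelope of $x^2 + \lambda\,\mathbf{1}[x \neq 0]$ works out to $2\sqrt{\lambda}\,|x|$ for $|x| \le \sqrt{\lambda}$ and $x^2 + \lambda$ otherwise (a reverse-Huber shape), which lies above the "prematurely enveloped" surrogate $x^2 + \lambda |x|$ obtained by relaxing the cardinality term alone.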

Cite this Paper


BibTeX
@InProceedings{pmlr-v15-jojic11a,
  title     = {Convex envelopes of complexity controlling penalties: the case against premature envelopment},
  author    = {Jojic, Vladimir and Saria, Suchi and Koller, Daphne},
  booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {399--406},
  year      = {2011},
  editor    = {Gordon, Geoffrey and Dunson, David and Dudík, Miroslav},
  volume    = {15},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {11--13 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v15/jojic11a/jojic11a.pdf},
  url       = {https://proceedings.mlr.press/v15/jojic11a.html}
}
APA
Jojic, V., Saria, S., & Koller, D. (2011). Convex envelopes of complexity controlling penalties: the case against premature envelopment. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 15:399-406. Available from https://proceedings.mlr.press/v15/jojic11a.html.
