On the Reducibility of Submodular Functions

Jincheng Mei, Hao Zhang, Bao-Liang Lu
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:186-194, 2016.

Abstract

The scalability of submodular optimization methods is critical for their usability in practice. In this paper, we study the reducibility of submodular functions, a property that enables us to reduce the solution space of submodular optimization problems without performance loss. We introduce the concept of reducibility using marginal gains. Then we show that by adding perturbation, we can endow irreducible functions with reducibility, based on which we propose the perturbation-reduction optimization framework. Our theoretical analysis proves that given the perturbation scales, the reducibility gain could be computed, and the performance loss has additive upper bounds. We further conduct empirical studies and the results demonstrate that our proposed framework significantly accelerates existing optimization methods for irreducible submodular functions with a cost of only small performance losses.
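The paper defines reducibility through marginal gains, which in turn rest on the diminishing-returns property of submodular functions. As an illustrative sketch (not code from the paper), the snippet below uses a toy set-cover objective — a hypothetical `coverage` map invented here for the example — to compute marginal gains and brute-force check the submodularity inequality f(A ∪ {v}) − f(A) ≥ f(B ∪ {v}) − f(B) for all A ⊆ B and v ∉ B:

```python
from itertools import combinations

# Hypothetical ground set for illustration: each item covers some elements.
coverage = {
    'a': {1, 2, 3},
    'b': {2, 3, 4},
    'c': {4, 5},
}

def f(S):
    """Set-cover objective: number of elements covered by S."""
    covered = set()
    for v in S:
        covered |= coverage[v]
    return len(covered)

def marginal_gain(S, v):
    """The marginal gain f(S ∪ {v}) - f(S) of adding v to S."""
    return f(S | {v}) - f(S)

def is_submodular():
    """Brute-force check of diminishing returns:
    for all A ⊆ B and v ∉ B, gain w.r.t. A >= gain w.r.t. B."""
    ground = set(coverage)
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    for A in subsets:
        for B in subsets:
            if A <= B:
                for v in ground - B:
                    if marginal_gain(A, v) < marginal_gain(B, v):
                        return False
    return True

print(is_submodular())                                    # True
print(marginal_gain(set(), 'b'), marginal_gain({'a'}, 'b'))  # 3 1
```

Note how the gain of `'b'` shrinks from 3 to 1 once `'a'` is already in the set — exactly the diminishing-returns behavior that marginal-gain-based reducibility conditions exploit.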

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-mei16,
  title     = {On the Reducibility of Submodular Functions},
  author    = {Mei, Jincheng and Zhang, Hao and Lu, Bao-Liang},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {186--194},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/mei16.pdf},
  url       = {https://proceedings.mlr.press/v51/mei16.html},
  abstract  = {The scalability of submodular optimization methods is critical for their usability in practice. In this paper, we study the reducibility of submodular functions, a property that enables us to reduce the solution space of submodular optimization problems without performance loss. We introduce the concept of reducibility using marginal gains. Then we show that by adding perturbation, we can endow irreducible functions with reducibility, based on which we propose the perturbation-reduction optimization framework. Our theoretical analysis proves that given the perturbation scales, the reducibility gain could be computed, and the performance loss has additive upper bounds. We further conduct empirical studies and the results demonstrate that our proposed framework significantly accelerates existing optimization methods for irreducible submodular functions with a cost of only small performance losses.}
}
Endnote
%0 Conference Paper
%T On the Reducibility of Submodular Functions
%A Jincheng Mei
%A Hao Zhang
%A Bao-Liang Lu
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-mei16
%I PMLR
%P 186--194
%U https://proceedings.mlr.press/v51/mei16.html
%V 51
%X The scalability of submodular optimization methods is critical for their usability in practice. In this paper, we study the reducibility of submodular functions, a property that enables us to reduce the solution space of submodular optimization problems without performance loss. We introduce the concept of reducibility using marginal gains. Then we show that by adding perturbation, we can endow irreducible functions with reducibility, based on which we propose the perturbation-reduction optimization framework. Our theoretical analysis proves that given the perturbation scales, the reducibility gain could be computed, and the performance loss has additive upper bounds. We further conduct empirical studies and the results demonstrate that our proposed framework significantly accelerates existing optimization methods for irreducible submodular functions with a cost of only small performance losses.
RIS
TY  - CPAPER
TI  - On the Reducibility of Submodular Functions
AU  - Jincheng Mei
AU  - Hao Zhang
AU  - Bao-Liang Lu
BT  - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
DA  - 2016/05/02
ED  - Arthur Gretton
ED  - Christian C. Robert
ID  - pmlr-v51-mei16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 51
SP  - 186
EP  - 194
L1  - http://proceedings.mlr.press/v51/mei16.pdf
UR  - https://proceedings.mlr.press/v51/mei16.html
AB  - The scalability of submodular optimization methods is critical for their usability in practice. In this paper, we study the reducibility of submodular functions, a property that enables us to reduce the solution space of submodular optimization problems without performance loss. We introduce the concept of reducibility using marginal gains. Then we show that by adding perturbation, we can endow irreducible functions with reducibility, based on which we propose the perturbation-reduction optimization framework. Our theoretical analysis proves that given the perturbation scales, the reducibility gain could be computed, and the performance loss has additive upper bounds. We further conduct empirical studies and the results demonstrate that our proposed framework significantly accelerates existing optimization methods for irreducible submodular functions with a cost of only small performance losses.
ER  -
APA
Mei, J., Zhang, H. & Lu, B. (2016). On the Reducibility of Submodular Functions. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:186-194. Available from https://proceedings.mlr.press/v51/mei16.html.