Efficiently Enforcing Diversity in Multi-Output Structured Prediction

Abner Guzman-Rivera, Pushmeet Kohli, Dhruv Batra, Rob Rutenbar
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:284-292, 2014.

Abstract

This paper proposes a novel method for efficiently generating multiple diverse predictions for structured prediction problems. Existing methods like SDPPs or DivMBest work by making a series of predictions where each prediction is made after considering the predictions that came before it. Such approaches are inherently sequential and computationally expensive. In contrast, our method, Diverse Multiple Choice Learning, learns a set of models to make multiple independent, yet diverse, predictions at test-time. We achieve this by including a diversity-encouraging term in the loss function used for training the models. This approach encourages diversity in the predictions while preserving computational efficiency at test-time. Experimental results on a number of challenging problems show that our method learns models that not only predict more diverse results than competing methods, but are also able to generalize better and produce results with high test accuracy.
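The training idea the abstract describes can be illustrated with a small sketch: an ensemble where each example's error is charged only to the best-performing member (an oracle, or "multiple choice", loss), plus a term that rewards members for making different predictions. This is a minimal toy illustration, not the paper's implementation; the linear members, the toy two-mode regression task, and the hyperparameter values (`lam`, `lr`) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ambiguous regression task: each input x has two plausible targets,
# roughly +x or -x, so no single model can fit both modes at once.
N = 200
X = rng.uniform(-1.0, 1.0, size=(N, 1))
y = np.where(rng.random(N) < 0.5, X[:, 0], -X[:, 0]) + rng.normal(0, 0.05, N)

K = 2                                 # ensemble size
W = rng.normal(0, 0.1, size=(K, 1))   # one linear predictor per member
lam = 0.05                            # diversity weight (assumed value)
lr = 0.1

def oracle_loss(W):
    """Mean over examples of the best member's squared error."""
    preds = X @ W.T                   # (N, K) predictions of all members
    return np.min((preds - y[:, None]) ** 2, axis=1).mean()

before = oracle_loss(W)
for _ in range(500):
    preds = X @ W.T
    errs = (preds - y[:, None]) ** 2
    best = errs.argmin(axis=1)        # oracle assignment: winner per example
    grad = np.zeros_like(W)
    for k in range(K):
        mask = best == k
        if mask.any():
            # Gradient of the oracle (min) loss flows only through the winner.
            grad[k] += (2 * (preds[mask, k] - y[mask])[:, None] * X[mask]).mean(axis=0)
        for j in range(K):
            if j != k:
                # Diversity term: reward members for predicting differently
                # (negative squared difference between member predictions).
                diff = (preds[:, k] - preds[:, j])[:, None]
                grad[k] += (-2.0 * lam * diff * X).mean(axis=0)
    W -= lr * grad
after = oracle_loss(W)
```

After training, the members specialize to different modes of the ambiguous data, so the oracle loss drops well below what any single predictor could achieve, while all K predictions are still made independently in one parallel pass at test-time.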

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-guzman-rivera14,
  title     = {{Efficiently Enforcing Diversity in Multi-Output Structured Prediction}},
  author    = {Guzman-Rivera, Abner and Kohli, Pushmeet and Batra, Dhruv and Rutenbar, Rob},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {284--292},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/guzman-rivera14.pdf},
  url       = {https://proceedings.mlr.press/v33/guzman-rivera14.html},
  abstract  = {This paper proposes a novel method for efficiently generating multiple diverse predictions for structured prediction problems. Existing methods like SDPPs or DivMBest work by making a series of predictions where each prediction is made after considering the predictions that came before it. Such approaches are inherently sequential and computationally expensive. In contrast, our method, Diverse Multiple Choice Learning, learns a set of models to make multiple independent, yet diverse, predictions at test-time. We achieve this by including a diversity-encouraging term in the loss function used for training the models. This approach encourages diversity in the predictions while preserving computational efficiency at test-time. Experimental results on a number of challenging problems show that our method learns models that not only predict more diverse results than competing methods, but are also able to generalize better and produce results with high test accuracy.}
}
Endnote
%0 Conference Paper
%T Efficiently Enforcing Diversity in Multi-Output Structured Prediction
%A Abner Guzman-Rivera
%A Pushmeet Kohli
%A Dhruv Batra
%A Rob Rutenbar
%B Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2014
%E Samuel Kaski
%E Jukka Corander
%F pmlr-v33-guzman-rivera14
%I PMLR
%P 284--292
%U https://proceedings.mlr.press/v33/guzman-rivera14.html
%V 33
%X This paper proposes a novel method for efficiently generating multiple diverse predictions for structured prediction problems. Existing methods like SDPPs or DivMBest work by making a series of predictions where each prediction is made after considering the predictions that came before it. Such approaches are inherently sequential and computationally expensive. In contrast, our method, Diverse Multiple Choice Learning, learns a set of models to make multiple independent, yet diverse, predictions at test-time. We achieve this by including a diversity-encouraging term in the loss function used for training the models. This approach encourages diversity in the predictions while preserving computational efficiency at test-time. Experimental results on a number of challenging problems show that our method learns models that not only predict more diverse results than competing methods, but are also able to generalize better and produce results with high test accuracy.
RIS
TY  - CPAPER
TI  - Efficiently Enforcing Diversity in Multi-Output Structured Prediction
AU  - Abner Guzman-Rivera
AU  - Pushmeet Kohli
AU  - Dhruv Batra
AU  - Rob Rutenbar
BT  - Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
DA  - 2014/04/02
ED  - Samuel Kaski
ED  - Jukka Corander
ID  - pmlr-v33-guzman-rivera14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 33
SP  - 284
EP  - 292
L1  - http://proceedings.mlr.press/v33/guzman-rivera14.pdf
UR  - https://proceedings.mlr.press/v33/guzman-rivera14.html
AB  - This paper proposes a novel method for efficiently generating multiple diverse predictions for structured prediction problems. Existing methods like SDPPs or DivMBest work by making a series of predictions where each prediction is made after considering the predictions that came before it. Such approaches are inherently sequential and computationally expensive. In contrast, our method, Diverse Multiple Choice Learning, learns a set of models to make multiple independent, yet diverse, predictions at test-time. We achieve this by including a diversity-encouraging term in the loss function used for training the models. This approach encourages diversity in the predictions while preserving computational efficiency at test-time. Experimental results on a number of challenging problems show that our method learns models that not only predict more diverse results than competing methods, but are also able to generalize better and produce results with high test accuracy.
ER  -
APA
Guzman-Rivera, A., Kohli, P., Batra, D. & Rutenbar, R. (2014). Efficiently Enforcing Diversity in Multi-Output Structured Prediction. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:284-292. Available from https://proceedings.mlr.press/v33/guzman-rivera14.html.
