PAC-Bayesian Collective Stability

Ben London, Bert Huang, Ben Taskar, Lise Getoor
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:585-594, 2014.

Abstract

Recent results have shown that the generalization error of structured predictors decreases with both the number of examples and the size of each example, provided the data distribution has weak dependence and the predictor exhibits a smoothness property called collective stability. These results use an especially strong definition of collective stability that must hold uniformly over all inputs and all hypotheses in the class. We investigate whether weaker definitions of collective stability suffice. Using the PAC-Bayes framework, which is particularly amenable to our new definitions, we prove that generalization is indeed possible when uniform collective stability happens with high probability over draws of predictors (and inputs). We then derive a generalization bound for a class of structured predictors with variably convex inference, which suggests a novel learning objective that optimizes collective stability.
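
As background, here is a sketch of the uniform collective stability property referenced above, in notation assumed from the authors' earlier work on collective stability rather than taken from this page: a class $\mathcal{H}$ of structured predictors $h : \mathcal{X}^n \to \mathbb{R}^n$ has $\beta$-uniform collective stability if, for every $h \in \mathcal{H}$ and every pair of inputs $x, x' \in \mathcal{X}^n$ that differ in a single coordinate,

\[
  \| h(x) - h(x') \|_{1} \le \beta .
\]

The relaxation investigated in this paper requires such an inequality to hold only with high probability over a posterior distribution on $\mathcal{H}$ (and over draws of the input), rather than uniformly over the entire class.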

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-london14,
  title     = {{PAC-Bayesian Collective Stability}},
  author    = {London, Ben and Huang, Bert and Taskar, Ben and Getoor, Lise},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {585--594},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/london14.pdf},
  url       = {https://proceedings.mlr.press/v33/london14.html}
}
APA
London, B., Huang, B., Taskar, B. & Getoor, L. (2014). PAC-Bayesian Collective Stability. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:585-594. Available from https://proceedings.mlr.press/v33/london14.html.
