A Bayesian Divergence Prior for Classifier Adaptation

Xiao Li, Jeff Bilmes
Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, PMLR 2:275-282, 2007.

Abstract

Adaptation of statistical classifiers is critical when a target (or testing) distribution is different from the distribution that governs training data. In such cases, a classifier optimized for the training distribution needs to be adapted for optimal use in the target distribution. This paper presents a Bayesian “divergence prior” for generic classifier adaptation. Instantiations of this prior lead to simple yet principled adaptation strategies for a variety of classifiers, which yield superior performance in practice. In addition, this paper derives several adaptation error bounds by applying the divergence prior in the PAC-Bayesian setting.
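For orientation, here is a minimal sketch of the general shape such a divergence prior can take, in illustrative notation rather than the paper's own (the symbols λ, λ′, B, and D below are assumptions of this sketch, not reproduced from the paper):

% Hedged sketch, not the paper's exact formulation: \lambda are the
% source-trained (training-distribution) parameters, \lambda' are the
% adapted (target-distribution) parameters, B > 0 sets how strongly the
% prior ties the adapted model to the source model, and D is a divergence
% (e.g. KL) between the distributions the two parameter settings induce.
\[
  p(\lambda') \;\propto\; \exp\!\bigl(-B\,D\bigl(p_{\lambda}\,\|\,p_{\lambda'}\bigr)\bigr)
\]

Read this way, MAP adaptation under such a prior amounts to fitting the target data while penalizing divergence from the source-trained classifier, which is consistent with the abstract's description of simple yet principled adaptation strategies.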

Cite this Paper


BibTeX
@InProceedings{pmlr-v2-li07a,
  title     = {A Bayesian Divergence Prior for Classifier Adaptation},
  author    = {Li, Xiao and Bilmes, Jeff},
  booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
  pages     = {275--282},
  year      = {2007},
  editor    = {Meila, Marina and Shen, Xiaotong},
  volume    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Juan, Puerto Rico},
  month     = {21--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v2/li07a/li07a.pdf},
  url       = {https://proceedings.mlr.press/v2/li07a.html},
  abstract  = {Adaptation of statistical classifiers is critical when a target (or testing) distribution is different from the distribution that governs training data. In such cases, a classifier optimized for the training distribution needs to be adapted for optimal use in the target distribution. This paper presents a Bayesian “divergence prior” for generic classifier adaptation. Instantiations of this prior lead to simple yet principled adaptation strategies for a variety of classifiers, which yield superior performance in practice. In addition, this paper derives several adaptation error bounds by applying the divergence prior in the PAC-Bayesian setting.}
}
Endnote
%0 Conference Paper
%T A Bayesian Divergence Prior for Classifier Adaptation
%A Xiao Li
%A Jeff Bilmes
%B Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2007
%E Marina Meila
%E Xiaotong Shen
%F pmlr-v2-li07a
%I PMLR
%P 275--282
%U https://proceedings.mlr.press/v2/li07a.html
%V 2
%X Adaptation of statistical classifiers is critical when a target (or testing) distribution is different from the distribution that governs training data. In such cases, a classifier optimized for the training distribution needs to be adapted for optimal use in the target distribution. This paper presents a Bayesian “divergence prior” for generic classifier adaptation. Instantiations of this prior lead to simple yet principled adaptation strategies for a variety of classifiers, which yield superior performance in practice. In addition, this paper derives several adaptation error bounds by applying the divergence prior in the PAC-Bayesian setting.
RIS
TY - CPAPER
TI - A Bayesian Divergence Prior for Classifier Adaptation
AU - Xiao Li
AU - Jeff Bilmes
BT - Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics
DA - 2007/03/11
ED - Marina Meila
ED - Xiaotong Shen
ID - pmlr-v2-li07a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 2
SP - 275
EP - 282
L1 - http://proceedings.mlr.press/v2/li07a/li07a.pdf
UR - https://proceedings.mlr.press/v2/li07a.html
AB - Adaptation of statistical classifiers is critical when a target (or testing) distribution is different from the distribution that governs training data. In such cases, a classifier optimized for the training distribution needs to be adapted for optimal use in the target distribution. This paper presents a Bayesian “divergence prior” for generic classifier adaptation. Instantiations of this prior lead to simple yet principled adaptation strategies for a variety of classifiers, which yield superior performance in practice. In addition, this paper derives several adaptation error bounds by applying the divergence prior in the PAC-Bayesian setting.
ER -
APA
Li, X. & Bilmes, J. (2007). A Bayesian Divergence Prior for Classifier Adaptation. Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 2:275-282. Available from https://proceedings.mlr.press/v2/li07a.html.