A Bayesian Framework for Online Classifier Ensemble

Qinxun Bai, Henry Lam, Stan Sclaroff
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1584-1592, 2014.

Abstract

We propose a Bayesian framework for recursively estimating the classifier weights in online learning of a classifier ensemble. In contrast with past methods, such as stochastic gradient descent or online boosting, our framework estimates the weights in terms of evolving posterior distributions. For a specified class of loss functions, we show that it is possible to formulate a suitably defined likelihood function and hence use the posterior distribution as an approximation to the global empirical loss minimizer. If the stream of training data is sampled from a stationary process, we can also show that our framework converges to the expected loss minimizer at a faster rate than standard stochastic gradient descent. In experiments with real-world datasets, our formulation often performs better than online boosting algorithms.
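To make the recursive estimation concrete, here is a minimal Python sketch of the general idea, not the authors' exact estimator: the posterior over ensemble weights is represented by a set of particles, and each incoming example reweights the particles through a pseudo-likelihood proportional to exp(-lambda * loss). The exponentiated-loss likelihood, the logistic surrogate loss, and the particle representation are illustrative assumptions; the names (ensemble_margin, smooth_loss, update) are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    n_classifiers = 5   # size of the ensemble
    n_particles = 200   # posterior represented by weighted samples

    # Prior: particles drawn uniformly from the probability simplex.
    particles = rng.dirichlet(np.ones(n_classifiers), size=n_particles)
    log_post = np.zeros(n_particles)  # unnormalized log-posterior weights

    def ensemble_margin(w, h, y):
        """Margin of the weighted vote; h holds base-classifier
        predictions in {-1, +1}, y is the true label."""
        return y * np.dot(w, h)

    def smooth_loss(margin):
        """Logistic surrogate loss; stands in for the paper's
        'specified class of loss functions'."""
        return np.log1p(np.exp(-margin))

    def update(h_t, y_t, lam=1.0):
        """Recursive Bayesian update: fold example t into the posterior
        via a pseudo-likelihood proportional to exp(-lam * loss)."""
        global log_post
        margins = np.array([ensemble_margin(w, h_t, y_t) for w in particles])
        log_post -= lam * smooth_loss(margins)

    def posterior_mean():
        """Point estimate of the ensemble weights: posterior mean."""
        p = np.exp(log_post - log_post.max())
        p /= p.sum()
        return p @ particles

    # Toy stream: classifier 0 is reliable, the rest are noise.
    for _ in range(500):
        y = rng.choice([-1, 1])
        h = rng.choice([-1, 1], size=n_classifiers)
        h[0] = y if rng.random() < 0.9 else -y
        update(h, y)

    print(posterior_mean())  # mass should concentrate on classifier 0

On this toy stream the posterior mean concentrates on the one reliable base classifier, illustrating how the evolving posterior plays the role that a running point estimate plays in stochastic gradient descent.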

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-bai14,
  title     = {A Bayesian Framework for Online Classifier Ensemble},
  author    = {Bai, Qinxun and Lam, Henry and Sclaroff, Stan},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1584--1592},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/bai14.pdf},
  url       = {https://proceedings.mlr.press/v32/bai14.html},
  abstract  = {We propose a Bayesian framework for recursively estimating the classifier weights in online learning of a classifier ensemble. In contrast with past methods, such as stochastic gradient descent or online boosting, our framework estimates the weights in terms of evolving posterior distributions. For a specified class of loss functions, we show that it is possible to formulate a suitably defined likelihood function and hence use the posterior distribution as an approximation to the global empirical loss minimizer. If the stream of training data is sampled from a stationary process, we can also show that our framework admits a superior rate of convergence to the expected loss minimizer than is possible with standard stochastic gradient descent. In experiments with real-world datasets, our formulation often performs better than online boosting algorithms.}
}
Endnote
%0 Conference Paper
%T A Bayesian Framework for Online Classifier Ensemble
%A Qinxun Bai
%A Henry Lam
%A Stan Sclaroff
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-bai14
%I PMLR
%P 1584--1592
%U https://proceedings.mlr.press/v32/bai14.html
%V 32
%N 2
%X We propose a Bayesian framework for recursively estimating the classifier weights in online learning of a classifier ensemble. In contrast with past methods, such as stochastic gradient descent or online boosting, our framework estimates the weights in terms of evolving posterior distributions. For a specified class of loss functions, we show that it is possible to formulate a suitably defined likelihood function and hence use the posterior distribution as an approximation to the global empirical loss minimizer. If the stream of training data is sampled from a stationary process, we can also show that our framework admits a superior rate of convergence to the expected loss minimizer than is possible with standard stochastic gradient descent. In experiments with real-world datasets, our formulation often performs better than online boosting algorithms.
RIS
TY - CPAPER
TI - A Bayesian Framework for Online Classifier Ensemble
AU - Qinxun Bai
AU - Henry Lam
AU - Stan Sclaroff
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/06/18
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-bai14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 2
SP - 1584
EP - 1592
L1 - http://proceedings.mlr.press/v32/bai14.pdf
UR - https://proceedings.mlr.press/v32/bai14.html
AB - We propose a Bayesian framework for recursively estimating the classifier weights in online learning of a classifier ensemble. In contrast with past methods, such as stochastic gradient descent or online boosting, our framework estimates the weights in terms of evolving posterior distributions. For a specified class of loss functions, we show that it is possible to formulate a suitably defined likelihood function and hence use the posterior distribution as an approximation to the global empirical loss minimizer. If the stream of training data is sampled from a stationary process, we can also show that our framework admits a superior rate of convergence to the expected loss minimizer than is possible with standard stochastic gradient descent. In experiments with real-world datasets, our formulation often performs better than online boosting algorithms.
ER -
APA
Bai, Q., Lam, H. & Sclaroff, S. (2014). A Bayesian Framework for Online Classifier Ensemble. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1584-1592. Available from https://proceedings.mlr.press/v32/bai14.html.

Related Material

Download PDF: http://proceedings.mlr.press/v32/bai14.pdf