Variational Relevance Vector Machine for Tabular Data

Dmitry Kropotov, Dmitry Vetrov, Lior Wolf, Tal Hassner
Proceedings of 2nd Asian Conference on Machine Learning, PMLR 13:79-94, 2010.

Abstract

We adopt the Relevance Vector Machine (RVM) framework to handle cases of table-structured data such as image blocks and image descriptors. This is achieved by coupling the regularization coefficients of rows and columns of features. We present two variants of this new gridRVM framework, based on the way in which the regularization coefficients of the rows and columns are combined. Appropriate variational optimization algorithms are derived for inference within this framework. The consequent reduction in the number of parameters from the product of the table’s dimensions to the sum of its dimensions allows for better performance in the face of small training sets, resulting in improved resistance to overfitting, as well as providing better interpretation of results. These properties are demonstrated on synthetic data-sets as well as on a modern and challenging visual identification benchmark.
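To make the parameter-coupling idea concrete, here is a minimal sketch of the grid-structured prior described above. The combination rules shown (product and sum of row and column coefficients) and all variable names are assumptions for illustration only, not the paper's exact gridRVM formulation or notation.

```python
import numpy as np

# Hypothetical sketch: for an R x C table of features, each weight w[i, j]
# gets a prior precision built from a row coefficient alpha[i] and a column
# coefficient beta[j], instead of its own independent coefficient.
R, C = 8, 12                                # table dimensions (rows x columns)
alpha = np.ones(R)                          # row regularization coefficients
beta = np.ones(C)                           # column regularization coefficients

prec_product = np.outer(alpha, beta)        # multiplicative coupling (assumed variant)
prec_sum = alpha[:, None] + beta[None, :]   # additive coupling (assumed variant)

# A standard RVM learns one coefficient per weight: R * C parameters.
# Coupling rows and columns reduces this to R + C parameters.
print(R * C, "per-weight coefficients vs", R + C, "coupled coefficients")
```

Either coupling yields a full R x C grid of prior precisions while only R + C coefficients are actually learned, which is the source of the reduced overfitting and easier interpretation claimed in the abstract.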

Cite this Paper


BibTeX
@InProceedings{pmlr-v13-kropotov10a,
  title = {Variational Relevance Vector Machine for Tabular Data},
  author = {Kropotov, Dmitry and Vetrov, Dmitry and Wolf, Lior and Hassner, Tal},
  booktitle = {Proceedings of 2nd Asian Conference on Machine Learning},
  pages = {79--94},
  year = {2010},
  editor = {Sugiyama, Masashi and Yang, Qiang},
  volume = {13},
  series = {Proceedings of Machine Learning Research},
  address = {Tokyo, Japan},
  month = {08--10 Nov},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v13/kropotov10a/kropotov10a.pdf},
  url = {https://proceedings.mlr.press/v13/kropotov10a.html},
  abstract = {We adopt the Relevance Vector Machine (RVM) framework to handle cases of table-structured data such as image blocks and image descriptors. This is achieved by coupling the regularization coefficients of rows and columns of features. We present two variants of this new gridRVM framework, based on the way in which the regularization coefficients of the rows and columns are combined. Appropriate variational optimization algorithms are derived for inference within this framework. The consequent reduction in the number of parameters from the product of the table’s dimensions to the sum of its dimensions allows for better performance in the face of small training sets, resulting in improved resistance to overfitting, as well as providing better interpretation of results. These properties are demonstrated on synthetic data-sets as well as on a modern and challenging visual identification benchmark.}
}
Endnote
%0 Conference Paper
%T Variational Relevance Vector Machine for Tabular Data
%A Dmitry Kropotov
%A Dmitry Vetrov
%A Lior Wolf
%A Tal Hassner
%B Proceedings of 2nd Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2010
%E Masashi Sugiyama
%E Qiang Yang
%F pmlr-v13-kropotov10a
%I PMLR
%P 79--94
%U https://proceedings.mlr.press/v13/kropotov10a.html
%V 13
%X We adopt the Relevance Vector Machine (RVM) framework to handle cases of table-structured data such as image blocks and image descriptors. This is achieved by coupling the regularization coefficients of rows and columns of features. We present two variants of this new gridRVM framework, based on the way in which the regularization coefficients of the rows and columns are combined. Appropriate variational optimization algorithms are derived for inference within this framework. The consequent reduction in the number of parameters from the product of the table’s dimensions to the sum of its dimensions allows for better performance in the face of small training sets, resulting in improved resistance to overfitting, as well as providing better interpretation of results. These properties are demonstrated on synthetic data-sets as well as on a modern and challenging visual identification benchmark.
RIS
TY - CPAPER
TI - Variational Relevance Vector Machine for Tabular Data
AU - Dmitry Kropotov
AU - Dmitry Vetrov
AU - Lior Wolf
AU - Tal Hassner
BT - Proceedings of 2nd Asian Conference on Machine Learning
DA - 2010/10/31
ED - Masashi Sugiyama
ED - Qiang Yang
ID - pmlr-v13-kropotov10a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 13
SP - 79
EP - 94
L1 - http://proceedings.mlr.press/v13/kropotov10a/kropotov10a.pdf
UR - https://proceedings.mlr.press/v13/kropotov10a.html
AB - We adopt the Relevance Vector Machine (RVM) framework to handle cases of table-structured data such as image blocks and image descriptors. This is achieved by coupling the regularization coefficients of rows and columns of features. We present two variants of this new gridRVM framework, based on the way in which the regularization coefficients of the rows and columns are combined. Appropriate variational optimization algorithms are derived for inference within this framework. The consequent reduction in the number of parameters from the product of the table’s dimensions to the sum of its dimensions allows for better performance in the face of small training sets, resulting in improved resistance to overfitting, as well as providing better interpretation of results. These properties are demonstrated on synthetic data-sets as well as on a modern and challenging visual identification benchmark.
ER -
APA
Kropotov, D., Vetrov, D., Wolf, L. & Hassner, T. (2010). Variational Relevance Vector Machine for Tabular Data. Proceedings of 2nd Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 13:79-94. Available from https://proceedings.mlr.press/v13/kropotov10a.html.