Markov Latent Feature Models

Aonan Zhang, John Paisley
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1129-1137, 2016.

Abstract

We introduce Markov latent feature models (MLFM), a sparse latent feature model that arises naturally from a simple sequential construction. The key idea is to interpret each state of a sequential process as corresponding to a latent feature, and the set of states visited between two null-state visits as picking out features for an observation. We show that, given some natural constraints, we can represent this stochastic process as a mixture of recurrent Markov chains. In this way we can perform correlated latent feature modeling for the sparse coding problem. We demonstrate two cases in which we define finite and infinite latent feature models constructed from first-order Markov chains, and derive their associated scalable inference algorithms. We show empirical results on a genome analysis task and an image denoising task.
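To make the sequential construction concrete, the following is a minimal simulation sketch in Python. It is an illustration of the generative intuition only, not the paper's full mixture-of-recurrent-chains model or its inference algorithms. The names here are assumptions for the sketch: a finite first-order Markov chain over states {0, 1, ..., K} with state 0 designated as the null state and a row-stochastic transition matrix P; the distinct non-null states visited between two consecutive null-state visits form the feature set for one observation.

import numpy as np

# A hypothetical sketch of the sequential construction described in the
# abstract, not the authors' exact model. Assumptions: state 0 is the null
# state, states 1..K are latent features, and the walk is a first-order
# Markov chain with row-stochastic transition matrix P. The sketch relies
# on the chain returning to the null state with probability 1 (the paper's
# recurrence condition); a random Dirichlet matrix satisfies this.

rng = np.random.default_rng(0)
K = 5                                            # number of latent features (assumed)
P = rng.dirichlet(np.ones(K + 1), size=K + 1)    # (K+1) x (K+1) transition matrix

def sample_feature_set(P, rng):
    """Run the chain from the null state; the distinct non-null states
    visited before the next return to the null state are the features
    picked out for one observation."""
    state, features = 0, set()
    while True:
        state = rng.choice(len(P), p=P[state])   # one Markov transition
        if state == 0:                           # returned to the null state: stop
            return features
        features.add(state)

# Each call yields the sparse feature set for one observation.
for n in range(3):
    print(f"observation {n}: features {sorted(sample_feature_set(P, rng))}")

Because consecutive transitions share the matrix P, features that tend to follow one another in the chain co-occur in the sampled sets, which is the sense in which the construction yields correlated latent features.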

Cite this Paper

BibTeX
@InProceedings{pmlr-v48-zhangf16,
  title     = {Markov Latent Feature Models},
  author    = {Zhang, Aonan and Paisley, John},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1129--1137},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/zhangf16.pdf},
  url       = {https://proceedings.mlr.press/v48/zhangf16.html},
  abstract  = {We introduce Markov latent feature models (MLFM), a sparse latent feature model that arises naturally from a simple sequential construction. The key idea is to interpret each state of a sequential process as corresponding to a latent feature, and the set of states visited between two null-state visits as picking out features for an observation. We show that, given some natural constraints, we can represent this stochastic process as a mixture of recurrent Markov chains. In this way we can perform correlated latent feature modeling for the sparse coding problem. We demonstrate two cases in which we define finite and infinite latent feature models constructed from first-order Markov chains, and derive their associated scalable inference algorithms. We show empirical results on a genome analysis task and an image denoising task.}
}
Endnote
%0 Conference Paper
%T Markov Latent Feature Models
%A Aonan Zhang
%A John Paisley
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-zhangf16
%I PMLR
%P 1129--1137
%U https://proceedings.mlr.press/v48/zhangf16.html
%V 48
%X We introduce Markov latent feature models (MLFM), a sparse latent feature model that arises naturally from a simple sequential construction. The key idea is to interpret each state of a sequential process as corresponding to a latent feature, and the set of states visited between two null-state visits as picking out features for an observation. We show that, given some natural constraints, we can represent this stochastic process as a mixture of recurrent Markov chains. In this way we can perform correlated latent feature modeling for the sparse coding problem. We demonstrate two cases in which we define finite and infinite latent feature models constructed from first-order Markov chains, and derive their associated scalable inference algorithms. We show empirical results on a genome analysis task and an image denoising task.
RIS
TY - CPAPER
TI - Markov Latent Feature Models
AU - Aonan Zhang
AU - John Paisley
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-zhangf16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 1129
EP - 1137
L1 - http://proceedings.mlr.press/v48/zhangf16.pdf
UR - https://proceedings.mlr.press/v48/zhangf16.html
AB - We introduce Markov latent feature models (MLFM), a sparse latent feature model that arises naturally from a simple sequential construction. The key idea is to interpret each state of a sequential process as corresponding to a latent feature, and the set of states visited between two null-state visits as picking out features for an observation. We show that, given some natural constraints, we can represent this stochastic process as a mixture of recurrent Markov chains. In this way we can perform correlated latent feature modeling for the sparse coding problem. We demonstrate two cases in which we define finite and infinite latent feature models constructed from first-order Markov chains, and derive their associated scalable inference algorithms. We show empirical results on a genome analysis task and an image denoising task.
ER -
APA
Zhang, A. & Paisley, J. (2016). Markov Latent Feature Models. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1129-1137. Available from https://proceedings.mlr.press/v48/zhangf16.html.
