Manifold Preserving Hierarchical Topic Models for Quantization and Approximation

Minje Kim, Paris Smaragdis
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1373-1381, 2013.

Abstract

We present two complementary topic models to address the analysis of mixture data lying on manifolds. First, we propose a quantization method with an additional mid-layer latent variable, which selects only data points that best preserve the manifold structure of the input data. In order to address the case of modeling all the in-between parts of that manifold using this reduced representation of the input, we introduce a new model that provides a manifold-aware interpolation method. We demonstrate the advantages of these models with experiments on the hand-written digit recognition and the speech source separation tasks.

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-kim13a,
  title     = {Manifold Preserving Hierarchical Topic Models for Quantization and Approximation},
  author    = {Kim, Minje and Smaragdis, Paris},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {1373--1381},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/kim13a.pdf},
  url       = {https://proceedings.mlr.press/v28/kim13a.html},
  abstract  = {We present two complementary topic models to address the analysis of mixture data lying on manifolds. First, we propose a quantization method with an additional mid-layer latent variable, which selects only data points that best preserve the manifold structure of the input data. In order to address the case of modeling all the in-between parts of that manifold using this reduced representation of the input, we introduce a new model that provides a manifold-aware interpolation method. We demonstrate the advantages of these models with experiments on the hand-written digit recognition and the speech source separation tasks.}
}
Endnote
%0 Conference Paper
%T Manifold Preserving Hierarchical Topic Models for Quantization and Approximation
%A Minje Kim
%A Paris Smaragdis
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-kim13a
%I PMLR
%P 1373--1381
%U https://proceedings.mlr.press/v28/kim13a.html
%V 28
%N 3
%X We present two complementary topic models to address the analysis of mixture data lying on manifolds. First, we propose a quantization method with an additional mid-layer latent variable, which selects only data points that best preserve the manifold structure of the input data. In order to address the case of modeling all the in-between parts of that manifold using this reduced representation of the input, we introduce a new model that provides a manifold-aware interpolation method. We demonstrate the advantages of these models with experiments on the hand-written digit recognition and the speech source separation tasks.
RIS
TY  - CPAPER
TI  - Manifold Preserving Hierarchical Topic Models for Quantization and Approximation
AU  - Minje Kim
AU  - Paris Smaragdis
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/05/26
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-kim13a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 3
SP  - 1373
EP  - 1381
L1  - http://proceedings.mlr.press/v28/kim13a.pdf
UR  - https://proceedings.mlr.press/v28/kim13a.html
AB  - We present two complementary topic models to address the analysis of mixture data lying on manifolds. First, we propose a quantization method with an additional mid-layer latent variable, which selects only data points that best preserve the manifold structure of the input data. In order to address the case of modeling all the in-between parts of that manifold using this reduced representation of the input, we introduce a new model that provides a manifold-aware interpolation method. We demonstrate the advantages of these models with experiments on the hand-written digit recognition and the speech source separation tasks.
ER  -
APA
Kim, M. & Smaragdis, P. (2013). Manifold Preserving Hierarchical Topic Models for Quantization and Approximation. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):1373-1381. Available from https://proceedings.mlr.press/v28/kim13a.html.