Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability

Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan
Proceedings of The 27th Conference on Learning Theory, PMLR 35:742-778, 2014.

Abstract

We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: given a tensor whose decomposition satisfies a robust form of Kruskal’s rank condition, we prove that it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal’s theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models such as Hidden Markov models, topic models etc. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings – an essential first step towards efficient learning algorithms. Our methods also apply to the “overcomplete” case, which has proved challenging in many applications. Given the importance of Kruskal’s theorem in the tensor literature, we expect that our robust version will have several applications beyond the settings we explore in this work.
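For context, the classical result the abstract refers to can be stated as follows. This is the standard form of Kruskal's uniqueness theorem; the notation (factor matrices $A, B, C$ and Kruskal ranks $k_A, k_B, k_C$) is supplied here for illustration and does not appear on this page.

```latex
% Kruskal's uniqueness condition. A, B, C are the matrices whose columns are
% the factors a_i, b_i, c_i; k_A denotes the Kruskal rank of A, i.e. the
% largest k such that every set of k columns of A is linearly independent.
T = \sum_{i=1}^{R} a_i \otimes b_i \otimes c_i,
\qquad
k_A + k_B + k_C \ge 2R + 2
\;\Longrightarrow\;
\text{the rank-}R\text{ decomposition of } T \text{ is unique up to permutation and scaling.}
```

The paper's contribution is a robust analogue of this statement: when the factors satisfy a quantitative version of the rank condition and the tensor is known up to inverse-polynomial error, the decomposition can be approximately recovered.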

Cite this Paper


BibTeX
@InProceedings{pmlr-v35-bhaskara14a,
  title     = {Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability},
  author    = {Bhaskara, Aditya and Charikar, Moses and Vijayaraghavan, Aravindan},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages     = {742--778},
  year      = {2014},
  editor    = {Balcan, Maria Florina and Feldman, Vitaly and Szepesvári, Csaba},
  volume    = {35},
  series    = {Proceedings of Machine Learning Research},
  address   = {Barcelona, Spain},
  month     = {13--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v35/bhaskara14a.pdf},
  url       = {https://proceedings.mlr.press/v35/bhaskara14a.html},
  abstract  = {We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: given a tensor whose decomposition satisfies a robust form of Kruskal’s rank condition, we prove that it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal’s theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models such as Hidden Markov models, topic models etc. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings – an essential first step towards efficient learning algorithms. Our methods also apply to the “overcomplete” case, which has proved challenging in many applications. Given the importance of Kruskal’s theorem in the tensor literature, we expect that our robust version will have several applications beyond the settings we explore in this work.}
}
Endnote
%0 Conference Paper
%T Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability
%A Aditya Bhaskara
%A Moses Charikar
%A Aravindan Vijayaraghavan
%B Proceedings of The 27th Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2014
%E Maria Florina Balcan
%E Vitaly Feldman
%E Csaba Szepesvári
%F pmlr-v35-bhaskara14a
%I PMLR
%P 742--778
%U https://proceedings.mlr.press/v35/bhaskara14a.html
%V 35
%X We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: given a tensor whose decomposition satisfies a robust form of Kruskal’s rank condition, we prove that it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal’s theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models such as Hidden Markov models, topic models etc. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings – an essential first step towards efficient learning algorithms. Our methods also apply to the “overcomplete” case, which has proved challenging in many applications. Given the importance of Kruskal’s theorem in the tensor literature, we expect that our robust version will have several applications beyond the settings we explore in this work.
RIS
TY  - CPAPER
TI  - Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability
AU  - Aditya Bhaskara
AU  - Moses Charikar
AU  - Aravindan Vijayaraghavan
BT  - Proceedings of The 27th Conference on Learning Theory
DA  - 2014/05/29
ED  - Maria Florina Balcan
ED  - Vitaly Feldman
ED  - Csaba Szepesvári
ID  - pmlr-v35-bhaskara14a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 35
SP  - 742
EP  - 778
L1  - http://proceedings.mlr.press/v35/bhaskara14a.pdf
UR  - https://proceedings.mlr.press/v35/bhaskara14a.html
AB  - We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: given a tensor whose decomposition satisfies a robust form of Kruskal’s rank condition, we prove that it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal’s theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models such as Hidden Markov models, topic models etc. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings – an essential first step towards efficient learning algorithms. Our methods also apply to the “overcomplete” case, which has proved challenging in many applications. Given the importance of Kruskal’s theorem in the tensor literature, we expect that our robust version will have several applications beyond the settings we explore in this work.
ER  -
APA
Bhaskara, A., Charikar, M., & Vijayaraghavan, A. (2014). Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability. Proceedings of The 27th Conference on Learning Theory, in Proceedings of Machine Learning Research 35:742-778. Available from https://proceedings.mlr.press/v35/bhaskara14a.html.