Low-Rank Spectral Learning with Weighted Loss Functions

Alex Kulesza, Nan Jiang, Satinder Singh
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR 38:517-525, 2015.

Abstract

Kulesza et al. recently observed that low-rank spectral learning algorithms, which discard the smallest singular values of a moment matrix during training, can behave in unexpected ways, producing large errors even when the discarded singular values are arbitrarily small. In this paper we prove that when learning predictive state representations those problematic cases disappear if we introduce a particular weighted loss function and learn using sufficiently large sets of statistics; our main result is a bound on the loss of the learned low-rank model in terms of the singular values that are discarded. Practically speaking, this suggests that regardless of the model rank we should use the largest possible sets of statistics, and we show empirically that this is true on both synthetic and real-world domains.
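
To make the truncation step concrete, the following is a minimal sketch in Python/NumPy of the rank-k spectral truncation the abstract refers to. It is an illustration only: the moment matrix P and the rank k are hypothetical stand-ins, not the paper's experimental setup, and the weighted loss function itself is not implemented here.

import numpy as np

def low_rank_truncation(P, k):
    """Best rank-k approximation of a moment matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    # Keep the k largest singular values; the discarded tail s[k:] is the
    # quantity the paper's loss bound is stated in terms of.
    P_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    return P_k, s[k:]

# By the Eckart-Young theorem, the Frobenius error of the rank-k
# approximation equals the l2 norm of the discarded singular values;
# small discarded values mean a small matrix-approximation error, even
# though (as the paper observes) the error of the learned *model* can
# still be large without the weighted loss.
rng = np.random.default_rng(0)
P = rng.random((50, 50))
P /= P.sum()                            # a toy joint-probability "moment matrix"
P_k, tail = low_rank_truncation(P, k=10)
print(np.linalg.norm(P - P_k, "fro"))   # equals...
print(np.linalg.norm(tail))             # ...this, up to floating point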

Cite this Paper


BibTeX
@InProceedings{pmlr-v38-kulesza15,
  title = {{Low-Rank Spectral Learning with Weighted Loss Functions}},
  author = {Kulesza, Alex and Jiang, Nan and Singh, Satinder},
  booktitle = {Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics},
  pages = {517--525},
  year = {2015},
  editor = {Lebanon, Guy and Vishwanathan, S. V. N.},
  volume = {38},
  series = {Proceedings of Machine Learning Research},
  address = {San Diego, California, USA},
  month = {09--12 May},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v38/kulesza15.pdf},
  url = {https://proceedings.mlr.press/v38/kulesza15.html},
  abstract = {Kulesza et al. recently observed that low-rank spectral learning algorithms, which discard the smallest singular values of a moment matrix during training, can behave in unexpected ways, producing large errors even when the discarded singular values are arbitrarily small. In this paper we prove that when learning predictive state representations those problematic cases disappear if we introduce a particular weighted loss function and learn using sufficiently large sets of statistics; our main result is a bound on the loss of the learned low-rank model in terms of the singular values that are discarded. Practically speaking, this suggests that regardless of the model rank we should use the largest possible sets of statistics, and we show empirically that this is true on both synthetic and real-world domains.}
}
EndNote
%0 Conference Paper
%T Low-Rank Spectral Learning with Weighted Loss Functions
%A Alex Kulesza
%A Nan Jiang
%A Satinder Singh
%B Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2015
%E Guy Lebanon
%E S. V. N. Vishwanathan
%F pmlr-v38-kulesza15
%I PMLR
%P 517--525
%U https://proceedings.mlr.press/v38/kulesza15.html
%V 38
%X Kulesza et al. recently observed that low-rank spectral learning algorithms, which discard the smallest singular values of a moment matrix during training, can behave in unexpected ways, producing large errors even when the discarded singular values are arbitrarily small. In this paper we prove that when learning predictive state representations those problematic cases disappear if we introduce a particular weighted loss function and learn using sufficiently large sets of statistics; our main result is a bound on the loss of the learned low-rank model in terms of the singular values that are discarded. Practically speaking, this suggests that regardless of the model rank we should use the largest possible sets of statistics, and we show empirically that this is true on both synthetic and real-world domains.
RIS
TY - CPAPER
TI - Low-Rank Spectral Learning with Weighted Loss Functions
AU - Alex Kulesza
AU - Nan Jiang
AU - Satinder Singh
BT - Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
DA - 2015/02/21
ED - Guy Lebanon
ED - S. V. N. Vishwanathan
ID - pmlr-v38-kulesza15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 38
SP - 517
EP - 525
L1 - http://proceedings.mlr.press/v38/kulesza15.pdf
UR - https://proceedings.mlr.press/v38/kulesza15.html
AB - Kulesza et al. recently observed that low-rank spectral learning algorithms, which discard the smallest singular values of a moment matrix during training, can behave in unexpected ways, producing large errors even when the discarded singular values are arbitrarily small. In this paper we prove that when learning predictive state representations those problematic cases disappear if we introduce a particular weighted loss function and learn using sufficiently large sets of statistics; our main result is a bound on the loss of the learned low-rank model in terms of the singular values that are discarded. Practically speaking, this suggests that regardless of the model rank we should use the largest possible sets of statistics, and we show empirically that this is true on both synthetic and real-world domains.
ER -
APA
Kulesza, A., Jiang, N. & Singh, S. (2015). Low-Rank Spectral Learning with Weighted Loss Functions. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 38:517-525. Available from https://proceedings.mlr.press/v38/kulesza15.html.
