Predicting Sequential Data with LSTMs Augmented with Strictly 2-Piecewise Input Vectors

Chihiro Shibata, Jeffrey Heinz
Proceedings of The 13th International Conference on Grammatical Inference, PMLR 57:137-142, 2017.

Abstract

Recurrent neural networks such as Long Short-Term Memory (LSTM) are often used to learn from various kinds of time-series data, especially those that involve long-distance dependencies. We introduce a vector representation for the Strictly 2-Piecewise (SP-2) formal languages, which encode certain kinds of long-distance dependencies using subsequences. These vectors are added to the LSTM architecture as an additional input. Through experiments with the problems in the SPiCe dataset, we demonstrate that for certain problems, these vectors slightly—but significantly—improve the top-5 score (normalized discounted cumulative gain) as well as the accuracy as compared to the LSTM architecture without the SP-2 input vector. These results are also compared to an LSTM architecture with an input vector based on bigrams.
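As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below assumes the SP-2 input vector is a binary indicator, over the alphabet, of which symbols have occurred earlier in the prefix; together with the current symbol, this is exactly the information needed to read off the 2-subsequences that Strictly 2-Piecewise constraints are stated over. The function name sp2_input_vectors and the use of NumPy are illustrative choices.

import numpy as np

def sp2_input_vectors(sequence, alphabet):
    """Hypothetical SP-2-style context vectors: for each position t, return a
    binary vector over `alphabet` marking the symbols that occur in
    sequence[:t] (the prefix preceding the current symbol)."""
    index = {a: i for i, a in enumerate(alphabet)}
    seen = np.zeros(len(alphabet), dtype=np.float32)
    vectors = []
    for symbol in sequence:
        vectors.append(seen.copy())   # SP-2 context available at time t
        seen[index[symbol]] = 1.0     # symbol can now head later 2-subsequences
    return np.stack(vectors)

# Example: concatenating each row with the one-hot encoding of the current
# symbol gives an augmented per-step input of the kind the abstract describes.
print(sp2_input_vectors("abac", alphabet="abc"))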

Cite this Paper


BibTeX
@InProceedings{pmlr-v57-shibata16,
  title = {Predicting Sequential Data with {LSTM}s Augmented with Strictly 2-Piecewise Input Vectors},
  author = {Shibata, Chihiro and Heinz, Jeffrey},
  booktitle = {Proceedings of The 13th International Conference on Grammatical Inference},
  pages = {137--142},
  year = {2017},
  editor = {Verwer, Sicco and Zaanen, Menno van and Smetsers, Rick},
  volume = {57},
  series = {Proceedings of Machine Learning Research},
  address = {Delft, The Netherlands},
  month = {05--07 Oct},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v57/shibata16.pdf},
  url = {http://proceedings.mlr.press/v57/shibata16.html},
  abstract = {Recurrent neural networks such as Long Short-Term Memory (LSTM) are often used to learn from various kinds of time-series data, especially those that involve long-distance dependencies. We introduce a vector representation for the Strictly 2-Piecewise (SP-2) formal languages, which encode certain kinds of long-distance dependencies using subsequences. These vectors are added to the LSTM architecture as an additional input. Through experiments with the problems in the SPiCe dataset, we demonstrate that for certain problems, these vectors slightly—but significantly—improve the top-5 score (normalized discounted cumulative gain) as well as the accuracy as compared to the LSTM architecture without the SP-2 input vector. These results are also compared to an LSTM architecture with an input vector based on bigrams.}
}
Endnote
%0 Conference Paper
%T Predicting Sequential Data with LSTMs Augmented with Strictly 2-Piecewise Input Vectors
%A Chihiro Shibata
%A Jeffrey Heinz
%B Proceedings of The 13th International Conference on Grammatical Inference
%C Proceedings of Machine Learning Research
%D 2017
%E Sicco Verwer
%E Menno van Zaanen
%E Rick Smetsers
%F pmlr-v57-shibata16
%I PMLR
%P 137--142
%U http://proceedings.mlr.press/v57/shibata16.html
%V 57
%X Recurrent neural networks such as Long Short-Term Memory (LSTM) are often used to learn from various kinds of time-series data, especially those that involve long-distance dependencies. We introduce a vector representation for the Strictly 2-Piecewise (SP-2) formal languages, which encode certain kinds of long-distance dependencies using subsequences. These vectors are added to the LSTM architecture as an additional input. Through experiments with the problems in the SPiCe dataset, we demonstrate that for certain problems, these vectors slightly—but significantly—improve the top-5 score (normalized discounted cumulative gain) as well as the accuracy as compared to the LSTM architecture without the SP-2 input vector. These results are also compared to an LSTM architecture with an input vector based on bigrams.
RIS
TY - CPAPER
TI - Predicting Sequential Data with LSTMs Augmented with Strictly 2-Piecewise Input Vectors
AU - Chihiro Shibata
AU - Jeffrey Heinz
BT - Proceedings of The 13th International Conference on Grammatical Inference
DA - 2017/01/16
ED - Sicco Verwer
ED - Menno van Zaanen
ED - Rick Smetsers
ID - pmlr-v57-shibata16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 57
SP - 137
EP - 142
L1 - http://proceedings.mlr.press/v57/shibata16.pdf
UR - http://proceedings.mlr.press/v57/shibata16.html
AB - Recurrent neural networks such as Long Short-Term Memory (LSTM) are often used to learn from various kinds of time-series data, especially those that involve long-distance dependencies. We introduce a vector representation for the Strictly 2-Piecewise (SP-2) formal languages, which encode certain kinds of long-distance dependencies using subsequences. These vectors are added to the LSTM architecture as an additional input. Through experiments with the problems in the SPiCe dataset, we demonstrate that for certain problems, these vectors slightly—but significantly—improve the top-5 score (normalized discounted cumulative gain) as well as the accuracy as compared to the LSTM architecture without the SP-2 input vector. These results are also compared to an LSTM architecture with an input vector based on bigrams.
ER -
APA
Shibata, C. & Heinz, J. (2017). Predicting Sequential Data with LSTMs Augmented with Strictly 2-Piecewise Input Vectors. Proceedings of The 13th International Conference on Grammatical Inference, in Proceedings of Machine Learning Research 57:137-142. Available from http://proceedings.mlr.press/v57/shibata16.html.
