Online Feature Selection for Model-based Reinforcement Learning

Trung Nguyen, Zhuoru Li, Tomi Silander, Tze Yun Leong
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):498-506, 2013.

Abstract

We propose a new framework for learning the world dynamics of feature-rich environments in model-based reinforcement learning. The main idea is formalized as a new, factored state-transition representation that supports efficient online-learning of the relevant features. We construct the transition models through predicting how the actions change the world. We introduce an online sparse coding learning technique for feature selection in high-dimensional spaces. We derive theoretical guarantees for our framework and empirically demonstrate its practicality in both simulated and real robotics domains.
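The central computational idea in the abstract, selecting online the few state features that matter for predicting how an action changes the world, can be illustrated with a small sketch. The following Python listing is a minimal illustration under our own assumptions (plain L1-regularized online regression on observed feature changes), not the paper's actual online sparse coding algorithm; the class, parameter names, and constants are hypothetical.

# A minimal sketch, assuming a linear factored transition model and simple
# truncated-gradient SGD with an L1 penalty as a stand-in for online sparse
# feature selection.  This is NOT the authors' method; everything here is
# illustrative.
import numpy as np

class OnlineSparseTransitionModel:
    def __init__(self, n_features, n_actions, lr=0.05, l1=0.01):
        self.lr, self.l1 = lr, l1
        # One weight matrix per action: predicts the change of every state
        # feature from all current state features.
        self.W = np.zeros((n_actions, n_features, n_features))

    def update(self, s, a, s_next):
        """One online step: regress the observed change s_next - s on s."""
        delta = s_next - s
        err = self.W[a] @ s - delta
        self.W[a] -= self.lr * np.outer(err, s)      # squared-error gradient
        # Soft-threshold toward zero: the L1 step that induces sparsity.
        self.W[a] = np.sign(self.W[a]) * np.maximum(
            np.abs(self.W[a]) - self.lr * self.l1, 0.0)

    def relevant_features(self, a, target, tol=1e-3):
        """Indices of features currently used to predict `target` under action `a`."""
        return np.flatnonzero(np.abs(self.W[a][target]) > tol)

    def predict(self, s, a):
        return s + self.W[a] @ s

# Toy usage: only feature 0 actually drives the change of feature 1.
rng = np.random.default_rng(0)
model = OnlineSparseTransitionModel(n_features=10, n_actions=2)
for _ in range(2000):
    s = rng.normal(size=10)
    s_next = s.copy()
    s_next[1] += 0.8 * s[0]                          # true sparse dynamics
    model.update(s, a=0, s_next=s_next)
print(model.relevant_features(a=0, target=1))        # typically prints [0]

The design choice mirrors the abstract at a high level only: the model predicts how an action changes the state (the delta) rather than the next state directly, and the sparsity-inducing update keeps the set of "relevant" features small even when the feature space is large.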

Cite this Paper

BibTeX
@InProceedings{pmlr-v28-nguyen13,
  title     = {Online Feature Selection for Model-based Reinforcement Learning},
  author    = {Nguyen, Trung and Li, Zhuoru and Silander, Tomi and Yun Leong, Tze},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {498--506},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/nguyen13.pdf},
  url       = {https://proceedings.mlr.press/v28/nguyen13.html},
  abstract  = {We propose a new framework for learning the world dynamics of feature-rich environments in model-based reinforcement learning. The main idea is formalized as a new, factored state-transition representation that supports efficient online-learning of the relevant features. We construct the transition models through predicting how the actions change the world. We introduce an online sparse coding learning technique for feature selection in high-dimensional spaces. We derive theoretical guarantees for our framework and empirically demonstrate its practicality in both simulated and real robotics domains.}
}
Endnote
%0 Conference Paper
%T Online Feature Selection for Model-based Reinforcement Learning
%A Trung Nguyen
%A Zhuoru Li
%A Tomi Silander
%A Tze Yun Leong
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-nguyen13
%I PMLR
%P 498--506
%U https://proceedings.mlr.press/v28/nguyen13.html
%V 28
%N 1
%X We propose a new framework for learning the world dynamics of feature-rich environments in model-based reinforcement learning. The main idea is formalized as a new, factored state-transition representation that supports efficient online-learning of the relevant features. We construct the transition models through predicting how the actions change the world. We introduce an online sparse coding learning technique for feature selection in high-dimensional spaces. We derive theoretical guarantees for our framework and empirically demonstrate its practicality in both simulated and real robotics domains.
RIS
TY - CPAPER
TI - Online Feature Selection for Model-based Reinforcement Learning
AU - Trung Nguyen
AU - Zhuoru Li
AU - Tomi Silander
AU - Tze Yun Leong
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-nguyen13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 1
SP - 498
EP - 506
L1 - http://proceedings.mlr.press/v28/nguyen13.pdf
UR - https://proceedings.mlr.press/v28/nguyen13.html
AB - We propose a new framework for learning the world dynamics of feature-rich environments in model-based reinforcement learning. The main idea is formalized as a new, factored state-transition representation that supports efficient online-learning of the relevant features. We construct the transition models through predicting how the actions change the world. We introduce an online sparse coding learning technique for feature selection in high-dimensional spaces. We derive theoretical guarantees for our framework and empirically demonstrate its practicality in both simulated and real robotics domains.
ER -
APA
Nguyen, T., Li, Z., Silander, T., & Yun Leong, T. (2013). Online Feature Selection for Model-based Reinforcement Learning. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):498-506. Available from https://proceedings.mlr.press/v28/nguyen13.html.
