Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning

Odalric-Ambrym Maillard, Phuong Nguyen, Ronald Ortner, Daniil Ryabko
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):543-551, 2013.

Abstract

We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov Decision Process (MDP). Rather, the agent has several representations (mapping histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the highest optimal reward, and acts optimally in it. Recent regret bounds for this setting are of order O(T^{2/3}), with an additive term that is constant in T yet exponential in some characteristics of the optimal MDP. We propose an algorithm whose regret after T time steps is O(\sqrt{T}), with all constants reasonably small. This is optimal in T, since O(\sqrt{T}) is the optimal regret in the setting of learning in a (single discrete) MDP.
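
For concreteness, here is one way to write the regret criterion described above; a minimal sketch under assumed notation, since the abstract itself fixes none of these symbols: Φ is the given set of state representations, Φ_MDP ⊆ Φ the subset whose induced process is an MDP, ρ*(φ) the optimal average reward of the MDP induced by φ, and r_t the reward received at time t.

\[
  \mathrm{Regret}(T) \;=\; T \cdot \sup_{\phi \in \Phi_{\mathrm{MDP}}} \rho^*(\phi) \;-\; \sum_{t=1}^{T} r_t .
\]

Under this reading, the paper's claim is that the proposed algorithm keeps Regret(T) of order O(\sqrt{T}), matching the optimal dependence on T for learning in a single discrete MDP.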

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-maillard13,
  title     = {Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning},
  author    = {Maillard, Odalric-Ambrym and Nguyen, Phuong and Ortner, Ronald and Ryabko, Daniil},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {543--551},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/maillard13.pdf},
  url       = {https://proceedings.mlr.press/v28/maillard13.html}
}
APA
Maillard, O., Nguyen, P., Ortner, R. & Ryabko, D. (2013). Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):543-551. Available from https://proceedings.mlr.press/v28/maillard13.html.
