On TD(0) with function approximation: Concentration bounds and a centered variant with exponential convergence

Nathaniel Korda, Prashanth La
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:626-634, 2015.

Abstract

We provide non-asymptotic bounds for the well-known temporal difference learning algorithm TD(0) with linear function approximators. These include high-probability bounds as well as bounds in expectation. Our analysis suggests that a step-size inversely proportional to the number of iterations cannot guarantee optimal rate of convergence unless we assume (partial) knowledge of the stationary distribution for the Markov chain underlying the policy considered. We also provide bounds for the iterate averaged TD(0) variant, which gets rid of the step-size dependency while exhibiting the optimal rate of convergence. Furthermore, we propose a variant of TD(0) with linear approximators that incorporates a centering sequence, and establish that it exhibits an exponential rate of convergence in expectation. We demonstrate the usefulness of our bounds on two synthetic experimental settings.
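To make the setting concrete, the sketch below implements plain TD(0) with linear value-function approximation, a step-size inversely proportional to the iteration count, and Polyak-Ruppert iterate averaging: the baseline schemes the abstract refers to. It is a minimal illustrative sketch, not the paper's centered variant; the names `sample_transition`, `phi`, and the step-size constant `c` are assumptions introduced here for illustration.

```python
import numpy as np

def td0_linear(sample_transition, phi, theta0, s0, gamma=0.95,
               n_iters=10_000, c=1.0):
    """TD(0) with linear function approximation and iterate averaging.

    Approximates the value function of a fixed policy as
    V(s) ~= phi(s)^T theta. `sample_transition(s)` is assumed to
    return (next_state, reward) from one step under that policy,
    and `phi` maps states to feature vectors. Generic textbook
    sketch, not the paper's centered variant.
    """
    theta = np.asarray(theta0, dtype=float).copy()
    theta_avg = theta.copy()  # Polyak-Ruppert running average
    s = s0
    for n in range(1, n_iters + 1):
        s_next, r = sample_transition(s)
        # TD error: delta_n = r + gamma * V(s') - V(s)
        delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        # 1/n step-size: the schedule whose sensitivity the paper analyses
        theta += (c / n) * delta * phi(s)
        # running average of iterates theta_1, ..., theta_n
        theta_avg += (theta - theta_avg) / n
        s = s_next
    return theta, theta_avg
```

As the abstract notes, the raw iterate `theta` is sensitive to how the constant `c` relates to properties of the underlying Markov chain, whereas the averaged iterate `theta_avg` attains the optimal rate without that tuning.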

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-korda15,
  title     = {On TD(0) with function approximation: Concentration bounds and a centered variant with exponential convergence},
  author    = {Korda, Nathaniel and La, Prashanth},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {626--634},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/korda15.pdf},
  url       = {https://proceedings.mlr.press/v37/korda15.html}
}
APA
Korda, N. & La, P. (2015). On TD(0) with function approximation: Concentration bounds and a centered variant with exponential convergence. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:626-634. Available from https://proceedings.mlr.press/v37/korda15.html.
