Temporal Difference Methods for the Variance of the Reward To Go

Aviv Tamar, Dotan Di Castro, Shie Mannor
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):495-503, 2013.

Abstract

In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.
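
The idea the abstract describes can be illustrated with a small sketch: alongside the usual value function J(x), the expected reward to go, one also learns the second moment M(x) of the reward to go with a TD-style update, and recovers the variance as V(x) = M(x) - J(x)^2. The Python snippet below is a minimal, hedged sketch of such a coupled TD(0)-style update with linear function approximation; the discounted setting, the feature map phi, the transition format, and the step sizes are illustrative assumptions, not the paper's exact algorithm or experimental setup.

    import numpy as np

    def td0_variance_sketch(transitions, phi, dim, gamma=0.99,
                            alpha_j=0.05, alpha_m=0.05):
        """Sketch only: TD(0)-style estimation of the reward-to-go and its variance.

        transitions: iterable of (x, r, x_next, done) tuples from a fixed policy.
        phi: feature map returning a length-`dim` numpy array (assumed, not from the paper).
        """
        w_j = np.zeros(dim)  # weights for J(x) ~ phi(x) @ w_j  (expected reward to go)
        w_m = np.zeros(dim)  # weights for M(x) ~ phi(x) @ w_m  (second moment of reward to go)

        for x, r, x_next, done in transitions:
            f, f_next = phi(x), phi(x_next)
            j_next = 0.0 if done else float(f_next @ w_j)
            m_next = 0.0 if done else float(f_next @ w_m)

            # Standard TD(0) error for the first moment.
            delta_j = r + gamma * j_next - float(f @ w_j)
            # TD-style error for the second moment, from B = r + gamma*B', which gives
            # M(x) = E[ r^2 + 2*gamma*r*J(x') + gamma^2 * M(x') ].
            delta_m = (r ** 2 + 2.0 * gamma * r * j_next
                       + gamma ** 2 * m_next - float(f @ w_m))

            w_j += alpha_j * delta_j * f
            w_m += alpha_m * delta_m * f

        # Variance estimate: Var(x) ~ M(x) - J(x)^2, clipped at zero.
        def variance(x):
            f = phi(x)
            return max(float(f @ w_m) - float(f @ w_j) ** 2, 0.0)

        return w_j, w_m, variance

The second-moment update rests on the identity B = r + γB' for the reward to go B, so M obeys a Bellman-like equation and can be bootstrapped just like J; the LSTD(λ) variant mentioned in the abstract would, presumably, replace these stochastic updates with a least-squares solve over the same linear architectures.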

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-tamar13,
  title     = {Temporal Difference Methods for the Variance of the Reward To Go},
  author    = {Tamar, Aviv and Di Castro, Dotan and Mannor, Shie},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {495--503},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/tamar13.pdf},
  url       = {https://proceedings.mlr.press/v28/tamar13.html},
  abstract  = {In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.}
}
APA
Tamar, A., Di Castro, D. & Mannor, S. (2013). Temporal Difference Methods for the Variance of the Reward To Go. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):495-503. Available from https://proceedings.mlr.press/v28/tamar13.html.