Time-Regularized Interrupting Options (TRIO)

Timothy Mann, Daniel Mankowitz, Shie Mannor
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1350-1358, 2014.

Abstract

High-level skills relieve planning algorithms from low-level details. But when the skills are poorly designed for the domain, the resulting plan may be severely suboptimal. Sutton et al. (1999) took an important step toward resolving this problem by introducing a rule that automatically improves a set of skills called options. This rule terminates an option early whenever switching to another option yields a higher value than continuing with the current one. However, they analyzed only the case where the improvement rule is applied once. We show conditions under which this rule converges to the optimal set of options. At the core of our analysis is a new Bellman-like operator that simultaneously improves the set of options. One problem with the update rule is that it tends to favor lower-level skills, so we introduce a regularization term that favors longer-duration skills. Experimental results demonstrate that this approach can derive a good set of high-level skills even when the original set of skills cannot solve the problem.
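To make the interruption rule concrete, let $Q_\mu(s, o)$ denote the value of continuing option $o$ from state $s$ under a policy over options $\mu$, and let $V_\mu(s) = \max_{o'} Q_\mu(s, o')$ denote the value of switching to the best available option. The sketch below restates the interruption rule of Sutton et al. (1999) together with an illustrative regularized form; the exact regularizer used by TRIO is defined in the paper, so the fixed penalty $\rho$ here is an assumption for illustration only.

    \text{interrupt } o \text{ at } s
    \quad\Longleftrightarrow\quad
    V_\mu(s) > Q_\mu(s, o)

    \text{time-regularized (illustrative): interrupt } o \text{ at } s
    \quad\Longleftrightarrow\quad
    V_\mu(s) > Q_\mu(s, o) + \rho, \qquad \rho > 0

Under this reading, adding the penalty $\rho$ means a switch must improve on continuing by a fixed margin, which biases the improved option set toward longer-duration skills instead of letting interruption degenerate every option into single-step actions.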

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-mannb14,
  title     = {Time-Regularized Interrupting Options (TRIO)},
  author    = {Mann, Timothy and Mankowitz, Daniel and Mannor, Shie},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1350--1358},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/mannb14.pdf},
  url       = {https://proceedings.mlr.press/v32/mannb14.html},
  abstract  = {High-level skills relieve planning algorithms from low-level details. But when the skills are poorly designed for the domain, the resulting plan may be severely suboptimal. Sutton et al. (1999) took an important step toward resolving this problem by introducing a rule that automatically improves a set of skills called options. This rule terminates an option early whenever switching to another option yields a higher value than continuing with the current one. However, they analyzed only the case where the improvement rule is applied once. We show conditions under which this rule converges to the optimal set of options. At the core of our analysis is a new Bellman-like operator that simultaneously improves the set of options. One problem with the update rule is that it tends to favor lower-level skills, so we introduce a regularization term that favors longer-duration skills. Experimental results demonstrate that this approach can derive a good set of high-level skills even when the original set of skills cannot solve the problem.}
}
APA
Mann, T., Mankowitz, D. & Mannor, S. (2014). Time-Regularized Interrupting Options (TRIO). Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1350-1358. Available from https://proceedings.mlr.press/v32/mannb14.html.
