Near Optimal Behavior via Approximate State Abstraction

David Abel, David Hershkowitz, Michael Littman
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2915-2923, 2016.

Abstract

The combinatorial explosion that plagues planning and reinforcement learning (RL) algorithms can be moderated using state abstraction. Prohibitively large task representations can be condensed such that essential information is preserved, and consequently, solutions are tractably computable. However, exact abstractions, which treat only fully-identical situations as equivalent, fail to present opportunities for abstraction in environments where no two situations are exactly alike. In this work, we investigate approximate state abstractions, which treat nearly-identical situations as equivalent. We present theoretical guarantees of the quality of behaviors derived from four types of approximate abstractions. Additionally, we empirically demonstrate that approximate abstractions lead to reduction in task complexity and bounded loss of optimality of behavior in a variety of environments.
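To make the abstract's idea concrete, below is a minimal sketch (not from the paper) of one natural instance of approximate state abstraction: aggregating states whose optimal Q-values agree within epsilon for every action. The function name build_abstraction, the dictionary encoding of Q, and the greedy clustering loop are all illustrative assumptions; the paper's contribution is the analysis of the value loss such abstractions induce, not any particular construction algorithm.

    # Hypothetical sketch of an epsilon-approximate Q*-similarity abstraction.
    # Assumes Q-values are given as a dict Q[state][action] -> float over a
    # shared action set. All names here are illustrative, not from the paper.

    def build_abstraction(Q, actions, epsilon):
        """Greedily cluster states whose Q-values differ by at most epsilon
        for every action, returning a ground-state -> cluster-id map."""
        phi = {}              # ground state -> abstract cluster id
        representatives = []  # one ground state kept per abstract cluster
        for s in Q:
            for cluster_id, rep in enumerate(representatives):
                if all(abs(Q[s][a] - Q[rep][a]) <= epsilon for a in actions):
                    phi[s] = cluster_id
                    break
            else:
                # No existing cluster is epsilon-close; start a new one.
                phi[s] = len(representatives)
                representatives.append(s)
        return phi

    # Tiny illustration: two nearly-identical states collapse, a distinct one does not.
    Q = {
        "s1": {"left": 0.90, "right": 0.10},
        "s2": {"left": 0.88, "right": 0.11},
        "s3": {"left": 0.10, "right": 0.95},
    }
    phi = build_abstraction(Q, actions=["left", "right"], epsilon=0.05)
    # phi == {"s1": 0, "s2": 0, "s3": 1}

Note that epsilon-similarity is not transitive, so greedy aggregation of this kind is order-dependent; the theoretical guarantees in the paper bound the suboptimality of behavior derived from any abstraction satisfying the similarity predicate, regardless of how it was built.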

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-abel16,
  title     = {Near Optimal Behavior via Approximate State Abstraction},
  author    = {Abel, David and Hershkowitz, David and Littman, Michael},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2915--2923},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/abel16.pdf},
  url       = {https://proceedings.mlr.press/v48/abel16.html}
}
RIS
TY - CPAPER TI - Near Optimal Behavior via Approximate State Abstraction AU - David Abel AU - David Hershkowitz AU - Michael Littman BT - Proceedings of The 33rd International Conference on Machine Learning DA - 2016/06/11 ED - Maria Florina Balcan ED - Kilian Q. Weinberger ID - pmlr-v48-abel16 PB - PMLR DP - Proceedings of Machine Learning Research VL - 48 SP - 2915 EP - 2923 L1 - http://proceedings.mlr.press/v48/abel16.pdf UR - https://proceedings.mlr.press/v48/abel16.html AB - The combinatorial explosion that plagues planning and reinforcement learning (RL) algorithms can be moderated using state abstraction. Prohibitively large task representations can be condensed such that essential information is preserved, and consequently, solutions are tractably computable. However, exact abstractions, which treat only fully-identical situations as equivalent, fail to present opportunities for abstraction in environments where no two situations are exactly alike. In this work, we investigate approximate state abstractions, which treat nearly-identical situations as equivalent. We present theoretical guarantees of the quality of behaviors derived from four types of approximate abstractions. Additionally, we empirically demonstrate that approximate abstractions lead to reduction in task complexity and bounded loss of optimality of behavior in a variety of environments. ER -
APA
Abel, D., Hershkowitz, D. & Littman, M. (2016). Near Optimal Behavior via Approximate State Abstraction. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:2915-2923. Available from https://proceedings.mlr.press/v48/abel16.html.
