Stability of Controllers for Gaussian Process Forward Models

Julia Vinogradska, Bastian Bischoff, Duy Nguyen-Tuong, Anne Romer, Henner Schmidt, Jan Peters
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:545-554, 2016.

Abstract

Learning control has become an appealing alternative to the derivation of control laws based on classic control theory. However, a major shortcoming of learning control is the lack of performance guarantees, which prevents its application in many real-world scenarios. As a step in this direction, we provide a stability analysis tool for controllers acting on dynamics represented by Gaussian processes (GPs). We consider arbitrary Markovian control policies and system dynamics given as (i) the mean of a GP, and (ii) the full GP distribution. For the first case, our tool finds a state space region where the closed-loop system is provably stable. In the second case, it is well known that infinite horizon stability guarantees cannot exist. Instead, our tool analyzes finite time stability. Empirical evaluations on simulated benchmark problems support our theoretical results.
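To make the setting concrete, the following is a minimal, illustrative sketch (not the authors' analysis tool) of the two closed-loop settings the abstract distinguishes: rolling out an arbitrary Markovian policy through a learned GP forward model, once propagating only the GP posterior mean and once sampling from the full GP predictive distribution over a finite horizon. The toy plant, the linear policy gain, and the squared-exponential kernel hyperparameters are assumptions made purely for illustration.

# Minimal sketch, not the paper's stability tool: closed-loop rollouts of a
# Markovian policy through a GP forward model learned on a toy 1-D plant.
# Case (i) propagates only the GP posterior mean; case (ii) samples the full
# GP predictive distribution over a finite horizon. Plant, policy gain and
# kernel hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    # Unknown plant that the GP has to learn from data (toy example).
    return 0.9 * x + 0.2 * np.sin(x) + 0.3 * u

def policy(x, gain=0.8):
    # Arbitrary Markovian control policy u = pi(x).
    return -gain * x

# Training data: inputs are (state, action) pairs, targets are next states.
X_train = rng.uniform(-2.0, 2.0, size=(60, 2))
y_train = true_dynamics(X_train[:, 0], X_train[:, 1]) + 0.01 * rng.standard_normal(60)

def sq_exp_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    # Squared-exponential kernel on stacked (state, action) inputs.
    d2 = ((A[:, None, :] - B[None, :, :]) / lengthscale) ** 2
    return signal_var * np.exp(-0.5 * d2.sum(-1))

noise_var = 1e-4
K = sq_exp_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def gp_predict(x, u):
    # GP posterior mean and variance of the next state at input (x, u).
    z = np.array([[x, u]])
    k_star = sq_exp_kernel(z, X_train)
    mean = (k_star @ alpha).item()
    var = (sq_exp_kernel(z, z) - k_star @ np.linalg.solve(K, k_star.T)).item()
    return mean, max(var, 0.0)

# Case (i): deterministic rollout under the GP posterior mean dynamics.
x = 1.5
for _ in range(30):
    x, _ = gp_predict(x, policy(x))
print("mean-dynamics rollout ends at", round(x, 4))

# Case (ii): finite-horizon rollouts that sample the full GP at every step.
finals = []
for _ in range(20):
    x = 1.5
    for _ in range(30):
        m, v = gp_predict(x, policy(x))
        x = m + np.sqrt(v) * rng.standard_normal()
    finals.append(abs(x))
print("largest sampled endpoint distance from the origin:", round(max(finals), 4))

In this sketch the mean-dynamics rollout converges toward the origin, which is the kind of behaviour the paper's tool certifies over an explicit state space region; the sampled rollouts illustrate why only finite-time statements are possible for the full GP distribution.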

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-vinogradska16,
  title     = {Stability of Controllers for Gaussian Process Forward Models},
  author    = {Vinogradska, Julia and Bischoff, Bastian and Nguyen-Tuong, Duy and Romer, Anne and Schmidt, Henner and Peters, Jan},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {545--554},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/vinogradska16.pdf},
  url       = {https://proceedings.mlr.press/v48/vinogradska16.html},
  abstract  = {Learning control has become an appealing alternative to the derivation of control laws based on classic control theory. However, a major shortcoming of learning control is the lack of performance guarantees which prevents its application in many real-world scenarios. As a step in this direction, we provide a stability analysis tool for controllers acting on dynamics represented by Gaussian processes (GPs). We consider arbitrary Markovian control policies and system dynamics given as (i) the mean of a GP, and (ii) the full GP distribution. For the first case, our tool finds a state space region, where the closed-loop system is provably stable. In the second case, it is well known that infinite horizon stability guarantees cannot exist. Instead, our tool analyzes finite time stability. Empirical evaluations on simulated benchmark problems support our theoretical results.}
}
Endnote
%0 Conference Paper
%T Stability of Controllers for Gaussian Process Forward Models
%A Julia Vinogradska
%A Bastian Bischoff
%A Duy Nguyen-Tuong
%A Anne Romer
%A Henner Schmidt
%A Jan Peters
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-vinogradska16
%I PMLR
%P 545--554
%U https://proceedings.mlr.press/v48/vinogradska16.html
%V 48
%X Learning control has become an appealing alternative to the derivation of control laws based on classic control theory. However, a major shortcoming of learning control is the lack of performance guarantees which prevents its application in many real-world scenarios. As a step in this direction, we provide a stability analysis tool for controllers acting on dynamics represented by Gaussian processes (GPs). We consider arbitrary Markovian control policies and system dynamics given as (i) the mean of a GP, and (ii) the full GP distribution. For the first case, our tool finds a state space region, where the closed-loop system is provably stable. In the second case, it is well known that infinite horizon stability guarantees cannot exist. Instead, our tool analyzes finite time stability. Empirical evaluations on simulated benchmark problems support our theoretical results.
RIS
TY - CPAPER
TI - Stability of Controllers for Gaussian Process Forward Models
AU - Julia Vinogradska
AU - Bastian Bischoff
AU - Duy Nguyen-Tuong
AU - Anne Romer
AU - Henner Schmidt
AU - Jan Peters
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-vinogradska16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 545
EP - 554
L1 - http://proceedings.mlr.press/v48/vinogradska16.pdf
UR - https://proceedings.mlr.press/v48/vinogradska16.html
AB - Learning control has become an appealing alternative to the derivation of control laws based on classic control theory. However, a major shortcoming of learning control is the lack of performance guarantees which prevents its application in many real-world scenarios. As a step in this direction, we provide a stability analysis tool for controllers acting on dynamics represented by Gaussian processes (GPs). We consider arbitrary Markovian control policies and system dynamics given as (i) the mean of a GP, and (ii) the full GP distribution. For the first case, our tool finds a state space region, where the closed-loop system is provably stable. In the second case, it is well known that infinite horizon stability guarantees cannot exist. Instead, our tool analyzes finite time stability. Empirical evaluations on simulated benchmark problems support our theoretical results.
ER -
APA
Vinogradska, J., Bischoff, B., Nguyen-Tuong, D., Romer, A., Schmidt, H. & Peters, J. (2016). Stability of Controllers for Gaussian Process Forward Models. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:545-554. Available from https://proceedings.mlr.press/v48/vinogradska16.html.
