Deterministic Anytime Inference for Stochastic Continuous-Time Markov Processes

E. Busra Celikkaya, Christian Shelton
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1962-1970, 2014.

Abstract

We describe a deterministic anytime method for calculating filtered and smoothed distributions in large variable-based continuous time Markov processes. Prior non-random algorithms do not converge to the true distribution in the limit of infinite computation time. Sampling algorithms give different results each time they are run, which can lead to instability when used inside expectation-maximization or other algorithms. Our method combines the anytime convergent properties of sampling with the non-random nature of variational approaches. It is built upon a sum of time-ordered products, an expansion of the matrix exponential. We demonstrate that our method performs as well as or better than the current best sampling approaches on benchmark problems.
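To make the key idea concrete: the marginal distribution of a continuous-time Markov process with rate matrix Q evolves as p(t) = p(0) exp(Qt), and a deterministic, anytime approximation arises from truncating a series expansion of the matrix exponential, with each additional term refining the answer. The sketch below illustrates this with a plain truncated Taylor series on a small two-state chain; it is not the paper's sum-of-time-ordered-products algorithm, and the function and matrix shown are illustrative assumptions.

```python
import numpy as np

def expm_truncated(Q, t, n_terms):
    """Approximate expm(Q*t) by the first n_terms of its Taylor series.

    Adding terms is an 'anytime' refinement: the result is deterministic
    and converges to the exact matrix exponential as n_terms grows.
    """
    n = Q.shape[0]
    result = np.eye(n)   # k = 0 term
    term = np.eye(n)
    for k in range(1, n_terms):
        term = term @ (Q * t) / k   # (Qt)^k / k!, built incrementally
        result = result + term
    return result

# A two-state rate matrix: rows sum to zero, off-diagonals are rates.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
p0 = np.array([1.0, 0.0])  # start in state 0 with certainty

p_coarse = p0 @ expm_truncated(Q, 0.5, 5)   # few terms: rough answer
p_fine = p0 @ expm_truncated(Q, 0.5, 30)    # more terms: nearly exact
```

For this chain the exact marginal is available in closed form (p(t) relaxes to the stationary distribution (2/3, 1/3) at rate e^{-3t}), so one can check that `p_fine` is closer to it than `p_coarse`, mirroring the anytime-convergence property the abstract describes.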

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-celikkaya14,
  title     = {Deterministic Anytime Inference for Stochastic Continuous-Time Markov Processes},
  author    = {Celikkaya, E. Busra and Shelton, Christian},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1962--1970},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/celikkaya14.pdf},
  url       = {https://proceedings.mlr.press/v32/celikkaya14.html},
  abstract  = {We describe a deterministic anytime method for calculating filtered and smoothed distributions in large variable-based continuous time Markov processes. Prior non-random algorithms do not converge to the true distribution in the limit of infinite computation time. Sampling algorithms give different results each time run, which can lead to instability when used inside expectation-maximization or other algorithms. Our method combines the anytime convergent properties of sampling with the non-random nature of variational approaches. It is built upon a sum of time-ordered products, an expansion of the matrix exponential. We demonstrate that our method performs as well as or better than the current best sampling approaches on benchmark problems.}
}
Endnote
%0 Conference Paper
%T Deterministic Anytime Inference for Stochastic Continuous-Time Markov Processes
%A E. Busra Celikkaya
%A Christian Shelton
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-celikkaya14
%I PMLR
%P 1962--1970
%U https://proceedings.mlr.press/v32/celikkaya14.html
%V 32
%N 2
%X We describe a deterministic anytime method for calculating filtered and smoothed distributions in large variable-based continuous time Markov processes. Prior non-random algorithms do not converge to the true distribution in the limit of infinite computation time. Sampling algorithms give different results each time run, which can lead to instability when used inside expectation-maximization or other algorithms. Our method combines the anytime convergent properties of sampling with the non-random nature of variational approaches. It is built upon a sum of time-ordered products, an expansion of the matrix exponential. We demonstrate that our method performs as well as or better than the current best sampling approaches on benchmark problems.
RIS
TY  - CPAPER
TI  - Deterministic Anytime Inference for Stochastic Continuous-Time Markov Processes
AU  - E. Busra Celikkaya
AU  - Christian Shelton
BT  - Proceedings of the 31st International Conference on Machine Learning
DA  - 2014/06/18
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-celikkaya14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 32
IS  - 2
SP  - 1962
EP  - 1970
L1  - http://proceedings.mlr.press/v32/celikkaya14.pdf
UR  - https://proceedings.mlr.press/v32/celikkaya14.html
AB  - We describe a deterministic anytime method for calculating filtered and smoothed distributions in large variable-based continuous time Markov processes. Prior non-random algorithms do not converge to the true distribution in the limit of infinite computation time. Sampling algorithms give different results each time run, which can lead to instability when used inside expectation-maximization or other algorithms. Our method combines the anytime convergent properties of sampling with the non-random nature of variational approaches. It is built upon a sum of time-ordered products, an expansion of the matrix exponential. We demonstrate that our method performs as well as or better than the current best sampling approaches on benchmark problems.
ER  -
APA
Celikkaya, E. B. & Shelton, C. (2014). Deterministic Anytime Inference for Stochastic Continuous-Time Markov Processes. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1962-1970. Available from https://proceedings.mlr.press/v32/celikkaya14.html.