Sequential Information Maximization: When is Greedy Near-optimal?

Yuxin Chen, S. Hamed Hassani, Amin Karbasi, Andreas Krause
Proceedings of The 28th Conference on Learning Theory, PMLR 40:338-363, 2015.

Abstract

Optimal information gathering is a central challenge in machine learning and science in general. A common objective that quantifies the usefulness of observations is Shannon’s mutual information, defined w.r.t. a probabilistic model. Greedily selecting observations that maximize the mutual information is the method of choice in numerous applications, ranging from Bayesian experimental design to automated diagnosis, to active learning in Bayesian models. Despite its importance and widespread use in applications, little is known about the theoretical properties of sequential information maximization, in particular under noisy observations. In this paper, we analyze the widely used greedy policy for this task, and identify problem instances where it provides provably near-maximal utility, even in the challenging setting of persistent noise. Our results depend on a natural separability condition associated with a channel injecting noise into the observations. We also identify examples where this separability parameter is necessary in the bound: if it is too small, then the greedy policy fails to select informative tests.
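The greedy policy analyzed in the paper repeatedly selects the observation (test) whose outcome has maximal mutual information with the hidden variable under the current belief, then updates the belief with the noisy outcome. Purely as an illustration, here is a minimal Python sketch of that selection rule for a discrete hypothesis space with noisy tests; the toy setup, function names, and the observe() callback are our own assumptions and are not taken from the paper.

import numpy as np

def entropy(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(prior, likelihood):
    """Mutual information I(Y; X) between the hypothesis Y (distribution
    `prior`) and a test outcome X with likelihood[y, x] = P(x | y)."""
    joint = prior[:, None] * likelihood        # P(y, x)
    marginal_x = joint.sum(axis=0)             # P(x)
    h_post = 0.0                               # expected posterior entropy H(Y | X)
    for x in range(likelihood.shape[1]):
        if marginal_x[x] > 0:
            h_post += marginal_x[x] * entropy(joint[:, x] / marginal_x[x])
    return entropy(prior) - h_post

def greedy_information_maximization(prior, likelihoods, budget, observe):
    """Adaptive greedy policy: at each step pick the unused test with the
    largest mutual information under the current belief, observe its (noisy)
    outcome via observe(t), and do a Bayesian belief update."""
    belief, chosen = prior.copy(), []
    for _ in range(budget):
        candidates = [t for t in range(len(likelihoods)) if t not in chosen]
        gains = [expected_information_gain(belief, likelihoods[t]) for t in candidates]
        t_star = candidates[int(np.argmax(gains))]
        x = observe(t_star)
        belief = belief * likelihoods[t_star][:, x]
        belief /= belief.sum()
        chosen.append(t_star)
    return chosen, belief

if __name__ == "__main__":
    # Toy usage: 4 hypotheses, 6 binary tests; test t "checks" hypothesis
    # t % 4 and its outcome is flipped with probability eps (persistent-noise
    # flavor is not modeled here; this is only a selection-rule sketch).
    rng = np.random.default_rng(0)
    n_hyp, n_tests, eps, truth = 4, 6, 0.1, 2
    likelihoods = []
    for t in range(n_tests):
        lik = np.full((n_hyp, 2), [1 - eps, eps])
        lik[t % n_hyp] = [eps, 1 - eps]
        likelihoods.append(lik)
    observe = lambda t: int(rng.random() < likelihoods[t][truth, 1])
    chosen, belief = greedy_information_maximization(
        np.full(n_hyp, 1 / n_hyp), likelihoods, budget=3, observe=observe)
    print("tests:", chosen, "posterior:", np.round(belief, 3))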

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Chen15b,
  title     = {Sequential Information Maximization: When is Greedy Near-optimal?},
  author    = {Chen, Yuxin and Hassani, S. Hamed and Karbasi, Amin and Krause, Andreas},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {338--363},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Chen15b.pdf},
  url       = {https://proceedings.mlr.press/v40/Chen15b.html}
}
APA
Chen, Y., Hassani, S. H., Karbasi, A., & Krause, A. (2015). Sequential Information Maximization: When is Greedy Near-optimal? Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:338–363. Available from https://proceedings.mlr.press/v40/Chen15b.html.