Toward Optimal Stratification for Stratified Monte-Carlo Integration

Alexandra Carpentier, Rémi Munos
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):28-36, 2013.

Abstract

We consider the problem of adaptive stratified sampling for Monte Carlo integration of a function, given a finite number of function evaluations perturbed by noise. Here we address the problem of simultaneously adapting the number of samples in each stratum and the stratification itself. We show a tradeoff in the size of the partitioning. On the one hand, it is important to refine the partition in areas where the observation noise or the function is heterogeneous, in order to reduce this variability. On the other hand, an overly refined stratification makes it harder to assign the samples according to a near-optimal (oracle) allocation strategy. In this paper we provide an algorithm, Monte-Carlo Upper-Lower Confidence Bound, that selects online, among a large class of partitions, the partition that provides a near-optimal trade-off, and allocates the samples almost optimally on this partition.
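To make the setting concrete, here is a minimal two-stage sketch of stratified Monte Carlo integration on [0, 1] with a near-oracle allocation: pilot samples estimate each stratum's variability, then the remaining budget is spent roughly in proportion to stratum width times estimated standard deviation. This illustrates the allocation the abstract refers to, not the paper's MC-ULCB algorithm; the integrand `noisy_f`, the fixed partition, and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    # Hypothetical integrand on [0, 1] with heteroscedastic observation noise
    x = np.asarray(x, dtype=float)
    return np.sin(6.0 * x) + rng.normal(0.0, 0.1 + 0.5 * x)

def stratified_estimate(f, edges, n_total, n_pilot=10):
    """Two-stage stratified Monte Carlo estimate of the integral of f on [0, 1]."""
    K = len(edges) - 1
    widths = np.diff(edges)
    means = np.empty(K)
    sigmas = np.empty(K)
    # Stage 1: pilot samples estimate each stratum's mean and variability.
    for k in range(K):
        ys = f(rng.uniform(edges[k], edges[k + 1], n_pilot))
        means[k] = ys.mean()
        sigmas[k] = ys.std(ddof=1)
    # Stage 2: spend the remaining budget roughly in proportion to
    # w_k * sigma_k (the near-optimal "oracle" allocation).
    weights = widths * sigmas
    weights = weights / weights.sum()
    extra = np.floor(weights * (n_total - K * n_pilot)).astype(int)
    for k in range(K):
        if extra[k] > 0:
            ys = f(rng.uniform(edges[k], edges[k + 1], extra[k]))
            n_k = n_pilot + extra[k]
            means[k] = (n_pilot * means[k] + extra[k] * ys.mean()) / n_k
    # Integral estimate: sum over strata of width times stratum mean.
    return float(np.sum(widths * means))

edges = np.linspace(0.0, 1.0, 9)  # a fixed 8-stratum partition
est = stratified_estimate(noisy_f, edges, n_total=2000)
```

A fixed partition like this is exactly what the paper improves upon: the algorithm additionally chooses the partition online, trading off refinement against the difficulty of estimating a good allocation on many strata.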

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-carpentier13,
  title     = {Toward Optimal Stratification for Stratified Monte-Carlo Integration},
  author    = {Carpentier, Alexandra and Munos, Rémi},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {28--36},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/carpentier13.pdf},
  url       = {https://proceedings.mlr.press/v28/carpentier13.html},
  abstract  = {We consider the problem of adaptive stratified sampling for Monte Carlo integration of a function, given a finite number of function evaluations perturbed by noise. Here we address the problem of adapting simultaneously the number of samples into each stratum and the stratification itself. We show a tradeoff in the size of the partitioning. On the one hand it is important to refine the partition in areas where the observation noise or the function are heterogeneous in order to reduce this variability. But on the other hand, a too refined stratification makes it harder to assign the samples according to a near-optimal (oracle) allocation strategy. In this paper we provide an algorithm \emph{Monte-Carlo Upper-Lower Confidence Bound} that selects online, among a large class of partitions, the partition that provides a near-optimal trade-off, and allocates the samples almost optimally on this partition.}
}
Endnote
%0 Conference Paper
%T Toward Optimal Stratification for Stratified Monte-Carlo Integration
%A Alexandra Carpentier
%A Rémi Munos
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-carpentier13
%I PMLR
%P 28--36
%U https://proceedings.mlr.press/v28/carpentier13.html
%V 28
%N 2
%X We consider the problem of adaptive stratified sampling for Monte Carlo integration of a function, given a finite number of function evaluations perturbed by noise. Here we address the problem of adapting simultaneously the number of samples into each stratum and the stratification itself. We show a tradeoff in the size of the partitioning. On the one hand it is important to refine the partition in areas where the observation noise or the function are heterogeneous in order to reduce this variability. But on the other hand, a too refined stratification makes it harder to assign the samples according to a near-optimal (oracle) allocation strategy. In this paper we provide an algorithm, Monte-Carlo Upper-Lower Confidence Bound, that selects online, among a large class of partitions, the partition that provides a near-optimal trade-off, and allocates the samples almost optimally on this partition.
RIS
TY - CPAPER
TI - Toward Optimal Stratification for Stratified Monte-Carlo Integration
AU - Alexandra Carpentier
AU - Rémi Munos
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-carpentier13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 2
SP - 28
EP - 36
L1 - http://proceedings.mlr.press/v28/carpentier13.pdf
UR - https://proceedings.mlr.press/v28/carpentier13.html
AB - We consider the problem of adaptive stratified sampling for Monte Carlo integration of a function, given a finite number of function evaluations perturbed by noise. Here we address the problem of adapting simultaneously the number of samples into each stratum and the stratification itself. We show a tradeoff in the size of the partitioning. On the one hand it is important to refine the partition in areas where the observation noise or the function are heterogeneous in order to reduce this variability. But on the other hand, a too refined stratification makes it harder to assign the samples according to a near-optimal (oracle) allocation strategy. In this paper we provide an algorithm, Monte-Carlo Upper-Lower Confidence Bound, that selects online, among a large class of partitions, the partition that provides a near-optimal trade-off, and allocates the samples almost optimally on this partition.
ER -
APA
Carpentier, A. & Munos, R. (2013). Toward Optimal Stratification for Stratified Monte-Carlo Integration. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(2):28-36. Available from https://proceedings.mlr.press/v28/carpentier13.html.