Generalizing the Balance Heuristic Estimator in Multiple Importance Sampling
Abstract
1. Introduction
2. Balance Heuristic Estimator
2.1. Interpretation of F and General Notation of the Paper
2.2. Rationale
2.3. Realistic Applications
2.3.1. Global Illumination
2.3.2. Bayesian Inference
3. Generalized Multiple Importance Sampling Balance Heuristic Estimator
3.1. Case 1: α_i = n_i/N, ∀i
3.2. Case 2: {n_i} Fixed
Optimal Efficiency
3.3. Case 3: {α_i} Fixed
3.4. Case 4: Equal Sampling Rates, n_i = N/n
Comparing V[F], V[G], and the Optimal V[G]
3.5. Case 5: General Case
4. Singular Solutions
5. Relationship with χ² Divergence: A Necessary and Sufficient Condition for V[G] ≤ V[F]
6. Numerical Examples
6.1. Efficiency Comparison between F and G Estimators
- As expected, Theorems 4 and 5 hold for all the cases.
- Examples 1–4 show a general gain in efficiency of around 20%.
- Example 5 with equal costs shows a gain in efficiency from 2% to around 20%.
- Example 5 with different costs shows large gains in efficiency; in particular, for an equal count of samples the efficiency is doubled. (A numerical sketch of this F-versus-G comparison follows this list.)
6.1.1. Example 1
6.1.2. Example 2
6.1.3. Example 3
6.1.4. Example 4
6.1.5. Example 5
6.2. Variances of Examples 1–5 for Some Notable Cases
- In the first, second, and third rows we present the optimal variance values (with the third row corresponding to the optimal G). In the first example those values are equal up to the fourth decimal position, in the second example up to the second decimal position, in the third and fourth examples they are equal, and in the fifth example the gain of the optimal G with respect to the optimal F is more than 30%.
- In the fourth row we present the values of V[F] for equal sampling rates, which are compared against several strategies and heuristics in the following rows.
- The values in the fifth row, for V[G], correspond to Equation (30) (or the left-hand side of Equation (37) with all costs equal to 1); although better than the values of V[F] in the fourth row, as expected, they do not give any significant improvement. As we have seen in Table 3, the improvements for this case come rather from efficiency.
- The values in the sixth row, which correspond to the solution of Equation (40), that is, the optimal weights α for equal sampling rates, are smaller than the variances of F for equal sampling rates (fourth row), and much smaller in some of the examples, with variances equal to (as in Examples 3 and 4) or less than (as in Example 5) the optimal V[F].
- Although in general there is no solution making the relevant per-technique quantities equal for all i (in Example 1, for instance, only a numerical approximation is obtained), the variances in the seventh row of Table 4 are, for all the examples, very near the optimal value of V[F]. Observe that if these values exist, then they are the optimal sampling rates for F, i.e., for these sampling rates the G estimator cannot improve on the F estimator.
- In row 8 we show the G estimator corresponding to the jointly optimal weights and sampling rates. It improves on V[F] for all examples except the third one, where it scores closely, and beats the estimators in rows 9 and 10 except for Example 3. It is a theoretical estimator, as there is no easy way to solve the corresponding equation; however, it can be approximated by the heuristic in row 11 (see below). Observe that if solutions making the per-technique quantities equal for all i exist, then they also give the optimal sampling rates for G.
- The results in row 9 correspond to the estimator defined in [19], whose weights are driven by the second moments of the independent techniques. The results are slightly better than V[F] for Examples 1 and 2, much better for Example 3, much worse for Example 4, and slightly worse for Example 5. Observe that the second moments can be easily approximated using a first batch of samples in a Monte Carlo integration, as was already done in [7,9,10].
- In row 10 we introduce the estimator whose weights are the optimal weights in the linear combination of the independent estimators when the sampling rates are fixed (Equation (13) of [9]), i.e., weights inversely proportional to the variances of the individual estimators. It gives better results than V[F] for Examples 1 and 2, much better for Example 3, worse for Example 5, and much worse for Example 4. This estimator improves on the one in [19] except for Example 4.
- In row 11 we approximate the estimator in row 8 by using quantities that can be easily estimated with a first batch of samples in a Monte Carlo integration (a minimal sketch of this two-stage approach follows this list). In all cases except Example 3, we improve on V[F]. The big error of the estimators in rows 9 and 10 on Example 4 is controlled.
- In row 12 we modulate the estimator in row 11 with a further set of values. Observe that now, for all the examples, the variance is less than V[F].
- The results in row 13 show that, even when the corresponding condition holds, we cannot guarantee that V[G] ≤ V[F].
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A. Difference between the Variances of Deterministic and Randomized Multiple Importance Sampling Estimators
Appendix B. Proof of Theorem 2: Optimal Variance of F
Appendix C. Derivation of Case 3
Appendix D. An Alternative Perspective Based on the χ² Divergence
References
1. Veach, E.; Guibas, L. Optimally Combining Sampling Techniques for Monte Carlo Rendering. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 6–11 August 1995; pp. 419–428.
2. Elvira, V.; Martino, L.; Luengo, D.; Bugallo, M.F. Generalized Multiple Importance Sampling. Stat. Sci. 2019, 34, 129–155.
3. Owen, A.; Zhou, Y. Safe and Effective Importance Sampling. J. Am. Stat. Assoc. 2000, 95, 135–143.
4. Elvira, V.; Martino, L.; Luengo, D.; Bugallo, M.F. Efficient Multiple Importance Sampling Estimators. IEEE Signal Process. Lett. 2015, 22, 1757–1761.
5. Elvira, V.; Martino, L.; Luengo, D.; Bugallo, M.F. Multiple importance sampling with overlapping sets of proposals. In Proceedings of the Statistical Signal Processing Workshop (SSP), Palma de Mallorca, Spain, 26–29 June 2016; pp. 1–5.
6. Elvira, V.; Martino, L.; Luengo, D.; Bugallo, M.F. Heretical Multiple Importance Sampling. IEEE Signal Process. Lett. 2016, 23, 1474–1478.
7. Sbert, M.; Havran, V.; Szirmay-Kalos, L. Variance Analysis of Multi-sample and One-sample Multiple Importance Sampling. Comput. Graph. Forum 2016, 35, 451–460.
8. Havran, V.; Sbert, M. Optimal Combination of Techniques in Multiple Importance Sampling. In Proceedings of the 13th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry, VRCAI ’14, Shenzhen, China, 30 November–2 December 2014; ACM: New York, NY, USA, 2014; pp. 141–150.
9. Sbert, M.; Havran, V. Adaptive multiple importance sampling for general functions. Vis. Comput. 2017, 33, 1–11.
10. Sbert, M.; Havran, V.; Szirmay-Kalos, L.; Elvira, V. Multiple importance sampling characterization by weighted mean invariance. Vis. Comput. 2018, 34, 843–852.
11. Cappé, O.; Guillin, A.; Marin, J.M.; Robert, C.P. Population Monte Carlo. J. Comput. Graph. Stat. 2004, 13, 907–929.
12. Cornuet, J.M.; Marin, J.M.; Mira, A.; Robert, C.P. Adaptive Multiple Importance Sampling. Scand. J. Stat. 2012, 39, 798–812.
13. Martino, L.; Elvira, V.; Luengo, D.; Corander, J. Layered adaptive importance sampling. Stat. Comput. 2017, 27, 599–623.
14. Elvira, V.; Martino, L.; Luengo, D.; Bugallo, M.F. Improving Population Monte Carlo: Alternative Weighting and Resampling Schemes. Signal Process. 2017, 131, 77–91.
15. Bugallo, M.F.; Elvira, V.; Martino, L.; Luengo, D.; Míguez, J.; Djuric, P.M. Adaptive Importance Sampling: The past, the present, and the future. IEEE Signal Process. Mag. 2017, 34, 60–79.
16. Kondapaneni, I.; Vevoda, P.; Grittmann, P.; Skřivan, T.; Slusallek, P.; Křivánek, J. Optimal Multiple Importance Sampling. ACM Trans. Graph. 2019, 38, 37.
17. Veach, E. Robust Monte Carlo Methods for Light Transport Simulation. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 1997.
18. Karlík, O.; Šik, M.; Vévoda, P.; Skřivan, T.; Křivánek, J. MIS Compensation: Optimizing Sampling Techniques in Multiple Importance Sampling. ACM Trans. Graph. 2019, 38, 151.
19. Grittmann, P.; Georgiev, I.; Slusallek, P.; Křivánek, J. Variance-Aware Multiple Importance Sampling. ACM Trans. Graph. 2019, 38, 152.
20. West, R.; Georgiev, I.; Gruson, A.; Hachisuka, T. Continuous Multiple Importance Sampling. ACM Trans. Graph. 2020, 39, 136.
21. Kajiya, J.T. The Rendering Equation. In Proceedings of Computer Graphics (SIGGRAPH ’86), Dallas, TX, USA, 18–22 August 1986; Evans, D.C., Athay, R.J., Eds.; Volume 20, pp. 143–150.
22. Robert, C.P. The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation; Springer: Berlin/Heidelberg, Germany, 2007; Volume 2.
23. Elvira, V.; Martino, L.; Closas, P. Importance Gaussian Quadrature. IEEE Trans. Signal Process. 2020, 69, 474–488.
24. Elvira, V.; Chouzenoux, E. Langevin-based strategy for efficient proposal adaptation in population Monte Carlo. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 5077–5081.
25. Rubinstein, R.; Kroese, D. Simulation and the Monte Carlo Method; Wiley Series in Probability and Statistics; Wiley: Hoboken, NJ, USA, 2008.
26. Sbert, M.; Yoshida, Y. Stochastic Orders on Two-Dimensional Space: Application to Cross Entropy. In Proceedings of Modeling Decisions for Artificial Intelligence—17th International Conference, MDAI 2020, Sant Cugat, Spain, 2–4 September 2020; Torra, V., Narukawa, Y., Nin, J., Agell, N., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; Volume 12256, pp. 28–40.
27. Sbert, M.; Poch, J. A necessary and sufficient condition for the inequality of generalized weighted means. J. Inequalities Appl. 2016, 2016, 292.
28. Sbert, M.; Havran, V.; Szirmay-Kalos, L. Multiple importance sampling revisited: Breaking the bounds. EURASIP J. Adv. Signal Process. 2018, 2018, 15.
29. Cornebise, J.; Moulines, E.; Olsson, J. Adaptive methods for sequential importance sampling with application to state space models. Stat. Comput. 2008, 18, 461–480.
30. Míguez, J. On the performance of nonlinear importance samplers and population Monte Carlo schemes. In Proceedings of the 2017 22nd International Conference on Digital Signal Processing (DSP), London, UK, 23–25 August 2017; pp. 1–5.
31. Nielsen, F.; Nock, R. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Process. Lett. 2014, 21, 10–13.
32. Geyer, C.J. Reweighting Monte Carlo Mixtures; Technical Report; University of Minnesota: Minneapolis, MN, USA, 1991.
| Symbol | Description |
|---|---|
| Z | Generic deterministic (multi-sample) MIS estimator |
|  | Generic deterministic (multi-sample) MIS estimator normalized to one sample |
|  | Generic randomized (one-sample) MIS estimator |
|  | Generic randomized (one-sample) MIS estimator, for number of samples equal to 1 |
| F | Balance heuristic multi-sample MIS estimator |
|  | Balance heuristic multi-sample MIS estimator normalized to one sample |
|  | Balance heuristic one-sample MIS estimator |
|  | Balance heuristic one-sample MIS estimator, for number of samples equal to 1 |
| G | Generalized balance heuristic multi-sample MIS estimator |
|  | Generalized balance heuristic multi-sample MIS estimator normalized to one sample |
|  | Generalized balance heuristic one-sample MIS estimator |
| Condition | Result |
|---|---|
| for all i, equal | global minimum |
| for all i, equal and equal | global minimum, , but |
| for all i, equal | ; optimal of is for |
| for all i, equal | optimal of is for , |
| for all i, equal, or | |
| for all i, , equal, or | optimal of when are fixed, |
| Estimator | Ex. 1 (F) | Ex. 1 (G) | Ex. 2 (F) | Ex. 2 (G) | Ex. 3 (F) | Ex. 3 (G) | Ex. 4 (F) | Ex. 4 (G) | Ex. 5, costs = (1,1) (F) | Ex. 5, costs = (1,1) (G) | Ex. 5, costs = (1,5) (F) | Ex. 5, costs = (1,5) (G) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|  | 102.26 | 89.40 | 17.24 | 15.44 | 37.47 | 31.80 | 98.68 | 83.78 | 0.28 | 0.23 | 0.83 | 0.40 |
| [8] | 49.53 | 41.29 | 9.28 | 8.10 | 4.03 | 3.85 | 300.12 | 294.85 | 0.31 | 0.26 | 2.76 | 2.33 |
| [28] | 54.36 | 46.2 | 9.82 | 8.49 | 3.12 | 2.49 | 534.37 | 449.33 | 0.20 | 0.15 | 1.51 | 1.00 |
| [28] | 81.43 | 69.88 | 13.54 | 11.67 | 28.68 | 23.17 | 91.01 | 73.54 | 1.00 | 0.98 | 2.90 | 2.50 |
| [28] | 79.73 | 67.77 | 13.08 | 11.35 | 25.74 | 20.76 | 31.77 | 25.90 | 0.29 | 0.24 | 2.72 | 2.28 |
| Row | Estimator | Example 1 | Example 2 | Example 3 | Example 4 | Example 5 |
|---|---|---|---|---|---|---|
| 1 |  | 22.7122 | 4.1949 | 0 | 0 | 0.0910 |
| 2 |  | 22.7122 | 4.1944 | 0 | 0 | 0.0903 |
| 3 |  | 22.7122 | 4.1932 | 0 | 0 | 0.0601 |
| 4 |  | 29.1634 | 4.9175 | 10.6877 | 28.1431 | 0.2771 |
| 5 |  | 29.0908 | 4.9069 | 10.6125 | 27.9412 | 0.2264 |
| 6 |  | 27.1603 | 4.7265 | 0 | 0 | 0.0653 |
| 7 |  | 22.8216 | 4.1980 | 0 | 0 | 0.0926 |
| 8 |  | 28.4089 | 4.8313 | 12.4705 | 11.2072 | 0.0734 |
| 9 | [19] | 28.0224 | 4.8513 | 3.7955 | 172.192 | 0.2832 |
| 10 |  | 27.4126 | 4.7897 | 0.5466 | 1256.48 | 0.2943 |
| 11 |  | 28.3977 | 4.8291 | 11.95 | 13.7926 | 0.0724 |
| 12 |  | 27.4426 | 4.7373 | 7.8615 | 6.7895 | 0.1112 |
| 13 |  | 41.8791 | 8.8798 | 0.0014 | 0 | 0.0739 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).