Wednesday, October 15, 2025

How to beat Monte Carlo (without QMC) – Statisfaction


Say I want to approximate the integral

I(f) = \int_{[0, 1]^s} f(u)\, \mathrm{d}u

based on n evaluations of the function f. I could use plain old Monte Carlo:

\hat{I}(f) = n^{-1} \sum_{i=1}^n f(U_i), \quad U_i \sim U([0, 1]^s),

whose RMSE (root mean square error) is O(n^{-1/2}).
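Plain Monte Carlo can be sketched in a few lines of NumPy; the test function f(u) = exp(u) and the sample size below are my own toy choices, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

def plain_mc(f, n, s=1):
    """Plain Monte Carlo: average f over n IID uniform points in [0, 1]^s."""
    u = rng.random((n, s))
    return f(u).mean()

# Toy example: f(u) = exp(u) on [0, 1], whose exact integral is e - 1.
f = lambda u: np.exp(u[:, 0])
est = plain_mc(f, 10_000)
```

With n = 10,000 points the error is typically of order n^{-1/2} = 10^{-2}, consistent with the RMSE rate above.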

Can I do better? That is, can I design an alternative estimator/algorithm, which performs n evaluations and returns a random output, with an RMSE that converges faster?

Surprisingly (to me at least), the answer to this question has been known for a long time. If I am willing to focus on functions f \in \mathcal{C}^r([0, 1]^s), Bakhvalov (1959) showed that the best rate I can hope for is O(n^{-1/2-r/s}). That is, there exist algorithms that achieve this rate, and algorithms attaining a better rate simply do not exist.

OK, but how can I actually design such an algorithm? Bakhvalov's proof contains a simple recipe. Say I am able to construct a good approximation f_n of f, based on n evaluations; assume the approximation error is \|f - f_n\|_\infty = O(n^{-\alpha}), \alpha > 0. Then I could compute the following estimator, based on a second batch of n evaluations:

\hat{I}(f) := I(f_n) + n^{-1} \sum_{i=1}^n (f - f_n)(U_i), \quad U_i \sim U([0, 1]^s),

and it is easy to check that this new estimator (based on 2n evaluations) is unbiased, that its variance is O(n^{-1-2\alpha}), and therefore that its RMSE is O(n^{-1/2-\alpha}).
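The recipe translates directly into code, given any approximation f_n whose integral is known exactly. In this sketch the approximation is a first-order Taylor expansion rather than one built from a first batch of evaluations; the function names and the choice of f_n are mine, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def corrected_mc(f, f_n, I_fn, n, s=1):
    """Bakhvalov-style estimator: exact integral of the approximation f_n,
    plus plain Monte Carlo applied to the residual f - f_n."""
    u = rng.random((n, s))
    return I_fn + (f(u) - f_n(u)).mean()

# Toy example: f(u) = exp(u) on [0, 1], approximated by its first-order
# Taylor expansion f_n(u) = 1 + u, whose exact integral is 3/2.
f = lambda u: np.exp(u[:, 0])
f_n = lambda u: 1.0 + u[:, 0]
est = corrected_mc(f, f_n, 1.5, 10_000)
```

The estimator is unbiased because the Monte Carlo term is an unbiased estimate of I(f) - I(f_n); the variance gain comes entirely from the residual f - f_n being small.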

So there is a strong connection between stochastic quadrature and function approximation. In fact, the best rate you can achieve for the latter is \alpha = r/s, which explains why the best rate you can get for the former is 1/2 + r/s.

You can now better understand the "without QMC" in the title. QMC is about using points that are "better" than random points. But here I am using IID points, and the improved rate comes from the fact that I use a better approximation of f.

Here is a simple example of a good function approximation. Take s = 1, and

f_n(u) = \sum_{i=1}^n f\left(\frac{2i-1}{2n}\right) 1_{[(i-1)/n, i/n]}(u);

that is, split [0, 1] into n intervals [(i-1)/n, i/n], and approximate f within a given interval by its value at the centre of the interval. You can check that the approximation error is O(n^{-1}) provided f is C^1. So you get a simple recipe to obtain the optimal rate when s = 1 and r = 1.
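Putting the two pieces together for s = 1 gives a complete algorithm: a first batch of n evaluations at the interval midpoints builds the piecewise-constant f_n (whose integral is just the average of those values), and a second batch handles the residual. A minimal sketch, with function name and test function of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)

def midpoint_mc(f, n):
    """Two-batch estimator on [0, 1]: evaluate f at the n interval midpoints
    to build the piecewise-constant approximation f_n, then apply plain
    Monte Carlo to the residual f - f_n."""
    mids = (2 * np.arange(1, n + 1) - 1) / (2 * n)  # (2i-1)/(2n), i = 1..n
    vals = f(mids)                                  # first batch of n evaluations
    I_fn = vals.mean()                              # exact integral of f_n
    u = rng.random(n)                               # second batch of n points
    idx = np.minimum((u * n).astype(int), n - 1)    # interval containing each U_i
    return I_fn + (f(u) - vals[idx]).mean()

est = midpoint_mc(np.exp, 1_000)  # true value: e - 1
```

Since the residual is O(n^{-1}) in sup norm, the RMSE of this estimator is O(n^{-3/2}), matching Bakhvalov's bound for r = s = 1.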

Is it possible to generalise such a construction to any r and any s? The answer is in our recent paper with Mathieu Gerber, which you can find here. You may also want to read Novak (2016), which is an excellent entry point on stochastic quadrature, and in particular gives a more detailed (and more rigorous!) overview of Bakhvalov's and related results.

Also, please remind me never to try typing LaTeX in WordPress again.
