Chapter 5 Monte Carlo integration

A typical use of simulation of random variables is Monte Carlo integration. With \(X_1, \ldots, X_n\) i.i.d. with density \(f\), \[\hat{\mu}_{\textrm{MC}} := \frac{1}{n} \sum_{i=1}^n h(X_i) \rightarrow \mu := E(h(X_1)) = \int h(x) f(x) \ \mathrm{d}x\] for \(n \to \infty\) by the law of large numbers (LLN), provided that \(E|h(X_1)| < \infty\).

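As a small, self-contained illustration (the integrand \(h(x) = e^{-x^2}\) and the standard Gaussian density \(f\) are choices made here for the example, not taken from the text), a Monte Carlo approximation of \(\mu\) can be sketched in Python as follows. For this particular pair, \(\mu = 1/\sqrt{3} \approx 0.577\), so the output can be checked against the exact value.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def h(x):
    # Illustrative integrand; any h with E|h(X_1)| finite works.
    return np.exp(-x**2)

n = 100_000
x = rng.standard_normal(n)   # X_1, ..., X_n i.i.d. with density f = N(0, 1)
mu_hat_mc = np.mean(h(x))    # the Monte Carlo average (1/n) * sum_i h(X_i)

print(mu_hat_mc)             # approximates E(h(X_1)) = 1/sqrt(3) ≈ 0.5774
```
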
Monte Carlo integration is a clever idea: we use the computer to simulate i.i.d. random variables and compute an average as an approximation of an integral. The idea may be applied in a statistical context, but it may also have applications outside of statistics and be a direct competitor to numerical integration. By increasing \(n\), the LLN tells us that the average will eventually become a good approximation of the integral. However, the LLN does not quantify how large \(n\) should be, and a fundamental question of Monte Carlo integration is therefore how to quantify the precision of the average.

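This quantification can be previewed with a short sketch: by the CLT (treated below), the standard deviation of the Monte Carlo average is approximately \(\sigma/\sqrt{n}\) with \(\sigma^2 = V(h(X_1))\), and it can be estimated by the sample standard deviation of the \(h(X_i)\)'s divided by \(\sqrt{n}\). The integrand below is the same illustrative choice as above, and the sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
h = lambda x: np.exp(-x**2)   # same illustrative integrand as above

for n in (100, 10_000, 1_000_000):
    hx = h(rng.standard_normal(n))
    mu_hat = hx.mean()
    se_hat = hx.std(ddof=1) / np.sqrt(n)   # estimated standard error of the average
    # CLT-based approximate 95% confidence interval: mu_hat +/- 1.96 * se_hat
    print(f"n = {n:>9,}: {mu_hat:.4f} +/- {1.96 * se_hat:.4f}")
```

The interval width shrinks at the rate \(1/\sqrt{n}\), which makes precise the informal statement that the average eventually becomes a good approximation.
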
This chapter first deals with the quantification of precision, mostly via the asymptotic variance in the central limit theorem. On the one hand, this provides a quantification of precision for any specific Monte Carlo approximation, and on the other hand it provides a way to compare different Monte Carlo integration techniques. The direct use of the average above requires that we can simulate from the distribution with density \(f\), but the resulting approximation might have low precision, or the simulation might just be plain difficult. In the second half of the chapter we will treat importance sampling, which is a technique for simulating from a different distribution and using a weighted average to obtain the approximation of the integral.
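
As a sketch of the weighted-average idea (the tail-probability integrand and the \(N(4, 1)\) proposal are assumptions made for this example, not taken from the text), importance sampling can look as follows in Python: samples are drawn from a proposal density \(g\) and reweighted by \(f/g\), which is useful exactly when sampling from \(f\) directly would give low precision.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=3)

# Illustrative target: mu = P(X > 4) = E(h(X)) with h(x) = 1(x > 4) and
# f the standard normal density. Plain Monte Carlo rarely hits the event.
n = 100_000

# Proposal g: a normal distribution shifted into the region that matters.
y = rng.normal(loc=4.0, scale=1.0, size=n)
w = norm.pdf(y) / norm.pdf(y, loc=4.0, scale=1.0)   # importance weights f(y) / g(y)
mu_hat_is = np.mean((y > 4) * w)                    # the weighted average

print(mu_hat_is)          # close to 1 - Phi(4) ≈ 3.17e-05
print(1 - norm.cdf(4))    # exact value for comparison
```

A plain Monte Carlo average based on 100,000 draws from \(f\) would be expected to see only about three samples above 4, so the weighted average based on the shifted proposal is far more precise in this example.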