In this section, we will look at a case study in Markov chain Monte Carlo (MCMC). MCMC comprises algorithms for sampling from a probability distribution. The sampling is carried out by a Markov chain, i.e. a random process that moves from one state to another. In this article, we will use the Metropolis-Hastings algorithm to generate a random walk driven by a proposal density, and we will accept or reject each proposed move according to an acceptance condition.
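As a concrete illustration, one accept/reject step of the algorithm can be sketched in Java as follows. The standard-Gaussian target and the proposal width used here are illustrative choices for the sketch, not the target distribution of the experiments described below; with a symmetric random-walk proposal, the acceptance probability reduces to the ratio of target densities.

```java
import java.util.Random;

// Sketch of one Metropolis-Hastings step with a symmetric random-walk
// proposal, so the acceptance ratio reduces to p(candidate) / p(current).
public class MetropolisStep {
    static final Random RNG = new Random(42);

    // Example target: an (unnormalized) standard Gaussian density.
    // MH only needs the target up to a normalizing constant.
    static double p(double x) {
        return Math.exp(-0.5 * x * x);
    }

    // One step of the random-walk Metropolis algorithm.
    static double step(double x, double sigma) {
        double candidate = x + sigma * RNG.nextGaussian(); // propose a move
        double acceptProb = Math.min(1.0, p(candidate) / p(x));
        return (RNG.nextDouble() < acceptProb) ? candidate : x; // accept or stay
    }

    public static void main(String[] args) {
        double x = 0.0;
        for (int i = 0; i < 10; i++) {
            x = step(x, 1.0);
            System.out.println(x);
        }
    }
}
```

Iterating this step produces the random walk; the rest of the article fills in the target and proposal densities used in the experiments.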
We will perform some experiments to study the behavior of the Metropolis-Hastings algorithm on a test suite of target distributions. The test suite consists in this case of a mixture of two Gaussian distributions in one dimension, each characterized by its mean and variance. The target distribution is therefore the mixture

p(x) = w1 N(x; μ1, σ1²) + w2 N(x; μ2, σ2²).

The weights w1 and w2 measure the relative contribution of both Gaussian distributions. Without loss of generality, we can always assume that the first mean μ1 is zero, since the whole mixture can simply be shifted.
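The mixture density can be evaluated in Java as follows; this is a minimal sketch, and the class name and the example parameter values in main are illustrative, not taken from the text.

```java
// Density of a two-component Gaussian mixture,
// p(x) = w1 * N(x; mu1, sigma1^2) + w2 * N(x; mu2, sigma2^2).
public class Mixture {
    // Normalized Gaussian density N(x; mu, sigma^2).
    static double gaussian(double x, double mu, double sigma) {
        double z = (x - mu) / sigma;
        return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2.0 * Math.PI));
    }

    static double density(double x, double w1, double mu1, double s1,
                          double mu2, double s2) {
        double w2 = 1.0 - w1; // the weights sum to one
        return w1 * gaussian(x, mu1, s1) + w2 * gaussian(x, mu2, s2);
    }

    public static void main(String[] args) {
        // Example: an equal-weight mixture with modes at 0 and 4.
        System.out.println(density(0.0, 0.5, 0.0, 1.0, 4.0, 1.0)); // roughly 0.1995
    }
}
```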
As a target distribution, we will use this two-component Gaussian mixture.
For additional background, I refer to the paper An Introduction to MCMC for Machine Learning by C. Andrieu, N. de Freitas, A. Doucet and M. Jordan.
The proposal distributions are the Gaussian

q(x' | x) = N(x'; x, σ²)

and the uniform distribution over the interval [x − δ, x + δ], where x is the current state and x' is the candidate new state.
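Both proposals are symmetric around the current state and can be sampled by adding a random offset to it. A minimal sketch, where σ and δ are tuning parameters chosen by the experimenter:

```java
import java.util.Random;

// The two proposal kernels: a Gaussian centered at the current state x,
// and a uniform draw from [x - delta, x + delta]. Both are symmetric,
// i.e. q(x'|x) = q(x|x'), so the Hastings correction factor cancels.
public class Proposals {
    static final Random RNG = new Random(7);

    static double gaussianProposal(double x, double sigma) {
        return x + sigma * RNG.nextGaussian();
    }

    static double uniformProposal(double x, double delta) {
        // 2*u - 1 maps a uniform draw on [0, 1) to [-1, 1).
        return x + (2.0 * RNG.nextDouble() - 1.0) * delta;
    }

    public static void main(String[] args) {
        System.out.println(gaussianProposal(0.0, 1.0));
        System.out.println(uniformProposal(0.0, 1.0));
    }
}
```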
Now, we can begin with the experiments. We will collect 5,000 samples after the chain has converged to the invariant distribution, i.e. after the burn-in period. We will then construct a histogram based on the collected samples and compare it with the target distribution. For these experiments, we will use Java.
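Putting the pieces together, an end-to-end version of the experiment might look like the sketch below. The burn-in length, proposal width, and mixture parameters are illustrative assumptions rather than the values used in the article's runs, and the histogram is printed as plain bin counts rather than plotted.

```java
import java.util.Random;

// End-to-end sketch: run random-walk Metropolis-Hastings against a
// two-Gaussian mixture, discard a burn-in phase, keep 5,000 samples,
// and bin them into a crude text histogram.
public class MhExperiment {
    static final Random RNG = new Random(123);

    static double gaussian(double x, double mu, double sigma) {
        double z = (x - mu) / sigma;
        return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2.0 * Math.PI));
    }

    // Target: equal-weight mixture of N(0, 1) and N(4, 1) (example parameters).
    static double target(double x) {
        return 0.5 * gaussian(x, 0.0, 1.0) + 0.5 * gaussian(x, 4.0, 1.0);
    }

    // Run the chain for burnIn + keep steps and return the last `keep` states.
    static double[] sample(int burnIn, int keep, double sigma) {
        double[] out = new double[keep];
        double x = 0.0;
        for (int i = 0; i < burnIn + keep; i++) {
            double cand = x + sigma * RNG.nextGaussian();
            if (RNG.nextDouble() < Math.min(1.0, target(cand) / target(x))) {
                x = cand; // accept the proposed move
            }
            if (i >= burnIn) out[i - burnIn] = x;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] samples = sample(1_000, 5_000, 2.0);
        // Crude histogram: 24 bins of width 0.5 over [-4, 8).
        int[] bins = new int[24];
        for (double v : samples) {
            int b = (int) Math.floor((v + 4.0) / 0.5);
            if (b >= 0 && b < bins.length) bins[b]++;
        }
        for (int b = 0; b < bins.length; b++) {
            System.out.printf("%5.1f %d%n", -4.0 + 0.5 * b, bins[b]);
        }
    }
}
```

The bin counts, normalized by the sample size and bin width, can then be compared against the mixture density evaluated at the bin centers.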