In this article we present a variance-reduction technique for Monte Carlo methods. Using an elementary version of the Girsanov theorem, we introduce a drift term into the Monte Carlo computation of a security's price. The basic idea is then to use a truncated version of the Robbins–Monro algorithm to find the drift that minimizes the variance. We prove that, for a large class of payoff functions, this truncated Robbins–Monro algorithm converges a.s. to the optimal drift. Finally, we illustrate the method with applications to option pricing.
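As a concrete illustration of the two ingredients described above, the following sketch applies the change of drift and a truncated Robbins–Monro recursion to a one-dimensional Black–Scholes call. The market parameters, the step sizes gamma_n = 1/n, and the expanding truncation bounds K_j = j + 1 are illustrative assumptions, not the paper's actual settings.

```python
import math
import random

random.seed(0)

# Illustrative Black-Scholes setup (assumed parameters, not from the paper):
# the discounted call payoff is written as f(Z) with Z ~ N(0, 1).
S0, K, r, sigma, T = 100.0, 110.0, 0.05, 0.2, 1.0

def payoff(z):
    ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * max(ST - K, 0.0)

# Girsanov change of drift: for any theta,
#   E[f(Z)] = E[f(Z + theta) * exp(-theta*Z - theta^2 / 2)],
# so the estimator stays unbiased for every drift. Its second moment is
#   v(theta) = E[f(Z)^2 * exp(-theta*Z + theta^2 / 2)],
# whose gradient has the unbiased one-sample estimate below.
def grad_sample(theta, z):
    return payoff(z) ** 2 * (theta - z) * math.exp(-theta * z + 0.5 * theta ** 2)

# Truncated Robbins-Monro recursion: whenever the raw update leaves the
# current ball, restart at 0 and enlarge the ball (Chen-style truncation;
# the restart point and bound schedule are simplifying choices here).
theta, j = 0.0, 0
for n in range(1, 20001):
    z = random.gauss(0.0, 1.0)
    cand = theta - grad_sample(theta, z) / n   # step size gamma_n = 1/n (assumed)
    if abs(cand) <= j + 1.0:                   # expanding bounds K_j = j + 1 (assumed)
        theta = cand
    else:
        theta, j = 0.0, j + 1                  # truncate: restart inside a larger ball

# Price with the learned drift via the importance-sampled estimator.
N = 100_000
price = sum(payoff(z + theta) * math.exp(-theta * z - 0.5 * theta ** 2)
            for z in (random.gauss(0.0, 1.0) for _ in range(N))) / N
print(f"theta = {theta:.3f}, price = {price:.4f}")
```

For this out-of-the-money call the recursion drifts the sampling measure toward the in-the-money region (a positive theta), which is exactly where the variance of the plain estimator comes from; the price estimate itself remains unbiased whatever drift the recursion settles on.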