Faster Gibbs sampling MCMC from within R

Introduction

This post follows on from the previous post on Gibbs sampling in various languages. In that post a simple Gibbs sampler was implemented in various languages, and speeds were compared. It was seen that R is very slow for iterative simulation algorithms characteristic of MCMC methods such as the Gibbs sampler. Statically typed languages such as C/C++ and Java were seen to be fastest for this type of algorithm. Since many statisticians like to use R for most of their work, there is natural interest in the possibility of extending R by calling simulation algorithms written in other languages. It turns out to be straightforward to call C, C++ and Java from within R, so this post will look at how this can be done, and exactly how fast the different options turn out to be. The post draws heavily on my previous posts on calling C from R and calling Java from R, as well as Dirk Eddelbuettel’s post on calling C++ from R, and it may be helpful to consult these posts for further details.

Languages

R

We will start with the simple pure R version of the Gibbs sampler, and use this as our point of reference for understanding the benefits of re-coding in other languages. The background to the problem was given in the previous post and so won’t be repeated here. The code can be given as follows:

gibbs<-function(N=50000,thin=1000)
{
	mat=matrix(0,ncol=2,nrow=N)
	x=0
	y=0
	for (i in 1:N) {
		for (j in 1:thin) {
			x=rgamma(1,3,y*y+4)
			y=rnorm(1,1/(x+1),1/sqrt(2*x+2))
		}
		mat[i,]=c(x,y)
	}
	colnames(mat)=c("x","y")
	mat
}

This code works perfectly, but it is very slow. It takes 458.9 seconds on my very fast laptop (details given in the previous post).
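As an aside, timings like the ones quoted in this post can be obtained with base R's system.time(); the sketch below is all that is needed (the exact figures will of course vary from machine to machine).

# time a full run of the pure R sampler; "elapsed" is the figure of interest
system.time(mat <- gibbs())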

C

Let us now see how we can introduce a new function, gibbsC, into R, which works in exactly the same way as gibbs, but actually calls compiled C code to do all of the work. First we need the C code in a file called gibbs.c:

#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <R.h>
#include <Rmath.h>

void gibbs(int *Np,int *thinp,double *xvec,double *yvec)
{
  int i,j;
  int N=*Np,thin=*thinp;
  GetRNGstate();
  double x=0;
  double y=0;
  for (i=0;i<N;i++) {
    for (j=0;j<thin;j++) {
      x=rgamma(3.0,1.0/(y*y+4));          /* Rmath's rgamma takes (shape, scale) */
      y=rnorm(1.0/(x+1),1.0/sqrt(2*x+2)); /* Rmath's rnorm takes (mean, sd) */
    }
    xvec[i]=x; yvec[i]=y;
  }
  PutRNGstate();
}

This can be compiled with R CMD SHLIB gibbs.c. We can load it into R and wrap it up so that it is easy to use with the following code:

dyn.load(file.path(".",paste("gibbs",.Platform$dynlib.ext,sep="")))
gibbsC<-function(n=50000,thin=1000)
{
  tmp=.C("gibbs",as.integer(n),as.integer(thin),
                x=as.double(1:n),y=as.double(1:n))
  mat=cbind(tmp$x,tmp$y)
  colnames(mat)=c("x","y")
  mat
}

The new function gibbsC works just like gibbs, but takes just 12.1 seconds to run. This is roughly 40 times faster than the pure R version, which is a big deal.
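As a quick sanity check that the two functions really are interchangeable, the samplers can be timed and their output summaries compared from the same session. This is just a sketch (using smaller values of N and thin, since the pure R version is so slow), not a formal test of agreement.

system.time(matR <- gibbs(5000, 100))   # pure R version
system.time(matC <- gibbsC(5000, 100))  # compiled C version
summary(matR)  # sample summaries should be roughly similar
summary(matC)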

Note that using the R inline package, it is possible to directly inline the C code into the R source code. We can do this with the following R code:

require(inline)
code='
  int i,j;
  int N=*Np,thin=*thinp;
  GetRNGstate();
  double x=0;
  double y=0;
  for (i=0;i<N;i++) {
    for (j=0;j<thin;j++) {
      x=rgamma(3.0,1.0/(y*y+4));
      y=rnorm(1.0/(x+1),1.0/sqrt(2*x+2));
    }
    xvec[i]=x; yvec[i]=y;
  }
  PutRNGstate();'
gibbsCin<-cfunction(sig=signature(Np="integer",thinp="integer",
                                  xvec="numeric",yvec="numeric"),
                    body=code,includes="#include <Rmath.h>",
                    language="C",convention=".C")
gibbsCinline<-function(n=50000,thin=1000)
{
  tmp=gibbsCin(n,thin,rep(0,n),rep(0,n))
  mat=cbind(tmp$x,tmp$y)
  colnames(mat)=c("x","y")
  mat
}

This runs at the same speed as the code compiled separately, and is arguably a bit cleaner in this case. Personally, I’m not a big fan of inlining code unless it is something very simple. If there is one thing we have learned from the murky world of web development, it is that little good comes from mixing different languages in the same source file!

C++

We can also inline C++ code into R using the inline and Rcpp packages. The code below originates from Sanjog Misra, and was discussed in the post by Dirk Eddelbuettel mentioned at the start of this post.

require(Rcpp)
require(inline)

gibbscode = '
int N = as<int>(n);
int thn = as<int>(thin);
int i,j;
RNGScope scope;
NumericVector xs(N),ys(N);
double x=0;
double y=0;
for (i=0;i<N;i++) {
  for (j=0;j<thn;j++) {
    x = ::Rf_rgamma(3.0,1.0/(y*y+4));
    y = ::Rf_rnorm(1.0/(x+1),1.0/sqrt(2*x+2));
  }
  xs(i) = x;
  ys(i) = y;
}
return Rcpp::DataFrame::create( Named("x")= xs, Named("y") = ys);
'

RcppGibbsFn <- cxxfunction( signature(n="int", thin = "int"),
                              gibbscode, plugin="Rcpp")

RcppGibbs <- function(N=50000,thin=1000)
{
	RcppGibbsFn(N,thin)
}

This version of the sampler runs in 12.4 seconds, just a little bit slower than the C version.
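Note that this version returns an R data frame rather than a matrix (that is what Rcpp::DataFrame::create gives back), so the columns can be accessed by name in the usual way. A minimal usage sketch, with small values of N and thin purely for illustration:

df <- RcppGibbs(5000, 100)
head(df)
plot(df$x, df$y, pch=".")  # quick look at the joint sample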

Java

It is also quite straightforward to call Java code from within R using the rJava package. The following code

import java.util.*;
import cern.jet.random.tdouble.*;
import cern.jet.random.tdouble.engine.*;

class GibbsR
{

    public static double[][] gibbs(int N,int thin,int seed)
    {
	DoubleRandomEngine rngEngine=new DoubleMersenneTwister(seed);
	Normal rngN=new Normal(0.0,1.0,rngEngine);
	Gamma rngG=new Gamma(1.0,1.0,rngEngine);
	double x=0,y=0;
	double[][] mat=new double[2][N];
	for (int i=0;i<N;i++) {
	    for (int j=0;j<thin;j++) {
		x=rngG.nextDouble(3.0,y*y+4);
		y=rngN.nextDouble(1.0/(x+1),1.0/Math.sqrt(2*x+2));
	    }
	    mat[0][i]=x; mat[1][i]=y;
	}
	return mat;
    }

}

can be compiled with javac GibbsR.java (assuming that Parallel COLT is in the classpath), and wrapped up from within an R session with

library(rJava)
.jinit()
obj=.jnew("GibbsR")

gibbsJ<-function(N=50000,thin=1000,seed=trunc(runif(1)*1e6))
{
    result=.jcall(obj,"[[D","gibbs",as.integer(N),as.integer(thin),as.integer(seed))
    mat=sapply(result,.jevalArray)
    colnames(mat)=c("x","y")
    mat
}
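One practical point: both the Parallel COLT jar and the directory containing GibbsR.class need to be on the classpath seen by rJava. If they are not picked up automatically, they can be added explicitly after .jinit(); a minimal sketch, where the jar path is just a hypothetical example:

.jaddClassPath(".")                          # directory containing GibbsR.class
.jaddClassPath("/path/to/parallelcolt.jar")  # hypothetical location of the jar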

This code runs in 10.7 seconds. Yes, that’s correct. Yes, the Java code is faster than both the C and C++ code! This really goes to show that Java is now an excellent option for numerically intensive work such as this. However, before any C/C++ enthusiasts go apoplectic, I should explain why Java turns out to be faster here, as the comparison is not quite fair… In the C and C++ code, use was made of the internal R random number generation routines, which are relatively slow compared to many modern numerical library implementations. In the Java code, I used Parallel COLT for random number generation, as it isn’t straightforward to call the R generators from Java code. It turns out that the COLT generators are faster than the R generators, and that is why Java turns out to be faster here…

C+GSL

Of course we do not have to use the R random number generators within our C code. For example, we could instead call on the GSL generators, using the following code:

#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <R.h>

void gibbsGSL(int *Np,int *thinp,int *seedp,double *xvec,double *yvec)
{
  int i,j;
  int N=*Np,thin=*thinp,seed=*seedp;
  gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
  gsl_rng_set(r,seed);
  double x=0;
  double y=0;
  for (i=0;i<N;i++) {
    for (j=0;j<thin;j++) {
      x=gsl_ran_gamma(r,3.0,1.0/(y*y+4));
      y=1.0/(x+1)+gsl_ran_gaussian(r,1.0/sqrt(2*x+2));
    }
    xvec[i]=x; yvec[i]=y;
  }
  gsl_rng_free(r); /* free the GSL generator when done */
}

It can be compiled with R CMD SHLIB -lgsl -lgslcblas gibbsGSL.c, and then called as for the regular C version. This runs in 8.0 seconds, which is noticeably faster than the Java code, but probably not “enough” faster to make it an important factor to consider in language choice.
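For completeness, here is a sketch of the corresponding R wrapper, directly analogous to gibbsC above (assuming the compiled shared object is in the current directory); the only difference is that a seed is now passed in explicitly, since the GSL generator does not share R's random number state:

dyn.load(file.path(".",paste("gibbsGSL",.Platform$dynlib.ext,sep="")))
gibbsGSL<-function(n=50000,thin=1000,seed=trunc(runif(1)*1e6))
{
  tmp=.C("gibbsGSL",as.integer(n),as.integer(thin),as.integer(seed),
                x=as.double(1:n),y=as.double(1:n))
  mat=cbind(tmp$x,tmp$y)
  colnames(mat)=c("x","y")
  mat
}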

Summary

In this post I’ve shown that it is relatively straightforward to call code written in C, C++ or Java from within R, and that this can give very significant performance gains relative to pure R code. All of the options give fairly similar performance gains. I showed that in the case of this particular example, the “obvious” Java code is actually slightly faster than the “obvious” C or C++ code, and explained why, and how to make the C version slightly faster by using the GSL. The post by Dirk shows how to call the GSL generators from the C++ version, which I haven’t replicated here.

Java math libraries and Monte Carlo simulation codes

Java libraries for (non-uniform) random number simulation

Anyone writing serious Monte Carlo (and MCMC) codes relies on having a very good and fast (uniform) random number generator and associated functions for generation of non-uniform random quantities, such as Gaussian, Poisson, Gamma, etc. In a previous post I showed how to write a simple Gibbs sampler in four different languages. In C (and C++) random number generation is easy for most scientists, as the (excellent) GNU Scientific Library (GSL) provides exactly what most people need. But it wasn’t always that way… I remember the days before the GSL, when it was necessary to hunt around on the net for bits of C code to implement different algorithms. Worse, it was often necessary to hunt around for a bit of free FORTRAN code, compile that with an F77 compiler, and figure out how to call it from C. Even in the early Alpha days of the GSL, coverage was patchy, and the API changed often. Bad old days…

But those days are long gone, and C programmers no longer have to worry about the problem of random variate generation – they can safely concentrate on developing their interesting new algorithm, and leave the rest to the GSL. Unfortunately for Java programmers, there isn’t yet anything quite comparable to the GSL in the Java world.

I pretty much ignored Java until Java 5. Before then, the language was too limited, and the compilers and JVMs too primitive, to take seriously for numerical work. But since the launch of Java 5 I’ve been paying more attention. The language is now a perfectly reasonable O-O language, and the compilers and JVMs are pretty good. On a lot of benchmarks Java is quite comparable to C/C++, it is nicer to code in, and it has a lot of impressive associated technology. So if there were a math library comparable to the GSL, I’d be quite tempted to jump ship to the Java world and start writing all of my Monte Carlo codes in Java. But there isn’t. At least not yet.

When I first started to take Java seriously, the only good math library with good support for non-uniform random number generation was COLT. COLT was, and still is, pretty good. The code is generally well-written, and fast, and the documentation for it is reasonable. However, the structure of the library is very idiosyncratic, the coverage is a bit patchy, and there doesn’t ever seem to have been a proper development community behind it. It seems very much to have been a one-man project, which has long since stagnated. Unsurprisingly then, COLT has been forked. There is now a Parallel COLT project. This project is continuing the development of COLT, adding new features that were missing from COLT, and, as the name suggests, adding concurrency support. Parallel COLT is also good, and is the main library I currently use for random number generation in Java. However, it has obviously inherited all of the idiosyncrasies that COLT had, and still doesn’t seem to have a large and active development community associated with it. There is no doubt that it is an incredibly useful software library, but it still doesn’t really compare to the GSL.

I have watched the emergence of the Apache Commons Math project with great interest (not to be confused with Uncommons Math – another one-man project). I think this project probably has the greatest potential for providing the Java community with their own GSL equivalent. The Commons project has a lot of momentum, the Commons Math project seems to have an active development community, and the structure of the library is more intuitive than that of (Parallel) COLT. However, it is early days, and the library still has patchy coverage and is a bit rough around the edges. It reminds me a lot of the GSL back in its Alpha days.

I hadn’t bothered even to download it until recently, as the random number generation component didn’t include the generation of gamma random quantities – an absolutely essential requirement for me. However, I noticed that the latest release (2.2) does include gamma generation, so I decided to download it and try it out. It works, but the generation of gamma random quantities is very slow (around 50 times slower than Parallel COLT). This isn’t a fundamental design flaw of the whole library – generation of Gaussian random quantities is quite comparable in speed with other libraries. It’s just that an inversion method has been used for gamma generation, whereas all efficient gamma generators use a neat rejection scheme. In case anyone would like to investigate for themselves, here is a complete program for gamma generation designed to be linked against Parallel COLT:

import java.util.*;
import cern.jet.random.tdouble.*;
import cern.jet.random.tdouble.engine.*;

class GammaPC
{

    public static void main(String[] arg)
    {
	DoubleRandomEngine rngEngine=new DoubleMersenneTwister();
	Gamma rngG=new Gamma(1.0,1.0,rngEngine);
	long N=10000;
	double x=0.0;
	for (int i=0;i<N;i++) {
	    for (int j=0;j<1000;j++) {
		x=rngG.nextDouble(3.0,25.0);
	    }
	    System.out.println(x);
	}
    }
    
}

and here is a complete program designed to be linked against Commons Math:

import java.util.*;
import org.apache.commons.math.*;
import org.apache.commons.math.random.*;

class GammaACM
{

    public static void main(String[] arg) throws MathException
    {
	RandomDataImpl rng=new RandomDataImpl();
	long N=10000;
	double x=0.0;
	for (int i=0;i<N;i++) {
	    for (int j=0;j<1000;j++) {
		x=rng.nextGamma(3.0,1.0/25.0);
	    }
	    System.out.println(x);
	}
    }
    
}

The two programs do the same thing (note that they parameterise the gamma distribution differently). Both generate variates from the same, correct, distribution, and the Commons Math interface is slightly nicer, but the Commons Math code is much slower to execute. I’m still optimistic that Commons Math will one day be Java’s GSL, but I’m not giving up on Parallel COLT (or C, for that matter!) just yet…
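A final aside on that parameterisation point, based on my reading of the two programs rather than on either library's documentation: the Parallel COLT call appears to treat its second argument as a rate, while the Commons Math call takes a scale, so the rate of 25 above corresponds to the scale of 1/25. R's rgamma() accepts either form, which gives a quick way to see that the two describe the same distribution:

# shape 3 with rate 25 and shape 3 with scale 1/25 are the same gamma distribution;
# both sample means should be close to the theoretical mean 3/25 = 0.12
mean(rgamma(100000, shape=3, rate=25))
mean(rgamma(100000, shape=3, scale=1/25))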