### Introduction

This post follows on from the previous post on Gibbs sampling in various languages, in which a simple Gibbs sampler was implemented in several languages and their speeds were compared. It was seen that R is very slow for the iterative simulation algorithms characteristic of MCMC methods such as the Gibbs sampler, while statically typed languages such as C/C++ and Java were the fastest for this type of algorithm. Since many statisticians like to use R for most of their work, there is natural interest in the possibility of extending R by calling simulation algorithms written in other languages. It turns out to be straightforward to call C, C++ and Java from within R, so this post will look at how this can be done, and exactly how fast the different options turn out to be. The post draws heavily on my previous posts on calling C from R and calling Java from R, as well as Dirk Eddelbuettel’s post on calling C++ from R, and it may be helpful to consult those posts for further details.

### Languages

#### R

We will start with the simple pure R version of the Gibbs sampler, and use this as our point of reference for understanding the benefits of re-coding in other languages. The background to the problem was given in the previous post and so won’t be repeated here. The code can be given as follows:

```r
gibbs <- function(N=50000, thin=1000) {
    mat = matrix(0, ncol=2, nrow=N)
    x = 0
    y = 0
    for (i in 1:N) {
        for (j in 1:thin) {
            x = rgamma(1, 3, y*y + 4)
            y = rnorm(1, 1/(x+1), 1/sqrt(2*x + 2))
        }
        mat[i,] = c(x, y)
    }
    colnames(mat) = c("x", "y")
    mat
}
```
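For reference, the full conditional distributions being sampled in the loop above (reading the shape and rate off the `rgamma` call) are

$$ x \mid y \sim \mathrm{Ga}\!\left(3,\; y^2+4\right), \qquad y \mid x \sim N\!\left(\frac{1}{x+1},\; \frac{1}{2(x+1)}\right), $$

where the second parameter of the gamma is a rate, and the second parameter of the normal is a variance (the `rnorm` call is given the corresponding standard deviation, $1/\sqrt{2x+2}$).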

This code works perfectly well, but is very slow. It takes 458.9 seconds on my very fast laptop (details were given in the previous post).

#### C

Let us now see how we can introduce a new function, `gibbsC`, into R, which works in exactly the same way as `gibbs`, but actually calls on compiled C code to do all of the work. First we need the C code in a file called `gibbs.c`:

```c
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <R.h>
#include <Rmath.h>

void gibbs(int *Np, int *thinp, double *xvec, double *yvec)
{
    int i, j;
    int N = *Np, thin = *thinp;
    GetRNGstate();
    double x = 0;
    double y = 0;
    for (i = 0; i < N; i++) {
        for (j = 0; j < thin; j++) {
            x = rgamma(3.0, 1.0/(y*y + 4));
            y = rnorm(1.0/(x + 1), 1.0/sqrt(2*x + 2));
        }
        xvec[i] = x;
        yvec[i] = y;
    }
    PutRNGstate();
}
```

This can be compiled with `R CMD SHLIB gibbs.c`. We can load it into R and wrap it up so that it is easy to use with the following code:

```r
dyn.load(file.path(".", paste("gibbs", .Platform$dynlib.ext, sep="")))

gibbsC <- function(n=50000, thin=1000) {
    tmp = .C("gibbs", as.integer(n), as.integer(thin),
             x=as.double(1:n), y=as.double(1:n))
    mat = cbind(tmp$x, tmp$y)
    colnames(mat) = c("x", "y")
    mat
}
```

The new function `gibbsC` works just like `gibbs`, but takes just 12.1 seconds to run. This is roughly 40 times faster than the pure R version, which is a big deal.

Note that using the R `inline` package, it is possible to directly inline the C code into the R source code. We can do this with the following R code:

```r
require(inline)

code = '
    int i, j;
    int N = *Np, thin = *thinp;
    GetRNGstate();
    double x = 0;
    double y = 0;
    for (i = 0; i < N; i++) {
        for (j = 0; j < thin; j++) {
            x = rgamma(3.0, 1.0/(y*y + 4));
            y = rnorm(1.0/(x + 1), 1.0/sqrt(2*x + 2));
        }
        xvec[i] = x;
        yvec[i] = y;
    }
    PutRNGstate();
'

gibbsCin <- cfunction(sig=signature(Np="integer", thinp="integer",
                                    xvec="numeric", yvec="numeric"),
                      body=code, includes="#include <Rmath.h>",
                      language="C", convention=".C")

gibbsCinline <- function(n=50000, thin=1000) {
    tmp = gibbsCin(n, thin, rep(0, n), rep(0, n))
    mat = cbind(tmp$x, tmp$y)
    colnames(mat) = c("x", "y")
    mat
}
```

This runs at the same speed as the code compiled separately, and is arguably a bit cleaner in this case. Personally I’m not a big fan of inlining code unless it is something really very simple. If there is one thing that we have learned from the murky world of web development, it is that little good comes from mixing up different languages in the same source code file!

#### C++

We can also inline C++ code into R using the `inline` and `Rcpp` packages. The code below originates from Sanjog Misra, and was discussed in the post by Dirk Eddelbuettel mentioned at the start of this post.

```r
require(Rcpp)
require(inline)

gibbscode = '
    int N = as<int>(n);
    int thn = as<int>(thin);
    int i, j;
    RNGScope scope;
    NumericVector xs(N), ys(N);
    double x = 0;
    double y = 0;
    for (i = 0; i < N; i++) {
        for (j = 0; j < thn; j++) {
            x = ::Rf_rgamma(3.0, 1.0/(y*y + 4));
            y = ::Rf_rnorm(1.0/(x + 1), 1.0/sqrt(2*x + 2));
        }
        xs(i) = x;
        ys(i) = y;
    }
    return Rcpp::DataFrame::create(Named("x") = xs, Named("y") = ys);
'

RcppGibbsFn <- cxxfunction(signature(n="int", thin="int"),
                           gibbscode, plugin="Rcpp")

RcppGibbs <- function(N=50000, thin=1000) {
    RcppGibbsFn(N, thin)
}
```

This version of the sampler runs in 12.4 seconds, just a little bit slower than the C version.

#### Java

It is also quite straightforward to call Java code from within R using the `rJava` package. The following code

```java
import java.util.*;
import cern.jet.random.tdouble.*;
import cern.jet.random.tdouble.engine.*;

class GibbsR {

    public static double[][] gibbs(int N, int thin, int seed) {
        DoubleRandomEngine rngEngine = new DoubleMersenneTwister(seed);
        Normal rngN = new Normal(0.0, 1.0, rngEngine);
        Gamma rngG = new Gamma(1.0, 1.0, rngEngine);
        double x = 0, y = 0;
        double[][] mat = new double[2][N];
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < thin; j++) {
                x = rngG.nextDouble(3.0, y*y + 4);
                y = rngN.nextDouble(1.0/(x + 1), 1.0/Math.sqrt(2*x + 2));
            }
            mat[0][i] = x;
            mat[1][i] = y;
        }
        return mat;
    }

}
```

can be compiled with `javac GibbsR.java` (assuming that Parallel COLT is in the classpath), and wrapped up from within an R session with

```r
library(rJava)
.jinit()
obj = .jnew("GibbsR")

gibbsJ <- function(N=50000, thin=1000, seed=trunc(runif(1)*1e6)) {
    result = .jcall(obj, "[[D", "gibbs",
                    as.integer(N), as.integer(thin), as.integer(seed))
    mat = sapply(result, .jevalArray)
    colnames(mat) = c("x", "y")
    mat
}
```

This code runs in 10.7 seconds. Yes, that’s correct. Yes, *the Java code is faster than both the C and C++ code!* This really goes to show that Java is now an excellent option for numerically intensive work such as this. However, before any C/C++ enthusiasts go apoplectic, I should explain why Java turns out to be faster here, as the comparison is not quite fair… In the C and C++ code, use was made of the internal R random number generation routines, which are relatively slow compared to many modern numerical library implementations. In the Java code, I used Parallel COLT for random number generation, as it isn’t straightforward to call the R generators from Java code. It turns out that the COLT generators are faster than the R generators, and that is why Java turns out to be faster here…

#### C+GSL

Of course we do not have to use the R random number generators within our C code. For example, we could instead call on the GSL generators, using the following code:

```c
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <R.h>

void gibbsGSL(int *Np, int *thinp, int *seedp, double *xvec, double *yvec)
{
    int i, j;
    int N = *Np, thin = *thinp, seed = *seedp;
    gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
    gsl_rng_set(r, seed);
    double x = 0;
    double y = 0;
    for (i = 0; i < N; i++) {
        for (j = 0; j < thin; j++) {
            x = gsl_ran_gamma(r, 3.0, 1.0/(y*y + 4));
            y = 1.0/(x + 1) + gsl_ran_gaussian(r, 1.0/sqrt(2*x + 2));
        }
        xvec[i] = x;
        yvec[i] = y;
    }
    gsl_rng_free(r); /* free the generator, so repeated calls don't leak */
}
```

It can be compiled with `R CMD SHLIB -lgsl -lgslcblas gibbsGSL.c`, and then called as for the regular C version. This runs in 8.0 seconds, which is noticeably faster than the Java code, but probably not “enough” faster to make it an important factor to consider in language choice.

### Summary

In this post I’ve shown that it is relatively straightforward to call code written in C, C++ or Java from within R, and that this can give very significant performance gains relative to pure R code. All of the options give fairly similar performance gains. I showed that in the case of this particular example, the “obvious” Java code is actually slightly faster than the “obvious” C or C++ code, and explained why, and how to make the C version slightly faster by using the GSL. The post by Dirk shows how to call the GSL generators from the C++ version, which I haven’t replicated here.

Tags: API, C, calling, COLT, extending, faster, Gibbs, GSL, inline, Java, MCMC, parallel, Parallel COLT, R, Rcpp, rJava, rstats, sampling, speed

### Comments

**01/08/2011 at 08:34**

Hi Darren,

Very nice post. I’m glad you reversed the Java-biased bit of the analysis! It wasn’t really fair to compare the R functions with Parallel COLT.

I interpret your observation of 8.0 s for C vs. 10.7 s for Java differently. I think that for tasks which can be highly CPU intensive, and which could be running for days or weeks, speed is extremely important. The fact that the Java code takes 34% more CPU time than the faster C equivalent could be very important, depending on your application: four weeks’ runtime for the Java implementation versus three weeks for C, for instance. It seems that C/C++ has everything: speed, more beautiful code, portability. Java’s only slight advantage (easy OS portability) is eliminated by the elegant R packaging system in this context.

Cheers,

CONOR.
