## Index to first 50 posts

This is the 50th post to this blog. For my 25th post I provided a catalogue of my first 25 posts, and as promised then, I now provide a similar index for posts 26 to 50.

• 50. Index to first 50 posts [this post]

If I make it to post 75, I’ll do the same again.

## Scala for Machine Learning [book review]

Full disclosure: I received a free electronic version of this book from the publisher for the purposes of review.

There is clearly a market for a good book about using Scala for statistical computing, machine learning and data science. So when the publisher of “Scala for Machine Learning” offered me a copy for review purposes, I eagerly accepted. Three months later, I have finally forced myself to read through the whole book, and I was very disappointed. It is important to be clear that I’m not just disappointed because I personally didn’t get much from the book – I am not really the target audience. I am disappointed because I struggle to envisage any audience that will benefit greatly from reading it.

There are several potential audiences for a book with this title: e.g. people with little knowledge of either Scala or machine learning (ML), people with knowledge of Scala but not ML, people with knowledge of ML but not Scala, and people with knowledge of both. I think there is scope for a book targeting any of those audiences. Personally, I fall into the last category. The book’s author claims to be aiming primarily at those who know Scala but not ML. This is sensible in that the book assumes a good working knowledge of Scala and uses advanced features of the Scala language without any explanation: this book is certainly not appropriate for people hoping to learn about Scala in the context of ML. However, it is also a problem, as this is probably the worst book I have ever encountered for learning about ML from scratch, and there are a lot of poor books about ML! The book just picks ML algorithms out of thin air without any proper explanation or justification, and blindly applies them to tedious financial data sets irrespective of whether or not it would be in any way appropriate to do so. It presents ML as an incoherent “bag of tricks” to be used indiscriminately on any data of the correct “shape”. It is by no means the only ML book to take such an approach, but there are many much better books which don’t.

The author also claims that the book will be useful to people who know ML but not Scala, but as previously explained, I do not think that this is the case (e.g. monadic traits appear on the fifth page, without proper explanation, and containing typos). I think that the only audience that could potentially benefit from this book would be people who know some Scala and some ML and want to see some practical examples of real-world implementations of ML algorithms in Scala. I think those people will also be disappointed, for reasons outlined below.

The first problem with the book is that it is just full of errors and typos. It doesn’t really matter to me that essentially all of the equations in the first chapter are wrong – I already know the difference between an expectation and a sample mean, and know Bayes’ theorem – so I can just see that they are wrong, correct them, and move on. But for the intended audience it would be a complete nightmare. I wonder about the quality of copy-editing and technical review that this book received – it is really not of “publishable” quality. All of the descriptions of statistical/ML methods and algorithms are incredibly superficial, and usually contain factual errors or typos. One should not attempt to learn ML by reading this book. So the only hope for this book is that the Scala implementations of ML algorithms are useful and insightful. Again, I was disappointed.

For reasons that are not adequately explained or justified, the author decides to use a combination of plain Scala interfaced to legacy Java libraries (especially Apache Commons Math) for all of the example implementations. In addition, the author is curiously obsessed with an F# style pipe operator, which doesn’t seem to bring much practical benefit. Consequently, all of the code looks like a strange and rather inelegant combination of Java, Scala, C++, and F#, with a hint of Haskell, and really doesn’t look like clean idiomatic Scala code at all. For me this was the biggest disappointment of all – I really wouldn’t want any of this code in my own Scala code base (though the licensing restrictions on the code probably forbid this, anyway). It is a real shame that Scala libraries such as Breeze were not used for all of the examples – this would have led to much cleaner and more idiomatic Scala code, which could have really taken proper advantage of the functional power of the Scala language. As it is, advanced Scala features were used without much visible pay-off. Reading this book one could easily get the (incorrect) impression that Scala is an unnecessarily complex language which doesn’t offer much advantage over Java for implementing ML algorithms.
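For readers who have not met it, the F#-style pipe operator mentioned above is easy to define in Scala itself. The following is a generic sketch (the names `Piper` and `PipeDemo` are my own, not taken from the book), shown alongside the idiomatic method-chaining equivalent:

```scala
object PipeDemo {

  // A minimal F#-style pipe: `x |> f` means `f(x)`
  implicit class Piper[A](val a: A) extends AnyVal {
    def |>[B](f: A => B): B = f(a)
  }

  // pipe style: thread a value through a sequence of functions
  val piped = List(1, 2, 3) |> ((l: List[Int]) => l.map(_ * 2)) |> ((l: List[Int]) => l.sum)

  // idiomatic Scala method chaining computes the same thing
  val chained = List(1, 2, 3).map(_ * 2).sum // both give 12
}
```

Whether `x |> f` reads better than plain method chaining is debatable, which is partly the point of the criticism above.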

On the positive side, the book consists of nearly 500 pages of text, covering a wide range of ML algorithms and examples, and has a zip file of associated code containing the implementations and examples, which builds using sbt. If anyone is interested in seeing examples of ML algorithms implemented in Scala using Java rather than Scala libraries, together with an F#-style pipe operator, then there is definitely something of substance here of interest.

### Alternatives

It should be clear from the above review that I think there is still a gap in the market for a good book about using Scala for statistical computing, machine learning and data science. Hopefully someone will fill this gap soon. In the meantime it is necessary to learn about Scala and ML separately, and to put the ideas together yourself. This isn’t so difficult, as there are many good resources and code repositories to help. For learning about ML, I would recommend starting off with ISLR, which uses R for the examples (but if you work in data science, you need to know R anyway). Once the basic concepts are understood, one can move on to a serious text, such as Machine Learning (which has associated Matlab code). Converting algorithms from R or Matlab to Scala (plus Breeze) is generally very straightforward, if you know Scala. For learning Scala, there are many on-line resources. If you want books, I recommend Functional Programming in Scala and Programming in Scala, 2e. Once you know about Scala, learn about scientific computing using Scala by figuring out Breeze. At some point you will probably also want to know about Spark, and there are now books on this becoming available – I’ve just got a copy of Learning Spark, which looks OK.
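To give a flavour of how direct the translation from R to Breeze can be, here is a minimal sketch (the object name is mine, purely for illustration) of a few common R idioms rendered with Breeze, assuming Breeze is on the classpath:

```scala
import breeze.linalg._
import breeze.stats._

object RToBreeze {

  // R: x <- c(1, 2, 3, 4)
  val x = DenseVector(1.0, 2.0, 3.0, 4.0)

  // R: mean(x)
  val m = mean(x) // 2.5

  // R: sum(x * x), i.e. a dot product
  val d = x dot x // 30.0

  // R: solve(A, b)
  val A = DenseMatrix((2.0, 0.0), (0.0, 4.0))
  val b = DenseVector(2.0, 8.0)
  val sol = A \ b // DenseVector(1.0, 2.0)
}
```

Most R or Matlab vector/matrix expressions have a near one-to-one Breeze counterpart like this, which is why the porting exercise is usually straightforward.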

## Calling R from Scala sbt projects

### Overview

In previous posts I’ve shown how the jvmr CRAN R package can be used to call Scala sbt projects from R and inline Scala Breeze code in R. In this post I will show how to call out to R from a Scala sbt project. This requires that R and the jvmr CRAN R package are installed on your system, as described in the previous posts. Since I’m focusing here on Scala sbt projects, I’m also assuming that sbt is installed.

The only “trick” required for calling back to R from Scala is telling sbt where the jvmr jar file is located. You can find the location from the R console as illustrated by the following session:

```r
> library(jvmr)
> .jvmr.jar
[1] "/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.1/jvmr/java/jvmr_2.11-2.11.2.1.jar"
```


This location (which will obviously be different for you) can then be added in to your sbt classpath by adding the following line to your build.sbt file:

```scala
unmanagedJars in Compile += file("/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.1/jvmr/java/jvmr_2.11-2.11.2.1.jar")
```


Once this is done, calling out to R from your Scala sbt project can be carried out as described in the jvmr documentation. For completeness, a working example is given below.

### Example

In this example I will use Scala to simulate some data consistent with a Poisson regression model, push the data to R, fit the model using the R function glm(), and then pull the fitted regression coefficients back into Scala. This is obviously a very artificial example, but the point is to show how it is possible to call back to R for some statistical procedure that may be “missing” from Scala.

The dependencies for this project are described in the file build.sbt

name := "jvmr test"

version := "0.1"

scalacOptions ++= Seq("-unchecked", "-deprecation", "-feature")

libraryDependencies  ++= Seq(
"org.scalanlp" %% "breeze" % "0.10",
"org.scalanlp" %% "breeze-natives" % "0.10"
)

resolvers ++= Seq(
"Sonatype Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/",
"Sonatype Releases" at "https://oss.sonatype.org/content/repositories/releases/"
)

unmanagedJars in Compile += file("/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.1/jvmr/java/jvmr_2.11-2.11.2.1.jar")

scalaVersion := "2.11.2"


The complete Scala program is contained in the file PoisReg.scala

```scala
import org.ddahl.jvmr.RInScala
import breeze.stats.distributions._
import breeze.linalg._

object ScalaToRTest {

  def main(args: Array[String]) = {

    // first simulate some data consistent with a Poisson regression model
    val x = Uniform(50, 60).sample(1000)
    val eta = x map { xi => (xi * 0.1) - 3 }
    val mu = eta map { math.exp(_) }
    val y = mu map { Poisson(_).draw }

    // call to R to fit the Poisson regression model
    val R = RInScala() // initialise an R interpreter
    R.x = x.toArray // send x to R
    R.y = y.toArray // send y to R
    R.eval("mod <- glm(y~x,family=poisson())") // fit the model in R
    // pull the fitted coefficients back into Scala
    val beta = DenseVector[Double](R.toVector[Double]("mod$coefficients"))
    // print the fitted coefficients
    println(beta)
  }
}
```

If these two files are put in an empty directory, the code can be compiled and run by typing sbt run from the command prompt in the relevant directory. The commented code should be self-explanatory, but see the jvmr documentation for further details.

## Inlining Scala Breeze code in R using jvmr and sbt

### Introduction

In the previous post I showed how to call Scala code from R using sbt and jvmr. The approach described in that post is the one I would recommend for any non-trivial piece of Scala code – mixing up code from different languages in the same source code file is not a good strategy in general. That said, for very small snippets of code, it can sometimes be convenient to inline Scala code directly into an R source code file. The canonical example of this is a computationally intensive algorithm being prototyped in R which has a slow inner loop. If the inner loop can be recoded in a few lines of Scala, it would be nice to just inline this directly into the R code without having to create a separate Scala project. The CRAN package jvmr provides a very simple and straightforward way to do this. However, as discussed in the last post, most Scala code for statistical computing (even short and simple code) is likely to rely on Breeze for special functions, probability distributions, non-uniform random number generation, numerical linear algebra, etc. In this post we will see how to use sbt in order to make sure that the Breeze library and all of its dependencies are downloaded and cached, and to provide a correct classpath with which to initialise a jvmr scalaInterpreter session.

### Setting up

Configuring your system to be able to inline Scala Breeze code is very easy. You just need to install Java, R and sbt. Then install the CRAN R package jvmr.
At this point you have everything you need except for the R function breezeInit, given at the end of this post. I’ve deferred the function to the end of the post as it is a bit ugly, and the details of it are not important. All it does is get sbt to ensure that Breeze is correctly downloaded and cached, and then start a scalaInterpreter with Breeze on the classpath. With this function available, we can use it within an R session as the following session illustrates:

```r
> b = breezeInit()
> b['import breeze.stats.distributions._']
NULL
> b['Poisson(10).sample(20).toArray']
 [1] 13 14 13 10  7  6 15 14  5 10 14 11 15  8 11 12  6  7  5  7
> summary(b['Gamma(3,2).sample(10000).toArray'])
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.2124  3.4630  5.3310  5.9910  7.8390 28.5200
>
```

So we see that Scala Breeze code can be inlined directly into an R session, and, if we are careful about return types, the results of Scala expressions automatically unpack back into convenient R data structures.

### Summary

In this post I have shown how easy it is to inline Scala Breeze code into R using sbt in conjunction with the CRAN package jvmr. This has many potential applications, with the most obvious being the desire to recode slow inner loops from R to Scala. This should give performance quite comparable with alternatives such as Rcpp, with the advantage being that you get to write beautiful, elegant, functional Scala code instead of horrible, ugly, imperative C++ code! ;-)

### The breezeInit function

The actual breezeInit() function is given below. It is a little ugly, but very simple. It is obviously easy to customise for different libraries and library versions as required. All of the hard work is done by sbt, which must be installed and on the default system path in order for this function to work.
```r
breezeInit <- function() {
  library(jvmr)
  sbtStr = "name := \"tmp\"

version := \"0.1\"

libraryDependencies ++= Seq(
  \"org.scalanlp\" %% \"breeze\" % \"0.10\",
  \"org.scalanlp\" %% \"breeze-natives\" % \"0.10\"
)

resolvers ++= Seq(
  \"Sonatype Snapshots\" at \"https://oss.sonatype.org/content/repositories/snapshots/\",
  \"Sonatype Releases\" at \"https://oss.sonatype.org/content/repositories/releases/\"
)

scalaVersion := \"2.11.2\"

lazy val printClasspath = taskKey[Unit](\"Dump classpath\")

printClasspath := {
  (fullClasspath in Runtime value) foreach { e => print(e.data+\"!\") }
}
"
  tmps = file(file.path(tempdir(), "build.sbt"), "w")
  cat(sbtStr, file = tmps)
  close(tmps)
  owd = getwd()
  setwd(tempdir())
  cpstr = system2("sbt", "printClasspath", stdout = TRUE)
  cpst = cpstr[length(cpstr)]
  cpsp = strsplit(cpst, "!")[[1]]
  cp = cpsp[2:(length(cpsp)-1)]
  si = scalaInterpreter(cp, use.jvmr.class.path = FALSE)
  setwd(owd)
  si
}
```

## Calling Scala code from R using jvmr

### Introduction

In previous posts I have explained why I think that Scala is a good language to use for statistical computing and data science. Despite this, R is very convenient for simple exploratory data analysis and visualisation – currently more convenient than Scala. I explained in my recent talk at the RSS what (relatively straightforward) things would need to be developed for Scala in order to make R completely redundant, but for the short term at least, it seems likely that I will need to use both R and Scala for my day-to-day work.

Since I use both Scala and R for statistical computing, it is very convenient to have a degree of interoperability between the two languages. I could call R from Scala code or Scala from R code, or both. Fortunately, some software tools have been developed recently which make this much simpler than it used to be. The software is jvmr, and as explained at the website, it enables calling Java and Scala from R, and calling R from Java and Scala.
I have previously discussed calling Java from R using the R CRAN package rJava. In this post I will focus on calling Scala from R using the CRAN package jvmr, which depends on rJava. I may examine calling R from Scala in a future post. On a system with Java installed, it should be possible to install the jvmr R package with a simple

```r
install.packages("jvmr")
```

from the R command prompt. The package has the usual documentation associated with it, but the draft paper describing the package is the best way to get an overview of its capabilities and a walk-through of simple usage.

### A Gibbs sampler in Scala using Breeze

For illustration I’m going to use a Scala implementation of a Gibbs sampler which relies on the Breeze scientific library, and will be built using the simple build tool, sbt. Most non-trivial Scala projects depend on various versions of external libraries, and sbt is an easy way to build even very complex projects trivially on any system with Java installed. You don’t even need to have Scala installed in order to build and run projects using sbt. I give some simple complete worked examples of building and running Scala sbt projects in the github repo associated with my recent RSS talk. Installing sbt is trivial, as explained in the repo READMEs.
For this post, the Scala code, gibbs.scala, is given below:

```scala
package gibbs

object Gibbs {

  import scala.annotation.tailrec
  import scala.math.sqrt
  import breeze.stats.distributions.{Gamma, Gaussian}

  case class State(x: Double, y: Double) {
    override def toString: String = x.toString + " , " + y + "\n"
  }

  def nextIter(s: State): State = {
    val newX = Gamma(3.0, 1.0 / ((s.y) * (s.y) + 4.0)).draw
    State(newX, Gaussian(1.0 / (newX + 1), 1.0 / sqrt(2 * newX + 2)).draw)
  }

  @tailrec def nextThinnedIter(s: State, left: Int): State =
    if (left == 0) s else nextThinnedIter(nextIter(s), left - 1)

  def genIters(s: State, stop: Int, thin: Int): List[State] = {
    @tailrec def go(s: State, left: Int, acc: List[State]): List[State] =
      if (left > 0) go(nextThinnedIter(s, thin), left - 1, s :: acc)
      else acc
    go(s, stop, Nil).reverse
  }

  def main(args: Array[String]) = {
    if (args.length != 3) {
      println("Usage: sbt \"run <outFile> <iters> <thin>\"")
      sys.exit(1)
    } else {
      val outF = args(0)
      val iters = args(1).toInt
      val thin = args(2).toInt
      val out = genIters(State(0.0, 0.0), iters, thin)
      val s = new java.io.FileWriter(outF)
      s.write("x , y\n")
      out map { it => s.write(it.toString) }
      s.close
    }
  }
}
```

This code requires Scala and the Breeze scientific library in order to build. We can specify this in an sbt build file, which should be called build.sbt and placed in the same directory as the Scala code.
name := "gibbs" version := "0.1" scalacOptions ++= Seq("-unchecked", "-deprecation", "-feature") libraryDependencies ++= Seq( "org.scalanlp" %% "breeze" % "0.10", "org.scalanlp" %% "breeze-natives" % "0.10" ) resolvers ++= Seq( "Sonatype Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/", "Sonatype Releases" at "https://oss.sonatype.org/content/repositories/releases/" ) scalaVersion := "2.11.2"  Now, from a system command prompt in the directory where the files are situated, it should be possible to download all dependencies and compile and run the code with a simple sbt "run output.csv 50000 1000"  #### Calling via R system calls Since this code takes a relatively long time to run, calling it from R via simple system calls isn’t a particularly terrible idea. For example, we can do this from the R command prompt with the following commands system("sbt \"run output.csv 50000 1000\"") out=read.csv("output.csv") library(smfsb) mcmcSummary(out,rows=2)  This works fine, but won’t work so well for code which needs to be called repeatedly. For this, tighter integration between R and Scala would be useful, which is where jvmr comes in. #### Calling sbt-based Scala projects via jvmr jvmr provides a very simple way to embed a Scala interpreter within an R session, to be able to execute Scala expressions from R and to have the results returned back to the R session for further processing. The main issue with using this in practice is managing dependencies on external libraries and setting the Scala classpath correctly. For an sbt project such as we are considering here, it is relatively easy to get sbt to provide us with all of the information we need in a fully automated way. First, we need to add a new task to our sbt build instructions, which will output the full classpath in a way that is easy to parse from R. 
Just add the following to the end of the file build.sbt:

```scala
lazy val printClasspath = taskKey[Unit]("Dump classpath")

printClasspath := {
  (fullClasspath in Runtime value) foreach { e => print(e.data + "!") }
}
```

Be aware that blank lines are significant in sbt build files. Once we have this in our build file, we can write a small R function to get the classpath from sbt and then initialise a jvmr scalaInterpreter with the correct full classpath needed for the project. An R function which does this, sbtInit(), is given below:

```r
sbtInit <- function() {
  library(jvmr)
  system2("sbt", "compile")
  cpstr = system2("sbt", "printClasspath", stdout = TRUE)
  cpst = cpstr[length(cpstr)]
  cpsp = strsplit(cpst, "!")[[1]]
  cp = cpsp[1:(length(cpsp)-1)]
  scalaInterpreter(cp, use.jvmr.class.path = FALSE)
}
```

With this function at our disposal, it becomes trivial to call our Scala code directly from the R interpreter, as the following code illustrates.

```r
sc = sbtInit()
sc['import gibbs.Gibbs._']
out = sc['genIters(State(0.0,0.0),50000,1000).toArray.map{s=>Array(s.x,s.y)}']
library(smfsb)
mcmcSummary(out, rows = 2)
```

Here we call the genIters function directly, rather than via the main method. This function returns an immutable List of States. Since R doesn’t understand this, we map it to an Array of Arrays, which R then unpacks into an R matrix for us to store in the matrix out.

### Summary

The CRAN package jvmr makes it very easy to embed a Scala interpreter within an R session. However, for most non-trivial statistical computing problems, the Scala code will have dependence on external scientific libraries such as Breeze. The standard way to easily manage external dependencies in the Scala ecosystem is sbt. Given an sbt-based Scala project, it is easy to add a task to the sbt build file and a function to R in order to initialise the jvmr Scala interpreter with the full classpath needed to call arbitrary Scala functions.
This provides very convenient interoperability between R and Scala for many statistical computing applications.

## One-way ANOVA with fixed and random effects from a Bayesian perspective

This blog post is derived from a computer practical session that I ran as part of my new course on Statistics for Big Data, previously discussed. This course covered a lot of material very quickly. In particular, I deferred introducing notions of hierarchical modelling until the Bayesian part of the course, where I feel it is more natural and powerful. However, some of the terminology associated with hierarchical statistical modelling probably seems a bit mysterious to those without a strong background in classical statistical modelling, and so this practical session was intended to clear up some potential confusion. I will analyse a simple one-way Analysis of Variance (ANOVA) model from a Bayesian perspective, making sure to highlight the difference between fixed and random effects in a Bayesian context where everything is random, as well as emphasising the associated identifiability issues. R code is used to illustrate the ideas.

### Example scenario

We will consider the body mass index (BMI) of new male undergraduate students at a selection of UK Universities. Let us suppose that our data consist of measurements of (log) BMI for a random sample of 1,000 males at each of 8 Universities. We are interested to know if there are any differences between the Universities. Again, we want to model the process as we would simulate it, so thinking about how we would simulate such data is instructive. We start by assuming that the log BMI is a normal random quantity, and that the variance is common across the Universities in question (this is quite a big assumption, and it is easy to relax). We assume that the mean of this normal distribution is University-specific, but that we do not have strong prior opinions regarding the way in which the Universities differ.
That said, we expect that the Universities would not be very different from one another.

### Simulating data

A simple simulation of the data with some plausible parameters can be carried out as follows.

```r
set.seed(1)
Z=matrix(rnorm(1000*8,3.1,0.1),nrow=8)
RE=rnorm(8,0,0.01)
X=t(Z+RE)
colnames(X)=paste("Uni",1:8,sep="")
Data=stack(data.frame(X))
boxplot(exp(values)~ind,data=Data,notch=TRUE)
```

Make sure that you understand exactly what this code is doing before proceeding. The boxplot showing the simulated data is given below.

### Frequentist analysis

We will start with a frequentist analysis of the data. The model we would like to fit is

$$y_{ij} = \mu + \theta_i + \varepsilon_{ij}$$

where $i$ is an indicator for the University and $j$ for the individual within a particular University. The “effect”, $\theta_i$, represents how the $i$th University differs from the overall mean. We know that this model is not actually identifiable when the model parameters are all treated as “fixed effects”, but R will handle this for us.

```r
> mod=lm(values~ind,data=Data)
> summary(mod)

Call:
lm(formula = values ~ ind, data = Data)

Residuals:
     Min       1Q   Median       3Q      Max
-0.36846 -0.06778 -0.00069  0.06910  0.38219

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.101068   0.003223 962.244  < 2e-16 ***
indUni2     -0.006516   0.004558  -1.430 0.152826
indUni3     -0.017168   0.004558  -3.767 0.000166 ***
indUni4      0.017916   0.004558   3.931 8.53e-05 ***
indUni5     -0.022838   0.004558  -5.011 5.53e-07 ***
indUni6     -0.001651   0.004558  -0.362 0.717143
indUni7      0.007935   0.004558   1.741 0.081707 .
indUni8      0.003373   0.004558   0.740 0.459300
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.1019 on 7992 degrees of freedom
Multiple R-squared: 0.01439, Adjusted R-squared: 0.01353
F-statistic: 16.67 on 7 and 7992 DF, p-value: < 2.2e-16
```

We see that R has handled the identifiability problem using “treatment contrasts”, dropping the fixed effect for the first University, so that the intercept actually represents the mean value for the first University, and the effects for the other Universities represent the differences from the first University. If we would prefer to impose a sum constraint, then we can switch to sum contrasts with

```r
options(contrasts=rep("contr.sum",2))
```

and then re-fit the model.

```r
> mods=lm(values~ind,data=Data)
> summary(mods)

Call:
lm(formula = values ~ ind, data = Data)

Residuals:
     Min       1Q   Median       3Q      Max
-0.36846 -0.06778 -0.00069  0.06910  0.38219

Coefficients:
              Estimate Std. Error  t value Pr(>|t|)
(Intercept)  3.0986991  0.0011394 2719.558  < 2e-16 ***
ind1         0.0023687  0.0030146    0.786 0.432048
ind2        -0.0041477  0.0030146   -1.376 0.168905
ind3        -0.0147997  0.0030146   -4.909 9.32e-07 ***
ind4         0.0202851  0.0030146    6.729 1.83e-11 ***
ind5        -0.0204693  0.0030146   -6.790 1.20e-11 ***
ind6         0.0007175  0.0030146    0.238 0.811889
ind7         0.0103039  0.0030146    3.418 0.000634 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.1019 on 7992 degrees of freedom
Multiple R-squared: 0.01439, Adjusted R-squared: 0.01353
F-statistic: 16.67 on 7 and 7992 DF, p-value: < 2.2e-16
```

This has 7 degrees of freedom for the effects, as before, but ensures that the 8 effects sum to precisely zero. This is arguably more interpretable in this case.

### Bayesian analysis

We will now analyse the simulated data from a Bayesian perspective, using JAGS.

#### Fixed effects

All parameters in Bayesian models are uncertain, and therefore random, so there is much confusion regarding the difference between “fixed” and “random” effects in a Bayesian context.
For “fixed” effects, our prior captures the idea that we sample the effects independently from a “fixed” (typically vague) prior distribution. We could simply code this up and fit it in JAGS as follows.

```r
require(rjags)
n=dim(X)[1]
p=dim(X)[2]
data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (j in 1:p) {
theta[j]~dnorm(0,0.0001)
for (i in 1:n) {
X[i,j]~dnorm(mu+theta[j],tau)
}
}
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
autocorr.plot(output)
pairs(as.matrix(output))
crosscorr.plot(output)
```

On running the code we can clearly see that this naive approach leads to high posterior correlation between the mean and the effects, due to the fundamental lack of identifiability of the model. This also leads to MCMC mixing problems, but it is important to understand that this computational issue is conceptually entirely separate from the fundamental statistical identifiability issue. Even if we could avoid MCMC entirely, the identifiability issue would remain.

A quick fix for the identifiability issue is to use “treatment contrasts”, just as for the frequentist model. We can implement that as follows.
```r
data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (j in 1:p) {
for (i in 1:n) {
X[i,j]~dnorm(mu+theta[j],tau)
}
}
theta[1]<-0
for (j in 2:p) {
theta[j]~dnorm(0,0.0001)
}
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
autocorr.plot(output)
pairs(as.matrix(output))
crosscorr.plot(output)
```

Running this we see that the model now works perfectly well, mixes nicely, and gives sensible inferences for the treatment effects.

Another source of confusion for models of this type is data formatting and indexing in JAGS models. For our balanced data there was no problem passing the data in to JAGS as a matrix and specifying the model using nested loops. However, for unbalanced designs this is not necessarily so convenient, and so then it can be helpful to specify the model based on two-column data, as we would use for fitting using lm(). This is illustrated with the following model specification, which is exactly equivalent to the previous model, and should give identical (up to Monte Carlo error) results.

```r
N=n*p
data=list(y=Data$values,g=Data$ind,N=N,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (i in 1:N) {
y[i]~dnorm(mu+theta[g[i]],tau)
}
theta[1]<-0
for (j in 2:p) {
theta[j]~dnorm(0,0.0001)
}
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
```


As suggested above, this indexing scheme is much more convenient for unbalanced data, and hence widely used. However, since our data is balanced here, we will revert to the matrix approach for the remainder of the post.

One final thing to consider before moving on to random effects is the sum-contrast model. We can implement this in various ways, but I’ve tried to encode it for maximum clarity below, imposing the sum-to-zero constraint via the final effect.

```r
data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (j in 1:p) {
for (i in 1:n) {
X[i,j]~dnorm(mu+theta[j],tau)
}
}
for (j in 1:(p-1)) {
theta[j]~dnorm(0,0.0001)
}
theta[p] <- -sum(theta[1:(p-1)])
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
```


Again, this works perfectly well and gives similar results to the frequentist analysis.

#### Random effects

The key difference between fixed and random effects in a Bayesian framework is that random effects are not independent, being drawn from a distribution with parameters which are not fixed. Essentially, there is another level of hierarchy involved in the specification of the random effects. This is best illustrated by example. A random effects model for this problem is given below.
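Written out explicitly, in the notation of the earlier frequentist model, the hierarchical model being fitted below is

$$y_{ij} = \mu + \theta_i + \varepsilon_{ij}, \qquad \theta_i \sim N(0, \sigma_\theta^2), \qquad \varepsilon_{ij} \sim N(0, \sigma^2),$$

where the JAGS precision parameters correspond to $\tau = 1/\sigma^2$ and $\tau_\theta = 1/\sigma_\theta^2$ (taut in the code). The extra level of hierarchy is the unknown $\sigma_\theta^2$, which couples the effects together.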

```r
data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
for (j in 1:p) {
theta[j]~dnorm(0,taut)
for (i in 1:n) {
X[i,j]~dnorm(mu+theta[j],tau)
}
}
mu~dnorm(0,0.0001)
tau~dgamma(1,0.0001)
taut~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","taut","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)
```


The only difference between this and our first naive attempt at a Bayesian fixed effects model is that we have put a gamma prior on the precision of the effects. Note that this model now runs and fits perfectly well, with reasonable mixing, and gives sensible parameter inferences. Although the effects here are not constrained to sum to zero, as they are under sum contrasts in a fixed effects model, the prior encourages shrinkage towards zero, so the random effect distribution can be thought of as a soft version of a hard sum-to-zero constraint. From a predictive perspective, this model is much more powerful. In particular, using a random effects model, we can make strong predictions for unobserved groups (eg. a ninth University), with sensible prediction intervals based on our inferred understanding of how similar different universities are. This isn't really possible with a fixed effects model. Even for a Bayesian version of a fixed effects model using proper (but vague) priors, prediction intervals for unobserved groups are not really sensible.
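To make the prediction claim concrete, here is one way to obtain a predictive distribution for an unobserved group: add a new effect drawn from the random effect distribution, together with a corresponding predictive observation, and monitor them. This is a sketch of my own, not code from the original analysis, and the node names `thetanew` and `Xnew` are illustrative.

```r
# Sketch: predictive distribution for an unobserved group (eg. a ninth
# University) under the random effects model above. The nodes "thetanew"
# and "Xnew" are illustrative additions.
modelstring="
model {
  for (j in 1:p) {
    theta[j]~dnorm(0,taut)
    for (i in 1:n) {
      X[i,j]~dnorm(mu+theta[j],tau)
    }
  }
  thetanew~dnorm(0,taut)       # effect for the unobserved group
  Xnew~dnorm(mu+thetanew,tau)  # predictive observation from that group
  mu~dnorm(0,0.0001)
  tau~dgamma(1,0.0001)
  taut~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("Xnew","thetanew"),n.iter=100000,thin=10)
```

A posterior interval for `Xnew` then gives a prediction interval for the new group, with width reflecting both the within-group precision `tau` and the between-group precision `taut`.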

Since we have used simulated data here, we can compare the estimated random effects with the true effects generated during the simulation.

> apply(as.matrix(output),2,mean)
mu           tau          taut      theta[1]      theta[2]
3.098813e+00  9.627110e+01  7.015976e+03  2.086581e-03 -3.935511e-03
theta[3]      theta[4]      theta[5]      theta[6]      theta[7]
-1.389099e-02  1.881528e-02 -1.921854e-02  5.640306e-04  9.529532e-03
theta[8]
5.227518e-03
> RE
[1]  0.002637034 -0.008294518 -0.014616348  0.016839902 -0.015443243
[6] -0.001908871  0.010162117  0.005471262


We see that the Bayesian random effects model has done an excellent job of estimation. If we wished, we could relax the assumption of common variance across the groups by making tau a vector indexed by j, though there is not much point in pursuing this here, since we know that the groups do all have the same variance.
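For completeness, a sketch of that heteroscedastic variant (my own illustrative modification, not part of the original analysis) is given below. Note that the initial values need adjusting, since tau is now a vector; here I simply let JAGS generate initial values for it.

```r
# Sketch: random effects model with group-specific precisions. Making
# "tau" a vector indexed by j allows each group its own variance.
modelstring="
model {
  for (j in 1:p) {
    theta[j]~dnorm(0,taut)
    tau[j]~dgamma(1,0.0001)
    for (i in 1:n) {
      X[i,j]~dnorm(mu+theta[j],tau[j])
    }
  }
  mu~dnorm(0,0.0001)
  taut~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=list(mu=2))
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","taut","theta"),n.iter=100000,thin=10)
```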

#### Strong subjective priors

The above is the usual story regarding fixed and random effects in Bayesian inference. I hope this is reasonably clear, so really I should quit while I'm ahead… However, the issues are a bit more subtle than I've suggested. The inferred precision of the random effects was around 7,000, so now let's re-run the original, naive, "fixed effects" model with a strong subjective Bayesian prior on the distribution of the effects.

data=list(X=X,n=n,p=p)
init=list(mu=2,tau=1)
modelstring="
model {
  for (j in 1:p) {
    theta[j]~dnorm(0,7000)
    for (i in 1:n) {
      X[i,j]~dnorm(mu+theta[j],tau)
    }
  }
  mu~dnorm(0,0.0001)
  tau~dgamma(1,0.0001)
}
"
model=jags.model(textConnection(modelstring),data=data,inits=init)
update(model,n.iter=1000)
output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10)
print(summary(output))
plot(output)


This model also runs perfectly well and gives sensible inferences, despite the fact that the effects are iid from a fixed distribution and there is no hard constraint on the effects. Similarly, we can make sensible predictions, together with appropriate prediction intervals, for an unobserved group. So it isn’t so much the fact that the effects are coupled via an extra level of hierarchy that makes things work. It’s really the fact that the effects are sensibly distributed and not just sampled directly from a vague prior. So for “real” subjective Bayesians the line between fixed and random effects is actually very blurred indeed…

## Statistical computing languages at the RSS

On Friday the Royal Statistical Society hosted a meeting on statistical computing languages, organised by my colleague Colin Gillespie. Four languages were presented at the meeting: Python, Scala, Matlab and Julia. I presented the talk on Scala. The slides I presented are available, in addition to the code examples and instructions on how to run them, in a public github repo I have created. The talk mainly covered material I have discussed in various previous posts on this blog. What was new was the emphasis on how easy it is to use and run Scala code, and the inclusion of complete examples and instructions on how to run them on any platform with a JVM installed. I also discussed some of the current limitations of Scala as an environment for interactive statistical data analysis and visualisation, and how these limitations could be overcome with a little directed effort. Colin suggested that all of the speakers cover a couple of examples (linear regression and a Monte Carlo integral) in "their" languages, and he provided an R solution. It is interesting to see the examples in the five different languages side by side for comparison purposes. Colin is collecting together all of the material relating to the meeting in a combined github repo, which should fill out over the next few days.

For other Scala posts on this blog, see all of my posts with a “scala” tag.