Just wanted to let you know that I found this article so good that I decided to use it as a reference for a class wiki assignment as part of my PL class. I was assigned R as my language to document. You’ll be the only “blog” citation that made it into my wiki 🙂

I am on Harry’s team. I am reading your https://github.com/darrenjw/scala-course scscala.pdf. I tried to see if you have the .md file so I can do a pull request.

You will need to update page 11 (‘Apache Spark — big data framework built on Akka’), since as of Spark 1.6 Spark no longer uses Akka: https://stackoverflow.com/questions/43911000/spark-2-1-1-with-property-spark-akka-threads-12

Regards, Guillermo

2) Raw, because of the above.

I’m a first-year PhD student working on mathematical modelling of infectious disease dynamics at Imperial College! Firstly, I just wanted to say thank you so much for writing this series on particle filters and the associated maths; it’s been immensely useful!

I’ve got just two quick questions based on the post above:

1) It wasn’t clear to me how the average weight at each time provides an estimate of the marginal likelihood of the current data point given the data so far. Could you possibly explain how that’s the case?

2) Just to double-check: is the estimate of the marginal likelihood calculated using the raw weights or the normalised weights?
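For context on question 1, the standard identity is: in a bootstrap particle filter the raw (unnormalised) weight of particle i at time t is p(y_t | x_t^(i)), and the particles are approximate draws from p(x_t | y_{1:t-1}), so the average raw weight is a Monte Carlo estimate of p(y_t | y_{1:t-1}) = ∫ p(y_t | x_t) p(x_t | y_{1:t-1}) dx_t. A minimal sketch (not from the post; the 1-D Gaussian random-walk model and the function names `kalman_loglik` and `pf_loglik` are illustrative) that checks the particle estimate against the exact Kalman-filter likelihood:

```python
import numpy as np

def kalman_loglik(y, q, r, m0=0.0, p0=1.0):
    # Exact log marginal likelihood for x_t = x_{t-1} + N(0, q),
    # y_t = x_t + N(0, r), prior x_0 ~ N(m0, p0), via the Kalman filter.
    m, p, ll = m0, p0, 0.0
    for yt in y:
        p_pred = p + q                    # predictive variance of x_t
        s = p_pred + r                    # predictive variance of y_t
        ll += -0.5 * (np.log(2 * np.pi * s) + (yt - m) ** 2 / s)
        k = p_pred / s                    # Kalman gain
        m = m + k * (yt - m)
        p = (1 - k) * p_pred
    return ll

def pf_loglik(y, q, r, n=5000, seed=0):
    # Bootstrap particle filter for the same model; the log-likelihood is
    # accumulated as sum_t log(mean of the RAW, unnormalised weights).
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)           # particles from the prior N(0, 1)
    ll = 0.0
    for yt in y:
        x = x + rng.normal(0.0, np.sqrt(q), n)                          # propagate
        w = np.exp(-0.5 * (yt - x) ** 2 / r) / np.sqrt(2 * np.pi * r)   # p(y_t | x_t^i)
        ll += np.log(w.mean())            # mean raw weight ~ p(y_t | y_{1:t-1})
        x = rng.choice(x, size=n, p=w / w.sum())                        # resample
    return ll
```

With enough particles the two log-likelihoods agree closely. Note that the normalised weights are still needed for resampling, but the likelihood increment itself uses the raw weights, which bears on question 2.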

Thanks again!

Charlie Whittaker
