Sunday 23 October 2011

The costs of concurrency and the actor model

I was interested to understand more about how actors are implemented in Erlang. In an old but still relevant article, an argument is raised:
These days every third grader knows that processes are expensive. Ok, so we're not using real processes, we're using threads, but if you constantly create, destroy, and switch between thousands of threads you'll bring any system to a crawl. Threads are expensive for a number of reasons. They take a long time and a lot of memory to create because the operating system needs to set up many things (the stack, internal data structures, etc.) for them to work. They take a long time to destroy because all the resources they consumed need to be freed. And they take a long time to switch between because unloading registers, storing them, loading them back, and flipping the stacks is complicated business.
So how expensive is it really to create a Thread in Java these days?
Peter Lawrey, writer of the excellent Vanilla Java blog, tells us that creating a Thread takes roughly 70 μs (microseconds), which is fairly expensive, but not prohibitive. The size of the stack can be configured with a command line argument to the JVM, with a default of 512kb. According to some Mac OS X 10.5 docs, creating a thread takes about 90 μs, which kind of validates his results.
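
To get a feel for the number yourself, here is a rough sketch of timing thread creation on the JVM (the figures will vary with OS, JVM version and stack size -Xss, and this measures create + start + join, so treat it as an upper bound):

    // Rough sketch: average cost of creating, starting and joining threads.
    object ThreadCreationCost extends App {
      val n = 10000
      val start = System.nanoTime()
      val threads = (1 to n).map { _ =>
        val t = new Thread(new Runnable { def run(): Unit = {} })
        t.start()
        t
      }
      threads.foreach(_.join())
      println("avg per thread: " + (System.nanoTime() - start) / n + " ns")
    }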

As was pointed out in a StackOverflow discussion:
Creating threads is expensive if you're planning on firing 2000 threads per second, every second of your runtime. The JVM is not designed to handle that. If you'll have a couple of stable workers that won't be fired and killed over and over, relax.
This is slightly different in Erlang, where everything is a process, so the model leads you precisely to that situation.

Essentially, Erlang uses a kind of lightweight process, akin to green threads (VM-scheduled pseudo-threads as opposed to real threads). Since they essentially consist of a mailbox implemented as a queue, it takes about 300 words of memory to build one, meaning we can create thousands, even millions of them; the only constraint is memory.

I read a great post on Erlang's lightweight-process actor concurrency and its inherent problems in terms of message-passing costs, i.e., having to serialize the entire message between processes, even within the same node. The lack of zero-copy messaging between actors can really hurt performance. It also talks a great deal about how the BEAM VM and the HiPE JIT compiler aren't anywhere near as good as the industry-leading JVM (or his favoured Azul Systems "pauseless" VM).

On that basis, after years of hardcore work with Erlang, he opts for Erjang (built on Java's Kilim actor library) or Clojure, possibly because of their more functional nature and Lisp syntax. I wonder why Scala wasn't mentioned? Akka offers the same principles as Erlang actors, but implemented in a much more robust way and running on top of the JVM.

Scala actors, and particularly Akka actors, use configurable executors that efficiently reuse threads (and in Akka 2.0 this will be declaratively configurable). Thus, I can see Akka superseding Erlang as the heir to the throne of actor concurrency.
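
A minimal sketch of what that looks like (written against the Akka 2.x ActorSystem API, which is still upcoming at the time of writing; the actor and dispatcher names are illustrative):

    import akka.actor.{Actor, ActorSystem, Props}

    // Illustrative Akka actor: the runtime schedules many such actors
    // onto a small, configurable pool of threads (the dispatcher).
    class Greeter extends Actor {
      def receive = {
        case name: String => println("Hello, " + name)
      }
    }

    object Demo extends App {
      val system = ActorSystem("demo")
      // A custom thread pool can be declared in application.conf and
      // referenced with Props[Greeter].withDispatcher("my-dispatcher").
      val greeter = system.actorOf(Props[Greeter], "greeter")
      greeter ! "actors"
      system.shutdown()
    }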

Saturday 22 October 2011

The art of GC Tuning

Great presentation discussing best practices when tuning garbage collection on the JVM.

The different GC collectors: [slide]

Fragmented heap: [slide]

Also, a comparison of all the JVM GC implementations (particularly valuable given that Java 8 will incorporate JRockit's features).

I came across these links by reading the blog post on the new features of Cassandra 1.0. Some very interesting JVM tuning allowed them to increase performance manyfold.


The JVM JIT compiler at runtime, and even the Java and particularly the Scala compiler at compile time, love small methods that can be analysed in isolation. It is very important to consider the effect that the compiler can have whenever reading microbenchmarks.
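
As an illustration (a hypothetical sketch, not a real benchmark): tiny methods are easy inlining targets, and once inlined the JIT may discover the work is dead and remove it, so a naive timing loop can end up measuring almost nothing.

    // Naive microbenchmark sketch (illustrative only): `square` is a tiny
    // method the JIT can inline; since its result is never used, the whole
    // loop body may be eliminated as dead code, making the timing meaningless.
    object NaiveBenchmark extends App {
      def square(x: Int): Int = x * x

      val start = System.nanoTime()
      var i = 0
      while (i < 10000000) {
        square(i) // result discarded
        i += 1
      }
      println("elapsed ns: " + (System.nanoTime() - start))
    }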

Quoting Brian Goetz:

"Often, the way to write fast code in Java applications is to write dumb code -- code that is straightforward, clean, and follows the most obvious object-oriented principles."
Brian Goetz
Technology Evangelist, Sun Microsystems

For huge collections, it is better to use a framework that can handle billions of items, potentially mapped outside of the heap, thus avoiding full GCs completely.
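
One way to get data off the heap (a minimal sketch; the file name and sizes are illustrative) is to memory-map it, so the collector never has to trace those items:

    import java.io.RandomAccessFile
    import java.nio.channels.FileChannel

    // Sketch: memory-map a large file so the data lives outside the Java
    // heap; full GCs no longer need to scan it.
    object OffHeapSketch extends App {
      val file = new RandomAccessFile("data.bin", "rw")
      val size = 1L << 30 // a 1 GiB mapped region
      val buffer = file.getChannel.map(FileChannel.MapMode.READ_WRITE, 0, size)

      buffer.putLong(0, 42L)     // write the first 8-byte slot
      println(buffer.getLong(0)) // reads back 42
      file.close()
    }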

Friday 21 October 2011

Scala Options: Type and Null Safety

Options are great because they allow you to circumvent the billion-dollar bug (Hoare's null references).
Several posts discuss the great value of Options. It was Graham Tackley who really made it click for me when he presented Options as collections containing zero or one elements.
Yesterday I was trying to make good use of this idea, and ran into the problem of how to retain only the elements that actually contain a value. Since you cannot call get on a None, for obvious reasons, there must be an easy way to retrieve only the values.
Here's what I found:
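
Something along these lines (a sketch with illustrative values):

    // A List[Option[String]] with some Nones in it...
    val xs: List[Option[String]] = List(Some("a"), None, Some("b"), None)
    // ...flatMapping each Option as a zero-or-one element collection
    // keeps only the values that are actually there.
    val values: List[String] = xs.flatMap(_.toList)
    // values == List("a", "b")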



I've transformed a List[Option[String]] with possible Nones in it into a List[String].

How about this then?
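
Roughly (again a sketch, reusing the same list):

    // Option is treated as an Iterable here, so flatten does the job.
    xs.flatten // List("a", "b")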


Which is basically doing the same operation that I did just before, calling flatMap(_.toList).

This makes passing Options around a really valuable thing, because you get safety in your operations with minimal effort.
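
For example (a sketch; the names are illustrative), you can chain operations without ever checking for null:

    // Illustrative: lookupUser may or may not find a user.
    def lookupUser(id: Int): Option[String] =
      if (id == 1) Some("alice") else None

    // map and getOrElse keep the happy path and the default together,
    // with no null checks anywhere.
    val greeting = lookupUser(1).map("Hello, " + _).getOrElse("unknown user")
    // greeting == "Hello, alice"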

Very useful Scala Option cheat sheet by Tony Morris.

Friday 7 October 2011

Supporting different paradigms of Parallelism

Doug Lea's keynote at Scala Days is a really interesting presentation.

It shows the importance of multicore, Amdahl's Law and lots more. Well worth it.
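
As a quick refresher on Amdahl's law (a sketch with made-up numbers): with a parallelisable fraction p of the work and n cores, the speedup is 1 / ((1 - p) + p / n), so even a small serial fraction caps the benefit of adding cores.

    // Amdahl's law: speedup = 1 / ((1 - p) + p / n); numbers are illustrative.
    object Amdahl extends App {
      def speedup(p: Double, n: Int): Double = 1.0 / ((1 - p) + p / n)

      println(speedup(0.95, 8))  // ~5.9x on 8 cores
      println(speedup(0.95, 64)) // ~15.4x: the 5% serial part dominates
    }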