Concurrently JavaScript (part 3)

Note: If you haven’t read part 1 and part 2 of this series yet, I suggest you check them out first! We’ll keep building on those concepts heavily.

As we’ve already discussed at length in this series, coordination of concurrency lets us express the relationship between operations as either series or parallel.

In this final part of the series, I want to direct your attention to an entirely different model of concurrency than what we looked at in the previous post (reactive programming).

Be careful of the temptation to look at any new shiny thing and assume the message is “X is the new Y”. My intention is not to suggest that this post’s topic replaces reactive programming. My intention is instead to give you another powerful tool to swing at your complex concurrency modeling.

Note: If you got here looking for “CSP” as in “Content Security Policy”, this is not the article you’re looking for. CSP here means something entirely different.

Channels vs Streams

With reactive programming, we build the communication layer, the backbone of the flow of data through the program, with observables (aka streams).

We’re going to now shift our focus to another primitive: channels.

To put it most succinctly, a channel is kind of like a stream with a default buffer size of 1 (or 0, depending on your perspective). A channel cannot accept a message unless something on the other end is reading the message out at the same time.

Most streams you’ve likely worked with (I/O streams, etc) are simplex (1-way), meaning that you need two streams in opposite directions to achieve 2-way communication. Channels are duplex (2-way); you can read or write from either side.


In streams programming, we have a notion where a stream’s read-end can “communicate” (indirectly) back to the write-end by refusing to accept any more messages; the write-end knows that it must stop writing until the stream has been flushed. We call this backpressure.

Think of backpressure as when you hold your thumb completely covering the end of a hose, and all water stops flowing. If your house’s water supply had no notion of “responding” to backpressure, as soon as you stopped the flow, water would start building up, creating greater and greater pressure until a pipe or the hose exploded.

But what actually happens when you block the output end of the hose is that water simply stops flowing until you unblock the hose.

We can do this pretty easily with Node.js streams, but it is a little harder or more awkward to model with observables. There are things like hot and cold observables which provide similar capabilities, but it’s not exactly easy to do that in place.
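Just to illustrate, here's a minimal sketch (my own, using nothing but Node.js's standard Writable stream API; writable, chunk, and done are placeholder names) of what respecting backpressure typically looks like:

function writeWithBackpressure(writable, chunk, done) {
    // write(..) returns false once the stream's internal buffer is full
    if (writable.write( chunk )) {
        done();
    }
    else {
        // hold off on further writes until the stream has been flushed
        writable.once( "drain", done );
    }
}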

So a channel is like a stream that automatically “communicates” the backpressure. A message will only be transferred when both the read and write ends are ready to make the exchange.

Note: Channels can be configured to have larger buffer sizes, in which case the backpressure doesn’t build up until the buffer is full. But the default and most common behavior is single-message transfer, so that will be our focus for this post.
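For reference, many CSP libraries accept a buffer size when constructing a channel (js-csp, for example, takes a number for a fixed-size buffer). Using the chan(..) naming we'll see shortly:

var bufferedCh = chan( 5 );    // up to 5 messages can sit in the buffer
                               // before put(..)s start blocking again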

Communicating Sequential Processes (CSP)

CSP is a model for coordinating concurrency, described in a book of the same name (PDF, 1978) by C. A. R. Hoare — Foreword by none other than Dijkstra. CSP models coordination of concurrency based on blocking channel communications.

CSP is the model for concurrency in the go language, as well as Clojure/ClojureScript. Let me also recommend these amazing articles for further reading about CSP. David Nolen and James Long are a couple of brilliant leading voices in this space. I’ve also written about CSP ideas before.

To understand conceptually what CSP is all about, consider two school children with walkie talkies. Imagine they’re both running around a large backyard, playing as spies. And in this game, each spy is on their own mission, so they’re playing mostly independently.

Occasionally, one spy stops running, hides behind a rock or tree, and calls out to the other on the walkie talkie using their super-secret code language. The other spy recognizes the code word, and also dives behind cover and responds over the walkie talkie. They exchange some coded messages, then go back about their separate missions.

Silly scenario and metaphor, I know. But it’s how I visualize CSP. So let’s break that down.


What are these “processes” that CSP refers to?

In a general computing sense, you can think about them as separate system processes (or threads, etc) that each work on different tasks as part of a larger program’s operation. CSP assumes the processes operate independently, but provides a structured way for them to communicate with each other to coordinate their task operations as necessary.

The processes are so independent that they don’t know about each other at all, and have no direct mutual control. Rather than a preemptive mechanism where process A can interrupt process B, the coordination is always cooperative.

In languages that have real threads, a process is easy to conceive. But what about in a single-threaded environment like JavaScript?

Fortunately, we have a mechanism to model such processes: generators.

function *processA() {
    // ..
}

function *processB() {
    // ..
}

These two generators can operate (run / pause / resume) independently of each other, so conceptually they are “processes”. Because generators can locally pause with yield, they can run concurrently — IOW, appearing as if in parallel — even though the overall JS program is still single-threaded and only running one statement at a time.
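To see that independence concretely, here's a small sketch of my own where those two generators are stepped manually through their iterators, interleaving their execution on the single thread:

function *processA() {
    console.log( "A: step 1" );
    yield;
    console.log( "A: step 2" );
}

function *processB() {
    console.log( "B: step 1" );
    yield;
    console.log( "B: step 2" );
}

var itA = processA();
var itB = processB();

itA.next();     // "A: step 1" -- A pauses at its yield
itB.next();     // "B: step 1" -- B pauses at its yield
itA.next();     // "A: step 2" -- A resumes and finishes
itB.next();     // "B: step 2" -- B resumes and finishes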

With CSP, you model each different task in your application as a separate process.

Communication

To facilitate coordination between CSP processes, message communication is the implicit control mechanism. Consider two abstract message communication operations: send and receive.

If A gets to a point where it needs to coordinate with B in some way (without even knowing B exists!), it can do so only indirectly by modeling the coordination as a message. Likewise, if B decides it needs to coordinate with A (again with no direct knowledge of A), it must do so via message communication.

In other words, A doesn’t know about B specifically, but it does know it needs to communicate with it, and likewise B to A.

The key to understanding this CSP model is to understand that both A and B must independently decide that it’s time to execute this message communication before any message will be sent. In a sense, this is synchronous message communication. Yes, that may seem surprising in this context, since we’re typically talking about asynchronous mechanisms, but it’s entirely intentional.

Both the send and receive operations are blocking operations, in that they locally block execution in the process. If A wants to send a message to another process (like B), it blocks waiting on the send to complete until some other process receives it. If B needs to receive a message, it blocks waiting on some other process to send it.

So, how are we going to send and receive these messages? With a channel!

In the traditional CSP API, send is called put(..) and receive is called take(..), as in put a message onto a channel and take a message off a channel. Here’s what this might look like in JS:

var ch = chan();

function *processA() {
    // ..
    yield put( ch, 42 );
    // ..
}

function *processB() {
    // ..
    var answer = yield take( ch );
    // ..
}

See how *processA() pauses with yield whenever it tries to send the 42 message on the ch channel? It doesn’t know or care who will take(..) the message — or even when — but it knows it should wait until its put(..) completes with a matching take(..) elsewhere.

And likewise, *processB() doesn’t know or care who might send a message on ch, but it does know it should yield to wait until the take(..) completes from a matching put(..) elsewhere.

Also, since channels are like duplex streams, the direction of communication is easily reversed if *processB() uses put(..) to send a message on ch for another process (like *processA()) to take(..). In fact, three different processes could all queue up waiting to put(..) messages on ch, and two other processes could take(..) messages off ch at their leisure.
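As a hedged sketch of that scenario (my own extension of the put(..) / take(..) snippet above), several processes sharing one channel might look like:

var ch = chan();

// three producers queue up; each blocks until its put(..) is matched
function *producer1() { yield put( ch, "message 1" ); }
function *producer2() { yield put( ch, "message 2" ); }
function *producer3() { yield put( ch, "message 3" ); }

// two consumers each take one message off ch; whichever put(..) is left
// unmatched stays blocked until some other process take(..)s it later
function *consumer1() { console.log( yield take( ch ) ); }
function *consumer2() { console.log( yield take( ch ) ); }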

It’s important to recognize that the blocking semantic here means that it doesn’t matter whether a take(..) happens on a channel before a put(..), or a put(..) happens before a take(..). In either case, both processes block waiting (indirectly) for the other, and once they both “arrive”, the message transfer occurs, and the processes become unblocked and can proceed independently as before.

Time — the complexity of whether two things A and B happen in AB or BA order — is the most complex component of state in any application. CSP eliminates (hides) this time complexity, allowing you to model your application’s state transitions as simple message transfers. It cannot be overstated how incredibly powerful that idea is.

Blocking put(..)s and take(..)s on a shared channel. That’s it! That’s how independent processes coordinate their concurrency. And that’s really all you need to understand the core concept and magical simplicity of CSP.

Library Support

CSP processes are referred to as “goroutines” in the go language. Clojure/ClojureScript’s core.async module also implements CSP with goroutines.

But in the earlier snippets, we didn’t explain how in our JS program those processes (generators in JS) are executed (scheduled, resumed, etc). We also assumed the appearance of put(..) and take(..) operations out of thin air, and an implementation of channels.

All these details need to be handled by a CSP library. There are several CSP-flavored libraries to choose from, and most are heavily inspired by the naming and methodology of the go language.

Here’s roughly what it may look like:

var ch = csp.chan();

csp.go(function *processA() {
    // ..
    yield csp.put( ch, 42 );
    // ..
});

csp.go(function *processB() {
    // ..
    var answer = yield csp.take( ch );
    // ..
});

For cleaner scoping, the CSP methods are typically organized under a csp namespace. The go(..) method spins up a goroutine, chan(..) makes channels, and put(..) and take(..) map directly to the send and receive operations we’ve just discussed.

Keeping Things go(..)ing

Your program might not be all that interesting if all your goroutines are just single flow code snippets with a few yields, where the entire program finishes in a brief few ms after spinning quickly through all the yields.

More likely, you’re going to want to have “long running” processes that run indefinitely during the life of your program. That’s where our old friend while comes in:

var ch = csp.chan();

csp.go(function *processA(){
    while (true) {
        var num = yield csp.take( ch );
        console.log( num );
    }
});

csp.go(function *processB(){
    while (true) {
        var num = Math.random();
        yield csp.put( ch, num );
    }
});

As you can see, I’m using the infamous and oft-hated while..true loop to keep the goroutines going; this will likely be a very common pattern in your program. Of course, in this particular example both processes just spin as fast as possible; in a real program, your goroutines will more likely respond at a slower pace, as events (channel messages) arrive intermittently from user interactions, network responses, etc.
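To model that slower pace, one common trick (assuming your library provides a timeout(..) channel helper, as js-csp does) is to take from a timeout channel between iterations:

csp.go(function *processB(){
    while (true) {
        var num = Math.random();
        yield csp.put( ch, num );

        // taking from the timeout(..) channel pauses this goroutine
        // for roughly one second per iteration
        yield csp.take( csp.timeout( 1000 ) );
    }
});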

Another thing to be aware of: the core take(..) and put(..) utilities are designed to be called with a yield from inside a generator. But what if you need to call them from a normal function, such as a DOM event handler? Take a look:

var click = csp.chan();

csp.go(function *handleClicks(){
    while (true) {
        var evt = yield csp.take( click );
        console.log( evt.clientX, evt.clientY );
    }
});

// a regular, non-generator function used as the DOM event handler
function onclick(evt){
    csp.putAsync( click, evt );
}

Here, the putAsync(..) method is called from a regular non-generator function, and thus it’s not used with yield. It still performs the same work, which is to attempt to put the message (the evt object, in this case) onto the click channel. Even though putAsync(..) is a normal function call and returns right away, the underlying send operation will not complete until the matching take(..) is executed in the *handleClicks() goroutine.

Note: Some CSP libraries have putAsync(..) and takeAsync(..) return promises that are resolved when the underlying operations complete, so you can observe the promise resolution if you need to.
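So, if the library you pick does hand back a promise there (a hedged sketch; check your library), observing the completion might look like:

function onclick(evt){
    csp.putAsync( click, evt )
    .then( function(){
        // the matching take(..) has now happened in *handleClicks()
        console.log( "click event delivered" );
    } );
}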

You can probably imagine by now that you’ll actually end up having lots of channels for the various messaging pathways in your application. This is OK and totally expected. You can mix-n-match as many as you need! This is very analogous to how your program would alternatively have lots of promises, or lots of streams, or lots of observables… etc.

The beauty of CSP channels is that they abstract out the notion of time-dependent ordering, so a paired read and a write can actually happen in either order. They also have built-in “backpressure” semantics, which is harder to achieve with some other concurrency models.

CSP Parallelism

It’s easy to see how CSP message transfers achieve the series concurrency pattern. What about parallel? To answer that, we need to examine another CSP primitive method: alts(..).

At its most basic, alts(..) can sorta be thought of like Promise.race(..). What alts(..) says is, given two or more channels, pair up with the operation that can complete first.

It’s like a signal switch. Channels A, B, and C come in, and whichever one is ready first for a message operation, that’s what alts(..) performs, keeping the others waiting for another turn.


csp.go(function *processA(){
    while (true) {
        var msg = yield csp.alts( chA, chB, chC );
        console.log( msg );
    }
});

If chA has a put(..) pending against it from somewhere else, that will immediately be the resultant operation of alts(..); msg will be the result of take(..)ing that message from chA. Every time the alts(..) call is made, if chA has a put(..) pending, it’ll always be taken first, meaning it could “starve out” the other channels’ chances. If chA isn’t ready at some point, chB is consulted for its turn, and then chC, and so on.

Note: If you wanted a round-robin style of scheduling instead of chA always getting first priority, you could shuffle chA, chB, and chC manually in an array and give them in a different order each time you call alts(..).
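Here's a hedged sketch of my own of that idea: keep the channels in an array and rotate it each time through the loop so the front channel changes:

var channels = [ chA, chB, chC ];

csp.go(function *processA(){
    while (true) {
        // rotate the array so a different channel gets first priority
        channels.push( channels.shift() );

        // spread the array back out into separate channel arguments
        var msg = yield csp.alts( ...channels );
        console.log( msg );
    }
});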

alts(..) as shown here is assuming all take(..)s, which I’ve found to be by far the most common. However, a more general idea of alts(..) is that it’s any CSP operation, whether that be put(..), take(..), or even another sub-alts(..). You can mix and match, so you could imagine something more exotic like:

csp.alts( chA, [chB, 42], [chC, "hello"] );

This alts(..) would be attempting take(chA), put(chB,42), and put(chC,"hello"). It’s important to keep in mind “attempting” though: only the first operation that can be performed will be performed; the others will be ignored. Think of it as alts(..) peeking into the state of each channel before trying to send or receive on it, and only executing if it will immediately be processed.

If you read part 2 of this series on reactive programming, we talked about reactive stream operations such as merge and zip. alts(..) can be thought of as CSP’s version of these stream compositions.

Note: If you’re wondering further about more realistic usages of CSP concepts, check out my A Tale Of Three Lists project. It’s kinda like the TodoMVC of async programming. Specifically, look at some snippets of CSP code for less trivial code examples.

CSP In asynquence

I’ve mentioned a few times already in this post series my asynquence library. It’s probably no surprise to point out that it also has CSP support. The primary advantage of asynquence is that it has all of these different async/concurrency patterns bundled in one small-but-powerful library, instead of requiring you to learn a half dozen different libraries.

Here’s what asynquence-flavored CSP looks like, at a glimpse:

// friendly, short namespace alias
var Ac = ASQ.csp;

ASQ()
.runner(
    Ac.go(function *processA(mainCh){
        var answer = yield Ac.take( mainCh );
        console.log( answer );
    }),
    Ac.go(function *processB(mainCh){
        yield Ac.put( mainCh, 42 );
    })
);

Should look pretty familiar and similar to the previous snippets. Of note, goroutines are passed to runner(..), and a shared channel (mainCh here) is automatically created and passed to all goroutines so you don’t need to do that part manually.

Remote CSP

All the goroutines we’ve discussed so far are assumed to be local to each other (in the same JS program instance). But one of the greatest parts of CSP is that the semantics of put(..) and take(..) are agnostic of where the two sides of the channel are.

If you could, for example, construct a special CSP channel that had one end in the browser program, and the other on the server, a goroutine in the browser could put(..) or take(..) from that channel, and the server could do the same, and your code would have no concern whatsoever that the two processes were separated across the internet from each other.

Bridging the gap across the wire, either between browser and server, or browser thread and web worker, or between server and child process, or … all these are implementation details, but the CSP semantics remain the same.
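To make that concrete, here's a hedged sketch of my own (not the Remote CSP Channel project's actual API) bridging one end of a channel to a Web Worker with plain postMessage(..):

var toWorker = csp.chan();

var worker = new Worker( "worker.js" );

// forward every message put(..) onto toWorker across the thread boundary
csp.go(function *bridge(){
    while (true) {
        var msg = yield csp.take( toWorker );
        worker.postMessage( msg );
    }
});

// conceptually, inside worker.js the mirror image would be:
//    self.onmessage = function(evt){
//        csp.putAsync( localCh, evt.data );
//    };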

Heck, for that matter, one side of the channel could be JS and the other side of the channel could be go or Clojure/ClojureScript!

That also means you could actually achieve true multi-threading by distributing your CSP processes to various remote contexts, and yet still have simple message passing semantics in your JS for managing what could otherwise be very complex concurrency.

I’ve started a project for exactly this purpose: Remote CSP Channel. I encourage you to give it a look!


Independent processes, coordinating their concurrency through simple, blocking semantics of message transfer. That’s CSP, in a nutshell.

It’s so simple, it’s deceptive. It’s kinda amazing how powerful this pattern is and yet how relatively undiscovered and unexplored it is. Sometimes, I guess, the most incredible stuff comes in the most unimpressive of packages.

I encourage and challenge you to try rethinking your notions of async and concurrency and see if CSP can fit.