Advanced Go – Goroutines, the basics

Goroutines are a fundamental (perhaps *the* fundamental) feature of Go(lang).

In this post we are going to understand how the goroutine scheduler works.

It is important to understand that the following notions are not required to be productive in Go. To use goroutines productively, it is sufficient to know that:

  • Goroutines may run on a different operating system (OS) thread
  • Since they can run on different OS threads, you need to synchronize access to resources shared between goroutines
  • Goroutines are extremely cheap to create
  • Goroutines are extremely fast to create

Anything more, most likely, won’t make you a better Golang developer, but it is nice to know.

A very underappreciated feature of goroutines is how transparent they are. The developer only cares about creating goroutines, and is not concerned with how the goroutines are run or when the runtime swaps execution.

M:N concurrency

For multiple reasons (the end of Moore's law, slow disk IO, complex network IO, etc., etc… we are not going to repeat all of them) it is now a necessity to write code that can exploit all the physical processors of a machine.

Different languages have different answers to this necessity; Golang implements the M:N concurrency model.

In the M:N model, there are M application threads (goroutines) mapped onto N OS threads.

This mapping happens automatically without the developer being aware of the details.

In the M:N model, the user (developer) makes very little effort, or even no effort at all, in creating multiple application threads (goroutines).

The language runtime takes care of all the complexity of scheduling and executing each goroutine in an OS thread, which, in turn, is mapped to a physical (or virtual) processor.

There are many more concurrency models adopted by different languages and runtimes. For instance, JavaScript maps M application threads (callbacks) to a single OS thread, giving us an M:1 concurrency model; the same goes for Python. (Of course, we are considering neither JS workers nor Python's multiprocessing capabilities.)

In the rest of the post, we are going to understand how the M:N concurrency model is implemented in Go: how the runtime is able to schedule and execute goroutines without caring what the user code is actually doing.

The simpler case – cooperative

The simpler case is cooperation between goroutines.

In this case, the goroutine understands by itself that it has no work left to do for the moment.

The goroutine communicates its state to the runtime, which parks the goroutine and picks another goroutine to run.

There are multiple cases in which the goroutine understands it has no more immediate work to do:

  • Operations against a file descriptor that would block, for instance network IO and storage IO.
  • Operations against a channel that won’t immediately succeed.
  • Grabbing a lock that is not immediately available.

In all these cases, the goroutine is suspended and the runtime picks another goroutine to run.

These cases cover a big chunk of standard Go code: everything that does some sort of IO will inevitably fall into this category.

Some other cases exist. For instance, code that does very limited IO, maybe just at the beginning, and then spawns several CPU-intensive tasks.

In these cases, it is not possible to rely on cooperative multitasking. The goroutine does not know when it will finish working; it only knows that it has more work to do.

On the other hand, the runtime still needs to swap goroutines, in order to guarantee that each goroutine can make progress.

The more complex case – preemption

When cooperative multitasking is not possible, the runtime needs to preempt the running goroutine and pick another one to execute.

This is definitely not trivial, and it requires quite a bit of effort and overall complexity.

We need to remember that we are treating user code as a black box: the runtime does not know exactly what is happening inside it. Still, it needs a way to stop a goroutine and schedule a completely different one on the same OS thread.

Even for preemption, there are two cases: either we have function invocations in the goroutine, or we don't.

Why would it make a difference? Because during function invocation, it is possible to inject (at compilation time) checks for preemption.

The check happens when the stack of the goroutine grows (each time we invoke a function). If the check indicates that the goroutine should be preempted and that it can be preempted, the runtime takes care of parking the running goroutine and picking another one to run.

If there are no function invocations in the goroutine (for instance, a loop over an array that only increments a counter), we need to find another way to preempt the goroutine.

The solution is using signals.

A specific signal, SIGURG on Linux, is sent to the OS thread running the goroutine. When the signal is received, the runtime checks for preemption and, if necessary, even goroutines that never invoke a function are preempted.


In this short article, we explored how Go 1.15 swaps the execution of goroutines.

The system is overall quite clever.

Most of the time the swapping of goroutines happens cooperatively, when a goroutine has nothing to do and knows it: waiting for IO, waiting on channels or on locks.

In the relatively rare cases when this is not possible, checks injected at function invocations still allow swapping goroutines.

Finally, on the very rare occasions when it is not possible to rely on function invocations to check for preemption, signals are used to interrupt the OS threads and allow the rescheduling of goroutines.