NOTE: This document is outdated, last edited in 2015. A newer version of this document is included with the Seastar source distribution doc/tutorial.md, and you can see a live browsable version here. You can also generate prettier HTML or PDF versions from the source distribution, using ninja -C build/release/ doc/split or ninja -C build/release/ doc/tutorial.pdf respectively.

An earlier tutorial is also available.

Introduction

Seastar is an advanced, open-source C++ framework for high-performance server applications on modern hardware. Applications using Seastar can run on Linux or OSv. Key Seastar design features include:

  • Shared-nothing design: Seastar uses a shared-nothing model that shards all requests onto individual cores.

  • User-space networking: Seastar offers a choice of network stack, including conventional Linux networking for ease of development, DPDK for fast user-space networking on Linux, and native networking on OSv.

  • Futures and promises: an advanced new model for concurrent applications that offers C++ programmers both high performance and the ability to create comprehensible, testable high-quality code.

This tutorial is intended for developers already familiar with the C++ language, and will cover how to use Seastar to create a new application.

Getting started

The simplest Seastar program is this:

#include "core/app-template.hh"
#include "core/reactor.hh"
#include <iostream>

using namespace seastar;

int main(int argc, char** argv) {
    app_template app;
    app.run(argc, argv, [] {
            std::cout << "Hello world\n";
            return make_ready_future<>(); 
    });
}

As we do in this example, each Seastar program must define and run an app_template object. This object starts the main event loop (the Seastar engine) on one or more CPUs, and then runs the given function - in this case an unnamed function, a lambda - once.

When the future returned by this function resolves, the app will shut down, stopping the event loops on all CPUs, and app.run() will return.

There's also an app.run_deprecated() variant, still in use by some code, which differs from run() in that the application doesn't exit when the callback returns; instead, it must be stopped explicitly by calling engine_exit(). engine_exit() should be used instead of the regular C exit(), to allow for proper cleanups when necessary (we'll discuss these cleanups later).
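A minimal sketch of this variant, reusing the "Hello world" example above (a sketch based on the description just given, not a program from the original text):

#include "core/app-template.hh"
#include "core/reactor.hh"
#include <iostream>

using namespace seastar;

int main(int argc, char** argv) {
    app_template app;
    app.run_deprecated(argc, argv, [] {
        std::cout << "Hello world\n";
        // Unlike with run(), returning from this callback does not stop
        // the application; we must stop it explicitly:
        engine_exit();
    });
}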

To compile this program, first make sure you have downloaded and built Seastar (see instructions in Building Seastar). Below we'll use the symbol $SEASTAR to refer to the directory where Seastar was built (Seastar doesn't yet have a "make install" feature).

Now, put the above program in a source file anywhere you want, let's call the file getting-started.cc. You can compile it with the following command:

c++ `pkg-config --cflags --libs $SEASTAR/build/release/seastar.pc` getting-started.cc

Linux's pkg-config is a useful tool for easily determining the compilation and linking parameters needed for using various libraries - such as Seastar.

The program now runs as expected:

$ ./a.out
Hello world
$

Seastar threads

As explained in the introduction, Seastar-based programs run a single thread on each CPU. Each of these threads runs its own event loop, known as the engine in Seastar nomenclature. By default, the Seastar application will take over all the available cores, starting one thread per core. We can see this with the following program, printing smp::count which is the number of started threads:

#include "core/app-template.hh"
#include "core/reactor.hh"
#include <iostream>

int main(int argc, char** argv) {
    app_template app;
    app.run(argc, argv, [] {
            std::cout << smp::count << "\n";
            return make_ready_future<>();
    });
}

On a machine with 4 hardware threads (two cores, and hyperthreading enabled), Seastar will by default start 4 engine threads:

$ ./a.out
4

Each of these 4 engine threads will be pinned (a la taskset(1)) to a different hardware thread. Note how, as we mentioned above, the app's initialization function is run only on one thread, so we see the output "4" only once. Later in the tutorial we'll see how to make use of all threads.

The user can pass a command line parameter, -c, to tell Seastar to start fewer threads than the available number of hardware threads. For example, to start Seastar on only 2 threads, the user can do:

$ ./a.out -c2
2

When the machine is configured as in the example above - two cores with two hyperthreads on each - and only two threads are requested, Seastar ensures that each thread is pinned to a different core, and we don't get the two threads competing as hyperthreads of the same core (which would, of course, damage performance).

We cannot start more threads than the number of hardware threads, as allowing this would be grossly inefficient. Trying to do so results in an error:

$ ./a.out -c5
terminate called after throwing an instance of 'std::runtime_error'
  what():  insufficient processing units
abort (core dumped)

The error is an exception thrown from app.run, which we did not catch, leading to this ugly uncaught-exception crash. It is better to catch this sort of startup exception, and exit gracefully without a core dump:

#include "core/app-template.hh"
#include "core/reactor.hh"
#include <iostream>
#include <stdexcept>

int main(int argc, char** argv) {
    app_template app;
    try {
        app.run(argc, argv, [] {
            std::cout << smp::count << "\n";
            return make_ready_future<>();
        });
    } catch(std::runtime_error &e) {
        std::cerr << "Couldn't start application: " << e.what() << "\n";
        return 1;
    }
    return 0;
}
$ ./a.out -c5
Couldn't start application: insufficient processing units

Note that catching the exceptions this way does not catch exceptions thrown in the application's actual asynchronous code. We will discuss these later in this tutorial.

Seastar memory

As explained in the introduction, Seastar applications shard their memory. Each thread is preallocated with a large piece of memory (on the same NUMA node it is running on), and uses only that memory for its allocations (such as malloc() or new).

By default, the machine's entire memory except a small reservation left for the OS (defaulting to 512 MB) is pre-allocated for the application in this manner. This default can be changed either by changing the amount reserved for the OS (not used by Seastar) with the --reserve-memory option, or by explicitly specifying the amount of memory given to the Seastar application, with the -m option. The amount of memory can be given in bytes, or with the units "k", "M", "G" or "T". These units use power-of-two values: "M" is a mebibyte, 2^20 (=1,048,576) bytes, not a megabyte (10^6 or 1,000,000 bytes).
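For example, the two options just described can be used as follows (the exact amounts here are arbitrary, for illustration only):

$ ./a.out -m1G                 # give the application 1 GiB of memory
$ ./a.out --reserve-memory=4G  # take all memory, except 4 GiB left for the OS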

Trying to give Seastar more memory than physical memory immediately fails:

$ ./a.out -m10T
Couldn't start application: insufficient physical memory

Introducing futures and continuations

Futures and continuations, which we will introduce now, are the building blocks of asynchronous programming in Seastar. Their strength lies in the ease of composing them together into a large, complex, asynchronous program, while keeping the code fairly readable and understandable.

A future is a result of a computation that may not be available yet. Examples include:

  • a data buffer that we are reading from the network
  • the expiration of a timer
  • the completion of a disk write
  • the result of a computation that requires the values from one or more other futures.

A variable of type future<int> holds an int that will eventually be available - at this point it might already be available, or it might not be available yet. The method available() tests whether a value is already available, and the method get() retrieves the value. The type future<> indicates something that will eventually complete, but will not return any value.

A future is usually returned by a promise, also known as an asynchronous function: a function or object which returns a future and arranges for that future to be eventually resolved. One simple example is Seastar's function sleep():

template <typename Rep, typename Period>
future<> sleep(std::chrono::duration<Rep, Period> dur);

This function arranges a timer so that the returned future becomes available (without an associated value) when the given time duration elapses.

A continuation is a callback (typically a lambda) to run when a future becomes available. A continuation is attached to a future with the then() method. Here is a simple example:

#include "core/app-template.hh"
#include "core/sleep.hh"
#include <iostream>

int main(int argc, char** argv) {
    app_template app;
    app.run(argc, argv, [] {
        std::cout << "Sleeping... " << std::flush;
        using namespace std::chrono_literals;
        return sleep(1s).then([] {
            std::cout << "Done.\n";
        });
    });
}

In this example we get a sleep(1s) future, and attach to it a continuation which prints a message. The future will become available after 1 second has passed, at which point the continuation is executed. Running this program, we indeed see the message "Sleeping..." immediately, and one second later the message "Done." appears and the program exits.

To avoid repeating the boilerplate app_template part in every code example in this tutorial, let's create a simple main() with which we will compile the following examples. This main just calls the function future<> f(), and does the appropriate exception handling:

#include "core/app-template.hh"
#include <iostream>
#include <stdexcept>

extern future<> f();

int main(int argc, char** argv) {
    app_template app;
    try {
        app.run(argc, argv, f);
    } catch(std::runtime_error &e) {
        std::cerr << "Couldn't start application: " << e.what() << "\n";
        return 1;
    }
    return 0;
}

Compiling together with this main.cc, the above sleep() example code becomes:

#include "core/sleep.hh"
#include <iostream>

future<> f() {
    std::cout << "Sleeping... " << std::flush;
    using namespace std::chrono_literals;
    return sleep(1s).then([] {
        std::cout << "Done.\n";
    });
}

So far, this example was not very interesting - there is no parallelism, and the same thing could have been achieved with the normal blocking POSIX sleep(). Things become much more interesting when we start several sleep() futures in parallel, and attach a different continuation to each. Futures and continuations make parallelism very easy and natural:

#include "core/sleep.hh"
#include <iostream>

future<> f() {
    std::cout << "Sleeping... " << std::flush;
    using namespace std::chrono_literals;
    sleep(200ms).then([] { std::cout << "200ms " << std::flush; });
    sleep(100ms).then([] { std::cout << "100ms " << std::flush; });
    return sleep(1s).then([] { std::cout << "Done.\n"; });
}

Each sleep() and then() call returns immediately: sleep() just starts the requested timer, and then() sets up the function to call when the timer expires. So all three lines happen immediately and f returns. Only then does the event loop start to wait for the three outstanding futures to become ready, and when each one becomes ready, the continuation attached to it is run. When the future returned by f becomes ready, the whole application exits. The output of the above program is of course:

$ ./a.out
Sleeping... 100ms 200ms Done.

sleep() returns future<>, meaning it will complete at a future time, but once complete, does not return any value. More interesting futures do specify a value of any type (or multiple values) that will become available later. In the following example, we have a function returning a future<int>, and a continuation to be run once this value becomes available. Note how the continuation gets the future's value as a parameter:

#include "core/sleep.hh"
#include <iostream>

future<int> slow() {
    using namespace std::chrono_literals;
    return sleep(100ms).then([] { return 3; });
}

future<> f() {
    return slow().then([] (int val) {
        std::cout << "Got " << val << "\n";
    });
}

The function slow() deserves more explanation. As usual, this function returns a future immediately, without waiting for the sleep to complete, and the code in f() can chain a continuation to this future's completion. The future returned by slow() is itself a chain of futures: it will become ready once sleep's future becomes ready and the value 3 is returned. We'll explain below in more detail how then() returns a future, and how this allows chaining futures.
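As a brief sketch of this chaining (reusing the slow() function defined above), each then() returns a new future to which further continuations can be attached:

future<> f() {
    return slow().then([] (int val) {
        return val * 2;          // then() wraps the returned int in a future<int>
    }).then([] (int doubled) {
        std::cout << "Got " << doubled << "\n";
    });
}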

This example begins to show the convenience of the futures programming model, which allows the programmer to neatly encapsulate complex asynchronous operations. slow() might involve a complex asynchronous operation requiring multiple steps, but its user can use it just as easily as a simple sleep(), and Seastar's engine takes care of running the continuations whose futures have become ready at the right time.

Ready futures

A future value might already be ready when then() is called to chain a continuation to it. This important case is optimized, and usually the continuation is run immediately instead of being registered to run later in the next iteration of the event loop.

This optimization is usually applied, though sometimes it is avoided: the implementation of then() holds a counter of such immediate continuations, and after many continuations have been run immediately without returning to the event loop (currently the limit is 256), the next continuation is deferred to the event loop in any case. This is important because in some cases (such as future loops, discussed later) we could find that each ready continuation spawns a new one, and without this limit we could starve the event loop. It is important not to starve the event loop, as this would starve continuations of futures that weren't ready but have since become ready, and would also starve the important polling done by the event loop (e.g., checking whether there is new activity on the network card).
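To make this concrete, here is a minimal sketch (not taken from the original examples) of such a future loop, built entirely from ready futures; the 256-continuation limit described above is what lets the event loop keep running while it executes:

#include "core/future.hh"

// Every future here is already ready, so each continuation could in
// principle run immediately; after 256 consecutive immediate runs,
// then() defers the next continuation to the event loop.
future<> loop(int i) {
    if (i == 0) {
        return make_ready_future<>();
    }
    return make_ready_future<>().then([i] {
        return loop(i - 1);
    });
}

future<> f() {
    return loop(1000000);
}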

make_ready_future<> can be used to return a future which is already ready. The following example is identical to the previous one, except the promise function fast() returns a future which is already ready, and not one which will be ready in a second as in the previous example. The nice thing is that the consumer of the future does not care, and uses the future in the same way in both cases.

#include "core/future.hh"
#include <iostream>

future<int> fast() {
    return make_ready_future<int>(3);
}

future<> f() {
    return fast().then([] (int val) {
        std::cout << "Got " << val << "\n";
    });
}

Capturing state in continuations

We've already seen that Seastar continuations are lambdas, passed to the then() method of a future. In the examples we've seen so far, lambdas have been nothing more than anonymous functions. But C++11 lambdas have one more trick up their sleeve, which is extremely important for future-based asynchronous programming in Seastar: Lambdas can capture state. Consider the following example:

#include "core/sleep.hh"
#include <iostream>

future<int> incr(int i) {
    using namespace std::chrono_literals;
    return sleep(10ms).then([i] { return i + 1; });
}

future<> f() {
    return incr(3).then([] (int val) {
        std::cout << "Got " << val << "\n";
    });
}

The future operation incr(i) takes some time to complete (it needs to sleep a bit first 😉), and in that duration, it needs to save the i value it is working on. In earlier event-driven programming models, the programmer needed to explicitly define an object for holding this state, and to manage all these objects. Everything is much simpler in Seastar, with C++11's lambdas: the capture syntax [i] in the above example means that the value of i, as it existed when incr() was called, is captured into the lambda. The lambda is not just a function - it is in fact an object, with both code and data. In essence, the compiler automatically created the state object for us, and we neither need to define it, nor to keep track of it (it gets saved together with the continuation when the continuation is deferred, and gets deleted automatically after the continuation runs).

One implementation detail worth understanding is that when a continuation has captured state and is run immediately, this capture incurs no runtime overhead. However, when the continuation cannot be run immediately (because the future is not yet ready) and needs to be saved till later, memory needs to be allocated on the heap for this data, and the continuation's captured data needs to be copied there. This has runtime overhead, but it is unavoidable, and is very small compared to the corresponding overhead in the threaded programming model (in a threaded program, this sort of state usually resides on the stack of the blocked thread, but the stack is much larger than our tiny captured state, takes up a lot of memory, and causes a lot of cache pollution on context switches between those threads).

In the above example, we captured i by value - i.e., a copy of the value of i was saved into the continuation. C++ has two additional capture options: capturing by reference and capturing by move:

Using capture-by-reference in a continuation is almost always a mistake, and would lead to serious bugs. For example, if in the above example we captured a reference to i, instead of a copy of it,

future<int> incr(int i) {
    using namespace std::chrono_literals;
    return sleep(10ms).then([&i] { return i + 1; });   // Oops, the "&" here is wrong.
}

this would have meant that the continuation would contain the address of i, not its value. But i is a stack variable, and the incr() function returns immediately, so when the continuation eventually gets to run, long after incr() returns, this address will contain unrelated content.

Using capture-by-move in continuations, on the other hand, is valid and very useful in Seastar applications. By moving an object into a continuation, we transfer ownership of this object to the continuation, and make it easy for the object to be automatically deleted when the continuation ends. For example, consider a function taking a std::unique_ptr.

int do_something(std::unique_ptr<T> obj) {
     // do some computation based on the contents of obj, let's say the result is 17
     return 17;
     // at this point, obj goes out of scope, so its destructor delete()s the object
}

By using unique_ptr in this way, the caller passes an object to the function, but tells it the object is now its exclusive responsibility - and when the function is done with the object, it should delete the object. How do we use unique_ptr in a continuation? The following won't work:

future<int> slow_do_something(std::unique_ptr<T> obj) {
    using namespace std::chrono_literals;
    return sleep(10ms).then([obj] { return do_something(std::move(obj)); }); // WON'T COMPILE
}

The problem is that a unique_ptr cannot be passed into a continuation by value, as this would require copying it, which is forbidden because it would violate the guarantee that only one copy of this pointer exists. We can, however, move obj into the continuation:

future<int> slow_do_something(std::unique_ptr<T> obj) {
    using namespace std::chrono_literals;
    return sleep(10ms).then([obj = std::move(obj)] {
        return do_something(std::move(obj));
    });
}

Here the use of std::move() causes obj's move constructor to be used to move the object from the outer function into the continuation. C++11's notion of move (move semantics) is similar to a shallow copy followed by invalidating the source copy (so that the two copies do not co-exist, as forbidden by unique_ptr). After moving obj into the continuation, the top-level function can no longer use it (in this case that's of course fine, because we return immediately anyway).
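To see what a move does to a unique_ptr outside the continuation context, consider this small sketch (as in the examples above, T stands for any object type):

#include <cassert>
#include <memory>

void example() {
    std::unique_ptr<T> a = std::make_unique<T>();
    std::unique_ptr<T> b = std::move(a); // a shallow copy of the pointer into b...
    assert(!a);                          // ...and a is invalidated (now null)
}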

The [obj = ...] capture syntax we used here is new to C++14. This is the main reason why Seastar requires C++14, and does not support older C++11 compilers.

Handling exceptions

An exception thrown in a continuation is implicitly captured by the system and stored in the future. A future that stores such an exception is similar to a ready future in that it can cause its continuation to be launched, but it does not contain a value - only the exception.

Calling .then() on such a future skips over the continuation, and transfers the exception from the input future (the object on which .then() is called) to the output future (.then()'s return value).

This default handling parallels normal exception behavior -- if an exception is thrown in straight-line code, all following lines are skipped:

    line1();
    line2(); // throws!
    line3(); // skipped

is similar to

    return line1().then([] {
        return line2(); // throws!
    }).then([] {
        return line3(); // skipped
    });

Usually, aborting the current chain of operations and returning an exception is what's needed, but sometimes more fine-grained control is required. There are several primitives for handling exceptions:

  1. .then_wrapped(): instead of passing the values carried by the future into the continuation, .then_wrapped() passes the input future to the continuation. The future is guaranteed to be in the ready state, so the continuation can examine whether it contains a value or an exception, and take the appropriate action (see the sketch after this list).
  2. .finally(): similar to a Java finally block, a .finally() continuation is executed whether or not its input future carries an exception. The result of the finally continuation is its input future, so .finally() can be used to insert code in a flow that is executed unconditionally, but otherwise does not alter the flow.
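For example, here is a sketch of using .then_wrapped() to examine a failed future. The failing function fail() is hypothetical, built here with Seastar's make_exception_future():

#include "core/future.hh"
#include <iostream>
#include <stdexcept>

future<int> fail() {
    // Return a future which is already ready, but carries an exception
    // instead of a value.
    return make_exception_future<int>(std::runtime_error("oops"));
}

future<> f() {
    return fail().then_wrapped([] (future<int> g) {
        try {
            g.get(); // re-throws the exception stored in g, if any
        } catch (std::exception& e) {
            std::cout << "Caught: " << e.what() << "\n";
        }
    });
}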

End of part 1

More topics will be covered in future tutorial chapters.