✨ paekli-rs ✨

This is a guide for an exercise project in Rust. It is intended as the next thing to do after reading The Rust Programming Language.

The aim of paekli-rs is to bridge the gap between the book and production-ready code, emphasizing among other things:

  • library usage
  • inter-process communication
  • integration in a polyglot environment
  • automated testing and deployment (CI/CD)
  • having lots of fun! 😃

The Topic

We humans learn best when we're having fun, so the application will be themed as a simulation of a postal service. In practice, that's just a messaging application. This topic was chosen because its feature set is very flexible and can adapt to what and how much you want to do.

Our postal service could be sending "packages" around, but that word has another meaning in the software world. So, we'll be using the less ambiguous and more fun term ✨ paekli ✨ to refer to deliverables in our business domain.

Etymology

Päkli is Swiss German and derives from Paket, the German word for package.

A notable difference is that Päkli is a diminutive. Swiss German uses the diminutive form much more generously than standard German, which makes the entire language a little less serious and a little more cute.

Whenever it's impossible or inconvenient to use proper diaereses, Germans will replace ä, ö, ü with ae, oe, ue. As this book is not targeted at Germans specifically, that's exactly what we'll do.

Choose your own components

This book is structured around self-contained guides for what we'll call components. You can choose to implement almost any combination of them in almost any order. In its simplest form, the application can be a single component, e.g. a CLI app, without any integrations with other components. If you choose to go the extra mile, the application can grow into a diverse set of interacting clients and servers.

For example, a Python script might call into a Rust library and send a paekli to an HTTP server which live-updates a wasm-based browser app over WebSocket! So cool! 🤩

As you can see in the sidebar, the components are loosely categorized into clients, servers and storage backends.

  • Clients can be stand-alone and may use servers and storage backends.
  • Servers can be stand-alone, but are probably difficult to use without a client. They may also use storage backends.
  • Storage backends can't be stand-alone, as their only purpose is to be used by clients and servers.

Unless you have a strong desire to do something else, I recommend starting with the CLI. It is the easiest component to implement and its guide is the most detailed. The other guides are still self-contained, but they provide fewer explanations of steps that are identical or similar for all components.

Choose your own difficulty

The guides are supposed to be at moderate to low difficulty, although there is some variability. If you'd like a bigger challenge, you are always welcome to go off the beaten path! Are you a web developer and find the HTTP server guide boring? Invent a new feature that requires you to use query params and try to make them type-safe. You get the idea. The more you go off the beaten path, the more difficult it will be to integrate different components, which might be exactly the challenge you're looking for.

Requirements

The book assumes that you're using a Unix-like operating system. Linux is officially supported, but Mac should work just as well. If you're on Windows, you should be fine using WSL. However, please do not open issues related to that.

You also need to be familiar with version control, i.e. git. If you aren't, paekli-rs is probably not right for you. This book assumes you use git, but if you know what you're doing, feel free to use a git-compatible alternative like the excellent jujutsu.


If you're ready to get started, proceed to the project setup.

Project Setup

Since the purpose of this project is to be production-ready, we won't take any shortcuts. The following concepts are central to the project:

  • version control
  • automatic testing
  • automatic deployment

For the automation part, we'll be using GitHub Actions. This is unfortunately a proprietary automation platform by GitHub, which is owned by Microsoft. However, it is provided for free and many developers have experience with it, so it is the most pragmatic choice for now. I'm keeping my eyes open for more FOSS-friendly alternatives.

If you already have a suitable repository, e.g. a rust-exercises repo, you can simply add all paekli-rs related things in a subdirectory. Otherwise...

Create a new repository on GitHub. You may name it whatever you like, but this book assumes you named it paekli-rs. If you keep that name, it will be more convenient to copy-paste commands from the book.

Lastly, clone the repo to your local machine. Unless otherwise specified, assume that any commands you're asked to run should be run in the root of your paekli-rs repository.

Where to next

Now you're ready to choose your first component to implement! Remember, I recommend starting off with the CLI.

CLI

Oh, hi there! I'm happy you came by ❤️ I guess that means you want to write a paekli-cli with me? Here's what the final product might look like:

> paekli-cli send carrots --to lisa
Thank you for trusting Paekli LLC!
We will deliver your paekli in mint condition.

> paekli-cli receive --for jeremiah
Error: There is no paekli for jeremiah.

> paekli-cli receive --for lisa
Here is your paekli:
carrots

Let's get started!

Setup

Initializing a new package

We're going to keep our CLI in the directory paekli-rs/. Here are the commands to let cargo bootstrap it:

cd paekli-rs
cargo new paekli-cli

You should now have two new files:

  • paekli-cli/Cargo.toml
  • paekli-cli/src/main.rs

...as is the case with any new executable Rust package.

Also note that the members key of the top-level Cargo.toml should have been modified automatically to include your new package:

# rust-exercises/Cargo.toml
members = ["day_?/*", "paekli-rs/paekli-cli"]

That's perfectly fine. We didn't worry much about cargo workspaces in this workshop, which is what that top-level Cargo.toml defines. They simply give you some quality-of-life improvements for managing multiple packages in a single project / repository.

Shipping the first version

Before we add any features, we need to make sure we can ship our software efficiently. Let's just change the print statement in the main function for now:

fn main() {
    println!("Paekli LLC is currentli closed 😢");
}

Remembering the release-workflow we've already seen during the workshop, all we have to do is commit our changes and push a new version tag. You have probably already "used" some version tags in your repository, so just pick the next higher one to release the first version of paekli-cli, for example v0.1.2.

# paekli-cli/Cargo.toml
version = "0.1.2"

git add --all
git commit
git push
git tag v0.1.2
git push --tags

Release

That should do the trick! It's still a good idea to keep an eye on the release job on GitHub and try out the finished executable manually.

Command Line Argument Parsing

The library we'll be using to write our CLI is called clap. You can find its documentation as usual on docs.rs/clap. There are other libraries to parse CLI-arguments, but clap is the most popular and user-friendly one. The alternatives focus on fast compile times, which isn't a top-priority for clap.

Adding the dependency

A small stumbling block for Rust's dependency management can be libraries with missing feature flags. Helpfully, the output of cargo add shows us a list of included and excluded features. I recommend checking this list every time you add a dependency and looking for missing features you may want to include as well. The coolest feature of clap is also its most compile-time heavy one, so it is gated behind a non-default feature flag called derive.

Let's add clap to our dependencies:

# cd paekli-cli
cargo add clap --features derive

If we forget about --features derive, some of the later code won't compile, and it won't be obvious why. We can confirm that it worked by checking that paekli-cli/Cargo.toml now contains the following:

[dependencies]
clap = { version = "4.5.1", features = ["derive"] }

A bare bones clap-app

Here is the absolute minimum code to use clap:

use clap::Parser;

#[derive(Parser)]
struct Cli;

fn main() {
    let _args = Cli::parse();
    println!("Paekli LLC is currentli closed 😢");
}

What we're doing here is defining our CLI as a data type. If you think about it, the structure of a standard CLI can easily be represented as a native Rust type. For now, our struct Cli; is empty and therefore doesn't accept any arguments.

To parse the actual command line arguments into this data structure, we #[derive(Parser)] on it, which is a macro we imported with use clap::Parser;. Finally, we simply call Cli::parse(); in our main function.

If we run this program, we should get the same output as before. So you'd be forgiven for thinking this didn't accomplish anything. However, clap generates a help page for you, even if you don't specify a single CLI argument. You can see it by running:

cargo run --quiet -- --help

Note how -- is used to distinguish between the arguments passed to cargo and the ones passed to your program. If we had a pre-compiled binary, we could simply run paekli-cli --help.

> cargo run --quiet -- --help
Usage: paekli-cli

Options:
  -h, --help  Print help

It's great to have a standard help page, as is expected from every CLI tool. It will automatically be kept up-to-date with the structure of our Cli data type, courtesy of Rust's powerful macros.

It is also good practice to let your users check which version of your program they're running. You wouldn't want to get bug reports for outdated software! Change your Cli definition like so, to add a version flag:

#[derive(Parser)]
#[clap(version)]
struct Cli;

Now observe the output of:

cargo run --quiet -- --version
cargo run --quiet -- --help

We should also provide our users with at least a small description of what paekli-cli does. clap automatically includes your doc-comments in the output to users, so let's add one to our CLI:

/// send and receive joy with ✨ paekli-cli ✨
#[derive(Parser)]
#[clap(version)]
struct Cli;

Note the triple-slash ///, which distinguishes a doc-comment from a regular comment. The comment should now show up on the help page.

Lovely! That's a great foundation to build an enjoyable CLI on top of.

Release

Our users can now get information about the purpose of the app, how to use it and the version they're running. That's definitely worthy of a new release! Recall the process:

update Cargo.toml ; commit ; push ; tag v0.1.X ; push --tags

Sending and Receiving Paekli

It's time to build our first feature. The most basic service we offer to our customers is sending and receiving paekli. How should we model this in the CLI? The most natural choice is the subcommand, which is often used by programs that offer many different functionalities. For example: add, commit and push are subcommands of git.

Info

The code snippets in this guide will become less complete as we go along. It is your responsibility to make sure the things you copy-paste integrate correctly with the rest of your code. This is also because you are encouraged to add, modify and experiment with things according to your whim and curiosity. It is your final project after all! 😃

Subcommands represent a choice out of a finite set of alternatives, which is a perfect fit for an enum. Let's try it:

use clap::{Parser, Subcommand};

#[derive(Subcommand)]
enum Command {
    Send,
    Receive,
}

#[derive(Parser)]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

The enum definition should be intuitive, but you'll notice the #[derive(Subcommand)] annotation. In the struct Cli, we have to add the command as a field, again with a new annotation #[command(subcommand)].

The compiler error messages for libraries that provide such annotation-based functionality are usually not very helpful, because the compiler cannot know what annotations you should have added. For such libraries, it's best to refer to the documentation. Because the libraries are aware of the bad error messages, they usually have great documentation and clap is no exception.

Now we can use the parsed subcommand in our main function:

const SEND_MESSAGE: &str = "\
Thank you for trusting Paekli LLC!
We will deliver your paekli in mint condition.
* throws your paekli directly in the trash *";

const RECEIVE_MESSAGE: &str = "\
There aren't any paekli for you at the moment.
* tries to hide paekli in the trash can *";

fn main() {
    let args = Cli::parse();

    match args.command {
        Command::Send => println!("{SEND_MESSAGE}"),
        Command::Receive => println!("{RECEIVE_MESSAGE}"),
    }
}

Release

Phew! We've made some great progress. Let's cut a new release to let our users enjoy this new feature.

Content and Storage

At this point, our users can send and receive paekli. However, they probably want to send paekli with some content and they want that content to be received. Silly users with their unrealistic feature requests!

But let's try to make them happy.

Sending paekli with content

The content of a paekli will be another CLI argument. Since we only expect content when sending a paekli, we will add the argument to that subcommand. Specifying the content of a paekli you're receiving doesn't make sense.

enum Command {
    Send { content: String },
    Receive,
}

We will also need to adjust the match-expression to ignore the content in our main function:

Command::Send { content: _ } => println!("{SEND_MESSAGE}"),

Now, try to send a paekli without content and see how the error message helps the user figure out how to use the CLI correctly.

Storing paekli for delivery

Applications are expected to store their data in different locations depending on the operating system. We might be tempted to tell our users to just install Linux when they're bugging us about supporting their platform. Instead, let's use the directories crate to not have to worry about it at all.

Here's the code we'll need to add:

// in main, before the match:
let project_dir = directories::ProjectDirs::from("dev", "buenzli", "paekli")
    .expect("the user's home directory seems to be corrupt");
let storage_dir = project_dir.data_dir();
std::fs::create_dir_all(storage_dir).expect("failed to create storage directory");

// the updated match arm:
Command::Send { content } => {
    std::fs::write(storage_dir.join("content"), content)
        .expect("failed to store paekli");
}

On Linux, you can confirm that a paekli was sent correctly with:

cat ~/.local/share/paekli/content

Less than terrible error handling

We now have a few calls to .expect() in our code. This is great for whipping up a quick program that works, but it immediately crashes our program in case of an error. There are libraries for more scalable error handling with great usability. The most popular one for applications (as opposed to libraries) is anyhow, so let's use that.

fn main() -> anyhow::Result<()> {
    // --snip --

    Ok(())
}

anyhow::Result is a different type from the standard library's Result. It only takes one type parameter, for the Ok case; the Err case always holds a value of type anyhow::Error. This makes it easy to bubble up errors because they're all the same type. Here we are returning Result<()>, because we don't return any value in the success case. That means we now need to return Ok(()) at the end of main.
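
Under the hood, anyhow::Result is little more than a type alias with the error type filled in. A simplified version of what anyhow defines:

// simplified: anyhow's Result fixes the error type to anyhow::Error
type Result<T, E = anyhow::Error> = std::result::Result<T, E>;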

returning Result from main

You may not have known that the main function can actually return regular values. This is mostly useful for returning Results, so you can do normal Rust-style error handling in the main function. However, main can technically return any type that implements the Termination trait.

So, how do we refactor our .expect() calls to return anyhow::Result instead? It's simple: first, we import the trait anyhow::Context. This attaches a new method .context() to any Result or Option to convert them into an anyhow::Result. Recall that this pattern is sometimes called an "extension trait". Lastly, we append the question mark operator ? to the call of .context() in order to return early in case of an error.

visually:

                               use anyhow::Context;
value.expect("error msg")  ->  value.context("error msg")?

The practical difference is not that big yet, but future-you will probably thank us for starting early with good error handling.

Preventing data loss

Currently, our app overwrites existing paekli with new ones. Here's a task you can do on your own: Check if the file already exists, and if it does, do not overwrite it and notify the user that our storage is full. To create an ad-hoc error using anyhow, you can use the macro anyhow::anyhow!.
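
If you get stuck, here's one possible sketch (assuming the storage_dir from the previous section, with anyhow::Context in scope):

let storage_file = storage_dir.join("content");
if storage_file.exists() {
    // refuse to overwrite an existing paekli
    return Err(anyhow::anyhow!("our storage is full, please try again later"));
}
std::fs::write(&storage_file, content).context("failed to store paekli")?;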

Delivering paekli

I'll leave it up to you to deliver the paekli. Simply read from the file system and print the content to stdout. Remember to remove the file, otherwise a paekli could be delivered twice!
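
And if you'd like a nudge for this one too, here's a rough sketch under the same assumptions:

Command::Receive => {
    let storage_file = storage_dir.join("content");
    let content = std::fs::read_to_string(&storage_file)
        .context("there is no paekli for you at the moment")?;
    println!("{content}");
    // delete the file so the same paekli can't be delivered twice
    std::fs::remove_file(&storage_file).context("failed to remove delivered paekli")?;
}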

Release

Congratulations! We now have a fully-functioning minimum viable product (MVP). The basic functions of sending and receiving work as expected.

Pat yourself on the back and cut a new release! 🥳

Additional Features

As we go along building our application, we will quickly want to add more features. How to implement them for the CLI will be described in this section. You will be mostly on your own, but guidance will be given where new concepts / libraries etc. are required.

You can choose to skip this section for now and explore the other components and integrations. Just remember to come back here if some integration requires you to have these features implemented. Jump to the next section to explore other components and integrations or keep reading to implement more CLI features.

Once you're happy with the feature set of the CLI, don't forget to cut a new release!

Expanding our storage space

Currently we can only store one paekli at a time. Additional paekli are rejected until the existing one is received. Instead of storing the paekli in a single file with a hardcoded name, let's store them in a directory instead. The most obvious way to store multiple paekli is to use the time they were sent as their file name. For that, you're gonna need a crate for time handling, like time or chrono. time is very minimal, but sufficient for our use case. I would've let you figure out how to use it yourself, but its documentation is hard to navigate in my opinion. Just call time::OffsetDateTime::now_utc().to_string() to get the current time as a string.
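
For illustration, storing each paekli under a timestamp-based file name could look roughly like this (a sketch, assuming the storage_dir from before):

// the moment of sending, as a string, becomes the file name
let file_name = time::OffsetDateTime::now_utc().to_string();
std::fs::write(storage_dir.join(file_name), content).context("failed to store paekli")?;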

We could just pick a random paekli out of the ones in storage whenever a paekli is received. However, let's challenge ourselves by making sure the paekli are received in FIFO order. The standard library function read_dir does not guarantee to yield directory entries in a platform-independent order. The crate walkdir has a function sort_by, which could come in handy. However, it should also be simple enough to implement this yourself.
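
Here's a minimal sketch of FIFO retrieval using only the standard library (assuming file names that sort chronologically):

let mut entries: Vec<_> = std::fs::read_dir(storage_dir)?.collect::<Result<_, _>>()?;
// sort by file name, so the oldest paekli comes first
entries.sort_by_key(|entry| entry.file_name());
let oldest = entries.first().context("no paekli in storage")?;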

Individual recipients

When people send paekli, they usually have a specific recipient in mind. In order to assign each paekli to a specific recipient, we need additional CLI arguments. The sender of a paekli needs to say who should receive it and the recipient must identify themselves.

For the sender, we could just extend the Send subcommand to also accept a recipient, like so:

Send {
    content: String,
    recipient: String,
}

This works, and there's nothing terribly wrong with it. However, CLI arguments defined this way are expected in a specific order. (Namely the order in which they were defined in the struct). As the number of arguments grows, it can become hard for users to get the order right. To alleviate this, we can introduce flags, which are basically named arguments. Because they are named, their order doesn't matter and it's always clear what's going to happen when typing in the command. Using clap we can turn an argument into a flag by giving it a short and a long name. (Or only one of the two, if we prefer.)

Send {
    content: String,
    #[arg(short, long)]
    recipient: String,
}

The recipient can now be specified with -r NAME, --recipient NAME or --recipient=NAME. It seems reasonable to keep the content as a positional argument, as it is the most important part of a paekli. However, you can turn that into a flag as well if you like.

You could also name the recipient flag to, which would enable a usage very close to natural English:

paekli-cli send "cheddar cheese" --to Elizabeth

Renaming can be accomplished in the macro annotation as well:

Send {
    content: String,
    #[arg(short('t'), long("to"))]
    recipient: String,
}

To complete the feature, you will need to add a recipient argument or flag to the Receive subcommand as well. Lastly, you'll need to change how you store and retrieve paekli so you can determine the intended recipient.

I will leave that up to you!

Express delivery

Our paekli are currently always received in FIFO order. However, what if some paekli was really important? For example, a paekli containing a programmable ergonomic split mechanical keyboard with no less than eight keys on each thumb cluster? Surely our users would like to receive such a marvelous paekli before all the other ones.

This feature will nicely demonstrate a boolean flag. To implement one with clap, do the same as with a regular flag, but use a bool as its type instead of String. The existence of the flag on the command line represents true.
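
For reference, the flag definition itself is tiny (express is my name for it, pick whatever you like):

Send {
    content: String,
    #[arg(short, long)]
    recipient: String,
    /// deliver this paekli before all non-express ones
    #[arg(short, long)]
    express: bool,
}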

The rest is up to you!

Release

Now that our CLI is jam-packed with exciting features, it's time for the next release.

Future releases likely won't add significant new features, but maybe our CLI will grow to interact with other components!

Where to Go Next

Most components don't have any requirements, so just pick one from the sidebar that interests you the most. The few ones that do have requirements will say so at the beginning.

Note that for clients to be connected, some kind of server is required. However, there is a reference implementation with API documentation hosted at paekli.buenzli.dev. Using that, you can connect all your clients without having to write a server yourself. Thank you for being gentle to my server! 😊

If you are not the decisive kind, here's a little inspiration to help pick your next component:

  • The HTTP server is fun and quite simple. It has a good contrast to the CLI and it's always great to see through all layers of the stack.
  • I have a web development background, so I am particularly fond of the web app. It's portable, easy to deploy and doesn't use a single line of JavaScript 😎
  • Another goal that might be worth working towards: Once you have a WebSocket-server and a GUI-client, you can make the GUI live-updating! 🤯

Python Extension Module

Python is a very popular language and there are many reasons we might want to interop with it.

For example, many Python libraries are written in more efficient languages like C and C++ under the hood. Rust can fulfill that purpose just as well, if not better. One such example that has generated some buzz recently is polars, a high-performance data frame library similar to pandas.

In addition, business applications are often written in Python at the start of a project for speed of development during the prototyping phase. As the project matures, performance and reliability may become bigger concerns. Instead of rewriting the whole thing in Rust in one fell swoop, it is more prudent to replace small pieces one step at a time. This allows you to keep adding new features and deliver your software continuously during the transition phase.

Now that we're all hyped up, let's write a little Python library in Rust!

Hello World

The most important project we'll be using for this component is PyO3. It does most of the heavy lifting for us so Python and Rust can talk to each other. That means we have more time to write actual code. Sweet!

Let's start by initializing a new package, sticking closely to PyO3's own guide.

cd paekli-rs
mkdir paekli-py
cd paekli-py
python -m venv .venv
source .venv/bin/activate

The PyO3 project has a build tool called maturin:

pip install maturin
maturin init --bindings pyo3

This will generate a Rust project with the necessary boilerplate to use it as a Python module. You might be interested to take a look at Cargo.toml, to see what kind of package configuration is needed.

Next, take some time to understand the generated src/lib.rs. You'll notice the annotations #[pyfunction] and #[pymodule] are used to make Rust code available to Python. There are also some Python-specific types like PyResult and PyModule.

Let's try to call the generated sum_as_string function from Python. The following command installs our Rust module into the virtual Python environment:

maturin develop

Now the module paekli_py should be available to import from Python:

> python
>>> import paekli_py
>>> paekli_py.sum_as_string(123, 678)
'801'

Release?

Hurray! You can now call Rust code from Python 🥳

For most components, we put effort in a continuous release workflow. However, we won't do that for the Python extension module.

If you simply want to extend your own Python code with Rust, there is no loss. If you do want to publish a Python module on PyPI, I encourage you to explore that on your own.

maturin actually generates a GitHub Actions workflow to automatically publish to PyPI. It won't be active by default, because it's not at the root of the Git repository. You can delete it or leave it, doesn't matter.

Sending and Receiving Paekli

Let's start implementing the two basic interactions with the system. They're just going to be two stubbed-out functions for now. Remember to add them explicitly to your module!

use pyo3::prelude::*;

/// Send a paekli
#[pyfunction]
fn send() -> &'static str {
    SEND_MESSAGE
}

#[pymodule]
fn paekli_py(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(send, m)?)?;
    Ok(())
}

That was easy! You should be able to confirm your changes with something like:

import paekli_py

print(paekli_py.send())
print(paekli_py.receive())

Content and Storage

Info

This guide assumes you have created the storage backend abstraction. If you haven't, I recommend you do it now.

Accepting input from Python is simple enough: you just declare the parameters you need in the function signature. It's just like writing regular Rust.

To express that the function might fail but there's no output in the success case, we can use PyResult<()> as the return type.

#[pyfunction]
fn send(_content: &str) -> PyResult<()> {
    Ok(())
}

Are you wondering about how it's possible that we're simply writing Rust code, including input parameters and return types, and Python can just call those functions?

The magic lies with the #[pyfunction] macro, which generates the glue code necessary to convert between Python and Rust values across the FFI boundary. You can make your own Rust types able to pass the FFI boundary to Python by implementing the traits FromPyObject (derivable) and IntoPy<PyObject>. If you're curious, you can read more about these conversions here.
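
As a small illustration, here's a hypothetical custom type (not something this guide requires) that Python code could pass into a #[pyfunction]:

use pyo3::prelude::*;

// hypothetical: extract this struct from a matching Python object
#[derive(FromPyObject)]
struct PaekliRequest {
    content: String,
    express: bool,
}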

Storing paekli

Using our storage backend abstraction, we'll be done with this in no time. Remember that you need to add paekli-core as a path dependency to paekli-py.

You should have a function to get access to a DistributionCenter, e.g. paekli_core::new_distribution_center(). If you have more than one storage backend already (or a configurable one), that function probably takes additional arguments.

Next, you just call .store() or .retrieve() on the distribution center with the arguments received from the Python code. Easy-peasy.
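
Putting it together, send might look roughly like this (a sketch, assuming .store() takes the content and returns a Result):

#[pyfunction]
fn send(content: &str) -> PyResult<()> {
    let distribution_center = paekli_core::new_distribution_center();
    distribution_center
        .store(content)
        // translate the Rust error into a Python exception
        .map_err(|e| pyo3::exceptions::PyRuntimeError::new_err(e.to_string()))?;
    Ok(())
}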

Success

That was super fast! PyO3 and our storage backend did almost all the work.

If you have the CLI already, you should be able to send and receive paekli between it and Python (assuming the same storage backend).

Additional Features

Now you have the opportunity to bring the Python extension module to feature-parity if you have already implemented the additional features for another component.

As your Python functions accumulate more parameters, you may want to use optional and keyword arguments, which are idiomatic in Python. I have good news for you, there is little work to be done to achieve that. From the PyO3 documentation:

Like Python, by default PyO3 accepts all arguments as either positional or keyword arguments. Most arguments are required by default, except for trailing Option<_> arguments, which are implicitly given a default of None.
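
For example, a trailing Option<_> parameter makes the argument optional on the Python side (a hypothetical signature):

#[pyfunction]
fn receive(recipient: Option<&str>) -> PyResult<String> {
    // calling paekli_py.receive() from Python passes None here
    todo!()
}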

Expanding our storage space

Requirements:

  • Multiple paekli can be sent before they are received.
  • Paekli are received in the same order as they were sent.

Individual recipients

Requirements:

  • Paekli can be sent to a specific recipient
  • Recipients of a paekli can identify themselves and only receive paekli intended for them.

Express delivery

Requirements:

  • Paekli can optionally be sent with express delivery
  • Express paekli are always received before non-express paekli.

Where to Go Next

The Python extension module doesn't really have any special integrations. Just pick the next component you're interested in.

Web App

Interested in building a web app with Rust, I see? I understand you. For too long, JavaScript has held the industry hostage with its iron grip on the browser.

Even if you are not yet traumatized by JavaScript, web apps are just so darn useful. They are extremely easy to deploy and arguably the most platform-independent way to make a GUI.

There is good news for the mental health of web developers around the globe: WebAssembly (short: wasm) is a byte code supported by all major browsers since 2017. Finally, we have a real alternative to JavaScript! Wasm is quite low-level and intended as a compilation target. Rust has industry-leading support for compiling to wasm, so it is a surprisingly good choice for building web apps.

Requirements

For building a web app, we need some basic knowledge about HTML. This guide will not provide that. If you know nothing about HTML, you should still be able to follow along until the MVP, but going beyond that will be frustrating if not impossible. Knowledge of some JavaScript UI-framework like React, Angular, Vue etc. is beneficial but not required. (The library we'll be using is most similar to Solid.) If you know about CSS or Tailwind, feel free to make the app as pretty as you like! ✨

Due to its nature, this component is relatively library-heavy. The browser is not really the native environment of Rust, so we need a bit of cushioning to get comfortable. We're gonna need a UI-rendering library made for the web, wrappers for browser APIs that are otherwise only available to JavaScript, and an additional build tool.

Hello World

Fair warning: The web app has a bit more setup than the other components. Don't worry though, we'll manage just fine together!

Installing dependencies

When you install the Rust toolchain, it only supports compiling to your native platform by default. To add support to compile to WebAssembly, let's add the necessary target:

rustup target add wasm32-unknown-unknown

We're gonna use a bundler called trunk to help us manage all the boilerplate around a web app:

cargo install --locked trunk

Let's initialize a new package for the web app:

cd paekli-rs
cargo new paekli-web

Add a couple libraries we'll need:

cd paekli-web
cargo add gloo
cargo add leptos --features csr
cargo add console_error_panic_hook

Trunk puts its output into a different directory than cargo; you probably want to git-ignore that:

# still inside paekli-web/
echo dist > .gitignore

Dummy index.html

A normal, plain website might be nothing more than an HTML file. That's exactly how our web app is going to start: as a plain HTML file without any content. Its only purpose is to load the wasm code that generates the interactive web app. Hopefully that makes sense, but if it doesn't, don't worry. I'm just trying to explain why we need the following boilerplate.

Add an index.html next to your Cargo.toml. The location is important. Add the following content:

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Hello Rust Workshop!</title>
  </head>

  <body></body>
</html>

That's already enough to get the (empty) web app off the ground. Let's start the development server of our bundler trunk:

trunk serve

It might take a minute to compile the first time, but then it should display a URL where you can see your web app. It's probably localhost:8080.

The page will be empty (because of the empty <body></body> tag), but the custom title should be visible in the browser tab.

Running Rust in the browser

So far, we haven't actually written any Rust that compiles to wasm to run in the browser. Let's change that. How about this:

use leptos::*;

fn main() {
    mount_to_body(|| {
        view! {
            <h1>Hello WebAssembly!</h1>
        }
    })
}

If you add this change to main.rs while you keep trunk serve running, your web app should automatically recompile and reload in the browser once compilation is done. Pretty decent development experience! Can you see the result in the browser?

Let's explain a little bit what's going on here. Remember the empty <body></body> from the index.html? This body tag is the container of all the content on a website. With the function mount_to_body, we're letting our Rust code take over the body tag and therefore the entire content of the website. The argument passed to mount_to_body is a simple function (notice that || is in this case the start of a closure without any arguments). The closure returns the output of the view! macro with the content <h1>Hello WebAssembly!</h1>. The view! macro allows us to write HTML-like syntax which will be inserted into the website. This might look alien to you, but it's very convenient for seasoned web developers and pervasive in the JavaScript world.

Setting up decent error messages

We're still inexperienced when it comes to web development with Rust, so we might make a mistake or two. If our app crashes, it would be nice to get some decent error message. Let's add the following at the top of our main function:

console_error_panic_hook::set_once();
panic!("I don't know what to do!");

In the browser, the "Hello WebAssembly" text should be gone. This is because the app crashed before it was able to display the text. If you open the browser dev tools with F12 and click on the "Console" tab, you should see our custom error message and a line number. That's good enough for our purposes!

Remove the panic! statement (but keep the panic hook) and your app should work again.

Release (sorry, we're not quite there yet)

Congratulations! You are now a fully-oxidized web developer 🥳

You might expect me to pester you about cutting a release at this point. But setting up the release of the web app is a little more work, so it deserves its own section. It'll be worth it though! In contrast to the other components, your users won't even have to install your web app. We will be deploying to GitHub Pages, freely accessible to anyone who can click a link!

GitHub Pages

GitHub Pages is a free static website hosting service. It's very convenient to have a nice website for your GitHub projects. However, if your project is precisely a website, GitHub Pages can be used as the actual deployment!

This section is mostly about how to write a GitHub Action to automate that deployment process. It's not too much work, so let's get started.

A new workflow for automatic deployment

First, we need a new workflow. There should already be a couple in .github/workflows. Let's create the file .github/workflows/gh_pages.yml.

Our workflow should run every time we push to the main branch:

name: GitHub Pages
on:
  push:
    branches: main

It's also gonna need write-access to our repository, so it can make a commit where the finished website will be stored. Don't worry, that commit won't be polluting the main branch.

permissions:
  contents: write

We're gonna have a single job to run in the default Ubuntu environment:

jobs:
  pages:
    name: Deploy GitHub Pages
    runs-on: ubuntu-latest

Now we need to define the steps to run. The first step is the same for almost all workflows: uses: actions/checkout to get access to the code of the repository itself.

    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

Next up we'll use Swatinem/rust-cache@v2 to cache our build. This will speed up future builds quite a bit, because dependencies won't have to be downloaded and compiled every time.

      - uses: actions/checkout@v4
      - uses: Swatinem/rust-cache@v2

The rest is basically just bash scripting:

      - uses: Swatinem/rust-cache@v2
      - run: |
          rustup target add wasm32-unknown-unknown
          cd paekli-rs/paekli-web
          wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.19.0/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-
          ./trunk build --release --public-url /rust-exercises
          mv dist ../../docs
          git config --global user.name "GitHub Actions Bot"
          git config --global user.email "bot@invalid.local"
          git checkout -b gh-pages
          git add ../../docs
          git commit --message "GitHub Pages Deployment"
          git push --force --set-upstream origin gh-pages

If you prefer, you can put this script in a regular .sh file in your repository so it can be tested more easily.

Let's explain a couple things that might not be obvious:

  • wasm32-unknown-unknown is needed to compile Rust to WebAssembly.
  • The wget command downloads a binary of trunk into the current directory.
  • The --public-url /rust-exercises is necessary because our website is not located at the root path of the domain.
  • We move the dist folder to rust-exercises/docs because that's where GitHub Pages expects our website to be located for deployment.
  • The git configuration of username and email is irrelevant, these commits will be overwritten regularly.
  • Lastly, we force push the built website to a branch called gh-pages. More on that next.

Enabling GitHub Pages

  1. go to GitHub
  2. navigate to your repo
  3. go to the "Settings" tab
  4. click on "Pages" in the sidebar
  5. under "Source", "Deploy from a branch" should already be selected
  6. under "Branch", change "None" to "gh-pages" and "/ (root)" to "/docs"

And that should be it! With this configuration, GitHub Pages will look inside the /docs directory of your repository on the gh-pages branch for a website to deploy.

Success

Go to YOUR_GH_USERNAME.github.io/rust-exercises to enjoy the fruit of your labor!

Sending and Receiving

Let's start implementing the two basic interactions with the system.

First we'll need an HTML button the user can click on:

view! {
    <button>Send</button>
}

That's great, but it doesn't do anything yet. To make the button functional, we'll attach a function to the button, such that the function will be executed when the button is clicked. If this looks kinda magical to you, that's because it is. The view! macro is doing a lot of heavy lifting here.

use gloo::dialogs::alert;

view! {
    <button
        on:click=|_| alert("paekli was sent!")
    >
        Send
    </button>
}

Apart from the formatting, what has changed? We gave a new "attribute" to the button tag, although it's not quite valid HTML. The on:click "attribute" is the special way to attach event handlers to DOM elements in leptos. (In the JavaScript world, React has the almost identical onClick.) The value of the special attribute is a function or closure. It takes one parameter but ignores it. The parameter would've been a click-event object with some metadata about what triggered the button press. The body of the function calls gloo::dialogs::alert, which is a binding to the alert API of the browser. It should create a little pop-up window with the given text.

Check in your browser if this behaves the way you would expect!

Then, add a second button for receiving paekli.

Release

The buttons aren't very useful yet, but let's not make our users wait for any new features. Cut a new release by pushing these changes to the main branch.

Remember, for the web app you don't even have to push a version tag to trigger a release. The version tag is the trigger for cargo dist, which manages our downloadable binaries. The GitHub Pages deployment is triggered by every push to main.

Content and Storage

Accepting user input in HTML is simple enough:

<input placeholder="paekli content"></input>

You should already be able to use this text field in the browser. However, it's not obvious how our Rust code can access the text our user types in.

Storing values in Rust

There are multiple ways to do this, but we'll go with the simplest route. We'll create a variable in our Rust code where the text will be stored and instruct the <input> element to update the value of that variable when the user types in a character. A regular let mut variable doesn't quite cut it though; we'll need a special type from the leptos library called a Signal. But the principle is exactly the same!

// reading and writing of the signal are separated into two variables
let (get_content, set_content) = create_signal(String::default());

view! {
    <input
        placeholder="paekli content"
        prop:value=move || get_content.get()
        on:input=move |e| set_content.set(event_target_value(&e))
    ></input>
}

Alright, I'll explain this in a second. But first, you're probably getting a compiler error at this point. It says something like:

compiler error

bla bla closure may outlive the current function bla bla use the move keyword.

Okay, that sounds kinda difficult. It's related to Rust's lifetime system. The fundamental problem here is that user interfaces are long-lived programs where it becomes harder for the borrow checker to make sure you're not using some value after it is no longer valid. Luckily, the developers of the leptos library have a perfect solution for this problem and the Rust compiler even tells us the correct thing to do. We have to put the move keyword in front of the closure passed to mount_to_body:

mount_to_body(move || {
    // ...
})

The only downside of this solution is that it's not obvious at all what's going on. To truly understand it, one needs decent experience with the borrow checker and a good understanding of how the leptos library works internally. Needless to say, this is beyond the scope of this guide. So for now, I can only tell you to "trust me bro" and move all the closures!

Alright, I still owe you an explanation of the code snippet above. First we create a "signal", leptos' special variable type that's separated into a getter and a setter function. The reason we need a signal instead of a regular variable is that leptos automatically updates all the places in the UI where the signal is used whenever it changes. That's not something we would be able to do with regular variables.

We control the current content of the input field with the prop:value attribute. Its value is a function that returns the current content of the signal. That way, the input field and the signal are always in sync. We also have a new event handler for the on:input event. This function will be executed every time the user changes the content of the input field. The function accepts an event e and sets the value of the signal to the event_target_value of e, which is precisely the new content of the input field. "Event", "target" and "value" are all terms with specific meaning in the world of browsers and web development, so don't worry about it too much if it seems alien to you.

You can read more about the nuances of handling user input in the Leptos documentation.

You might find get_content.get() and set_content.set() a little verbose. There is a nicer way to write this, but it requires the nightly compiler for now, so I chose to avoid it.

Just one last thing, let's display the paekli content in the alert when sending it, to confirm everything works as expected:

<button
    on:click=move |_| alert(&format!("paekli with {} was sent!", get_content.get()))
>
    Send
</button>

Now enter some text, click send and observe that the correct value is displayed.

Phew. I don't know about you, but I'm exhausted now. Feel free to take a break, web development is hard! 😮‍💨

Paekli storage

We now have a signal to store the input of our users. But we should probably clear that when a paekli is sent and commit the content to a different signal. Otherwise we cannot distinguish between paekli content that was actually sent and the user just derping around in the text field.

I think you can manage on your own. I believe in you!! 💪
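
If you'd like a starting point anyway, the idea looks roughly like this (a sketch; the get_stored/set_stored names are my choice):

let (get_content, set_content) = create_signal(String::default());
// a separate signal for content that was actually sent
let (get_stored, set_stored) = create_signal(Option::<String>::None);

view! {
    <button
        on:click=move |_| {
            // commit the draft and clear the input field
            set_stored.set(Some(get_content.get()));
            set_content.set(String::default());
        }
    >
        Send
    </button>
}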

Receiving paekli

I'll leave this up to you as well. You should be fine with all the things we've seen already. Make sure that a paekli cannot be received twice!

Release

I gotta say, it's not as exciting to tell you to release your software when it's literally just pushing to the main branch 🥲

Additional Features

Now you have the opportunity to bring the web app to feature-parity if you have already implemented the additional features for another component. You should be mostly fine on your own with everything we've seen so far. You'll need more (complicated) signals to store additional data and more user input.

However, half the fun of the web app is integrating it with the HTTP server. So feel free to implement these additional features as you go along with that integration.

Expanding our storage space

Requirements:

  • Multiple paekli can be sent before they are received.
  • Paekli are received in the same order as they were sent.

Individual recipients

Requirements:

  • Paekli can be sent to a specific recipient
  • Recipients of a paekli can identify themselves and only receive paekli intended for them.

Express delivery

Requirements:

  • Paekli can optionally be sent with express delivery
  • Express paekli are always received before non-express paekli.

Release

Don't forget to cut a release once you've implemented these features!

Where to Go Next

The web app is a bit more limited in the set of integrations it supports, because it runs in the browser. For example, a web app cannot talk to a Unix socket or SQLite database.

However, there are fantastic integrations with technologies native to the browser.

Firstly, we have of course the HTTP server. This is the place to start if you want to connect your web app to your other components.

Secondly, there is WebSocket. This technology enables a web app and a server to have an ongoing connection where both sides can initiate messages. HTTP already enables client-to-server initiated messages, so the really interesting part is server-to-client initiated messages via websocket. In simple terms: push notifications! That means with websocket, your web app can instantaneously update its GUI when your server receives a new paekli from a completely unrelated component.

The way to get there involves a couple steps though. You probably want to have an HTTP server already, otherwise stuff just gets more complicated (even though it's possible). Then you'll need to implement the websocket server, shared message types in the shared library and then integrate an ongoing websocket connection into your web app. For the exciting Aha!-moment, you'll also need another component to be integrated with the HTTP server, for example the CLI.

You get the point, it's a decent chunk of work. Don't worry though, I'll be there to help at every step! Just be prepared to spend some time on this if you choose to accept the challenge.

HTTP Server

HTTP servers are lots of fun! They are simple and versatile, allowing you to make your application available anywhere with an internet connection.

This guide does not assume any prior knowledge about writing web servers. So don't be scared if you've never done this before! If you're already a battle-hardened web dev, you can simply skip over the explanations of the basics.

What we're going to build here is sometimes also referred to as a REST-API. We're not gonna be too strict about that term though, we'll just stick to the basic conventions of web development.

Here's an example of what the product might look like:

> curl --header 'Content-Type: application/json' \
       --data '{ "content": "strawberries" }' \
       https://paekli.buenzli.dev/paekli

> curl --request DELETE https://paekli.buenzli.dev/paekli
{"content":"strawberries"}

The curl commands are a little verbose, but this is not the primary intended use case. The point is that the sender and the recipient could be completely different devices and clients on the network!

This guide assumes that you already created the CLI component. You can totally follow it if you haven't, but there will be fewer explanations of the basic steps.

Let's get started!

The Hello World of HTTP Servers

Initializing a new package

Just like for the CLI, we'll work with a new binary crate paekli-http.

cd paekli-rs
cargo new paekli-http

Axum

There are many great choices for web-server libraries in Rust. blessed.rs lists a few of them. But I'm the one writing this book, so I get to decide. Axum is simple, modular, performant, well-maintained and popular. You can't go wrong with it!

cargo add axum --features macros

We don't actually need the feature flag macros for any functionality, but it provides a handy macro that improves error messages in case we make a mistake.

async

Axum, like most other web-server libraries in Rust, makes use of a language feature called "asynchronous IO" or simply async. Some other languages have similar features, including C#, JavaScript and Python, so you might be familiar with it.

We will completely ignore that stuff in this guide, because there's quite a bit to learn about it and it really only matters for performance.

You will, however, see some stuff that might seem weird if you don't know about async. The guide will point it out and remind you not to worry about it.

If you want to get serious about writing web-servers in Rust, it's definitely a good idea to learn about async on your own.

Tokio

Every Rust program that makes use of async must have a so-called async runtime. This will be tokio in our case; it's pretty much the "default" one in the ecosystem. Tokio has many feature flags, so let's enable all of them to not have to worry about it:

cargo add tokio --features full

We will let the tokio runtime take over our main function, so let's add this goofy-looking setup:

#[tokio::main]
async fn main() {
    println!("Hello, world!");
}

Now, if you cargo run your project, it should still print "Hello, world!".

Spinning up a useless HTTP server

We're not quite done with the boilerplate yet:

#[tokio::main]
async fn main() {
    let router = axum::Router::new();

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();

    axum::serve(listener, router).await.unwrap();
}

Confused about the .await? Don't worry about it, it's async stuff 🤫

Here we are creating a Router from the axum library. The router is responsible for deciding how to handle incoming HTTP requests. Right now, it doesn't do anything.

After that, we create a TCP-listener with the tokio library and bind it to the port 3000.

Lastly, we use the function axum::serve to make our previously created router respond to incoming requests on that TCP-listener.

If you cargo run this, there won't be any output in the terminal. However, you can already send requests and receive responses:

> curl --verbose 0.0.0.0:3000
* processing: 0.0.0.0:3000
*   Trying 0.0.0.0:3000...
* Connected to 0.0.0.0 (127.0.0.1) port 3000
> GET / HTTP/1.1
> Host: 0.0.0.0:3000
> User-Agent: curl/8.2.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< content-length: 0
< date: Fri, 08 Mar 2024 07:13:00 GMT
<
* Connection #0 to host 0.0.0.0 left intact

This is using curl to send an empty HTTP request to our server. Among all this HTTP-gobbledygook, the most interesting piece is 404 Not Found. I'm sure you've already seen this response in the browser!

So our server is telling us it didn't find the thing we were asking for, which is a reasonable default chosen by the axum library.

Handling our first request

Let's write a simple function that will be responsible for handling a request:

async fn hello_world() -> &'static str {
    "hello world"
}

All HTTP handlers need the async keyword before fn. Don't worry about it 🙂

Now we can tell the router to let some requests be handled by this function:

use axum::routing::get;

#[tokio::main]
async fn main() {
    let router = axum::Router::new().route("/", get(hello_world));
    // --snip--
}

The router needs two pieces of information to decide which handler is responsible for an incoming request:

  • "/" is the so-called path. It corresponds to the part of a URL after the domain. For example, if you go to docs.rs/axum/latest/axum/struct.Router.html in your browser, the request you're sending has the path /axum/latest/axum/struct.Router.html.
  • get(hello_world) tells axum that only requests with the method GET should be handled. The method is a part of the HTTP protocol and GET is the default one. We'll see more methods later on.

We will learn about bits and pieces of the HTTP protocol as we need them.

If we take a second look at the output of the curl command from above, we might notice this line:

> GET / HTTP/1.1

This is saying exactly that curl sent a request with the method GET and the path /. That means our request should now be handled! Let's try again, (remember to restart the server with cargo run):

> curl 0.0.0.0:3000
hello world

Hurray! You should also see the greeting when you navigate to http://0.0.0.0:3000 in the browser.

Shipping the first version

Let's again make sure we can release our software efficiently. Remember that you should use a version number that is higher than the last one you used for any other component. Otherwise, our distribution-tool cargo-dist might get confused.

Assuming the last highest version used was 0.1.9:

# paekli-http/Cargo.toml
version = "0.1.10"

With those changes: git commit, push, tag and push the tag!

Release

You've now shipped a functioning HTTP server ready to download for your users. You're awesome! 😎

Sending and Receiving Paekli

Our server can now say hello, but we want it to help us send and receive paekli. Let's stub out a handler for each type of request.

#[axum::debug_handler]
async fn send_paekli() -> &'static str {
    "\
Thank you for trusting Paekli LLC!
We will deliver your paekli in mint condition.
* throws your paekli directly in the trash *"
}

#[axum::debug_handler]
async fn receive_paekli() -> &'static str {
    "\
There aren't any paekli for you at the moment.
* tries to hide paekli in the trash can *"
}

Note the new #[debug_handler] attribute above the handler function. This is what we needed --features macros for when we added the axum dependency. It is a macro that provides better error messages in case there's something wrong with our handler function. I recommend you add it to every new handler you're going to write. I also recommend not making any mistakes in the first place, but... you know.

Picking the right HTTP method

We have already seen the HTTP method GET, which is the default one. It is used to ask a server for some information. There are other methods with specific meanings, the most common ones are:

  • GET for reading data
  • POST for storing new data
  • PUT for modifying existing data
  • DELETE for deleting data

These four methods cover all the operations that are commonly performed on data.

POST is the method that most closely matches the operation of sending a paekli, because the server will need to store a new piece of data. DELETE seems appropriate for receiving, since the paekli should be deleted on the server after it has been received.

Recall that the router also needs a path to register a handler, but we won't have to worry about that. We just use the empty or "root" path. The path is conventionally used to specify a resource. Since we only have one resource (paekli), the path doesn't matter. Here's how we can register these new handlers with different HTTP methods:

use axum::routing::{delete, post};

let router = axum::Router::new()
    .route("/", post(send_paekli))
    .route("/", delete(receive_paekli));

Let's try them out:

> curl --request POST 0.0.0.0:3000
Thank you for trusting Paekli LLC!
We will deliver your paekli in mint condition.
* throws your paekli directly in the trash *

> curl --request DELETE 0.0.0.0:3000
There aren't any paekli for you at the moment.
* tries to hide paekli in the trash can *

Great! We can now execute different functions based on the incoming request.

Content and Storage

The send and receive handlers are just dummies at this point; let's change that.

Sending paekli with content

These days, it is common to use JSON to transmit information via HTTP. We have already seen how the libraries serde and serde_json make JSON handling a walk in the park.

The content of a paekli will be part of the request body. We only expect content when sending a paekli; specifying the content of a paekli you're receiving doesn't make sense. When adding serde, don't forget the derive feature.

Let's define the structure of the input we expect for our send handler; it's simple enough:

use serde::Deserialize;

#[derive(Deserialize)]
struct SendRequest {
    content: String,
}

Now we're about to see the fun part of axum. We can simply add a Json<SendRequest> to the parameters of our handler. The Json type is from the axum library and instructs it to take the body of an incoming HTTP request, deserialize it into the specified type (SendRequest) and pass it into the handler function.

use axum::Json;

async fn send_paekli(Json(request): Json<SendRequest>) {
    println!("sending: {}", request.content);
}

This is very cool and avoids boilerplate, but it's not without its downsides. The types and order of parameters and the return type of your handler function have great significance for the behavior of your server. That being said, the #[axum::debug_handler] should go a long way in helping with that.

Let's explore how our changes affected the behavior of our send-handler:

> curl --request POST 0.0.0.0:3000
Expected request with `Content-Type: application/json`

If we don't send a body with our request, we will be told that some JSON was expected. Not too bad! Just by adding a typed parameter to our function, axum generates meaningful error messages for us. Let's lie to the server in the request by saying that we're sending JSON, without actually doing it:

> curl --request POST \
    --header 'Content-Type: application/json' \
    0.0.0.0:3000
Failed to parse the request body as JSON: EOF while parsing a value at line 1 column 0

We're being told the JSON we sent was invalid; that's also reasonable. What about valid JSON that doesn't contain the required content: String?

> curl --request POST \
    --header 'Content-Type: application/json' \
    --data '{ "some_other_stuff": 1234 }' \
    0.0.0.0:3000
Failed to deserialize the JSON body into the target type: missing field `content` at line 1 column 28

Awesome. With very little code, we're getting perfect validation and good errors every step along the way. For completeness' sake, here's a correct example:

> curl --request POST \
    --header 'Content-Type: application/json' \
    --data '{ "content": "Legos" }' \
    0.0.0.0:3000
Thank you for trusting Paekli LLC!
We will deliver your paekli in mint condition.
* throws your paekli directly in the trash *

The curl commands are getting quite verbose at this point. You might want to keep a file or even a script with a bunch of them. The HTTP server component is not primarily intended for direct user interaction, but rather to connect the other, local components with each other. That, however, will come later as part of the integrations.

Storing paekli for delivery

We could do the same as with the CLI and store the paekli in the file system. But at that point, we would be duplicating the code interacting with the file system. If we made a mistake or changed something in one place but not the other, it could lead to data corruption. The solution to this is a shared storage backend, which is a component you can implement later. For now, we're intentionally going to implement a terrible way to store our data:

use std::sync::Mutex;

static PAEKLI_STORAGE: Mutex<Option<String>> = Mutex::new(None);

Global mutable state. Are you as disgusted as I am? Great, this will give you motivation to implement the shared storage backend component later 😉 Here is how you can store an incoming paekli:

let mut guard = PAEKLI_STORAGE.lock().unwrap();
*guard = Some(request.content);

For now, we're not handling the error case where a paekli already exists.

Delivering paekli

Again, I'll leave the delivery of the paekli up to you.

If you are interested in doing idiomatic error-handling both in terms of Rust and web standards, here's the type your handler should probably return:

-> Result<Json<ReceiveResponse>, StatusCode>

In the success case, we return a JSON-formatted response with whatever information the custom type ReceiveResponse holds. In the error case, we return an HTTP status code; 404 Not Found seems like a reasonable choice in case there is no paekli.
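To make that concrete, here's a minimal sketch, assuming the PAEKLI_STORAGE mutex from above and a hypothetical ReceiveResponse with a single content field:

use axum::http::StatusCode;
use axum::Json;
use serde::Serialize;

#[derive(Serialize)]
struct ReceiveResponse {
    content: String,
}

#[axum::debug_handler]
async fn receive_paekli() -> Result<Json<ReceiveResponse>, StatusCode> {
    // take() empties the storage, so a paekli is only delivered once.
    let content = PAEKLI_STORAGE.lock().unwrap().take();
    match content {
        Some(content) => Ok(Json(ReceiveResponse { content })),
        None => Err(StatusCode::NOT_FOUND),
    }
}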

Release

That was the MVP! A release is in order.

Additional Features

Now you have the opportunity to bring the HTTP server to feature parity, if you have already implemented the additional features for the CLI. However, note that doing so will make the already bad storage system even worse. So I softly recommend implementing a storage backend first. The file system storage backend should be easy to do, because most of the code can be copied from the CLI.

If you're not the type to finish what you've started, you can always do something new instead! Here's the guide on where to go next.

There's one more thing to consider when implementing these additional features for the HTTP server. If you implement them in a way that makes them required, then all your client components will have to implement the feature before being compatible with your HTTP server. For example, your server might require that a client provide a recipient when sending a paekli. If you want to ensure that other clients can integrate as quickly as possible, even without the full feature set, make sure to keep these features optional. In the case of the recipient, your HTTP server could default to some shared inbox where everyone can send and receive paekli without identification.
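For example, here's a hedged sketch of an optional recipient in the request type (the field name is an assumption for illustration):

#[derive(Deserialize)]
struct SendRequest {
    content: String,
    // serde deserializes a missing JSON field into None for Option types,
    // so clients that don't know about recipients keep working.
    recipient: Option<String>,
}

The handler can then fall back to the shared inbox whenever recipient is None.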

Expanding our storage space

Requirements:

  • Multiple paekli can be sent before they are received.
  • Paekli are received in the same order as they were sent.

Individual recipients

Requirements:

  • Paekli can be sent to a specific recipient.
  • Recipients of a paekli can identify themselves and only receive paekli intended for them.

Express delivery

Requirements:

  • Paekli can optionally be sent with express delivery.
  • Express paekli are always received before non-express paekli.

Release

Don't forget to cut a release once you've implemented these features!

Where to Go Next

Now that you have an HTTP server, you can pretty much do whatever you want! All clients can integrate with the HTTP server and thereby become connected.

The best way to integrate a client with the HTTP server is by implementing the HTTP client storage backend. The only client component that cannot benefit from that is the web app, because it runs in the browser. To integrate that one with the HTTP server, see Web App and HTTP.

The one thing the HTTP server cannot do is live-updating GUIs. If you want that, you should implement the WebSocket component.

WebSocket Server

Building on the HTTP server

The WebSocket component can technically be implemented without the HTTP server, but it's not very useful and makes everything more complicated.

Therefore, this guide assumes you have completed the HTTP server.

WebSocket is a protocol that allows us to easily send messages from the server to the client. This is in contrast to HTTP, where we are limited to requests initiated by the client. The most important use case for websockets is a live-updating GUI. In our case, the server can let its clients know immediately when a new paekli is available.

The Hello World of WebSocket

The websocket stuff will be built on top of the HTTP server, so we don't need a new package this time. We're mostly going to work in paekli-http.

Axum WebSocket feature flag

Axum has built-in support for websockets, but it is gated behind a feature flag. Add it with the following command:

cargo add axum --features ws

Opening a WebSocket connection

First of all, we'll need a new route where clients can request to open a websocket connection:

axum::Router::new()
    // ...
    .route("/notifications", get(subscribe_to_notifications))

Next, we actually have to write that handler. Don't worry, everything will be explained in a second.

use axum::extract::ws::WebSocketUpgrade;

#[axum::debug_handler]
async fn subscribe_to_notifications(ws: WebSocketUpgrade) -> axum::response::Response {
    ws.on_upgrade(|mut socket| async {
        socket
            .send(axum::extract::ws::Message::Text("Hello, world!".into()))
            .await
            .unwrap();
        socket.close().await.unwrap();
    })
}

  • WebSocketUpgrade is a type from axum, assuming the feature ws is enabled. Axum will only call this handler with requests for websocket connections.
  • In the body, we immediately pass a closure to ws.on_upgrade(), which is run once the connection is successfully established.
  • Once we have the socket in the closure, we send a single message "Hello, world!" and close it again.
  • Don't worry about async and await.

Websockets can be tricky to debug. One option is to use websocat (adjust the port to whatever your server uses):

websocat ws://127.0.0.1:4200/notifications

Success

This should print "Hello, world!" to the terminal.

Notifications

Let's say we want to send a notification to all of our subscribers whenever a new paekli is sent, i.e. when it is ready to be received.

Now we've got a bit of a problem. The two handler functions send_paekli and subscribe_to_notifications are completely separate! They cannot talk to each other at all.

Thankfully, axum has a solution for this problem. We can pass some "shared state" to all handlers. In this case, the state is just going to be a channel through which the handlers can communicate with each other. The channel itself is provided by the tokio runtime.

let (notification_sender, _) = tokio::sync::broadcast::channel(16);

The argument 16 is the channel capacity; it shouldn't matter for our purposes. The second return value is a receiver for that channel, but we don't care about it for now. We can always create new receivers by calling subscribe on the sender. Let's pass this channel as shared state to all handler functions:

axum::Router::new()
    // ... your routes ...
    .with_state(notification_sender)

There will likely be a compile error now, because a type cannot be inferred. We'll fix that shortly. Let's use this sender in the send_paekli handler:

async fn send_paekli(
    State(sender): State<Sender<()>>,
    // other params...
) {
    // ...
    // send() returns an error if there are no subscribers right now,
    // which is fine: nobody needs to be notified.
    let _ = sender.send(());
}

The necessary imports are axum::extract::State and tokio::sync::broadcast::Sender. We specify the type of notification being sent as the empty tuple with Sender<()>. This can be changed later, if we actually want to send information with the notification.

The semantics are simple: the State wrapper around our sender tells axum to pass in the shared state we gave to the router earlier. Then we send an empty tuple over the channel to indicate that a paekli was sent.

Now we can listen on this channel in the subscribe_to_notifications handler:

async fn subscribe_to_notifications(
    ws: WebSocketUpgrade,
    State(sender): State<Sender<()>>,
) -> axum::response::Response {
    // ...
}

Within the function body, you'll have to make three modifications (putting them all together is shown in the sketch after this list):

  • get a new receiver by calling sender.subscribe()
  • loop over incoming notifications with receiver.recv().await (just send any string over the websocket in the loop body)
  • move the receiver into the closure with ws.on_upgrade(|mut socket| async move { ... }), otherwise you'll get a lifetime error
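Here's that minimal sketch; the notification text is just an example:

#[axum::debug_handler]
async fn subscribe_to_notifications(
    ws: WebSocketUpgrade,
    State(sender): State<Sender<()>>,
) -> axum::response::Response {
    // Create the receiver here, before the sender goes out of scope.
    let mut receiver = sender.subscribe();
    ws.on_upgrade(|mut socket| async move {
        // Forward a message over the websocket for every notification.
        while let Ok(()) = receiver.recv().await {
            let msg = axum::extract::ws::Message::Text("new paekli!".into());
            if socket.send(msg).await.is_err() {
                // The client disconnected.
                break;
            }
        }
    })
}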

Success

At this point, you should be able to listen on multiple websocket connections and get a notification every time a paekli is sent.

This is already enough in terms of the MVP. Feel free to work on an integration next, like Web App and WebSocket. That being said, there is one more section about handling individual recipients.

Individual Recipients

We can now send and receive notifications anytime a paekli is sent. However, our users probably only want to be notified if there is a paekli for a specific recipient, i.e. themselves.

Let's broadcast information about the recipient of the paekli in our send_paekli handler. We'll need to change the type of our channel from Sender<()> to Sender<String>. Then we just... sender.send(recipient) in the body. You should already have access to this recipient if you implemented the additional feature of individual recipients for the HTTP server component. We receive the notifications in the subscribe_to_notifications handler function, so we'll need to update the Sender type there as well to fix any compilation errors.

Now we could just pass along the name of the recipient from the internal channel to the websocket. That would be simple and serve the purpose. But it's a little nicer if the user can specify a recipient up front and only receive those notifications.

The recipient as path parameter

In terms of our API, we'll allow users to subscribe to notifications on different URL paths. For example, Alice can subscribe to her notifications at /notifications/alice and Bob can subscribe at /notifications/bob.

But we don't want to write a new handler function for every possible recipient... that would be a lot. Axum allows us to turn some segments of the path into parameters to the handler function. To do that, we need to first declare the route with the new path:

.route("/notifications/:recipient", get(subscribe_to_notifications))

Notice the colon before recipient: it means this segment is a parameter instead of a literal part of the path. Next, we can accept the parameter in our handler:

async fn subscribe_to_notifications(
    Path(recipient): Path<String>,
    // ...
) -> Response {
    // ...
}

Path is imported from axum::extract::Path. This type tells the axum library to extract the value of recipient from the URL of the request. That only works because we declared the parameter previously in the route declaration with "/notifications/:recipient".

More than one path parameter

We only need one path parameter here, so there's not much that can go wrong. But what if we had multiple path parameters? For example: "/notifications/:recipient/:sender" to only get notified for paekli from a specific sender?

That works too, but then we have to accept the parameters as a tuple inside the Path wrapper from axum:

async fn subscribe_to_notifications(
    Path((recipient, sender)): Path<(String, String)>,
    // ...
) -> Response {
    // ...
}

The names of the parameters recipient and sender don't really matter. axum gives us the parameters in a tuple in the order they appear in the URL path.

Inside the body of subscribe_to_notifications, we can now send a notification on the websocket only if the recipient matches the one of our URL parameter. The code you'll need might look a little something like this:

while let Ok(channel_recipient) = receiver.recv().await {
    if url_path_recipient == channel_recipient {
        socket.send(/* ... */).await.unwrap();
    }
}

Trying it out

With all these connections, it's becoming more tricky to test that everything works. Here are a couple commands to get you started. You might have to adjust some details, especially if you designed your HTTP API a little differently.

# Terminal 1: run HTTP & WebSocket server
cd paekli-rs/paekli-http
cargo run
# Terminal 2: listen to notifications for Alice
websocat ws://127.0.0.1:4200/notifications/alice
# Terminal 3: listen to notifications for Bob
websocat ws://127.0.0.1:4200/notifications/bob
# Terminal 4: send paekli

# send paekli to Alice
curl --header 'Content-Type: application/json' --data '{ "content": "shoes", "recipient": "alice" }' localhost:4200/paekli

# send paekli to Bob
curl --header 'Content-Type: application/json' --data '{ "content": "shoes", "recipient": "bob" }' localhost:4200/paekli

Instead of using curl to send paekli, you can of course also use any of your client components, if you have already integrated them with the HTTP client storage backend.

Success

Phew, that was a little tedious! Hopefully everything worked out as expected. 🤞

Obviously we don't want to handle websocket connections as manually as we did just now. The real fun starts when we integrate that with our GUI clients! I recommend you do that next, otherwise all that work was for nothing.

File System Storage

Many of the available components can use the file system to store paekli. In order to avoid code duplication, we'll implement the DistributionCenter trait. If you haven't created this trait yet, do the Storage Backend guide first and then come back here.

Shortcut if you have the CLI

You have likely already implemented storage of paekli in the file system for the CLI. In that case you can mostly copy-paste the functionality from there.

If you didn't implement the CLI, jump ahead to the next section. The file system related instructions from the CLI guide are duplicated here.

struct FileSystemStorage;

impl DistributionCenter for FileSystemStorage {
    // ... copy-paste code from CLI ...
}

When copy-pasting from the CLI, remember to add any libraries the code uses to paekli-core as well, including the required feature flags. (check paekli-cli/Cargo.toml)

Already done? Read up on how to use the storage backend next.

Storing paekli

Applications are expected to store their data in different locations depending on the operating system. We might be tempted to tell our users to just install Linux when they're bugging us about supporting their platform. Instead, let's use the directories crate to not have to worry about it at all.

Here's a couple lines of code you'll probably need:

let project_dir = directories::ProjectDirs::from("dev", "buenzli", "paekli")
    .expect("the user's home directory seems to be corrupt");

let storage_dir = project_dir.data_dir();

std::fs::create_dir_all(storage_dir).expect("failed to create storage directory");

std::fs::write(storage_dir.join("content"), content)
    .expect("failed to store paekli");

On Linux, a paekli will be stored at the following location:

~/.local/share/paekli/content

Retrieving paekli

Retrieving paekli should be easy:

  • Read the same file where the paekli is stored.
  • Handle the case when there's no paekli.
  • Delete any retrieved paekli so they don't get delivered twice.
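Putting those three steps together, a minimal sketch (assuming the same storage_dir as above) could look like this:

let paekli_path = storage_dir.join("content");

match std::fs::read_to_string(&paekli_path) {
    Ok(content) => {
        // Delete the paekli so it doesn't get delivered twice.
        std::fs::remove_file(&paekli_path).expect("failed to delete paekli");
        println!("you received: {content}");
    }
    Err(_) => println!("no paekli for you today"),
}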

Error handling

I will leave this up to you. The CLI guide includes some additional instructions about using the anyhow crate for error handling. Whatever you choose to do, it may impact the function signatures in your DistributionCenter trait. That's totally fine, the compiler will help you with any necessary refactoring.

Done

Jump over here to find out how to use the DistributionCenter.

SQL Database

For this component, you need the DistributionCenter trait. If you don't have it yet, do the Storage Backend guide first and then come back here.

SQL databases are ubiquitous and the ability to use them is a marketable skill. You don't need to know SQL to follow this guide, but the provided code snippets won't be explained either. At the start of a new project, we have to ask ourselves which database to pick: PostgreSQL, MySQL, etc. We also have to decide whether and which library we are going to use to make queries from our general-purpose programming language.

If we wanted to pull out the big guns, we could go with PostgreSQL and a full-blown ORM like diesel. But for our purposes, we'll travel a little lighter with SQLite as our database and sqlx as our query library.

Connecting to the database

In paekli-core, create a new storage backend like you've probably done before:

struct SqlDatabase;

impl DistributionCenter for SqlDatabase {
    // ...
}

sqlx is database agnostic and provides compile-time checked queries without abstracting the raw power of SQL away from you. It is also fundamentally async, a language feature we did not discuss. Luckily, async is not necessary to understand the rest of what's going on, so it won't be explained here either.

Let's add the dependency with the feature flags sqlite and runtime-tokio, the latter of which is necessary to run async code. We'll need to use the tokio runtime directly as well; let's add it with the full feature set.

cargo add sqlx --features sqlite,runtime-tokio
cargo add tokio --features full

In contrast to the file system storage or the HTTP client, the SQL database needs some initialization code.

impl SqlDatabase {
    fn new() -> Self {
        // ...
    }
}

To determine the location to store our database, we'll use the directories crate. You've probably already done this for the CLI.

let project_dir = directories::ProjectDirs::from("dev", "buenzli", "paekli")
    .expect("the user's home directory seems to be corrupt");

let storage_dir = project_dir.data_dir();

std::fs::create_dir_all(storage_dir).expect("failed to create storage directory");

let db_path = storage_dir.join("db.sqlite");
if !db_path.exists() {
    std::fs::File::create(&db_path).expect("failed to create database");
}
let db_url = format!("sqlite:{}", db_path.display());

Next, we need to create a database connection pool. The way to do that with sqlx is an async task, so we need a tokio runtime to execute it. We'll also need the runtime to execute queries later, so we'll store it in a variable rt.

realistic async

This is not really how you would do async programming in a serious project. It's just the simplest way to sweep the async stuff under the rug. Don't worry about it, just make a mental note that for a serious async project, we'd do things differently.

use sqlx::SqlitePool;

let rt = tokio::runtime::Runtime::new().unwrap();
let pool_task = SqlitePool::connect(&db_url);
let pool = rt.block_on(pool_task).unwrap();

A connection pool to a SQLite database? What?

You're right, a connection pool doesn't really make sense in the context of SQLite. However, to be database agnostic, sqlx uses the same abstractions for SQLite as for PostgreSQL etc. We could create a single connection to SQLite, but then we'd need a mutable reference to it to execute queries. Connection pools in sqlx have the additional convenience that queries can be executed on an immutable reference to them.

Initial migrations

Now that we have an open database connection, we need to create the schema.

sqlx has a built-in feature for migrations. It allows you to store them as scripts in some directory and automatically execute all of them. However, since we just need a single table, we'll keep it simple and use a regular query.

let create_table_task = sqlx::query(
    "
    CREATE TABLE IF NOT EXISTS paekli (
        content TEXT
    )
    ",
)
.execute(&pool);
rt.block_on(create_table_task).unwrap();

Storing paekli

We can finally start implementing the functionality of DistributionCenter. Here's the query to insert a paekli into the database. This async task needs to be executed on the tokio runtime with .block_on().

let insert_task = sqlx::query("INSERT INTO paekli VALUES (?)")
    .bind(content)
    .execute(&pool);
rt.block_on(insert_task).unwrap();

Prepared queries

The ? in the query string, combined with .bind(content), is executed as a prepared statement. Prepared statements have built-in protection against SQL injection (a common security vulnerability).

You should NEVER construct a SQL query from user input with normal string manipulation like the format!() macro.

Retrieving paekli

A reasonable approach for retrieving paekli is a query like the following. (rowid is automatically added to every table by SQLite.)

let select_task = sqlx::query(
    "
    SELECT rowid, content FROM paekli
    LIMIT 1
    ",
)
.fetch_one(&pool);

This would work, but the returned value would be an SQL row, which is not the most convenient format. Ideally, we want the result to be filled directly into a nice Rust type. We can do that with query_as and a derived FromRow implementation on our own type:

#[derive(sqlx::FromRow)]
struct PaekliRow {
    rowid: i64,
    content: String,
}

let select_task = sqlx::query_as("SELECT rowid, content FROM paekli LIMIT 1")
    .fetch_one(&pool);

let PaekliRow { rowid, content } = rt.block_on(select_task).unwrap();

Instead of calling .unwrap(), we should handle the case where no paekli exist.

Lastly, execute another query to delete the retrieved paekli from the database. The SQL query to delete a row with a specific ID is: DELETE FROM paekli WHERE rowid = ?.
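For completeness, here's a sketch of that last step, reusing the rowid from the row we just fetched:

let delete_task = sqlx::query("DELETE FROM paekli WHERE rowid = ?")
    .bind(rowid)
    .execute(&pool);
rt.block_on(delete_task).unwrap();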

Success

The implementation of DistributionCenter is complete. Now you can extend your constructor function for DistributionCenter and enable your clients to select the new backend.

To enable all additional features for this storage backend, you might need a little more knowledge about SQL than what we've seen so far... good luck!

HTTP Client

Tip

You don't need your own HTTP server for the HTTP client to work! There is a reference implementation of the server which your client can talk to.

Pretty much all of the available components can use an HTTP server to store paekli. In order to avoid code duplication, we'll implement the DistributionCenter trait. If you haven't created this trait yet, do the Storage Backend guide first and then come back here.

// somewhere in paekli-core
struct HttpClient;

impl DistributionCenter for HttpClient {
    // ...
}

Storing paekli

The most common library for making HTTP requests is reqwest (nope, that's not a typo 😄). Add it to your shared library paekli-core with a couple feature flags:

cargo add reqwest --features blocking,json

Here's a brief explanation for each feature flag. They are appropriately mentioned in the documentation (docs.rs/reqwest), so you shouldn't have any issues picking the right ones if you use reqwest on your own.

  • blocking
    reqwest is async by default, which is a language feature that we haven't discussed. In order to use the simpler blocking API, we need to enable that flag.
  • json
    This one is intuitive, it enables JSON (de-)serialization via serde_json.

Now, to create an HTTP request for storing a paekli, we're gonna need to create a client with reqwest and call its post method:

let client = reqwest::blocking::Client::new();
client
    .post("https://paekli.buenzli.dev/paekli")
    .json(todo!())
    .send()
    .unwrap();

Using your own HTTP server

If you want to send the request to your own server, just replace the URL string with something like http://localhost:4200/paekli (make sure to get the right port). If you want to get real fancy, try making the URL configurable in the HttpClient! That way, users of this storage backend could use it to connect to any HTTP server they like.

Notice that I left out the JSON body. You have two options here:

  1. If you're using your own server, you should extract the types like SendRequest and ReceiveResponse into the shared library paekli-core so you can use them in the client as well. That way you can ensure that client and server always agree on the structure of the data that's being sent.
  2. If you're using the reference server, read its API documentation at paekli.buenzli.dev to find out what structure of data it's expecting.

Either way, you should have some kind of struct which implements serde::Serialize which you can then pass into the .json() call.
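As an illustration of option 1, here's a minimal sketch that assumes a shared SendRequest type with a single content field:

use serde::Serialize;

#[derive(Serialize)]
struct SendRequest {
    content: String,
}

let client = reqwest::blocking::Client::new();
client
    .post("https://paekli.buenzli.dev/paekli")
    .json(&SendRequest {
        content: String::from("wool socks"),
    })
    .send()
    .unwrap();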

Retrieving paekli

To retrieve paekli, we need the delete method instead of post. Obviously we also have to send a different request body, depending on what the server expects.

You may have discarded the response from the store operation, or you may have read it for error handling / reporting purposes. For the retrieve operation, we have to read the response to get the paekli content. The response is what you get from .send().unwrap(). (You are encouraged to do better error handling than me 🙈)
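For example, a minimal sketch, assuming the server accepts a DELETE request without a body:

let client = reqwest::blocking::Client::new();
let resp = client
    .delete("https://paekli.buenzli.dev/paekli")
    // Add a .json(...) call here if your server expects a request body.
    .send()
    .unwrap();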

Here's how to get the JSON response:

let data: ReceiveResponse = resp.json().unwrap();

Again, ReceiveResponse just has to be some type that implements serde::Deserialize and actually matches what we expect the server to respond.

Extending the constructor function

The implementation of the DistributionCenter trait should be complete at this point, but we still need a way for paekli-core users to get their hands on an instance of HttpClient. You will likely have to add some parameter to the constructor function, so users can select which storage backend they want. As the number of backends grows, maybe an enum makes sense? If some backends can even be configured further (e.g. server URL), such configuration data can be stored alongside the enum variant. I'll leave the details up to you.
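As a sketch of that idea (the Backend enum and the server_url field on HttpClient are assumptions for illustration):

enum Backend {
    FileSystem,
    Http { server_url: String },
}

fn new_distribution_center(backend: Backend) -> Box<dyn DistributionCenter> {
    match backend {
        Backend::FileSystem => Box::new(FileSystemStorage),
        Backend::Http { server_url } => Box::new(HttpClient { server_url }),
    }
}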

Success

That should be it!

Now you can extend your clients to make use of the new backend.

Creating a Shared Library

For multiple reasons, our different components may need to share some code. Luckily this is very simple. Let's start by initializing a new library package:

cd paekli-rs
cargo new --lib paekli-core

This new library can be used in any of our other packages by adding it as a dependency first. Note that we need the --path flag for a local library, as opposed to ones from crates.io.

cd paekli-rs/paekli-cli
cargo add --path ../paekli-core

Lastly, you can confirm it works by trying to import the generated add function. Note how the dash in paekli-core changes to an underscore in Rust code. This is automatic, because Rust identifiers cannot contain dashes.

// paekli-cli/src/main.rs
use paekli_core::add;

Now you can add some functionality to paekli-core and use it in your other packages.

Target-incompatible dependencies

As you implement more components, your shared library will accumulate its own dependencies. Some of these dependencies are not compatible with the web app. This can lead to difficult-to-understand error messages. Essentially, the web app will fail to compile because a bunch of stuff was "not found". That's because it doesn't exist in the browser, where our app will run. The next section explains how to deal with this. If you have no intention of implementing the web app, you can safely skip it.

Target-specific code and dependencies

Some functionality in paekli-core will only be available for components that run in a normal operating system. This is notably not the case for the web app.

So, we need to be able to include or exclude some functionality of paekli-core based on which platform it's being compiled for.

Compiling code only for specific targets

Let's assume your library is organized in two modules, wasm_compatible and wasm_incompatible. In that case, you can skip compiling the incompatible module for wasm-builds like so:

pub mod wasm_compatible;

#[cfg(not(target_family = "wasm"))]
pub mod wasm_incompatible;

Depending on how you organize your code, you may have to add multiple of these annotations.

See also the cfg page of Rust By Example and the Rust Reference.

Declaring target-specific dependencies

Even if you already excluded your own target-incompatible code as explained above, dependencies declared in Cargo.toml will still (fail to) compile. We need to tell cargo to completely exclude these libraries from the build, based on the target platform.

Take a look at the following example. The dependency serde is compiled for every target, while tokio is not compiled for the web app.

# paekli-core/Cargo.toml
[dependencies]
serde = { version = "1.0.197", features = ["derive"] }

[target.'cfg(not(target_family = "wasm"))'.dependencies]
tokio = { version = "1.36.0", features = ["full"] }

Note how the cfg syntax is identical to the one used in the source code attribute. The documentation for this is in the cargo book.

Storage Backend

Most of the components use identical methods to store paekli, but the method itself may be configurable. For example, both the CLI and the python module might store paekli in the file system, an SQL database or delegate storage to the HTTP server. The server itself may use the file system, a database or just keep it in-memory... you get the point.

This is the perfect place to introduce an abstraction. We can program a component to work independently of how paekli are stored. Different storage methods can then be swapped out easily.

If we think about a postal service, we might make the analogy of a distribution center. The postal office doesn't need to know how the distribution center works, it only cares about the functionality it provides.

This is a perfect use case for a trait:

trait DistributionCenter {
    fn store();
    fn retrieve();
}

The fundamental operations store and retrieve are obvious, but I'll let you decide what the parameters and return types should be. Don't worry about getting it right immediately, you can always refactor the interface as the need arises. Rust's strong type system will help you correctly change all the places where the interface is already used.
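If you'd like a starting point, here's one possible shape; treat these signatures as an assumption, not the only valid design:

trait DistributionCenter {
    // Accept a paekli for later delivery.
    fn store(&self, content: String);
    // Hand out a stored paekli, if there is one.
    fn retrieve(&self) -> Option<String>;
}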

Since this DistributionCenter will be used by most of the components, it needs to go into our paekli-core. Read the instructions for creating a shared library if you haven't already.

The first implementation

Since you probably already implemented file system storage for the CLI component, it makes sense to turn that into your first implementation of the trait.

For the next section to make sense, make sure you have at least one implementation.

Using the trait

The first time you're reading this, you will only have one storage backend. It would be simple to just use that directly. To future proof for additional storage backends, let's assume we already have two:

// paekli-core/src/storage.rs
trait DistributionCenter {}

struct FileSystemStorage;
impl DistributionCenter for FileSystemStorage {}

struct HttpClient;
impl DistributionCenter for HttpClient {}

Let's try to write a simple function for creating a DistributionCenter:

fn new_distribution_center(local: bool) -> ? {
    if local {
        FileSystemStorage
    } else {
        HttpClient
    }
}

Hm, this doesn't quite work. There is no way for us to write a correct return type for this function, because the two possible values have different types.

What we need is dynamic dispatch:

fn new_distribution_center(local: bool) -> Box<dyn DistributionCenter> {
    if local {
        Box::new(FileSystemStorage)
    } else {
        Box::new(HttpClient)
    }
}

How dynamic dispatch works

The full explanation of dynamic dispatch can be found in chapter 17 of the Rust book. Here's a condensed version:

The dyn keyword makes the compiler generate a table of function pointers (vtable) for each type that is used as a trait object. The vtable contains pointers to all methods of the trait and the drop method (destructor).

A pointer to this vtable is then stored alongside a pointer to the actual value. Such a value is often referred to as a trait object. It uses the same mechanism of storing metadata with a pointer (fat pointer) as the slice type.

This means that dynamic dispatch only works behind some kind of pointer type, like Box<dyn Foo> or &dyn Foo. The memory size of the value behind the pointer is unknown at compile time. That's OK though, because the only thing you can do with such a value is call methods on it, and those methods do know its size.

Finally, in your clients that need to access a distribution center, you can write:

// This assumes you gave store and retrieve a &self parameter.
let center = new_distribution_center(true);
center.store();
center.retrieve();

Web App and HTTP

The MVP of the web app stores paekli for delivery in a local signal. This means the web app cannot send paekli to or receive paekli from other components. Also, paekli that were sent but not received are lost when the web app reloads. (That could be fixed without any integrations, though; browsers do allow websites to persist data.)

These limitations will be fixed by the integration with the HTTP server.

Sharing request types

Our HTTP server expects requests from us to be in specific JSON formats. Those formats are derived from our Rust types, so we didn't need to specify them explicitly.

Therefore, the easiest way to ensure we are always sending and receiving correctly-formatted JSON is to use the type system!

  • Create a shared library if you haven't already.
  • Copy the request types (SendRequest, ReceiveResponse etc.) from paekli-http to paekli-core. You're free to organize it into a module or not. All these types need to derive Serialize and Deserialize.
  • You might have to add some of the dependencies of paekli-http to paekli-core. Don't forget to add all the --features you need.
  • Run cargo add --path ../paekli-core for the http and web app components, so both can access the shared types.

Sending paekli

You probably have some signal that stores sent paekli at this point. We'll get rid of that signal and tell our HTTP server to store the paekli instead. We do that with the HTTP request the server is programmed to accept.

We already had to add the dependency gloo, which is also going to help us send HTTP requests. Let's start by defining the request:

use gloo::net::http::Request;

let request = Request::post("https://paekli.buenzli.dev/paekli")
    .json(&SendRequest {
        content: todo!(),
        recipient: None,
        express: false,
    })
    .unwrap();

We use Request::post to say we want to send an HTTP request with the method POST. We also pass the URL we want to send it to as the argument. Notice the path /paekli at the end, it has to match the router configuration of the server. You can replace the domain with localhost:3000 or whatever port you have, if you want to talk to your own server. You can even make the domain configurable in the UI, if you like! But I leave that up to you, hard coding is perfectly fine for our purposes.

Notice the argument to the .json() method. It's a reference to our SendRequest type! This is possible, because it implements the Serialize trait, which we were able to derive. We can be pretty confident the body of that request is exactly as the HTTP server expects it. You just need to add the correct content.

We still have to actually send the request, here's how that goes:

// spawn_local comes from wasm_bindgen_futures (leptos re-exports it, too)
spawn_local(async {
    request.send().await.unwrap();
});

What?

There is quite a bit of unintuitive boilerplate here. The reason is that browsers don't (yet) allow WebAssembly to make HTTP requests. Rather, wasm has to ask JavaScript to do it instead. Luckily, this glue code is generated by our libraries.

The browser only allows JavaScript to make HTTP requests via a browser API, which is asynchronous for performance reasons. So these weird-looking three lines of code construct an asynchronous JavaScript task in WebAssembly and instruct JavaScript to process it via its own runtime.

Still confused? Yeah, me too buddy.

Try it out!

You can test whether or not your send request worked by making a receive request via curl.

If you're using paekli.buenzli.dev, you can also trigger a receive request in the API docs.

Receive paekli

The general idea here is the same as for sending paekli.

Firstly, you need to consider the correct Request to construct. Method, path, body... make sure to get them right.

Secondly, when you send the request in the async block, you need to actually read the response from the server. There is a method .json() on gloo's Response type for this purpose. It is generic, so the compiler might need some kind of hint that you want a ReceiveResponse as we've defined it.

Also note that it's not trivial to get the response out of the async block. You can display an alert from the async block, which I recommend. Otherwise, if you'd like a more sophisticated user experience, you can use a signal to store the received paekli.
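Here's a minimal sketch of those steps, assuming the same URL as before and a gloo version whose request builder has a .build() method:

spawn_local(async {
    let response = Request::delete("https://paekli.buenzli.dev/paekli")
        .build()
        .unwrap()
        .send()
        .await
        .unwrap();
    // The type annotation gives the generic .json() its target type.
    let received: ReceiveResponse = response.json().await.unwrap();
    gloo::dialogs::alert(&received.content);
});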

Additional features

If you have already implemented the additional features in your web app, make sure they work with this integration as well!

Release

You've made some amazing progress! The result is a full-stack application written entirely in Rust.

We are left to question the very Existenzberechtigung of JavaScript...

Web App and WebSocket

If you don't have the WebSocket component yet, you can still integrate with the reference implementation. This guide assumes you have the feature for individual recipients implemented, but it should work without it with a few adjustments.

Let's first think about how we want to handle the websocket connection on the client side. When users first open the app, we don't want any websocket connection to be opened. Users should then be able to select a recipient (presumably themselves) and open a websocket connection to receive the notifications for that recipient. Users should also be able to stop receiving notifications, i.e. close the websocket connection.

The most idiomatic way to achieve such a life cycle for resources in our UI library leptos is to tie it to the life cycle of a UI component. To open a websocket connection, we render the component. To close the connection, we stop rendering the component. Sounds complicated and abstract for now, but we'll get there one step at a time.

Creating a custom component

Until now, we only had a main function and all our declarative UI was in a single view! macro call. That macro contained standard HTML elements. leptos allows us to define our own components and use them as if they were HTML elements:

#[component]
fn NotificationListener() -> impl IntoView {
    view! {
        "Hello from a custom component!"
    }
}

fn main() {
    // ...

    mount_to_body(move || {
        view! {
            // ...
            <NotificationListener/>
        }
    })
}

If you have trunk serve running, you should now see the text in the custom component in your app.

Rendering conditionally

Thinking about our final goal again, we need to be able to render this component conditionally, based on whether a websocket connection should be open or not at any given moment.

Let's start simple and use a signal to store a boolean. A button can then toggle the signal and cause our component to be rendered or not.

let (get_should_render, set_should_render) = create_signal(false);
let toggle_should_render = move |_| set_should_render.update(|prev| *prev = !*prev);
view! {
    <button on:click=toggle_should_render>
        toggle rendering
    </button>
}

Great, now let's render our custom component only if the boolean signal is true. We can totally use if expressions for this, but it actually gets quite ugly to nest such expressions inside a view! macro. It's also easy to accidentally make the UI non-reactive that way. (Nested expressions have to be wrapped in closures to stay reactive.) leptos has a nice Show component to make conditional rendering readable and robust.

view! {
    <Show when=move || get_should_render.get()>
        <NotificationListener/>
    </Show>
}

The text of the custom component should now appear and disappear when you click the button.

Using a websocket connection

Now we want to associate an open websocket connection to our custom component. Implementing this completely ourselves would be quite complicated. Luckily, the library leptos-use already did it for us!

cargo add leptos-use

We can simply call the function leptos_use::use_websocket with the URL. The return type is UseWebsocketReturn, which is a struct that we can destructure to only accept what we actually need and discard anything else. This struct contains a bunch of things to control the websocket connection at a low level, but we only care about being able to read incoming messages.

let url = "ws://localhost:4200/notifications/alice";
let UseWebsocketReturn { message, .. } = leptos_use::use_websocket(url);

What we now need to do is create a side effect that triggers every time we receive a notification. The concept of a side effect is related to leptos' reactivity system, which we discuss in detail here.

create_effect(move |_| {
    if let Some(message) = message.get() {
        gloo::dialogs::alert(&message);
    }
});

Suffice it to say, the closure passed to create_effect will rerun every time there is a new message coming in over the websocket connection.

You should now be able to observe that an alert is triggered in the browser if a paekli is sent to alice.

Making the recipient configurable

This I will leave mostly up to you, except for one feature of leptos that we haven't seen so far. You can pass arguments to custom components, also referred to as "attributes" from the HTML perspective. When declaring the component, it looks like a regular parameter. When you use the component, it works with the key=value syntax like a normal HTML element.

#[component]
fn NotificationListener(recipient: String) -> impl IntoView {}

view! {
    <NotificationListener recipient=String::from("alice") />
}

Success

That's a live-updating GUI! You made it! 🥳

Feel free to improve the UX in any way you like. What if instead of an alert, you displayed a counter of unreceived paekli?