This blogpost is not about Kinbox per se, but you will forgive a Rust fanboy for info-dumping an idea to save his most beloved programming language.
Rust is my favorite programming language. Its type system is tremendously expressive and precise, and between structs, enums, and traits, you can design some really nice APIs. Its error model is best in class, leveraging the type system to treat errors as values: the Result type requires you to handle the error explicitly somehow. If you are confident the error will never happen, or are otherwise comfortable with your program crashing on it, you can of course still do that; you just have to be explicit about it with an unwrap(), or an expect() if you would like the crash to carry a message. All that, and we haven't even mentioned Rust's biggest claim to fame: low-level memory safety.
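As a quick illustration of that errors-as-values model (a minimal sketch; the function and file names are just placeholders):

```rust
use std::fs;
use std::num::ParseIntError;

// Errors are ordinary values: the caller sees them in the signature
// and must do something with the Result.
fn parse_port(text: &str) -> Result<u16, ParseIntError> {
    text.trim().parse::<u16>()
}

fn main() {
    // Handle the error explicitly...
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // ...or opt into crashing, explicitly, with expect()/unwrap().
    let contents = fs::read_to_string("config.toml")
        .expect("config.toml should exist and be readable");
    let port = parse_port(&contents).unwrap();
    println!("configured port: {port}");
}
```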
There is, however, one aspect of Rust that leaves a lot to be desired. As you may have guessed by the title, it’s Rust’s async model. It suffers from function coloring, which effectively splits the entire language in half, because synchronous functions cannot run asynchronous functions without an async runtime, and synchronous functions block asynchronous execution. Several popular crates offer both synchronous and asynchronous versions of the same functions, which has got to be annoying for maintainers.
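Here is the split in miniature, with the futures crate's block_on standing in for whichever runtime you reach for (a sketch; the function bodies are placeholders):

```rust
use std::time::Duration;

async fn fetch_data() -> String {
    // Pretend this awaits a network call.
    "data".to_string()
}

fn sync_caller() -> String {
    // A synchronous function cannot `.await`, so `fetch_data().await`
    // will not compile here. The only way through is to pull in an
    // executor and block on the Future:
    futures::executor::block_on(fetch_data())
}

async fn async_caller() -> String {
    // And going the other way, a blocking call inside async code stalls
    // every task sharing this executor thread until it returns.
    std::thread::sleep(Duration::from_secs(1));
    fetch_data().await
}

fn main() {
    println!("{}", sync_caller());
}
```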
One thing about Rust's async model that I genuinely like is that it does not prescribe an async runtime; it lets you use whichever you want. However, for the reasons above, async runtimes find themselves tasked with reimplementing many std functions, most notably around I/O. Rust's most popular async runtimes, async-std, Tokio, and Monoio, did exactly this, and every crate using those functions is married to its runtime. Consequently, despite Rust's desire to let you bring your own runtime, function coloring has kneecapped that promise.
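The duplication is easy to see in something as simple as reading a file: each runtime ships its own copy of what std already provides (assuming the tokio and async-std crates here):

```rust
use std::io;

// The standard library version: synchronous, runtime-agnostic.
fn load_config_sync() -> io::Result<String> {
    std::fs::read_to_string("config.toml")
}

// The same operation, reimplemented by each runtime. A crate that calls
// one of these is now tied to that runtime.
async fn load_config_tokio() -> io::Result<String> {
    tokio::fs::read_to_string("config.toml").await
}

async fn load_config_async_std() -> io::Result<String> {
    async_std::fs::read_to_string("config.toml").await
}
```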
Why did Rust design things this way?
I can think of two reasons. The first is limited foresight: I don't believe they predicted how the ecosystem would evolve around this. The other reason, and probably the larger one, is that they wanted to stay really low level and not provide any more than the minimum scaffolding needed for concurrency. They originally provided only Futures, and following popular demand from developers more familiar with JavaScript's async model, they added async/await syntax as a thin layer over those Futures to make the developer experience easier. They likely believed anything more belongs in the domain of the runtimes. Whatever the reason, they did not consider what would ultimately make for the best developer experience and healthiest ecosystem.
(Side note: JavaScript has no excuse, save for backwards compatibility, I suppose. It comes with a runtime. It could have adopted coroutines back in the day, but it didn't, and now we're stuck with it, and life is hard in web development.)
What is a runtime, anyway?
Let’s take a brief detour to consider what a runtime even is. According to Rust, a runtime is responsible for polling Futures and driving their execution. This is true of course, but I want to be much more specific. A runtime is responsible for managing concurrency by allowing Futures to be scheduled, taking them off that schedule, and then polling them somewhere. The schedule itself is part of a runtime, but Futures should still have a function to add themselves to it, agnostic of how the schedule is internally represented and processed.
The runtime, then, is responsible for defining the schedule data structure, implementing the Future’s scheduling function, and finally, taking Futures off its schedule to poll them.
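To make those three responsibilities concrete, here is a minimal sketch of a runtime with a locked queue standing in for the schedule (illustrative only; the type and function names are mine, not any existing runtime's):

```rust
use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Wake, Waker};

type Queue = Arc<Mutex<VecDeque<Arc<Task>>>>;

// A scheduled unit of work: the Future itself plus a handle back to the
// schedule, so waking the task simply puts it back on the queue.
struct Task {
    future: Mutex<Pin<Box<dyn Future<Output = ()> + Send>>>,
    queue: Queue,
}

impl Wake for Task {
    fn wake(self: Arc<Self>) {
        self.queue.lock().unwrap().push_back(self.clone());
    }
}

struct MiniRuntime {
    queue: Queue,
}

impl MiniRuntime {
    fn new() -> Self {
        MiniRuntime { queue: Arc::new(Mutex::new(VecDeque::new())) }
    }

    // The scheduling function: box the Future and put it on the schedule.
    fn spawn(&self, fut: impl Future<Output = ()> + Send + 'static) {
        let task = Arc::new(Task {
            future: Mutex::new(Box::pin(fut)),
            queue: self.queue.clone(),
        });
        self.queue.lock().unwrap().push_back(task);
    }

    // The executor loop: take tasks off the schedule and poll them.
    // A pending Future is expected to wake itself (re-schedule) later.
    fn run(&self) {
        loop {
            let next = self.queue.lock().unwrap().pop_front();
            match next {
                Some(task) => {
                    let waker = Waker::from(task.clone());
                    let mut cx = Context::from_waker(&waker);
                    let mut fut = task.future.lock().unwrap();
                    let _ = fut.as_mut().poll(&mut cx);
                }
                None => break,
            }
        }
    }
}

fn main() {
    let rt = MiniRuntime::new();
    rt.spawn(async { println!("hello from a scheduled future") });
    rt.run();
}
```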
The runtime itself does not need to be responsible for coroutines and join handles, and it most definitely should not need to replicate the standard library. Yet existing runtimes provide all of these things on top of the core runtime functionality, not just because of function coloring, but because Rust itself does not provide them.
How can we avoid function coloring?
So, the author of the above blogpost actually offers a solution: coroutines. They are clearly the most ergonomic way to use async, seeing as all the async runtimes offer Task-oriented APIs, which all look similar because they were themselves inspired by Rust's own thread model. Looking to other programming languages for inspiration, Go lets you launch any regular function as a goroutine. In each of these cases, though, the Task API belongs to an async runtime. That's no good for us; this is Rust and we don't want to prescribe a runtime.
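The resemblance is easy to see side by side (assuming tokio here, purely as an example of a Task API):

```rust
// Rust's thread model...
fn with_threads() -> u32 {
    let handle = std::thread::spawn(|| 40_u32 + 2);
    handle.join().unwrap()
}

// ...and the Task API it inspired. Same shape: spawn a unit of work,
// get a handle, wait on the handle for the result.
async fn with_tasks() -> u32 {
    let handle = tokio::spawn(async { 40_u32 + 2 });
    handle.await.unwrap()
}
```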
But even if we did, coroutines aren't really the fundamental solution once you get into the weeds; they're a happy side effect of it. The real solution is rather straightforward: if the problem of function coloring is needing to declare functions as async, then the fix is to not need that declaration.
To that end, I propose that functions should compile as sync or async based on their calling context. You can think of it as though all functions were generic over sync versus async, monomorphized based on whether the function is being called from an async runtime. Going even further, they could also be effectively generic over the runtime itself if necessary, since that, too, is known at the call site.
To make this model work, there are a few new language features we'll need to introduce.
First, the use of coroutines would only be permitted in an async context, but one concurrency primitive could be used in both sync and async code: the yield. It performs cooperative scheduling in an async context and can be safely ignored in a sync one. This helps for reasons explained below.
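Today's closest analogue is the explicit, runtime-specific cooperative yield; here is a minimal sketch with tokio's yield_now (the proposed primitive would do the same thing in an async context and compile to nothing in a sync one):

```rust
async fn process_items(items: Vec<u64>) -> u64 {
    let mut total = 0;
    for item in items {
        total += item * item; // some CPU-bound work
        // Hand control back to the scheduler so other tasks can run.
        // In the proposed model this would just be a bare `yield`,
        // usable from an uncolored function.
        tokio::task::yield_now().await;
    }
    total
}
```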
Second, since functions would implicitly await when called in an async context, we will need a syntax to hold the Future without yet driving it. This is important so that developers can use combinators to build more complex Futures before spawning coroutines with them.
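For reference, this is what holding and combining Futures looks like in today's Rust (using the futures crate's join as the combinator); the proposed syntax would need to recover exactly this ability, since a plain call would already have awaited:

```rust
use futures::future;

async fn fetch_profile(id: u64) -> String {
    format!("profile {id}")
}

async fn fetch_settings(id: u64) -> String {
    format!("settings {id}")
}

async fn load_page(id: u64) -> (String, String) {
    // In today's Rust, calling an async fn hands back a Future without
    // driving it, so we can combine the two and await them together.
    // Under the proposed model, a call would implicitly await, so a
    // dedicated "hold the Future" syntax is needed to do this.
    let profile = fetch_profile(id);
    let settings = fetch_settings(id);
    future::join(profile, settings).await
}
```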
How do you use this model?
Altogether, this has different ramifications for libraries versus binaries.
Libraries, including the standard library, would not have to maintain two versions of every function to support both sync and async. The same functions would work for both, provided they did not spawn or await coroutines. They could include yield points to make their functions non-blocking in an async context without interfering with sync at all. Further, they would not have to rely on any specific runtime to use standard library functions asynchronously. Thus, many headaches are spared for library maintainers, and async runtimes would no longer need to maintain their redundant copies of the standard library.
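Concretely, this is the kind of duplication that would disappear. Today a crate often ships something like the following pair (illustrative names, with tokio standing in for the async half); under the proposed model, only the first version, perhaps with a yield point, would be needed:

```rust
use std::io::{self, Read};
use std::net::TcpStream;

// The synchronous version, written against std.
pub fn fetch_banner(addr: &str) -> io::Result<String> {
    let mut stream = TcpStream::connect(addr)?;
    let mut banner = String::new();
    stream.read_to_string(&mut banner)?;
    Ok(banner)
}

// The asynchronous duplicate, tied to a specific runtime's I/O types.
pub async fn fetch_banner_async(addr: &str) -> io::Result<String> {
    use tokio::io::AsyncReadExt;
    let mut stream = tokio::net::TcpStream::connect(addr).await?;
    let mut banner = String::new();
    stream.read_to_string(&mut banner).await?;
    Ok(banner)
}
```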
Binaries would be able to enter async mode with a simple proc macro from the standard library:
#[async(/* runtime here */)]
A function with this macro on it desugars into a synchronous function and a Future. The function spins up the runtime using an expansion defined by that runtime (similar to the type-oriented derive macro), awaits the Future on the runtime, and returns its result. The Future is the original function body compiled as async. The compiler, helpful as ever, will error and remind you to add this macro if you try to use coroutines in a sync context without it. (You can use coroutines in functions that compile in an async context without issue.) You can even use this macro on the main function, for behavior similar to existing runtimes' #[main] macros.
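For a sense of what that expansion would look like, here is roughly what the existing #[tokio::main] attribute expands to today; the proposed #[async(...)] macro would do something similar for any function, with the runtime chosen in the attribute:

```rust
// What you write with today's tokio:
//
//     #[tokio::main]
//     async fn main() {
//         println!("hello");
//     }
//
// ...expands to approximately this:
fn main() {
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .expect("failed to build the tokio runtime")
        .block_on(async {
            println!("hello");
        })
}
```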
Is it too late for Rust?
One might be worried for Rust after reading all this, but it's actually not too late, because Rust could implement this model on top of what it already has. None of the ideas outlined here prohibit Rust's current async/await syntax. There is nothing fundamentally stopping you from declaring a function async and forcing it to compile only as a Future; if you would still like to color your function, go right ahead. The usual rules still apply, and existing runtimes would continue working as they are.
Even if Rust doesn’t add this, though, we can still write a compiler that compiles down to Rust, which means the compiled code can work seamlessly with the existing Rust ecosystem. Not only that, everyone maintaining Rust crates with both sync and async versions would be able to write their code once in this slightly higher-level language and generate both versions of their crate in a single compilation step.
To facilitate this compilation, we can also write a Rust crate with the coroutine primitives and traits for existing runtimes to implement. This would enable those runtimes to work with our new and improved async model.
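A hypothetical sketch of what that crate's core trait might look like (the names and signatures here are mine, not an existing API):

```rust
use std::future::Future;
use std::pin::Pin;

/// A boxed, type-erased Future, so the trait stays object-safe.
pub type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send + 'static>>;

/// The minimum surface an existing runtime (Tokio, async-std, ...) would
/// implement to plug into the colorless model: a way to put a Future on
/// its schedule, and a way to drive one to completion from a synchronous
/// entry point such as the #[async(...)] expansion.
pub trait Runtime {
    /// Add a Future to the runtime's schedule (i.e. spawn a coroutine).
    fn schedule(&self, fut: BoxFuture);

    /// Block the current thread until the Future completes.
    fn block_on(&self, fut: BoxFuture);
}
```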
Closing Remarks
Rust’s async model is incomplete, but we can build on it to make something better without throwing away the existing ecosystem. We can even continue to use Tokio and other runtimes if they implement the necessary traits for their core. With just a few more language features and a bit more compilation, we can have our async cake and eat it too.
If this sort of toolset interests you, let me know in the comments. If you would like to help make this, sound off as well and follow our GitHub. If you would like to support and I in all our endeavors, you can donate to our Open Collective. Lastly, if you or someone you know would like to hire a Rust developer in the PDX area, full time or part time, I am looking for work. Let’s chat!