Rust Async Runtimes Compared: tokio vs smol vs io_uring for Network Programming
Rust's async ecosystem has consolidated around tokio as the dominant runtime, with smol serving specialized use cases and io_uring-based runtimes emerging for high-performance I/O workloads. Note: async-std was officially discontinued in 2025 (exact date unverified); existing projects should migrate to smol or tokio.
Versions referenced: tokio 1.x, smol 2.x, glommio 0.x, monoio 0.x (as of 2025).
Runtime Architecture Comparison
tokio
Work-stealing multi-threaded executor with dynamic task distribution. Uses a global reactor for I/O event notification across threads.
Key characteristics:
- Work-stealing scheduler (default), single-threaded option available
- DNS lookups offloaded to the blocking thread pool (std resolver); fully async resolution requires a crate such as hickory-resolver
- Extensive ecosystem (axum, warp, reqwest, tonic)
- Mature and production-proven
- Industry standard for async Rust
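The scheduler choice is configurable at runtime construction. A minimal sketch (tokio 1.x with the `rt-multi-thread` and `macros` features; the worker-thread count is chosen purely for illustration):

```rust
use tokio::runtime::Builder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Work-stealing, multi-threaded scheduler (what #[tokio::main] builds by default).
    let rt = Builder::new_multi_thread()
        .worker_threads(4) // cap the worker pool explicitly
        .enable_all()      // enable the I/O and time drivers
        .build()?;

    // A current-thread runtime avoids work-stealing overhead entirely,
    // at the cost of running everything on a single core.
    let _single = Builder::new_current_thread().enable_all().build()?;

    rt.block_on(async {
        // server code goes here
    });
    Ok(())
}
```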
smol
Modular single-runtime approach prioritizing minimal overhead and explicit control.
Key characteristics:
- Single-threaded by default, multi-threaded via `async-executor`
- Minimal runtime footprint
- Explicit async ecosystem building blocks
- Shares underlying executor components (async-task, async-io) with async-std's architecture
- Recommended migration target for async-std users (similar API philosophy)
async-std (Discontinued)
Status: Officially discontinued in 2025. No longer maintained. Existing codebases should migrate to smol (similar API philosophy) or tokio (broader ecosystem).
Architectural note: async-std's later releases were built on the same smol building blocks (async-task, async-executor, async-io), and the two runtimes expose similar APIs. Migration is straightforward mainly because of that shared API philosophy; code is not drop-in interchangeable.
Modern Alternatives
glommio - io_uring-based runtime for Linux with a thread-per-core model. Provides superior throughput for high-I/O workloads but requires a recent Linux kernel (glommio's documentation targets 5.8 or newer).
Differentiating features:
- Shares API: Fine-grained control over task priorities via "shares" - tasks receive CPU time proportional to their share allocation
- Latency hints: Explicit latency requirements per I/O operation, allowing the scheduler to optimize for throughput vs. latency trade-offs
- Best for storage-intensive applications where io_uring's zero-copy benefits and explicit scheduling control matter
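The shares/latency model looks roughly like this. This is an illustrative sketch based on glommio's documented `LocalExecutorBuilder`, `create_task_queue`, and `spawn_local_into` APIs; names and signatures vary across 0.x releases, so verify against the version you use:

```rust
use std::time::Duration;
use glommio::{executor, Latency, LocalExecutorBuilder, Placement, Shares};

fn main() {
    LocalExecutorBuilder::new(Placement::Fixed(0)) // pin this executor to core 0
        .spawn(|| async {
            // Latency-sensitive queue: more shares plus an explicit latency hint.
            let fast = executor().create_task_queue(
                Shares::Static(200),
                Latency::Matters(Duration::from_millis(1)),
                "request-path",
            );
            // Background queue: fewer shares, throughput-oriented.
            let slow = executor().create_task_queue(
                Shares::Static(50),
                Latency::NotImportant,
                "compaction",
            );
            let a = glommio::spawn_local_into(async { /* serve a request */ }, fast).unwrap();
            let b = glommio::spawn_local_into(async { /* background scan */ }, slow).unwrap();
            a.await;
            b.await;
        })
        .unwrap()
        .join()
        .unwrap();
}
```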
monoio - io_uring-based thread-per-core runtime developed by ByteDance. Similar architecture to glommio but with different API ergonomics and growing adoption in proxy/server use cases.
Key characteristics:
- Thread-per-core with io_uring (Linux-only)
- Designed for high-throughput network workloads
- Avoids the `Send + Sync + 'static` constraints of work-stealing runtimes
- Gaining traction in proxy and gateway implementations
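monoio's ownership-passing ("rent") I/O traits show how io_uring shapes the API: the kernel owns the buffer while an operation is in flight, so reads and writes take the buffer by value and hand it back with the result. A sketch modeled on the echo example in monoio's README (monoio 0.x; the exact trait names and buffer semantics may differ between versions):

```rust
use monoio::io::{AsyncReadRent, AsyncWriteRentExt};
use monoio::net::{TcpListener, TcpStream};

async fn echo(mut stream: TcpStream) -> std::io::Result<()> {
    let mut buf: Vec<u8> = Vec::with_capacity(4096);
    loop {
        // Buffer ownership moves into the read and comes back with the result.
        let (res, b) = stream.read(buf).await;
        buf = b;
        if res? == 0 {
            return Ok(()); // connection closed
        }
        let (res, b) = stream.write_all(buf).await;
        buf = b;
        res?;
        buf.clear();
    }
}

#[monoio::main]
async fn main() {
    let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
    loop {
        if let Ok((stream, _addr)) = listener.accept().await {
            // No Send bound: the task stays on the spawning core.
            monoio::spawn(echo(stream));
        }
    }
}
```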
embassy - async runtime for embedded systems. Designed for no_std environments with deterministic execution. The standard choice for embedded async programming.
Performance Characteristics
| Runtime | Context Switch Cost | Throughput | Memory Footprint |
|---|---|---|---|
| tokio | Varies by config; work-stealing adds overhead under contention | Excellent; benchmarked extensively | Larger due to feature breadth |
| smol | Lowest in single-threaded mode; comparable when multi-threaded | Good; predictable tail latency | Smallest |
| glommio | Low (io_uring reduces syscalls) | Excellent for I/O-bound on Linux | Moderate |
| monoio | Low (thread-per-core, io_uring) | Excellent for I/O-bound on Linux | Moderate |
Important: Performance metrics vary significantly based on workload characteristics, hardware, and configuration. Single-threaded modes reduce context switch overhead but limit CPU utilization. Multi-threaded modes improve throughput for CPU-bound tasks but introduce synchronization costs. Always benchmark with your specific workload.
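A quick way to keep yourself honest is a tiny timing harness around the operation you actually care about. This std-only sketch is illustrative (`ops_per_sec` is a made-up helper, not a library API); for serious numbers use criterion or a load generator such as wrk:

```rust
use std::time::Instant;

/// Times `iters` invocations of `op` and returns throughput in ops/sec.
/// A stand-in for a real load generator, good only for rough comparisons.
fn ops_per_sec<F: FnMut()>(iters: u32, mut op: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        op();
    }
    iters as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    let mut acc: u64 = 0;
    let rate = ops_per_sec(1_000_000, || acc = acc.wrapping_add(1));
    println!("{acc} ops, ~{rate:.0} ops/sec");
}
```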
Code Examples: TCP Echo Server
tokio Implementation
Cargo.toml:
[dependencies]
tokio = { version = "1", features = ["full"] }
use tokio::net::TcpListener;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let listener = TcpListener::bind("127.0.0.1:8080").await?;
loop {
let (mut socket, _) = listener.accept().await?;
tokio::spawn(async move {
let mut buf = [0; 1024];
loop {
let n = match socket.read(&mut buf).await {
Ok(0) => return, // Connection closed
Ok(n) => n,
Err(e) => {
eprintln!("read error: {e}");
return;
}
};
if let Err(e) = socket.write_all(&buf[..n]).await {
eprintln!("write error: {e}");
return;
}
}
});
}
}
smol Implementation
Cargo.toml:
[dependencies]
smol = "2"
use smol::net::TcpListener;
use smol::io::{AsyncReadExt, AsyncWriteExt};
fn main() -> Result<(), Box<dyn std::error::Error>> {
smol::block_on(async {
let listener = TcpListener::bind("127.0.0.1:8080").await?;
loop {
let (mut socket, _) = listener.accept().await?;
// Detach so the task keeps running after the handle goes out of scope:
// dropping a smol `Task` cancels it. In production, collect the handles
// (or use a task group) and await them on shutdown for graceful cleanup.
smol::spawn(async move {
let mut buf = [0; 1024];
loop {
let n = match socket.read(&mut buf).await {
Ok(0) => return, // Connection closed
Ok(n) => n,
Err(e) => {
eprintln!("read error: {e}");
return;
}
};
if let Err(e) = socket.write_all(&buf[..n]).await {
eprintln!("write error: {e}");
return;
}
}
})
.detach();
}
})
}
async-std Implementation (Legacy - Discontinued)
Cargo.toml:
[dependencies]
async-std = { version = "1", features = ["attributes"] } # Discontinued - migrate away
// WARNING: async-std is discontinued (2025)
// Migrate to smol or tokio for new projects
use async_std::net::TcpListener;
use async_std::prelude::*;
#[async_std::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let listener = TcpListener::bind("127.0.0.1:8080").await?;
loop {
let (mut socket, _) = listener.accept().await?;
async_std::task::spawn(async move {
let mut buf = [0; 1024];
loop {
let n = match socket.read(&mut buf).await {
Ok(0) => return,
Ok(n) => n,
Err(e) => {
eprintln!("read error: {e}");
return;
}
};
if let Err(e) = socket.write_all(&buf[..n]).await {
eprintln!("write error: {e}");
return;
}
}
});
}
}
Blocking Code Handling
Real-world network programming often requires mixing async with blocking operations. Each runtime provides mechanisms:
- tokio: `tokio::task::spawn_blocking` offloads blocking work to a dedicated thread pool
- smol: use the `blocking` crate or manage your own thread pool with channels
Improper handling of blocking code in async contexts will severely degrade performance across all runtimes.
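For tokio, the pattern looks like this (tokio 1.x with the `rt-multi-thread` and `macros` features; the summation stands in for genuinely blocking work such as std file I/O, std DNS lookups, or FFI calls):

```rust
use tokio::task;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Offload blocking work to the dedicated blocking pool so it
    // cannot stall the async worker threads.
    let sum = task::spawn_blocking(|| {
        // Stand-in for genuinely blocking work.
        (0u64..10_000_000).sum::<u64>()
    })
    .await?; // `?` propagates a JoinError if the task panicked
    println!("sum = {sum}");
    Ok(())
}
```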
Trait Compatibility Warning
Critical incompatibility: tokio's AsyncRead/AsyncWrite traits are distinct from the futures crate traits used by smol and the discontinued async-std. Code using one runtime's traits cannot directly work with libraries expecting the other without bridge crates:
- `tokio-util` provides `Compat` wrappers; ensure the version matches your tokio version
- the `futures` crate offers adapters via `Compat` types
Version compatibility note: Bridge crate versions must align with runtime versions. tokio-util 0.7.x requires tokio 1.x. Mismatched versions cause compilation failures or runtime panics.
This affects any code that passes stream types between runtime-specific libraries.
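As a sketch of the bridging direction futures-based (smol-side) libraries need, assuming tokio 1.x, tokio-util with its `compat` feature, and the `futures` crate:

```rust
use futures::io::AsyncReadExt; // futures-crate trait, as used by smol-side code
use tokio::net::TcpStream;
use tokio_util::compat::TokioAsyncReadCompatExt;

async fn read_all_via_futures_traits(stream: TcpStream) -> std::io::Result<Vec<u8>> {
    // `.compat()` wraps the tokio stream in an adapter that implements the
    // futures `AsyncRead` trait, so futures-based code can consume it.
    let mut stream = stream.compat();
    let mut buf = Vec::new();
    stream.read_to_end(&mut buf).await?;
    Ok(buf)
}
```

The reverse direction (futures traits into tokio traits) goes through `FuturesAsyncReadCompatExt` from the same module.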
Ecosystem Integration
tokio dominates with frameworks like:
- axum (HTTP server)
- warp (HTTP server)
- reqwest (HTTP client)
- tonic (gRPC)
- Most major async libraries target tokio first
smol provides building blocks for:
- async-executor (underlying executor)
- async-io (async I/O primitives)
- Often embedded in other runtimes
- Limited framework ecosystem; `smol-axum` exists but has minimal maintenance
- Prefer `async-compat` for tokio library compatibility
glommio supports:
- Custom storage engines
- High-performance file I/O workloads
- Limited framework ecosystem
monoio supports:
- High-throughput proxy and gateway workloads
- Growing ecosystem from CloudWeGo (monoio-netreq, etc.)
- Linux-only deployment
embassy supports:
- Embedded networking (embassy-net)
- Various MCU targets
- `no_std` environments
Selection Guidelines
Choose tokio when:
- Building production web services
- Need maximum ecosystem compatibility
- Require proven performance at scale
- Working with existing tokio-based libraries
- Want the most community support and documentation
Choose smol when:
- Minimal runtime overhead required
- Building embedded or resource-constrained systems
- Need explicit control over async primitives
- Migrating from async-std (similar API philosophy)
- Prefer predictable tail latency over maximum throughput
Choose glommio when:
- Building storage-intensive applications on Linux
- Need fine-grained scheduling control (shares, latency hints)
- io_uring benefits are measurable for your workload
- Willing to accept Linux-only deployment
Choose monoio when:
- Building high-throughput network proxies on Linux
- Want thread-per-core simplicity without `Send + Sync` constraints
- io_uring benefits are measurable for your workload
Choose embassy when:
- Targeting embedded/no_std environments
- Need deterministic async execution
- Working with microcontrollers
Do not use async-std: Discontinued in 2025. No security updates. Migrate existing projects to smol or tokio.
Migration Considerations
From async-std to smol: Straightforward migration. Similar API structures and shared design philosophy. Most code requires minimal changes.
From async-std to tokio: More involved. Different trait implementations require bridge crates. API patterns differ. Plan for refactoring.
Cross-runtime compatibility exists through:
- the `futures` crate compatibility layer with `Compat` wrappers
- `tokio-util` for bridging tokio and futures traits (version-match required)
- Standard `async`/`await` syntax, which works across all runtimes
The choice ultimately depends on ecosystem requirements, performance needs, and deployment constraints. For most projects, tokio's ecosystem dominance makes it the pragmatic choice.