Rust Async Runtimes Compared: tokio vs smol vs io_uring for Network Programming
Rust's async ecosystem has consolidated around tokio as the dominant runtime, with smol serving specialized use cases and io_uring-based runtimes emerging for high-performance I/O workloads. Note: async-std was officially discontinued in 2025 (exact date unverified); existing projects should migrate to smol or tokio.
Versions referenced: tokio 1.x, smol 2.x, glommio 0.x, monoio 0.x (as of 2025).
Runtime Architecture Comparison
tokio
Work-stealing multi-threaded executor with dynamic task distribution. Uses a global reactor for I/O event notification across threads.
Key characteristics:
- Work-stealing scheduler (default), single-threaded option available
- DNS resolution offloaded to the blocking thread pool (fully async DNS requires an external resolver crate)
- Extensive ecosystem (axum, warp, reqwest, tonic)
- Mature and production-proven
- Industry standard for async Rust
smol
Modular single-runtime approach prioritizing minimal overhead and explicit control.
Key characteristics:
- Single-threaded by default, multi-threaded via `async-executor`
- Minimal runtime footprint
- Explicit async ecosystem building blocks
- Shares underlying executor components (async-task, async-io) with async-std's architecture
- Recommended migration target for async-std users (similar API philosophy)
async-std (Discontinued)
Status: Officially discontinued in 2025. No longer maintained. Existing codebases should migrate to smol (similar API philosophy) or tokio (broader ecosystem).
Architectural note: async-std and smol share common building blocks (async-task, async-executor, async-io) but maintained separate implementations. Migration is straightforward due to similar API design, not shared internals.
Modern Alternatives
glommio - io_uring-based runtime for Linux with a thread-per-core model. Provides superior throughput for high-I/O workloads but requires a recent Linux kernel (glommio documents 5.8+ for its io_uring features).
Differentiating features:
- Shares API: Fine-grained control over task priorities via "shares" - tasks receive CPU time proportional to their share allocation
- Latency hints: Explicit latency requirements per I/O operation, allowing the scheduler to optimize for throughput vs. latency trade-offs
- Best for storage-intensive applications where io_uring's zero-copy benefits and explicit scheduling control matter
monoio - io_uring-based thread-per-core runtime developed by ByteDance. Similar architecture to glommio but with different API ergonomics and growing adoption in proxy/server use cases.
Key characteristics:
- Thread-per-core with io_uring (Linux-only)
- Designed for high-throughput network workloads
- Avoids the `Send + Sync + 'static` constraints of work-stealing runtimes
- Gaining traction in proxy and gateway implementations
embassy - async runtime for embedded systems. Designed for no_std environments with deterministic execution. The standard choice for embedded async programming.
Performance Characteristics
| Runtime | Context Switch Cost | Throughput | Memory Footprint |
|---|---|---|---|
| tokio | Varies by config; work-stealing adds overhead under contention | Excellent; benchmarked extensively | Larger due to feature breadth |
| smol | Lowest in single-threaded mode; comparable when multi-threaded | Good; predictable tail latency | Smallest |
| glommio | Low (io_uring reduces syscalls) | Excellent for I/O-bound on Linux | Moderate |
| monoio | Low (thread-per-core, io_uring) | Excellent for I/O-bound on Linux | Moderate |
Important: Performance metrics vary significantly based on workload characteristics, hardware, and configuration. Single-threaded modes reduce context switch overhead but limit CPU utilization. Multi-threaded modes improve throughput for CPU-bound tasks but introduce synchronization costs. Always benchmark with your specific workload.
Code Examples: TCP Echo Server
tokio Implementation
Cargo.toml:
```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
```

```rust
use tokio::net::TcpListener;
use tokio::io::{AsyncReadExt, AsyncWriteExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut socket, _) = listener.accept().await?;
        tokio::spawn(async move {
            let mut buf = [0; 1024];
            loop {
                let n = match socket.read(&mut buf).await {
                    Ok(0) => return, // Connection closed
                    Ok(n) => n,
                    Err(e) => {
                        eprintln!("read error: {e}");
                        return;
                    }
                };
                if let Err(e) = socket.write_all(&buf[..n]).await {
                    eprintln!("write error: {e}");
                    return;
                }
            }
        });
    }
}
```
smol Implementation
Cargo.toml:
```toml
[dependencies]
smol = "2"
```

```rust
use smol::net::TcpListener;
use smol::io::{AsyncReadExt, AsyncWriteExt};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    smol::block_on(async {
        let listener = TcpListener::bind("127.0.0.1:8080").await?;
        loop {
            let (mut socket, _) = listener.accept().await?;
            // Note: dropping a smol Task handle cancels the task.
            // Call .detach() to let it run to completion independently,
            // or keep the handle and await it for structured shutdown.
            smol::spawn(async move {
                let mut buf = [0; 1024];
                loop {
                    let n = match socket.read(&mut buf).await {
                        Ok(0) => return Ok(()), // Connection closed
                        Ok(n) => n,
                        Err(e) => {
                            eprintln!("read error: {e}");
                            return Err(e);
                        }
                    };
                    if let Err(e) = socket.write_all(&buf[..n]).await {
                        eprintln!("write error: {e}");
                        return Err(e);
                    }
                }
            })
            .detach(); // Runs to completion; errors are logged inside the task
        }
    })
}
```
async-std Implementation (Legacy - Discontinued)
Cargo.toml:
```toml
[dependencies]
# Discontinued - migrate away. The "attributes" feature is
# required for the #[async_std::main] macro.
async-std = { version = "1", features = ["attributes"] }
```

```rust
// WARNING: async-std is discontinued (2025).
// Migrate to smol or tokio for new projects.
use async_std::net::TcpListener;
use async_std::io::prelude::*; // ReadExt / WriteExt

#[async_std::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut socket, _) = listener.accept().await?;
        async_std::task::spawn(async move {
            let mut buf = [0; 1024];
            loop {
                let n = match socket.read(&mut buf).await {
                    Ok(0) => return, // Connection closed
                    Ok(n) => n,
                    Err(e) => {
                        eprintln!("read error: {e}");
                        return;
                    }
                };
                if let Err(e) = socket.write_all(&buf[..n]).await {
                    eprintln!("write error: {e}");
                    return;
                }
            }
        });
    }
}
```
Blocking Code Handling
Real-world network programming often requires mixing async with blocking operations. Each runtime provides mechanisms:
- tokio: `tokio::task::spawn_blocking` offloads blocking work to a dedicated thread pool
- smol: use the `blocking` crate or manage your own thread pool with channels
Improper handling of blocking code in async contexts will severely degrade performance across all runtimes.
Trait Compatibility Warning
Critical incompatibility: tokio's AsyncRead/AsyncWrite traits are distinct from the futures crate traits used by smol and the discontinued async-std. Code using one runtime's traits cannot directly work with libraries expecting the other without bridge crates:
- `tokio-util` provides `Compat` wrappers - ensure the version matches your tokio version
- The `futures` crate offers adapters via `Compat` types
Version compatibility note: Bridge crate versions must align with runtime versions. tokio-util 0.7.x requires tokio 1.x. Mismatched versions cause compilation failures or runtime panics.
This affects any code that passes stream types between runtime-specific libraries.
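As an illustration of version alignment, a manifest pairing tokio 1.x with a matching `tokio-util` might look like this (versions are examples; check crates.io for current releases):

```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
# Compat wrappers live behind the "compat" feature
tokio-util = { version = "0.7", features = ["compat"] }
futures = "0.3"
```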
Ecosystem Integration
tokio dominates with frameworks like:
- axum (HTTP server)
- warp (HTTP server)
- reqwest (HTTP client)
- tonic (gRPC)
- Most major async libraries target tokio first
smol is built from modular crates:
- async-executor (underlying executor)
- async-io (async I/O primitives)
- Often embedded in other runtimes
- Limited framework ecosystem; `smol-axum` exists but has minimal maintenance - prefer `async-compat` for tokio library compatibility
glommio supports:
- Custom storage engines
- High-performance file I/O workloads
- Limited framework ecosystem
monoio supports:
- High-throughput proxy and gateway workloads
- Growing ecosystem from CloudWeGo (monoio-netreq, etc.)
- Linux-only deployment
embassy supports:
- Embedded networking (embassy-net)
- Various MCU targets
- `no_std` environments
Selection Guidelines
Choose tokio when:
- Building production web services
- Need maximum ecosystem compatibility
- Require proven performance at scale
- Working with existing tokio-based libraries
- Want the most community support and documentation
Choose smol when:
- Minimal runtime overhead required
- Building embedded or resource-constrained systems
- Need explicit control over async primitives
- Migrating from async-std (similar API philosophy)
- Prefer predictable tail latency over maximum throughput
Choose glommio when:
- Building storage-intensive applications on Linux
- Need fine-grained scheduling control (shares, latency hints)
- io_uring benefits are measurable for your workload
- Willing to accept Linux-only deployment
Choose monoio when:
- Building high-throughput network proxies on Linux
- Want thread-per-core simplicity without `Send + Sync` constraints
- io_uring benefits are measurable for your workload
Choose embassy when:
- Targeting embedded/no_std environments
- Need deterministic async execution
- Working with microcontrollers
Do not use async-std: Discontinued in 2025. No security updates. Migrate existing projects to smol or tokio.
Migration Considerations
From async-std to smol: Straightforward migration. Similar API structures and shared design philosophy. Most code requires minimal changes.
From async-std to tokio: More involved. Different trait implementations require bridge crates. API patterns differ. Plan for refactoring.
Cross-runtime compatibility exists through:
- `futures` crate compatibility layer with `Compat` wrappers
- `tokio-util` for bridging tokio and futures traits (version-match required)
- Standard async/await syntax works across all runtimes
The choice ultimately depends on ecosystem requirements, performance needs, and deployment constraints. For most projects, tokio's ecosystem dominance makes it the pragmatic choice.