Rust HTTP Server

Introducing Hyperlane: The Lightweight, High-Performance Rust HTTP Server

Hyperlane is a pure-Rust HTTP server library built on Tokio’s async runtime, designed to simplify network service development without sacrificing performance. With Hyperlane, you get a truly cross‑platform API that runs seamlessly on Windows, Linux, and macOS, while supporting advanced features like middleware, WebSocket, and Server‑Sent Events (SSE).

Key Features

  • Minimal dependencies: pure Rust on top of Tokio and the standard library
  • Cross‑platform compatibility: Same API on Windows, Linux, macOS
  • High performance: QPS on par with Tokio, outperforming Rocket, Go, Node.js
  • Flexible middleware: Request and response middleware for logging, authentication, headers
  • Real‑time support: Built‑in WebSocket and SSE handling (see the SSE sketch just below)
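The sample code later in this post shows middleware and WebSocket routes but not SSE, so here is a minimal, hedged sketch of what an SSE handler could look like. It reuses only the Context methods that appear in the sample code below; the "text/event-stream" content type, the streaming loop, and the assumption that send_response_body can be called repeatedly to push frames are illustrative guesses, not confirmed hyperlane API behavior.

async fn sse_route(ctx: Context) {
    // Assumption: headers are flushed by send(), after which each
    // send_response_body call pushes one "data: ..." frame to the client.
    ctx.set_response_header(CONTENT_TYPE, "text/event-stream".to_owned())
        .await
        .set_response_status_code(200)
        .await;
    let _ = ctx.send().await;
    for i in 0..3 {
        let frame: Vec<u8> = format!("data: tick {}\n\n", i).into_bytes();
        let _ = ctx.send_response_body(frame).await;
    }
}

Registering it would follow the same pattern as the other routes, for example server.route("/sse", sse_route).await.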

Installation

cargo add hyperlane

Quick Start Example

Clone the quick‑start repository and explore Hyperlane in action:

git clone https://github.com/ltpp-universe/hyperlane-quick-start.git
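Then build and run it (assuming the quick-start repository is a standard Cargo project; adjust the port and paths to whatever the repository actually configures):

cd hyperlane-quick-start
cargo run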

Sample Server Code

use hyperlane::*;

// Request middleware: runs before each route handler and attaches common response headers.
async fn request_middleware(ctx: Context) {
    let socket_addr: String = ctx.get_socket_addr_or_default_string().await;
    ctx.set_response_header(SERVER, HYPERLANE)
        .await
        .set_response_header(CONNECTION, CONNECTION_KEEP_ALIVE)
        .await
        .set_response_header(CONTENT_TYPE, content_type_charset(TEXT_PLAIN, UTF8))
        .await
        .set_response_header(DATE, gmt())
        .await
        .set_response_header("SocketAddr", socket_addr)
        .await;
}

// Response middleware: sends the response built by the handler, then logs the request and response.
async fn response_middleware(ctx: Context) {
    let _ = ctx.send().await;
    let request: String = ctx.get_request_string().await;
    let response: String = ctx.get_response_string().await;
    ctx.log_info(&request, log_handler)
        .await
        .log_info(&response, log_handler)
        .await;
}

async fn root_route(ctx: Context) {
    ctx.set_response_status_code(200)
        .await
        .set_response_body("Hello hyperlane => /")
        .await;
}

// WebSocket handler: echoes each received frame back to the client.
async fn websocket_route(ctx: Context) {
    let request_body: Vec<u8> = ctx.get_request_body().await;
    let _ = ctx.send_response_body(request_body).await;
}

#[tokio::main]
async fn main() {
    let server: Server = Server::new();
    // Bind address, port, and TCP_NODELAY.
    server.host("0.0.0.0").await;
    server.port(60000).await;
    server.enable_nodelay().await;
    // Logging: directory, built-in log/print output, and log size limit.
    server.log_dir("./logs").await;
    server.enable_inner_log().await;
    server.enable_inner_print().await;
    server.log_size(100_024_000).await;
    // Buffer sizes for HTTP request lines and WebSocket frames.
    server.http_line_buffer_size(4096).await;
    server.websocket_buffer_size(4096).await;
    // Register global middleware and the fixed routes.
    server.request_middleware(request_middleware).await;
    server.response_middleware(response_middleware).await;
    server.route("/", root_route).await;
    server.route("/websocket", websocket_route).await;
    let test_string: String = "Hello hyperlane".to_owned();
    // Dynamic route with a :text path parameter; the closure captures test_string
    // and ends with a deliberate test panic.
    server
        .route(
            "/test/:text",
            future_fn!(test_string, |ctx| {
                let param: RouteParams = ctx.get_route_params().await;
                println_success!(format!("{:?}", param));
                println_success!(test_string);
                panic!("Test panic");
            }),
        )
        .await;
    server.run().await.unwrap();
}
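
With the server above running, you can exercise its routes from another terminal. These curl invocations are illustrative and simply use the port and paths defined in the code; note that the /test/:text handler deliberately panics, so the exact response for that route depends on how the framework surfaces handler panics.

curl http://127.0.0.1:60000/
curl http://127.0.0.1:60000/test/hello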

Benchmark Results

1000 Concurrent Connections, 1,000,000 Requests

  1. Tokio runtime: 308,596.26 QPS
  2. Hyperlane framework: 307,568.90 QPS
  3. Rocket framework: 267,931.52 QPS
  4. Rust standard library: 260,514.56 QPS
  5. Go standard library: 226,550.34 QPS
  6. Gin framework: 224,296.16 QPS
  7. Node.js standard library: 85,357.18 QPS

360 Concurrent Connections, 60s Duration

  1. Tokio runtime: 340,130.92 QPS
  2. Hyperlane framework: 324,323.71 QPS
  3. Rocket framework: 298,945.31 QPS
  4. Rust standard library: 291,218.96 QPS
  5. Gin framework: 242,570.16 QPS
  6. Go standard library: 234,178.93 QPS
  7. Node.js standard library: 139,412.13 QPS
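
The post does not name the load generator behind these figures, so the command below is only a hypothetical way to run a comparable test yourself with wrk against the sample server's root route (360 connections for 60 seconds; the thread count is an arbitrary choice):

wrk -t8 -c360 -d60s http://127.0.0.1:60000/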

With nearly Tokio‑level throughput and the convenience of a high‑level API, Hyperlane empowers you to build fast, reliable, and scalable Rust web services—try it today!