fix: check the server response size, should not be over 512kB #64

Open
wants to merge 1 commit into master

Conversation

pimeys (Owner) commented Dec 7, 2024

The server can be any arbitrary web push server. We should keep the response size under control, since we allocate a vector based on this value.
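
The check boils down to reading the advertised Content-Length before allocating the buffer. A minimal sketch of that idea (the constant and helper names here are illustrative, not necessarily the identifiers used in the patch):

use isahc::http::{header::CONTENT_LENGTH, Response};

// Illustrative cap matching the 512 kB limit in this PR.
const MAX_RESPONSE_SIZE: usize = 512_000;

// Parse Content-Length and refuse to allocate a buffer for an oversized body.
// A missing or unparseable header falls back to 0.
fn body_capacity<B>(response: &Response<B>) -> Result<usize, &'static str> {
    let content_length = response
        .headers()
        .get(CONTENT_LENGTH)
        .and_then(|value| value.to_str().ok())
        .and_then(|value| value.parse::<usize>().ok())
        .unwrap_or(0);

    if content_length > MAX_RESPONSE_SIZE {
        return Err("server response is larger than 512 kB");
    }

    Ok(content_length)
}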

pimeys requested a review from andyblarblar on December 7, 2024, 12:02
niklasf (Contributor) commented Dec 8, 2024

Hi. This is not quite enough. The server could still send a response stream without a Content-Length header. Self-contained example:

Cargo.toml

[package]
name = "max-content-length-repro"
edition = "2021"
publish = false

[dependencies]
axum = "0.7.9"
axum-streams = { version = "0.19.0", features = ["text"] }
futures-lite = "2.5.0"
isahc = "1.7.2"
tokio = { version = "1.42.0", features = ["full"] }

src/main.rs

use std::error::Error;

use axum::{response::IntoResponse, routing::get, Router};
use axum_streams::StreamBodyAs;
use futures_lite::AsyncReadExt;
use isahc::{http::header::CONTENT_LENGTH, HttpClient};
use tokio::{net::TcpListener, task};

async fn infinite_stream() -> impl IntoResponse {
    StreamBodyAs::text(futures_lite::stream::repeat(
        "lorem ipsum dolor sit amet consecutor".into(),
    ))
}

async fn evil_server(listener: TcpListener) {
    let app = Router::new().route("/", get(infinite_stream));
    axum::serve(listener, app).await.unwrap();
}

const MAX_CONTENT_LENGTH: usize = 512_000;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Start evil server
    task::spawn(evil_server(
        tokio::net::TcpListener::bind("127.0.0.1:8000")
            .await
            .unwrap(),
    ));

    // Make a request to the server. With the implementation below,
    // this will try to read the infinite response until the end,
    // consuming more and more memory.
    let client = HttpClient::new()?;
    let res = client.get_async("http://127.0.0.1:8000/").await?;

    let content_size = res
        .headers()
        .get(CONTENT_LENGTH)
        .and_then(|s| s.to_str().ok())
        .and_then(|s| s.parse().ok())
        .unwrap_or(0);

    if content_size > MAX_CONTENT_LENGTH {
        println!("not exploitable: rejected large content size");
        return Ok(());
    }

    let mut body = Vec::with_capacity(content_size);
    let mut chunks = res.into_body();

    chunks.read_to_end(&mut body).await?;

    assert!(
        body.len() <= MAX_CONTENT_LENGTH,
        "exploitable: body size {} too large",
        body.len()
    );

    Ok(())
}
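
One way to close this hole, regardless of what the Content-Length header claims, is to cap the number of bytes actually read. A minimal sketch (not the crate's eventual fix) replacing the read at the end of the repro above, using futures-lite's AsyncReadExt::take:

    // Never read more than the cap plus one byte, no matter what the
    // server advertises in Content-Length.
    let mut body = Vec::with_capacity(content_size.min(MAX_CONTENT_LENGTH));
    let mut chunks = res.into_body().take(MAX_CONTENT_LENGTH as u64 + 1);

    chunks.read_to_end(&mut body).await?;

    if body.len() > MAX_CONTENT_LENGTH {
        return Err("response body exceeds the 512 kB limit".into());
    }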

niklasf (Contributor) commented Dec 8, 2024

https://docs.rs/http-body-util/latest/http_body_util/struct.Limited.html looks interesting for a robust solution.
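
For illustration, Limited wraps any body and fails the read as soon as the limit is exceeded. A minimal sketch assuming a hyper 1.x body (the crate's actual client plumbing may differ):

use http_body_util::{BodyExt, Limited};

// Hypothetical helper: collect at most `limit` bytes from a hyper 1.x body.
async fn read_limited(
    body: hyper::body::Incoming,
    limit: usize,
) -> Result<bytes::Bytes, Box<dyn std::error::Error + Send + Sync>> {
    // `Limited` returns an error once more than `limit` bytes have been
    // received, so a malicious server cannot make us buffer an endless stream.
    let collected = Limited::new(body, limit).collect().await?;
    Ok(collected.to_bytes())
}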

niklasf (Contributor) commented Dec 19, 2024

Attempt that handles this case: #68.
