Deadlock when captured stdout is too large #88

Closed · nbigaouette opened this issue Nov 12, 2020 · 3 comments
Labels: bug (Something isn't working), time sink (You may lose your mind if you try to fix this, but I won't stop you)

@nbigaouette commented Nov 12, 2020

By default, cucumber-rust captures the steps' stdout/stderr.

This makes for nice test output, as it's controlled by cucumber. Unfortunately, when the output is large, cucumber simply deadlocks. I was hit by this problem at least twice (my steps are quite verbose...) and it took me some time to figure out the issue.

Here's a test that, for me, deadlocks or not depending on the max_range value:

extern crate cucumber_rust as cucumber;
use async_trait::async_trait;
use std::convert::Infallible;

pub struct MyWorld;

#[async_trait(?Send)]
impl cucumber::World for MyWorld {
    type Error = Infallible;

    async fn new() -> Result<Self, Infallible> {
        Ok(Self {})
    }
}

mod example_steps {
    use cucumber::Steps;

    pub fn steps() -> Steps<crate::MyWorld> {
        let mut builder: Steps<crate::MyWorld> = Steps::new();

        builder
            .given("a dummy step", |world: crate::MyWorld, _step| world)
            .when("it outputs a lot to stdout", |world, _step| {
                // ---------------------------------------------------------------
                // On my machine, 2019 ends successfully while 2020 deadlocks
                let max_range = 2019;
                // ---------------------------------------------------------------
                for i in 0..max_range {
                    println!("[{} / {}] Dummy verbose step", i + 1, max_range);
                }

                world
            })
            .then("Cucumber does not deadlock", |world, _step| {
                println!("Done!");
                world
            });

        builder
    }
}

fn main() {
    let runner = cucumber::Cucumber::<MyWorld>::new()
        .features(&["./features/large_stdout"])
        .steps(example_steps::steps())
        // Capturing is on: step stdout/stderr is buffered by cucumber.
        .enable_capture(true);
    futures::executor::block_on(runner.run());
}

and the corresponding feature file:

Feature: Large stdout output
    Scenario: Large stdout output
        Given a dummy step
        When it outputs a lot to stdout
        Then Cucumber does not deadlock

I am not sure if cucumber can do something about this; I'm filing this issue so I can find it again the next time it happens to me :D

In case of deadlock, instructing cucumber not to capture the output (enable_capture(false)) is sufficient to prevent the deadlock (but stdout will now obviously be flooded with the steps' output).
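
Concretely, the workaround is a one-line change to the main() above:

fn main() {
    let runner = cucumber::Cucumber::<MyWorld>::new()
        .features(&["./features/large_stdout"])
        .steps(example_steps::steps())
        // Let step output go straight to the terminal instead of into
        // cucumber's capture buffer; verbose, but it cannot deadlock.
        .enable_capture(false);
    futures::executor::block_on(runner.run());
}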

I think the goal of capturing the steps' output is to be able to print the tail when a step fails. If that's the only reason, then some kind of ring buffer to store the stdout would prevent the deadlock. From what I can see, cucumber is using the shh crate to capture stdout/stderr. I'm not sure if it supports this...
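
To illustrate what I mean by a ring buffer, here's a minimal standalone sketch (my own toy, not cucumber's or shh's actual implementation): a fixed-capacity Write sink that keeps only the most recent bytes, so the tail can be printed on failure without ever buffering unbounded output:

use std::collections::VecDeque;
use std::io::{self, Write};

/// Keeps only the last `capacity` bytes ever written to it.
struct TailBuffer {
    buf: VecDeque<u8>,
    capacity: usize,
}

impl TailBuffer {
    fn new(capacity: usize) -> Self {
        Self { buf: VecDeque::with_capacity(capacity), capacity }
    }

    /// The retained tail, oldest byte first.
    fn tail(&self) -> Vec<u8> {
        self.buf.iter().copied().collect()
    }
}

impl Write for TailBuffer {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        for &byte in data {
            if self.buf.len() == self.capacity {
                self.buf.pop_front(); // evict the oldest byte
            }
            self.buf.push_back(byte);
        }
        Ok(data.len()) // never blocks, no matter how much is written
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

fn main() {
    let mut tail = TailBuffer::new(64);
    for i in 0..100_000 {
        writeln!(tail, "[{} / 100000] Dummy verbose step", i + 1).unwrap();
    }
    // Only the last 64 bytes survive.
    print!("{}", String::from_utf8_lossy(&tail.tail()));
}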

@bbqsrc (Member) commented Nov 12, 2020

Thanks, I hate it! Especially the implication that 2020 may never end.

This is worrying, because it's not inconceivable that a common use case is printing a large chunk of output that is only shown for debugging when an error occurs (this is how I often do things...).

Can you tell me which operating systems you can reproduce this issue on?

@bbqsrc added the bug, help wanted, and time sink labels on Nov 12, 2020
@nbigaouette (Author) commented

Yeah, it's not great... When that happens, it's hard to tell whether it's waiting for something (e.g. I have steps that poll a system for some condition, which can take a couple of minutes) or whether it has simply deadlocked.

I caught this yesterday because I'm running those tests in a k8s cluster; the livenessProbe failed and k8s restarted the pod. So this is an x86_64-unknown-linux-musl target with relatively low memory and CPU count.

I was able to reproduce my issue locally (by reducing actix-web's worker count to 1) inside Docker, and then finally natively on macOS (x86_64-apple-darwin). It's a MacBook Pro 15" (macOS Catalina 10.15.7, 16 GB).

I am pretty sure the 2020 limit above is system dependent. I guess simply increasing/decreasing its value on different systems will show the same problem at a different threshold.
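
My assumption about the mechanism (not verified against shh's internals): the captured stdout ends up in an OS pipe, and a write blocks as soon as the kernel's pipe buffer fills up with nobody draining it. That buffer is commonly around 64 KiB on Linux and smaller on macOS, which would explain both the system-dependent threshold and why ~2020 short lines is roughly where it tips over. A toy demo of a write blocking on a full pipe, using the os_pipe crate:

use std::io::Write;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn main() {
    // Keep the reader alive but never read from it. (Dropping it would
    // turn writes into broken-pipe errors instead of blocking.)
    let (_reader, mut writer) = os_pipe::pipe().unwrap();

    let written = Arc::new(AtomicUsize::new(0));
    let counter = Arc::clone(&written);
    thread::spawn(move || loop {
        // Succeeds until the kernel pipe buffer is full, then blocks forever.
        writer.write_all(&[0u8; 1024]).unwrap();
        counter.fetch_add(1024, Ordering::SeqCst);
    });

    thread::sleep(Duration::from_secs(1));
    println!(
        "wrote ~{} bytes before blocking (roughly the kernel pipe buffer size)",
        written.load(Ordering::SeqCst)
    );
}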

@tyranron (Member) commented

Fixed in #128
