While downloading file the sha256 changes using helidon 2.6.1 #7407
@SkyGlancer You should be using `output.write(buffer, 0, len)`.
@spericas Hi, I changed the code to `output.write(buffer, 0, len)` but still no luck.
That's odd; someone would need to look at this in more detail. How does the file returned compare to the original? Is it of a different length? If you could provide a runnable reproducer, it would be easier for someone to evaluate faster.
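One generic way to compare a downloaded file against the original (a sketch, not code from this thread; the class and method names are made up for illustration) is to hash both files and compare the digests:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Hypothetical helper, not part of the reproducer: hashes two files with
// SHA-256 and prints whether their contents are identical.
public class CompareFiles {

    static String sha256(Path p) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(md.digest(Files.readAllBytes(p)));
    }

    public static void main(String[] args) throws Exception {
        // Two temp files with identical content should hash identically.
        Path a = Files.writeString(Files.createTempFile("orig", ".txt"), "hello");
        Path b = Files.writeString(Files.createTempFile("copy", ".txt"), "hello");
        System.out.println(sha256(a).equals(sha256(b))); // prints "true"
    }
}
```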
Reproduced with 2.x and 3.x:

```java
// Imports added for completeness; javax.ws.rs on Helidon 2.x
// (jakarta.ws.rs on 3.x).
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;

@Path("/download")
public class DownloadResource {

    @GET
    @Path("{fname}")
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    public Response download(@PathParam("fname") String fname) {
        return Response.ok()
                .entity((StreamingOutput) output -> Files.copy(Paths.get(fname), output))
                .build();
    }
}
```

```shell
dd if=/dev/urandom of=sample.txt bs=100m count=1
for i in $(seq 10) ; do
  curl http://localhost:8080/api/sample.txt -o /tmp/sample.txt 2> /dev/null
  cmp sample.txt /tmp/sample.txt || break
  sleep 1
done
```
I was able to reproduce as well. The failure is intermittent. The file size is correct but the content differs. In one case I examined, on a 100MB file the corruption started at byte 48,996,353 and continued for about 128KB; the rest of the file was OK. Changing the backpressure strategy to UNBOUNDED made the problem go away for me. Reproduced with both 2.6.2 and 3.2.2.
But Romain said setting UNBOUNDED did not work around the problem for him, so maybe it just altered timing.
Hi @barchetta, what does the UNBOUNDED backpressure strategy mean? Does it mean the whole file will be loaded into memory?
@SkyGlancer my understanding is that it means the server's reactive layer will not apply back-pressure to whoever is writing the data, so with a slow (reading) client it can end up buffering in memory. Some info here: BackpressureStrategy. @danielkec might have more insights. It looks like your code does some throttling itself. As a data point, could you try setting UNBOUNDED and see if it changes the symptoms you see? |
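As a toy illustration of that explanation (not Helidon's actual implementation; all names here are made up), a producer that enqueues chunks with no demand signal while a slow consumer drains them ends up holding the backlog in memory:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy sketch, not Helidon code: with an UNBOUNDED strategy nothing pushes
// back on the producer, so the gap between what is produced and what a
// slow consumer has drained sits in heap memory.
public class UnboundedSketch {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<byte[]> buffer = new ConcurrentLinkedQueue<>();
        int chunkSize = 1024;
        // Producer emits 100 chunks; no back-pressure slows it down.
        for (int i = 0; i < 100; i++) {
            buffer.add(new byte[chunkSize]);
        }
        // Slow consumer has only drained 10 chunks so far.
        for (int i = 0; i < 10; i++) {
            buffer.poll();
        }
        long backlogBytes = (long) buffer.size() * chunkSize;
        System.out.println("backlog bytes: " + backlogBytes); // prints "backlog bytes: 92160"
    }
}
```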
@barchetta Sounds like buffer ordering is somehow not preserved as part of the backpressure logic. I've modified our AutoFlushTest to compute a hash and I see that it fails intermittently as well. |
|
Hi @barchetta @spericas, I am able to get around the problem with `"backpressure-strategy": "UNBOUNDED"`, but we need to evaluate whether this is correct. I am worried that it can cause OOM issues. Do we know why AUTO_FLUSH is not working as expected?
@SkyGlancer Using UNBOUNDED is a workaround for now, but we should really understand what is going on with AUTO_FLUSH. I will take another look at it today and report back. |
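For reference, the workaround quoted above could be expressed like this in a YAML config file. The placement under the `server` node is an assumption on my part; verify the exact key location against your Helidon version's WebServer configuration reference.

```yaml
# Sketch of the UNBOUNDED workaround; key placement is an assumption.
server:
  port: 8080
  # Trades back-pressure for correctness; may increase heap usage
  # with slow clients, as discussed above.
  backpressure-strategy: "UNBOUNDED"
```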
Signed-off-by: Daniel Kec <[email protected]> Co-authored-by: Santiago Pericas-Geertsen <[email protected]>
* Fixed problem in AUTO_FLUSH backpressure strategy (#6556): the AUTO_FLUSH strategy could result in a pub-sub deadlock; the fix increments the buffer sum before checking the watermark and flushing. Also generates the large binary test file programmatically and uses a constant.
* Fix intermittent out-of-order chunk #7407 (#7441)

Signed-off-by: Santiago Pericasgeertsen <[email protected]>
Signed-off-by: Daniel Kec <[email protected]>
Co-authored-by: Santiago Pericas-Geertsen <[email protected]>
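The fix note above says the buffer sum must be incremented before the watermark check. A minimal sketch of why that ordering matters (hypothetical names, not the actual Helidon classes):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the ordering described in the fix note, not the
// actual Helidon implementation. If the watermark is checked before the
// new chunk is counted, a chunk that crosses the watermark fails to
// trigger a flush, and producer and consumer can end up waiting on each
// other (the pub-sub deadlock mentioned above).
public class AutoFlushSketch {
    static final long WATERMARK = 1000;
    static final AtomicLong buffered = new AtomicLong();

    // Buggy order: check the watermark first, then count the chunk.
    static boolean shouldFlushBuggy(long chunk) {
        boolean flush = buffered.get() >= WATERMARK; // chunk not counted yet
        buffered.addAndGet(chunk);
        return flush;
    }

    // Fixed order: count the chunk first, then check the watermark.
    static boolean shouldFlushFixed(long chunk) {
        return buffered.addAndGet(chunk) >= WATERMARK;
    }

    public static void main(String[] args) {
        buffered.set(0);
        boolean buggy = shouldFlushBuggy(1500); // crosses watermark, reports false
        buffered.set(0);
        boolean fixed = shouldFlushFixed(1500); // crosses watermark, reports true
        System.out.println(buggy + " " + fixed); // prints "false true"
    }
}
```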
Environment Details
Problem Description
While sending a file using StreamingOutput, downloading a large file (a specific file of around 511M) yields a different sha256sum (the binary content changes). This started happening after upgrading to 2.6 from the 1.x series.
```java
private final long THREAD_SLEEP_TIME = 1000;
private final long EACH_CHUNK_SIZE = 10485760;
private final int MEMORY_LIMIT_MB = 1 * 1024 * 1024;
```
Steps to reproduce