HTTP2: Client and server disagree on active stream count #1586

Closed
Tratcher opened this issue Aug 28, 2019 · 49 comments · Fixed by #55835
Labels
area-System.Net.Http bug tenet-reliability Reliability/stability related issue (stress, load problems, etc.)

Comments

@Tratcher
Member

From @JamesNK on Saturday, August 24, 2019 2:24:08 AM

Issue found during HTTP2 stress testing. One HttpClient is sending multiple requests in parallel, then canceling them via RST_STREAM sent to the server. For an unknown reason the server and client disagree on the active stream count, and the server eventually sends RST_STREAM (REFUSED_STREAM) and GOAWAY (STREAM_CLOSED). This issue eventually leads to dotnet/aspnetcore#13405.

The client and server should agree on the active stream count, and the client should wait before sending new requests, without error.

Wireshark log: https://www.dropbox.com/s/5ndspth4t0qjyj6/clientreset-error.zip?dl=0

Exception on the client:

Unhandled exception. System.Net.Http.HttpRequestException: An error occurred while sending the request.
---> System.IO.IOException: The request was aborted.
---> System.IO.IOException: The response ended prematurely, with at least 9 additional bytes expected.
   at System.Net.Http.Http2Connection.ReadAtLeastAsync(Stream stream, Memory`1 buffer, Int32 minReadBytes)
   at System.Net.Http.Http2Connection.EnsureIncomingBytesAsync(Int32 minReadBytes)
   at System.Net.Http.Http2Connection.ReadFrameAsync(Boolean initialFrame)
   at System.Net.Http.Http2Connection.ProcessIncomingFramesAsync()
   --- End of inner exception stack trace ---
   at System.Net.Http.Http2Connection.Http2Stream.CheckResponseBodyState()
   at System.Net.Http.Http2Connection.Http2Stream.TryEnsureHeaders()
   at System.Net.Http.Http2Connection.Http2Stream.ReadResponseHeadersAsync(CancellationToken cancellationToken)
   at System.Net.Http.Http2Connection.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   --- End of inner exception stack trace ---
   at System.Net.Http.Http2Connection.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
   at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at System.Net.Http.HttpClient.FinishSendAsyncUnbuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
   at Grpc.Net.Client.Internal.GrpcCall`2.SendAsync(HttpClient client, HttpRequestMessage message)
   at Grpc.Net.Client.Internal.GrpcCall`2.GetResponseHeadersAsync()

The repro repo is currently not public. Contact @JamesNK or @Tratcher for the source code and instructions.

Copied from original issue: dotnet/aspnetcore#13406

@Tratcher
Member Author

From @jkotalik on Saturday, August 24, 2019 7:45:51 PM

So dotnet/aspnetcore#12704 was supposed to fix this issue. There may be a case we aren't covering.

@Tratcher Tratcher self-assigned this Aug 28, 2019
@Tratcher
Member Author

From @Tratcher on Saturday, August 24, 2019 8:10:53 PM

Indeed. dotnet/corefx#12704 was focused on the server terminating the request, but the new repro is focused on client aborts. A quick look through the code shows we did account for client RSTs, but maybe we missed something else.

@Tratcher
Member Author

From @JamesNK on Monday, August 26, 2019 11:07:11 PM

This is a bug but not a critical one. 3.1 fix?

@Tratcher
Member Author

Tratcher commented Aug 28, 2019

From @Tratcher on Wednesday, August 28, 2019 10:49:15 PM

Theory: The client has a race where it can cancel a stream, decrement its count, but not send a RST_STREAM. The server still considers the stream open and you exceed the stream limit.

Repro. I set the connection limit to 1, send two requests, and cancel one of them (the delay=100 request).

// Good:
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" received HEADERS frame for stream ID 2545 with length 31 and flags END_STREAM, END_HEADERS
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      Request starting HTTP/2 GET http://localhost:5005/?delay=100
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" received RST_STREAM frame for stream ID 2545 with length 4 and flags 0x0
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" received HEADERS frame for stream ID 2547 with length 29 and flags END_STREAM, END_HEADERS
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
      Request finished in 111.1166ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      Request starting HTTP/2 GET http://localhost:5005/?delay=0
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" sending HEADERS frame for stream ID 2547 with length 76 and flags END_HEADERS
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" sending DATA frame for stream ID 2547 with length 21 and flags NONE
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" sending HEADERS frame for stream ID 2547 with length 27 and flags END_STREAM, END_HEADERS
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
      Request finished in 0.2761ms 200
// Bad:
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" received HEADERS frame for stream ID 2549 with length 31 and flags END_STREAM, END_HEADERS
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      Request starting HTTP/2 GET http://localhost:5005/?delay=100
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" received HEADERS frame for stream ID 2551 with length 29 and flags END_STREAM, END_HEADERS
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" sending HEADERS frame for stream ID 2549 with length 76 and flags END_HEADERS
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" sending DATA frame for stream ID 2549 with length 23 and flags NONE
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HLPC0DDKPENV" sending HEADERS frame for stream ID 2549 with length 33 and flags END_STREAM, END_HEADERS
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
      Request finished in 1106537.7128ms 200
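// Client side of the repro (loop that races a canceled request against a normal one):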
            AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
            using var client = new HttpClient
            {
                BaseAddress = new Uri("http://localhost:5005"),
                DefaultRequestVersion = HttpVersion.Version20
            };
            for (int i = 0; i < 1000; i++)
            {
                // Send a second one that will cancel
                var r2Task = client.GetAsync("?delay=100", new CancellationTokenSource(50).Token);

                // Send one request normally
                var r1Task = client.GetAsync("?delay=0");

                try
                {
                    var r1 = await r1Task;
                    r1.EnsureSuccessStatusCode();
                }
                catch (Exception ex)
                {
                    Console.WriteLine("R1 fail: " + ex.Message);
                }

                try
                {
                    var r2 = await r2Task;
                    r2.EnsureSuccessStatusCode();
                }
                catch (OperationCanceledException)
                {
                    Console.WriteLine("R2 canceled");
                }
                catch (Exception ex)
                {
                    Console.WriteLine("R2 fail: " + ex);
                }
            }
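            // Server endpoint used by the repro (delay comes from the query string):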
            app.Run(async context =>
            {
                var delay = context.Request.Query["delay"].ToString();
                if (!string.IsNullOrEmpty(delay))
                {
                    await Task.Delay(int.Parse(delay));
                }
                await context.Response.WriteAsync($"Hello World! {context.Request.Protocol} {delay}");
            });

@Tratcher Tratcher removed their assignment Aug 28, 2019
@scalablecory
Contributor

@geoffkizer
Contributor

@Tratcher In your repro, when you say "I set the connection limit to 1" what does this mean? I'm assuming you are configuring Kestrel to set MAX_CONCURRENT_STREAMS to 1, correct? How do you do this?

@geoffkizer
Contributor

Theory: The client has a race where it can cancel a stream, decrement its count, but not send a RST_STREAM.

Yeah, it looks like there is a race here, introduced by the fix for dotnet/corefx#40180.

When the request is cancelled, and the request body is already complete (as is the case here, since it's a GET), we are now decrementing the stream count before we send the RST_STREAM. This means there's a window where, if another request is waiting on the stream count, it can start sending before the RST_STREAM is sent. See here: https://github.com/dotnet/corefx/blob/master/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/Http2Stream.cs#L377

We need to defer the call to Complete until after we send the RST_STREAM.

@geoffkizer
Contributor

There are two other places we call SendReset as well, aside from the one I linked to above. All of these have the same issue and will need to be fixed.

Basically, whenever we call SendReset, we should avoid calling Complete beforehand and call it after the SendReset instead.
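
For illustration, here is a minimal sketch of the ordering problem described above, using a SemaphoreSlim to stand in for the concurrent-streams credit. The type and method names are hypothetical, not the actual Http2Connection internals.

using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch -- not the actual SocketsHttpHandler code.
class StreamSlotSketch
{
    // One slot models the server's SETTINGS_MAX_CONCURRENT_STREAMS credit.
    private readonly SemaphoreSlim _streamSlots = new SemaphoreSlim(1, 1);

    // Stand-in for writing the RST_STREAM frame to the wire.
    private Task SendResetAsync(int streamId) => Task.Delay(10);

    // Buggy ordering: the slot is released (Complete) before RST_STREAM is written,
    // so a request waiting on the slot can open a new stream that the server will
    // refuse because it still counts the cancelled stream as open.
    public async Task CancelStreamBuggyAsync(int streamId)
    {
        _streamSlots.Release();           // Complete() happens too early
        await SendResetAsync(streamId);   // RST_STREAM races with the next HEADERS
    }

    // Fixed ordering: write RST_STREAM first, then release the slot.
    public async Task CancelStreamFixedAsync(int streamId)
    {
        await SendResetAsync(streamId);   // server learns the stream is closed
        _streamSlots.Release();           // Complete() only after the reset is sent
    }
}

This mirrors the eventual fix, which reorders Complete relative to Send Reset/Send EndOfStream (see the commit referenced at the bottom of the thread).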

@Tratcher
Member Author

@Tratcher In your repro, when you say "I set the connection limit to 1" what does this mean? I'm assuming you are configuring Kestrel to set MAX_CONCURRENT_STREAMS to 1, correct? How do you do this?

Yes, the streams-per-connection limit James linked to.
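
(For reference, a minimal sketch of setting that limit on Kestrel; Startup stands in for the test app's own startup class.)

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.ConfigureKestrel(options =>
                {
                    // Advertised to the client as SETTINGS_MAX_CONCURRENT_STREAMS.
                    options.Limits.Http2.MaxStreamsPerConnection = 1;
                });
                webBuilder.UseStartup<Startup>();
            });
}

With the limit at 1, the second request has to wait for the first stream's slot, which is what makes the race visible.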

@eiriktsarpalis
Member

eiriktsarpalis commented Aug 29, 2019

FWIW this reproduces immediately once I constrain Kestrel's MaxStreamsPerConnection in the corefx stress suite.

@davidsh
Contributor

davidsh commented Aug 29, 2019

Yeah, it looks like there is a race here, introduced by the fix for dotnet/corefx#40180.

@geoffkizer Will you be able to get a fix into master and then release/3.0? The fix for dotnet/corefx#40180 already went into release/3.0. So, if that fix regresses something then we need to fix the fix.

@geoffkizer
Contributor

Sure, I will put together a fix for it.

@JamesNK Can you send me the original repro?

@JamesNK
Member

JamesNK commented Aug 29, 2019

GrpcStress.zip

Run (and pay attention to the listening port):
GrpcGreeter > dotnet run

Run (and augment the url if necessary):
GrpcGreeterClient > dotnet run basic --cancellation-window-ms 1000 --total-requests 10000000 --url https://localhost:5001

@geoffkizer
Contributor

@JamesNK Thanks. How long does it usually take to repro?

@JamesNK
Member

JamesNK commented Aug 29, 2019

I just ran it and it reproduced almost instantly.

BasicStress
url: https://localhost:5001
totalRequests: 10000000
concurrent: 1000
enqueueTimeoutMs: 10000
dequeueTimeoutMs: 200
cancellationWindowMs: 1000
connectionLoad: 2147483647
cancellationLowerBoundMs: 0
serverDelayWindowMs: 100
Will connect to https://localhost:5001
Making 1 HttpClients.
Finished BasicStress in 0:00:00.4566555
Unhandled exception. System.Net.Http.HttpRequestException: An error occurred while sending the request.
 ---> System.IO.IOException: The request was aborted.
 ---> System.IO.IOException: The response ended prematurely, with at least 9 additional bytes expected.
   at System.Net.Http.Http2Connection.ReadAtLeastAsync(Stream stream, Memory`1 buffer, Int32 minReadBytes)
   at System.Net.Http.Http2Connection.EnsureIncomingBytesAsync(Int32 minReadBytes)
   at System.Net.Http.Http2Connection.ReadFrameAsync(Boolean initialFrame)
   at System.Net.Http.Http2Connection.ProcessIncomingFramesAsync()

@geoffkizer
Contributor

@JamesNK

When I try to run the repro, I get:

Unhandled exception. System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.

@JamesNK
Member

JamesNK commented Aug 29, 2019

@geoffkizer
Contributor

It looks like the client is already doing that, but I'm still getting an SSL error:

            HttpClientHandler handler = new HttpClientHandler();
            handler.ServerCertificateCustomValidationCallback = delegate { return true; };
            var httpClient = new HttpClient(handler)
            {
                BaseAddress = new Uri(url),
            };

@JamesNK
Member

JamesNK commented Aug 29, 2019

Hmmm, I don't know then. Just to get this sample working, did you try trusting the ASP.NET Core certificate?

Is there a bug in HttpClient around ignoring invalid certificates in HTTP/2?

@geoffkizer
Contributor

Is there a bug in HttpClient around ignoring invalid certificates in HTTP/2?

Not that I'm aware of. Our tests do this.

Hmmm, I don't know then. Just to get this sample working, did you try trusting the ASP.NET Core certificate?

How do I do that?

@JamesNK
Member

JamesNK commented Aug 29, 2019

It's in the documentation I linked.
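(On a development machine that typically means trusting the dev certificate with dotnet dev-certs https --trust.)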

@JamesNK
Member

JamesNK commented Aug 29, 2019

It looks like the client is already doing that, but I'm still getting an SSL error:

            HttpClientHandler handler = new HttpClientHandler();
            handler.ServerCertificateCustomValidationCallback = delegate { return true; };
            var httpClient = new HttpClient(handler)
            {
                BaseAddress = new Uri(url),
            };

Hmm, where do you see that code? The only place I see HttpClient created is in GetClientFactory, Program.cs, line 259:

var buckets = (from i in Enumerable.Range(0, bucketCount) select new HttpClient { BaseAddress = new Uri(url) }).ToList();

@geoffkizer
Contributor

Yeah, you are right. I had an old editor window open and got confused and looked at the wrong code for GrpcGreeterClient.

Apologies for the confusion...

@geoffkizer
Contributor

I can now get it to fail in about 5-10 seconds, but what I'm seeing is different from what you describe above:

Unhandled exception. System.Net.Http.HttpRequestException: An error occurred while sending the request.
---> System.IO.IOException: The request was aborted.
---> System.Net.Http.Http2StreamException: The HTTP/2 server reset the stream. HTTP/2 error code 'ENHANCE_YOUR_CALM' (0xb).

@geoffkizer
Contributor

@Tratcher @jkotalik What would cause Kestrel to send ENHANCE_YOUR_CALM?

@Tratcher
Member Author

That's what you get when too many requests are cancelled but the app hasn't had a chance to finish cleaning them up yet. We have another thread discussing that.

Did you try the repro code I gave above? It avoids that issue. https://github.com/dotnet/corefx/issues/40663#issuecomment-525953428

@karelz
Member

karelz commented Oct 4, 2019

@geoffkizer are you actively working on it?

@geoffkizer
Contributor

I have the fix ready. Still trying to validate it using stress and confirm that we're no longer seeing this.

I'll aim to get a PR out relatively soon.

@danmoseley danmoseley transferred this issue from dotnet/corefx Jan 9, 2020
@Dotnet-GitSync-Bot Dotnet-GitSync-Bot added area-System.Net.Http untriaged New issue has not been triaged by the area owner labels Jan 9, 2020
@jkotas jkotas added tenet-reliability Reliability/stability related issue (stress, load problems, etc.) and removed reliability labels Feb 4, 2020
@karelz karelz removed the untriaged New issue has not been triaged by the area owner label Feb 13, 2020
@karelz karelz added this to the 5.0 milestone Feb 24, 2020
@martincostello
Member

In case it's useful to you for this issue, I had one of the ASP.NET Core tests (as of dotnet/aspnetcore@5669e5e) fail locally with this error:

Test Name:	Settings_MaxConcurrentStreamsPost_Server(scheme: "http")
Test FullName:	Interop.FunctionalTests.Interop.FunctionalTests.HttpClientHttp2InteropTests.Settings_MaxConcurrentStreamsPost_Server(scheme: "http")
Test Source:	C:\Coding\dotnet\aspnetcore\src\Servers\Kestrel\test\Interop.FunctionalTests\HttpClientHttp2InteropTests.cs : line 1257
Test Outcome:	Failed
Test Duration:	0:00:00

Test Name:	Settings_MaxConcurrentStreamsPost_Server(scheme: "http")
Test Outcome:	Failed
Result StackTrace:	
at System.Net.Http.Http2Connection.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
   at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
   at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
   at System.Net.Http.HttpClient.FinishSendAsync(ValueTask`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts, Boolean buffered, Boolean async, CancellationToken callerToken, Int64 timeoutTime)
   at Microsoft.AspNetCore.Testing.TaskExtensions.TimeoutAfter[T](Task`1 task, TimeSpan timeout, String filePath, Int32 lineNumber) in C:\Coding\dotnet\aspnetcore\src\Testing\src\TaskExtensions.cs:line 29
   at Microsoft.AspNetCore.Testing.TaskExtensions.TimeoutAfter[T](Task`1 task, TimeSpan timeout, String filePath, Int32 lineNumber) in C:\Coding\dotnet\aspnetcore\src\Testing\src\TaskExtensions.cs:line 29
   at Interop.FunctionalTests.HttpClientHttp2InteropTests.Settings_MaxConcurrentStreamsPost_Server(String scheme) in C:\Coding\dotnet\aspnetcore\src\Servers\Kestrel\test\Interop.FunctionalTests\HttpClientHttp2InteropTests.cs:line 1302
--- End of stack trace from previous location ---
----- Inner Stack Trace -----
   at System.Net.Http.Http2Connection.ThrowRequestAborted(Exception innerException)
   at System.Net.Http.Http2Connection.Http2Stream.CheckResponseBodyState()
   at System.Net.Http.Http2Connection.Http2Stream.TryEnsureHeaders()
   at System.Net.Http.Http2Connection.Http2Stream.ReadResponseHeadersAsync(CancellationToken cancellationToken)
   at System.Net.Http.Http2Connection.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
----- Inner Stack Trace -----
Result Message:	
System.Net.Http.HttpRequestException : An error occurred while sending the request.
---- System.IO.IOException : The request was aborted.
-------- System.Net.Http.Http2StreamException : The HTTP/2 server reset the stream. HTTP/2 error code 'ENHANCE_YOUR_CALM' (0xb).

@karelz karelz modified the milestones: 5.0.0, 6.0.0 Aug 13, 2020
@karelz
Member

karelz commented Aug 13, 2020

Moving to 6.0 and unassigning.
Next step: Figure out if it still reproduces.

@glucaci

glucaci commented Oct 14, 2020

Update 2:
Fixed it with AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true); because TLS termination is done by the application gateway.
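
(For context, a minimal sketch of that client-side configuration for unencrypted HTTP/2; the address is a placeholder, and the VersionPolicy line is the .NET 5+ equivalent of the switch.)

using System;
using System.Net;
using System.Net.Http;

// .NET Core 3.x opt-in for unencrypted ("prior knowledge") HTTP/2, needed when
// TLS terminates at a gateway in front of the service.
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

using var client = new HttpClient
{
    BaseAddress = new Uri("http://gateway.internal"),             // placeholder address
    DefaultRequestVersion = HttpVersion.Version20,
    DefaultVersionPolicy = HttpVersionPolicy.RequestVersionExact  // .NET 5+ only
};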

Update:
I've added the Kestrel trace logs.
This test was done with MaxStreamsPerConnection = 1. If the stream limit is not set, the only difference in the logs is that the message 'Connection id "x" reached the maximum number of concurrent HTTP/2 streams allowed' no longer appears.

--- Original post ---

I still have this error with SDK version 5.0.100-rc.2.20479.15:

dbug: Microsoft.AspNetCore.Server.Kestrel[1]
      Connection id "0HM3GSRG49SC7" started.
dbug: Microsoft.AspNetCore.Server.Kestrel.Https.Internal.HttpsConnectionMiddleware[3]
      Connection "0HM3GSRG49SC7" established using the following protocol: Tls12
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HM3GSRG49SC7" sending SETTINGS frame for stream ID 0 with length 18 and flags NONE
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HM3GSRG49SC7" sending WINDOW_UPDATE frame for stream ID 0 with length 4 and flags 0x0
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HM3GSRG49SC7" received SETTINGS frame for stream ID 0 with length 24 and flags NONE
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HM3GSRG49SC7" sending SETTINGS frame for stream ID 0 with length 0 and flags ACK
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HM3GSRG49SC7" received WINDOW_UPDATE frame for stream ID 0 with length 4 and flags 0x0
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HM3GSRG49SC7" received HEADERS frame for stream ID 1 with length 5049 and flags END_STREAM, END_HEADERS, PRIORITY
dbug: Microsoft.AspNetCore.Server.Kestrel[40]
      Connection id "0HM3GSRG49SC7" reached the maximum number of concurrent HTTP/2 streams allowed.
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HM3GSRG49SC7" received SETTINGS frame for stream ID 0 with length 0 and flags ACK
...
info: Microsoft.ReverseProxy.Service.Proxy.HttpProxy[48]
      Request: An error was encountered before receiving a response.
      System.Net.Http.HttpRequestException: An error occurred while sending the request.
       ---> System.IO.IOException: The request was aborted.
       ---> System.IO.IOException: The response ended prematurely while waiting for the next frame from the server.
         at System.Net.Http.Http2Connection.<ReadFrameAsync>g__ThrowMissingFrame|48_1()
         at System.Net.Http.Http2Connection.ReadFrameAsync(Boolean initialFrame)
         at System.Net.Http.Http2Connection.ProcessIncomingFramesAsync()
         --- End of inner exception stack trace ---
         at System.Net.Http.Http2Connection.ThrowRequestAborted(Exception innerException)
         at System.Net.Http.Http2Connection.Http2Stream.CheckResponseBodyState()
         at System.Net.Http.Http2Connection.Http2Stream.TryEnsureHeaders()
         at System.Net.Http.Http2Connection.Http2Stream.ReadResponseHeadersAsync(CancellationToken cancellationToken)
         at System.Net.Http.Http2Connection.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
         --- End of inner exception stack trace ---
         at System.Net.Http.Http2Connection.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
         at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
         at Microsoft.ReverseProxy.Service.Proxy.HttpProxy.ProxyAsync(HttpContext context, String destinationPrefix, HttpMessageInvoker httpClient, RequestProxyOptions proxyOptions, ProxyTelemetryContext proxyTelemetryContext)
...
trce: Microsoft.AspNetCore.Server.Kestrel[37]
      Connection id "0HM3GSRG49SC7" sending HEADERS frame for stream ID 1 with length 49 and flags END_STREAM, END_HEADERS

I'm using Microsoft.ReverseProxy and trying to connect to a service running in Kubernetes. In-cluster communication works, but going through the ingress from outside does not.

@alnikola alnikola self-assigned this Jun 21, 2021
@ghost ghost added the in-pr There is an active PR which will close this issue when it is merged label Jul 16, 2021
@ghost ghost removed the in-pr There is an active PR which will close this issue when it is merged label Jul 20, 2021
alnikola added a commit that referenced this issue Jul 20, 2021
…5835)

It changes the order of the Complete and Send Reset/Send EndOfStream operations to prevent creating a new Http2Stream while the old one has not yet written its final frame to the wire.

Fixes #1586
@ghost ghost locked as resolved and limited conversation to collaborators Aug 19, 2021