Rework MARSConnection state machine #933
if (!IsPacketEmpty(readPacket))
{
    ReleasePacket(readPacket);
}
This looks redundant because of the ManagedSNI check that follows this block before the packet release, but it isn't. I tried keeping just one release in place, but every other combination causes a leak in either managed or native mode.
In managed mode we have to release the packet only if there was a synchronous result.
In native mode we have to release the packet whenever we received one.
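The two release rules above can be captured in a small decision function. This is an illustrative Python model, not the SqlClient source; the parameter names are assumptions chosen to mirror the prose.

```python
# Illustrative model of the packet-release rules described above.
# Not the actual SqlClient code; names are invented for clarity.

def must_release(is_managed_sni: bool, sync_result: bool, received: bool) -> bool:
    """Decide whether the read packet must be returned to the pool.

    Managed mode: a packet is only owned by this code path when the
    receive completed synchronously; an async completion hands
    ownership to the callback, which releases it there.
    Native mode: any packet that was actually received must be
    released here, regardless of how the receive completed.
    """
    if is_managed_sni:
        return sync_result   # release only on a synchronous result
    return received          # release whenever a packet arrived
```

Collapsing either branch into the other is exactly the "one release" simplification the comment describes, and it leaks a packet on one side or the other.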
src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNICommon.cs
I love it. It's definitely an improvement over the already-complex MARS logic. In order to get people comfortable with this change, can you list the tests that already cover the impacted functionality? Or maybe collect code coverage that shows all the changed code is covered? Also, are there any before/after perf changes?
src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNIMarsConnection.cs
src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNIPacket.cs
src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNIPhysicalHandle.cs
For coverage I'm relying on the standard tests. I rewrote and then ran all the functional and manual tests with managed mode forced on. The first few runs surfaced some problems, which I resolved, so the new code is certainly being exercised. I also had to track individual packet lifetimes, as you can tell from the code I needed to add in Packet.Debug.cs. For that I used a version of the repro from 659 (which is where @cheenamalhotra noticed that packets were being dropped to the GC) and ran it for a while; it exercises the single-packet passthrough because its packets are small. I haven't tried to benchmark this because the main purpose was correctness and code clarity, but it's a good point and I should see if anything interesting has changed. We should gain some memory back from recycling and a tiny amount of CPU on single-packet reads.
I tried to get some numbers and it looked like there might be a slight regression, but I've also found a very subtle locking issue that only seems to show up under high load. I'm going to see what I can do to track it down, but until I do please consider this too dangerous to merge.
I've got a cleanly running version now, which changes the state machine to be more complicated. It has to be, in order to follow the exact and important locking scenario: we have to lock while demuxing but must not lock while dispatching to the lower handle. This was implicit in the code as previously written; now that I understand it, I've put code checks in place as well as a few comments. The SMUXHeader has been changed from a class into a struct because I needed to be able to take an allocation-free clone of it: it's set up inside the lock and must be used outside it, so a copy is required. There have been some renames and scope changes to make it two nested state switches. I've been unable to find any noticeable performance difference between master and this PR, but as I said earlier perf wasn't an objective. The perf improvements are long-term amortization of the packet arrays; in standard operations these happened often enough to be noticeable under the debugger, but not enough to cause an obvious degradation in throughput. Some perf numbers from the DataAccessPerformance project, which implements the TechEmpower Fortunes benchmark, a raw throughput measurement:
Minor, probably noise, differences. I can't pick up anything interesting in microbenchmarks because they don't GC enough. So the question is whether the code is easier to understand and maintain than the previous version. I can't say I'm happy about how complicated it is or how much new code there is; I would have liked to keep it a lot shorter. I do think it has value for documenting the flow of data through the method, and it adds some important information about which locks need to be taken. Does the team think it's an improvement or adds enough value to be used?
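The locking rule in this comment thread (demux under the connection lock, dispatch outside it, with a value-type copy of the header bridging the two) can be sketched as follows. This is a hypothetical Python model, not the C# implementation; the class and method names are invented, and a frozen dataclass stands in for the struct SMUXHeader's copy-by-value behavior.

```python
import threading
from dataclasses import dataclass

@dataclass(frozen=True)          # value-type copy, like the struct SMUXHeader
class SmuxHeader:
    session_id: int
    payload_length: int

class MarsConnectionSketch:
    """Toy model of the demux/dispatch locking discipline."""

    def __init__(self, sessions):
        self._lock = threading.Lock()
        self._sessions = sessions        # session_id -> handler callable

    def handle_receive_complete(self, packet: bytes):
        with self._lock:
            # Demux under the lock: decode and copy the header, pick the
            # target session, and slice out the payload.
            header = SmuxHeader(packet[0], len(packet) - 1)
            payload = packet[1:]
            target = self._sessions[header.session_id]
        # Dispatch outside the lock: a session callback that re-enters
        # the connection cannot deadlock against the demux path. The
        # header copy taken above is what makes this safe.
        target(header, payload)
```

The copy is the key move: the header is set up inside the lock but consumed outside it, so a reference into a mutable, lock-protected buffer would be a race.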
Incidentally, eiriktsarpalis posted a really good example of why I chose to use inline typeof(T) instead of putting it into a variable: dotnet/runtime#46414 (comment)
src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNICommon.cs
src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNIMarsHandle.cs
Also, can you pull the latest changes from the main branch, make some changes, and push them back so the pipelines rerun the tests? I could not do it from our side.
Force-pushed from e14bc52 to 1ad1379.
With this change you show a good knowledge of SMP. Please include any additional information you have here, such as a diagram, especially one using the 'State' and 'DemuxState' enums as a reference, to help people understand it in the future.
Here is my understanding (feel free to verify it or replace it with a better version):
The MARSConnection class used a confusing implicit state machine in HandleReceiveComplete to demux incoming packets and dispatch them to the correct MARSHandle instances. This PR reworks that state machine to be explicit, allowing easier visualization of exactly what happens as packets travel through it. It also enables a small optimization: when a packet's header has been decoded and the rest of the packet is exactly the payload to be forwarded, the packet is passed through as-is rather than reconstructed, which cuts down on copies.
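The explicit-state demux described above can be illustrated with a toy byte-stream splitter. This is a hedged sketch, not the SqlClient code: the header here is a made-up 2-byte format (session id, payload length) rather than the real 16-byte SMUX header, and the enum values are illustrative stand-ins for the actual 'State'/'DemuxState' enums.

```python
from enum import Enum, auto

class DemuxState(Enum):
    HEADER = auto()    # accumulating bytes of the frame header
    PAYLOAD = auto()   # accumulating the payload that follows

def demux(chunks, header_len=2):
    """Split a stream of byte chunks into (header, payload) frames.

    The explicit state enum replaces implicit control flow: at any
    point it is obvious whether we are mid-header or mid-payload.
    When a payload arrives whole, its slice is forwarded as-is
    (the 'single packet passthrough') instead of being rebuilt.
    """
    state = DemuxState.HEADER
    buf = b""
    need = header_len
    header = None
    frames = []
    for chunk in chunks:
        buf += chunk
        while len(buf) >= need:
            if state is DemuxState.HEADER:
                header, buf = buf[:header_len], buf[header_len:]
                need = header[1]           # payload length from the header
                state = DemuxState.PAYLOAD
            else:
                frames.append((header, buf[:need]))   # payload passthrough
                buf = buf[need:]
                need = header_len
                state = DemuxState.HEADER
    return frames
```

For example, `demux([b"\x07\x03a", b"bc"])` reassembles one frame for session 7 with payload `b"abc"` even though it arrived split across two chunks, while a chunk containing a whole frame is sliced and forwarded without reconstruction.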