[CSharp] Allow Span<byte>-based parsing in CodedInputStream #3431
Came here to say the same thing. Byte array copying inside Protobuf3 is one of the biggest sources of GC pressure in Akka.NET; we would like to be able to use pooled byte arrays. Yes, we get it - it's dangerous. Understood, but let end-users take the risk in order to put Protobuf to work in high-performance contexts. |
A Protobuf Span API could look like this:

namespace Google.Protobuf
{
public class CodedInputStream
{
public CodedInputStream(ReadOnlySpan<byte> buffer);
…
}
public class CodedOutputStream
{
public CodedOutputStream(Span<byte> buffer);
…
}
public class MessageParser
{
public IMessage ParseFrom(ReadOnlySpan<byte> input);
…
}
public static class MessageExtensions
{
public static void MergeFrom(this IMessage message, ReadOnlySpan<byte> data);
public static void WriteTo(this IMessage message, Span<byte> output);
…
}
} |
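For illustration, a caller could then parse straight out of a pooled buffer without any intermediate copy. A minimal sketch against the proposed surface (MyMessage and FillFromSocket are placeholders, not part of the proposal):

using System;
using System.Buffers;
using Google.Protobuf;

// Rent a reusable buffer instead of allocating a fresh byte[] per message.
byte[] rented = ArrayPool<byte>.Shared.Rent(4096);
try
{
    int written = FillFromSocket(rented);                  // hypothetical I/O helper
    var payload = new ReadOnlySpan<byte>(rented, 0, written);
    var msg = new MyMessage();
    msg.MergeFrom(payload);                                // proposed Span overload
}
finally
{
    ArrayPool<byte>.Shared.Return(rented);                 // buffer goes back to the pool, no garbage
}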
@Aaronontheweb At least this class should be public |
Thanks for backing me up :). If this proposal is accepted I am happy to create a proper PR. The problem with Span/ReadOnlySpan is that AFAIK it is not available yet, and also that @jskeet will not be very keen on accepting a PR requiring very new compiler and runtime support. So I would probably rather go for the unsafe approach, which can be widely used today. |
@mkosieradzki if they don't want to go forward with just exposing the |
@mkosieradzki |
@alexvaluyskiy Thanks. I have just checked that this package was released in preview 3 months ago. Do you know whether it requires a specific compiler version? Or are you aware of any roadmap/official documentation? @Aaronontheweb ArraySegment is not comparable to Span, because ArraySegment requires |
I'm in two minds about this. I do see the point - but I'm definitely concerned about the level of complexity we end up with, in terms of lots of different builds for different scenarios. (Adding a dependency for Span also makes me nervous, but I'm gradually warming to the idea of ValueTask for the async part...) The async aspect adds another layer of complexity here - by the time we've got a matrix of managed/unmanaged, async/non-async, and multiple versions of .NET supported, I think it's going to get tricky. It makes it really easy to break customers with what may seem like a trivial change. There's also the aspect of time commitment from Google to maintain all of this. I'm not on the protobuf team, and have a full plate already - so @anandolee would need to be very comfortable with it all. One option you may want to consider is forking the library entirely, creating an |
@jskeet Thanks a lot for your response. My opinion would be to take the System.Memory path (without explicit unsafe), so I would suggest waiting for it. I also think it's a very similar case to ValueTask (and System.Buffers for a shared buffer pool) - it's yet another dependency required to achieve optimal performance. BTW, for the more distant future I am also researching different ways to achieve arena allocations for C#. There is a promising project called Snowflake: https://www.microsoft.com/en-us/research/publication/project-snowflake-non-blocking-safe-manual-memory-management-net/# . |
Yes, I'm definitely happier with |
@jskeet Thanks! I will try to create a PR in August so we can preview this. |
@mkosieradzki @jskeet I think the new |
Exactly what I was afraid of - Span requires a new compiler version: aspnet/Announcements#268 ... |
Let's see where it lands, in terms of requirements. It may be that it'll be harmless to expose it so long as Google.Protobuf is compiled with C# 7.2, which I'd be comfortable with (after that's released). |
Tooling support for the C# 7.2 Span framework types was added with VS2017 15.5, which was RTMed on Dec 4. Span is now well documented and there are a couple of good collateral pieces, including one by Stephen Toub, linked here. https://blogs.msdn.microsoft.com/dotnet/2017/11/15/welcome-to-c-7-2-and-span/ A couple of issues for discussion:
I'd be willing to work on this if there were clear interest (or acceptance criteria) with respect to number two, and we're within about six weeks on number one (@anandolee & @jskeet). |
I think it's reasonable to require that anyone developing the library uses VS2017 15.5 or the equivalent .NET Core SDK. There's a chunk of work required in order to update the SDK for continuous integration though. Whether the Span methods are included in all output libraries is a different matter. We'll need to look at the dependencies and whether there are issues consuming Span from older versions of VS/C#. (For example, if VS2015 users could easily end up using Span in a dangerous way, we may want to add a netstandard2.0 target and only expose Span there. We'll have to see.) I think the first port of call should be some prototyping and benchmarking. |
I think there's good potential for using Span optimizations in gRPC C# (general idea: there's the grpc_csharp_ext native library, and messages need to be moved between the managed and native layers, where copying can be expensive. Being able to transparently address both managed and unmanaged memory with the same code seems useful.) - but we haven't really looked into these optimizations in detail (and there are complications, so it's not possible to just say whether it is going to be worth it without proper analysis and some experimenting). |
I have started some experimentation in #3530. I need to revisit my experiments now that 15.5 is out. Also it's important to remember that
This is especially difficult in the context of #3166 - I have a lot of doubts related to proper async support. I would really love to start a high-level design discussion. |
Perhaps we should first discuss where we believe the value for C# protobuf could originate with the new framework types and then try to align on a preliminary approach to assess whether that value actually exists. I'm mindful that this is already a well-crafted code-base and we also face the barrier/cost @jskeet noted of changing the SDK. No one wishes to waste their time developing PRs that never see the light of day, so we should try to fail sooner rather than later in the case that there isn't much value. The rationale for the first wave of these framework types can be boiled down to a few major categories, based on what I'm seeing. Feel free to add or subtract from this picture:
I did a little preliminary mechanical code analysis/triage and here is what I found. Outside of the context of unit tests, there are about 150 instances in the code of obvious allocations as indicated from the following starter list of search terms: "new byte["; ".Copy"; ".BlockCopy"; ".ToArray("; ".Substring("; "new MemoryStream"; ".Clone()." If you are interested, you can see the list in the attached spreadsheet. Of these, probably only a small subset would be worth attacking, at least initially, and specifically those that:
(As we know, there is no unsafe code presently in the code base.) Purely for the purposes of initial discussion, a generic plan for the first couple of iterations in this type of situation, where we're skeptical of the benefits and want to proceed cautiously to avoid time wastage, might be as follows:
Let's have a good round of discussion on these details, alternative approaches, and any other thoughts. If something like the above turns out to make sense, I've also included a list of logistical questions in the attached file, which someone from Google could address. There are also some resources on this type of code-optimization effort. |
@mkosieradzki can you clarify the backwards compatibility of Span:
|
@jtattermusch
According to: https://github.com/dotnet/corefxlab/blob/master/docs/specs/span.md
From the benchmarks at http://adamsitnik.com/Span/ we can see a 5-10% performance degradation of Spans vs arrays on reads (on pre-.NET Core 2.0 runtimes); however, I believe that introducing safe stackallocs might help regain the performance even on older runtimes. With this in mind, we have 2 possible approaches:
Also, to address the problems with async support:
This approach should also be compatible with the gRPC streaming approach (and this is the level where I believe asynchrony fits best). It minimizes the time a buffer needs to be pinned and strongly simplifies the code. As I have mentioned before, I have created a prototype of an unsafe version of protobuf, even introducing an arena-allocator approach (see #3530). I believe that arena allocation together with the unsafe approach will bring even more benefits. Last but not least: using unmanaged memory can allow faster parsing by unsafe-casting buffers into primitive types like int or long. Both protobuf and x86 are little-endian, which should significantly improve parsing speed. |
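As a concrete illustration of the little-endian point, a fixed32 read over a span can become a single unaligned load; a sketch using BinaryPrimitives from the System.Memory package (not code from the linked prototype):

using System;
using System.Buffers.Binary;

static uint ReadFixed32(ReadOnlySpan<byte> buffer, ref int pos)
{
    // Protobuf fixed32 is little-endian on the wire, so on x86/x64 this is
    // effectively a direct 4-byte load instead of assembling the value byte by byte.
    uint value = BinaryPrimitives.ReadUInt32LittleEndian(buffer.Slice(pos, 4));
    pos += 4;
    return value;
}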
One more point about arena allocation support: in many scenarios long-lived heap-allocated objects are not required or helpful at all. The fact that we are using C# should not force us to downgrade to a Java-like approach with zero control over allocations. |
I'm thinking of Span as useful primarily in the narrow case of discrete, single-threaded, synchronous functions and their helpers. In that case, the ability to potentially put a short array on the stack or to encapsulate neatly a reference to a slice or substring that can be passed to a helper without incurring an allocation or muddying the water with offsets is a nice (but limited) gain. As far as the (different) topic of a public interface for core memory-encapsulating buffer objects, I'll defer to the prior participants in that conversation except to note that in my own work, which heavily depends on these, I almost always use such objects now in the context of various (multi-threaded and often parallelized) pipeline patterns in which exposing (heap-based) arrays as public properties is indeed indispensable. I try to avoid GC pressure from a ton of small buffer or string allocations in those patterns; size buffers based on what's found to be optimal for a machine architecture (not individual document/message/cell size); and I reuse managed arrays in rotating, multi-segment buffers (with coarse-grained write vs. read gating across stages). For what it's worth, my own benchmarking has not demonstrated performance benefits for sequential access scenarios from using pointers with or without native memory regions in that particular context (though I do use pointers, unsafe code, and native memory/file handles for other use cases). All the same, a reusable core buffer strategy requires considerable bookkeeping. A preference for managed core buffer objects with ordinary indexed access may come down simply to managed lifetime (as has been noted) and cross-platform portability. If folks ultimately conclude that Spans vs. core buffer refactoring are different topics, perhaps we should spin off Spans into a separate issue? |
@mkosieradzki I looked a bit more into what the dependency on Span<> would mean for Protobuf and gRPC: System.Memory depends on netstandard1.0 (I believe it will use the unoptimized "fallback" version unless you target netstandard2.0), which means you can use it in net45 projects, but you will need a newer version of nuget and a newer IDE to be able to build those projects (they need to have knowledge of what "netstandard1.0" is). gRPC currently targets net45 and netstandard1.5; Protobuf targets net45 and netstandard1.0. As both gRPC and Protobuf currently target net45 explicitly, adding a dependency on System.Memory would have this effect:
|
@jtattermusch I might be wrong, but my understanding is:
|
Coming to this discussion late. Having gathered a little bit of experience working on a Span-based API for LMDB (https://github.com/LMDB/lmdb), I would agree that an entirely new API is necessary. From a user perspective the simplest thing would be to add an overload of WriteTo, and one for parsing (see the sketch after this comment). Benefits are two-fold:
|
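Presumably the overloads meant here mirror the extension methods proposed earlier in the thread; a reconstructed guess, not the commenter's original snippet:

// Reconstructed from the proposal earlier in this thread.
public static void WriteTo(this IMessage message, Span<byte> output);
public static void MergeFrom(this IMessage message, ReadOnlySpan<byte> data);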
@jtattermusch I was rather thinking about When iterating over a BTW. My preliminary results
ParseUsingNew is a heavily-inlined version of the classic parser (classic already benefits from some inlining, so it is not a true baseline), optimized with an additional parameter. The current experimental code is available at https://github.com/mkosieradzki/protobuf/tree/spans/csharp/src/TestProtoPiper/Benchmarks - it looks pretty bad, but it's only a prototype. |
Found 0.7us ;) and accidentally boosted classic parser
It was hidden in an inefficient IsAtEnd implementation which didn't take the fast track if |
Found another 0.2us in toxic inlining of slow-path, ~1.2 to go :)
|
Picked up the low-hanging fruit of indirect
Long story short - I think that the concept of adding |
@jtattermusch I have created a branch https://github.com/mkosieradzki/protobuf/tree/spans-pr with added Span support (there is also a branch where I do most of the active development: https://github.com/mkosieradzki/protobuf/tree/spans-pr-workspace). It does not have all of the aforementioned optimizations (but a significant amount), however it is feature complete, passes all tests, and has Code Access Security attributes. I have split my changes into 4 commits: SDK change, protoc change, regenerated code, updated tests. This branch should be a nice starting point. How should we distribute/name/etc. this package before it lands in the upstream repo? |
Progress report for https://github.com/mkosieradzki/protobuf/tree/spans-pr-workspace
Future work:
|
@jtattermusch code in branch https://github.com/mkosieradzki/protobuf/tree/spans-pr-workspace is now entirely |
@jtattermusch I'm assigning this issue to you since you have been following up with the proposal. |
@jtattermusch It's ready. I don't plan any significant work on this repo now. I have also merged spans-pr-workspace to span-pr. |
This was a great discussion to read and I'm happy to share that we went through a very similar exercise when we were writing the HTTP/1.1 parser for Kestrel. The conclusion was that for optimal performance when using ReadOnlySequence, you need to parse a Span at a time, and only use the higher-level multi-buffer operations when crossing buffer boundaries. Simple example here - https://github.com/aspnet/KestrelHttpServer/blob/612fcca7291e5fad753fccbd8b15279c21e2e288/src/Kestrel.Core/Internal/Http/HttpParser.cs#L41-L62 More complex example here - https://github.com/aspnet/KestrelHttpServer/blob/612fcca7291e5fad753fccbd8b15279c21e2e288/src/Kestrel.Core/Internal/Http/HttpParser.cs#L198-L209 We're looking at adding a ref struct BufferReader/BufferWriter for the next version to make things a bit easier to use and more efficient in the common case. We ended up doing this in Kestrel and work is happening now to make it part of the BCL (https://github.com/dotnet/corefxlab/blob/08a1874a0ca4a1a889b2045801c309a1e8575458/src/System.Buffers.ReaderWriter/System/Buffers/Reader/BufferReader.cs). The other interesting thing in this discussion was about async parsing vs sync parsing. I believe that for optimal performance, the best thing to do is to write low-level synchronous parsers and build asynchronous parsing on top of that. It looks like you guys have come to a similar conclusion here. In my mind there are 2 patterns:

1. The parser is stateless and works on complete messages
2. The parser is stateful and resumable
   - The state can be passed into the parser
   - The state can be kept by the parser
The first makes the parser very simple but requires consumers to worry about the max message size to avoid a DoS. The second parser is much harder to write and is where async helps (it basically automatically stores the state on the heap and resumes on your behalf). This state now needs to be managed manually by the parser itself. Using pipelines, though, the unconsumed buffer is managed on your behalf, so the parser just needs to know where it left off and needs to be able to resume based on the parser context. Here's how I think each of these could look:

Stateless Parsing

async Task StatelessParsing()
{
var input = connection.Input;
while (true)
{
var result = await input.ReadAsync();
var buffer = result.Buffer;
try
{
if (result.IsCanceled)
{
break;
}
if (!buffer.IsEmpty)
{
while (Protocol.TryParseMessage(ref buffer, out var message))
{
await _dispatcher.DispatchMessageAsync(connection, message);
}
}
if (result.IsCompleted)
{
if (!buffer.IsEmpty)
{
throw new InvalidDataException("Connection terminated while reading a message.");
}
break;
}
}
finally
{
// The buffer was sliced up to where it was consumed, so we can just advance to the start.
// We mark examined as buffer.End so that if we didn't receive a full frame, we'll wait for more data
// before yielding the read again.
input.AdvanceTo(buffer.Start, buffer.End);
}
}
}

Stateful Parsing

async Task StatefulParsing()
{
var input = connection.Input;
ParserContext parserContext = default;
while (true)
{
var result = await input.ReadAsync();
var buffer = result.Buffer;
try
{
if (result.IsCanceled)
{
break;
}
if (!buffer.IsEmpty)
{
while (Protocol.TryParseMessage(ref buffer, ref parserContext, out var message))
{
await _dispatcher.DispatchMessageAsync(connection, message);
}
}
if (result.IsCompleted)
{
if (!buffer.IsEmpty)
{
throw new InvalidDataException("Connection terminated while reading a message.");
}
break;
}
}
finally
{
// The buffer was sliced up to where it was consumed, so we can just advance to the start.
// We mark examined as buffer.End so that if we didn't receive a full frame, we'll wait for more data
// before yielding the read again.
input.AdvanceTo(buffer.Start, buffer.End);
}
}
}

Of course this is just the outer code; the more complicated code exists in the parser itself, as each of the methods needs to flow the context in order to resume parsing.

Glad to see all of the energy and progress here! |
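In protobuf terms, the TryParseMessage used in both loops above could follow that span-at-a-time advice: take the fast path when a frame sits in a single segment and copy only when it straddles segments. A minimal sketch, assuming a 4-byte little-endian length prefix and a hypothetical Span-capable ParseFromSpan helper (MyMessage is a placeholder; real gRPC framing differs):

using System;
using System.Buffers;
using System.Buffers.Binary;

static bool TryParseMessage(ref ReadOnlySequence<byte> buffer, out MyMessage message)
{
    message = null;
    if (buffer.Length < 4)
        return false;                                   // need the 4-byte length prefix

    Span<byte> prefix = stackalloc byte[4];
    buffer.Slice(0, 4).CopyTo(prefix);
    int length = BinaryPrimitives.ReadInt32LittleEndian(prefix);
    if (buffer.Length < 4 + (long)length)
        return false;                                   // incomplete frame, wait for more data

    ReadOnlySequence<byte> payload = buffer.Slice(4, length);
    message = payload.IsSingleSegment
        ? ParseFromSpan(payload.First.Span)             // fast path: one segment, parse the span directly
        : ParseFromSpan(payload.ToArray());             // slow path: frame straddles segments, copy once
    buffer = buffer.Slice(4 + (long)length);            // consume the frame
    return true;
}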
@davidfowl Thank you for all your valuable points. I will take a closer look at the suggestions, but I really think we want to sacrifice async parsing for simplicity and performance. I was considering stateful parsing as an option before, but I think it would make the codegen much more complicated - and especially after removing this |
I have created a new branch: https://github.com/mkosieradzki/protobuf/tree/spans-pr-rebased It is the rebased core code, to make code review and further development easier. Also, I code on Windows (VS2017) and previously had problems with line endings in the generated code - so the code should be generated on Linux (or Bash on Ubuntu on Windows :) ). |
Thanks for the good points. To add my 50 cents about the stateless/stateful parsing: currently for Google.Protobuf, the CodedInputStream is the storage for parsing state - it keeps track of where we are in the buffer, tracks recursion depth, max message size and other things important for security. So in a way we are already in the situation where our current parser is "stateful and resumable" + "state is kept by the parser". Changing that would require a significant revamp, so unless that turns out to be a blocker, we should go this route. |
@mkosieradzki I started reviewing your code, but hadn't initially noticed the spans-pr-rebased branch, which made the review more complicated. Here are some initial comments:

— I noticed there's a change from netstandard1.0 -> netstandard1.1; is that necessary? Making changes like this is theoretically possible but can slow down the review process quite a bit.

— (this point might not be applicable for the spans-pr-rebased branch) It seemed that the generated .cs files were out of date when starting the branch - we should make sure the C# protos are regenerated (let's create a PR for that) before starting, otherwise the diffs for generated files are confusing (it's hard to tell the changed parsing logic apart from the fields added to .proto files). Ideally, regenerating the protos would only happen in a single commit so that the rest of the code can be reviewed commit-by-commit without needing to scroll through hundreds of lines of diffs in generated files.

— Providing the

— aggressive inlining optimization (and other optimizations) should go in as a separate PR (1 PR per optimization technique).

— what's the difference between Wrapped vs no-suffix when parsing? Explaining this concept would simplify reading the code review a lot. This seems to be the most interesting piece when it comes to parsing from Spans, so we should make sure we are on the same page here.

— I've noticed some refactoring in CodedOutputStream.ComputeSize.cs (and possibly elsewhere), but I think combining refactoring and changes in logic makes the change very hard to review and greatly increases the risk that we will miss something important.

— I've noticed CodedOutputStream.cs changes -> are they necessary? IMHO we should address the serialization and deserialization pieces separately.

I'm currently trying to review spans-pr-rebased |
@mkosieradzki you've come up with a great number of good changes, but I'm now thinking of how to make the review process efficient and how we can integrate this. Here's what I suggest:
What do you think? |
Yup, this is exactly why I have created
Fair enough. Not sure if the second is not used somewhere in the process...
Makes sense.
Wrapped is dedicated to handling the wrapped messages like
I hope I didn't introduce any logic changes there! Please let me know where. I have only added support for wrapped messages.
I'm ok with both approaches. |
@jtattermusch If you want to play with what I have built, I have created yet another branch, spans-pr-rebased-build (with a custom build process), and connected it to AppVeyor. You can access the package from the AppVeyor nuget feed. My apologies, but creating this PR as-is has consumed most of my free time (it was a considerable amount of work, mostly experimentation), so I can't promise I will be able to split the PR into smaller pieces very soon. https://ci.appveyor.com/project/mkosieradzki/protobuf/build/1.0.30 - this is the best build I have currently. There is one artifact with protoc for Windows and one artifact with the SDK nuget package. It is also available through this feed as 4.0.0-pre-30: https://ci.appveyor.com/nuget/protobuf-spans |
@mkosieradzki I plan to take a closer look at |
I followed the trail of related issues all the way to this PR. I'm running a locally-built version of Protobuf where I have implemented some of these same optimizations. This issue and the related PRs look dead. @mkosieradzki has done so much work on this that I hate to put in effort and submit PRs of my own, but I don't see this issue moving otherwise. What's the best path forward to get these optimizations into Protobuf? |
@prat0088 we still plan to move ahead with the Span-based parsing but it's unclear which exact approach we're going to take (design discussions are ongoing). IMHO the PR that's the closest to the ideal design is #5888. |
This has been added in #7351 |
IMO the C# protobuf implementation could benefit strongly, performance-wise, from allowing some unsafe code.
Background
First of all, I am not saying that unsafe should be enabled in every build or on every platform; having a 100% managed library is always a nice feature. All I am suggesting is, again, a conditional feature on some platforms.
However, there is a trend towards mixing in unsafe code that we can observe in .NET Core.
For example, a lot of .NET Core libraries use unsafe code due to the need to interop with different kinds of unmanaged libraries, or just to handle AOT-compiled code.
API change
Today CodedInputStream works in two modes: streaming with a byte[] buffer, and with a fixed buffer.
It would be really nice to be able to provide a byte* and a length to the CodedInputStream constructor.
Alternatively we could create an UnmanagedCodedInputStream which would work on unmanaged memory.
This would require abstracting CodedInputStream as an interface.
Affected APIs
The async API (#3166) makes absolutely no sense for UnmanagedCodedInputStream, because it assumes everything has already been read.
Benefits
Scenarios
Scenario 1. NoSQL Database
A user is using a native NoSQL database (for example RocksDB/LevelDB) and protobuf for data persistence. The database returns a pointer to a natively allocated buffer. Instead of copying the entire buffer into managed memory, the user can deserialize the record straight from the returned native memory and then free the pointer. Very little GC is involved and there is practically no overhead.
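A sketch of this scenario, assuming the proposed byte*-based constructor existed; NativeDb.Get/Free and MyRecord stand in for the real native API and generated message:

IntPtr ptr = NativeDb.Get(key, out int length);             // hypothetical P/Invoke returning native memory
try
{
    unsafe
    {
        // Proposed constructor: parse directly from native memory, no copy into a byte[].
        var input = new CodedInputStream((byte*)ptr, length);
        MyRecord record = MyRecord.Parser.ParseFrom(input);
    }
}
finally
{
    NativeDb.Free(ptr);                                     // hypothetical: release the native buffer
}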
Scenario 2. Stackalloc-ated buffers
For performance reasons, to decrease heap allocations, buffers can be allocated on the stack. This scenario can alternatively be handled by buffer pooling; however, stackalloc seems to be more efficient.
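A sketch of this scenario under the same assumption (FillBuffer and MyMessage are placeholders):

unsafe
{
    byte* buffer = stackalloc byte[256];                    // lives on the stack, never touches the GC heap
    int written = FillBuffer(buffer, 256);                  // hypothetical producer
    var input = new CodedInputStream(buffer, written);      // proposed byte* constructor
    MyMessage msg = MyMessage.Parser.ParseFrom(input);
}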
Affected by
https://github.com/dotnet/corefxlab/blob/master/docs/specs/span.md - this work in progress improvement can allow to achieve the same goals in a managed way by providing some platform improvements.