
JsonSerializerOptions.MemberAccessorStrategy shouldn't use Reflection.Emit when IsDynamicCodeCompiled is false #38693

Open
eerhardt opened this issue Jul 1, 2020 · 12 comments
Labels: area-System.Text.Json, size-reduction (Issues impacting final app size, primarily for size-sensitive workloads)

@eerhardt
Member

eerhardt commented Jul 1, 2020

When trimming a Blazor WASM app, the last usage of Reflection.Emit (after fixing #38678) is coming from System.Text.Json.Serialization.JsonSerializerOptions:

internal MemberAccessor MemberAccessorStrategy
{
    get
    {
        if (_memberAccessorStrategy == null)
        {
#if NETFRAMEWORK || NETCOREAPP
            _memberAccessorStrategy = new ReflectionEmitMemberAccessor();
#else
            _memberAccessorStrategy = new ReflectionMemberAccessor();
#endif
        }
        return _memberAccessorStrategy;
    }
}

However, on Mono WASM, RuntimeFeature.IsDynamicCodeCompiled is always false, so using Reflection.Emit is probably a waste, and it brings in a decent amount of code. In my investigation, trimming this usage of Reflection.Emit removes ~50 KB of IL.

We should change this code to something more like:

        internal MemberAccessor MemberAccessorStrategy
        {
            get
            {
                if (_memberAccessorStrategy == null)
                {
#if NETFRAMEWORK || NETCOREAPP
                    if (RuntimeFeature.IsDynamicCodeCompiled)
                    {
                        _memberAccessorStrategy = new ReflectionEmitMemberAccessor();
                    }
                    else
                    {
                        _memberAccessorStrategy = new ReflectionMemberAccessor();
                    }
#else
                    _memberAccessorStrategy = new ReflectionMemberAccessor();
#endif
                }

                return _memberAccessorStrategy;
            }
        }

With the code changed to the above, on a default-template Blazor WASM app, I am seeing the following size savings:

| Build | Size |
| --- | --- |
| master | 3,366,912 bytes |
| #38729 | 3,039,232 bytes |
| #38729 + this change | 2,990,080 bytes |

So almost 50 KB of savings, by allowing all usages of System.Reflection.Emit to be removed.

cc @steveharter @layomia @vitek-karas @marek-safar

@eerhardt added the area-System.Text.Json and linkable-framework (Issues associated with delivering a linker friendly framework) labels Jul 1, 2020
@eerhardt added this to the 5.0.0 milestone Jul 1, 2020
@Dotnet-GitSync-Bot added the untriaged (New issue has not been triaged by the area owner) label Jul 1, 2020
@marek-safar removed the untriaged (New issue has not been triaged by the area owner) label Jul 2, 2020
@steveharter
Member

FWIW, we previously used RuntimeFeature.IsDynamicCodeSupported, but that API requires netstandard2.1 and we only had netstandard2.0 support in STJ.dll; with the switch to netcoreapp (instead of netstandard), it was decided it wasn't worth the hit of a larger package just for this.

But using an ifdef branch for netcoreapp that checks RuntimeFeature should work great.

@eerhardt
Member Author

eerhardt commented Jul 6, 2020

I ran the attached JSON perf application with and without this change.

Program (1).zip

5.0.0-preview.8.20354.5

| serialize | deserialize | writer |
| --- | --- | --- |
| 1509 ms | 4619 ms | 1252 ms |
| 1548 ms | 4690 ms | 1272 ms |
| 1537 ms | 4743 ms | 1224 ms |

Proposed change

| serialize | deserialize | writer |
| --- | --- | --- |
| 1843 ms | 5391 ms | 1196 ms |
| 1832 ms | 5364 ms | 1196 ms |
| 1847 ms | 5338 ms | 1185 ms |

So it is ~15-20% slower to use the Reflection-based strategy after everything is warmed up. (Note this explicitly does not measure the time it takes to emit IL during warmup.)

@eerhardt
Member Author

Since the performance tradeoff of taking this change isn't necessarily a net-win, we won't be making this change in 5.0. Moving to 6.0.

@eerhardt eerhardt modified the milestones: 5.0.0, 6.0.0 Jul 30, 2020
@marek-safar marek-safar added size-reduction Issues impacting final app size primary for size sensitive workloads and removed linkable-framework Issues associated with delivering a linker friendly framework labels Dec 9, 2020
eerhardt added a commit to eerhardt/runtime that referenced this issue May 11, 2021
@eerhardt
Member Author

With the advent of the JSON source generator, we can address the ~15-20% slowdown of the Reflection-based strategy noted above by telling places that need fast JSON serialization to use the source generator.
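
For illustration, here is a minimal sketch of opting a type into the source generator; the type and context names are made up for this example and are not from the prototype:

    using System;
    using System.Text.Json;
    using System.Text.Json.Serialization;

    public class WeatherForecast
    {
        public DateTime Date { get; set; }
        public int TemperatureC { get; set; }
    }

    // The source generator produces serialization metadata for the listed types at build time,
    // so no reflection- or Ref.Emit-based member accessors are needed at runtime.
    [JsonSerializable(typeof(WeatherForecast))]
    internal partial class AppJsonContext : JsonSerializerContext
    {
    }

    internal static class SourceGenDemo
    {
        public static void Run()
        {
            var forecast = new WeatherForecast { Date = DateTime.Today, TemperatureC = 20 };

            // Passing the generated metadata selects the source-generated fast path.
            string json = JsonSerializer.Serialize(forecast, AppJsonContext.Default.WeatherForecast);
            WeatherForecast? roundTripped = JsonSerializer.Deserialize(json, AppJsonContext.Default.WeatherForecast);
        }
    }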

With that in mind, I created a prototype of introducing a feature switch in System.Text.Json to use the ReflectionMemberAccessor in order to allow for trimming the Ref.Emit code. See the prototype here:

59c60c6

This allows for the following size savings (.br compressed):

Before: 2,644,974 bytes
After: 2,628,106 bytes (with new feature switch set)

So roughly 16.5 KB of .br-compressed size savings.

This could be an alternative approach that allows the Ref.Emit code to be trimmed in a default Blazor WASM app, even if not all the places that use JSON serialization use the source generator.
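
For context, library feature switches like this are typically read through AppContext and surfaced as an MSBuild property for the trimmer; below is a minimal sketch of what the runtime check could look like. The switch name here is hypothetical, not necessarily what the prototype uses:

    using System;
    using System.Runtime.CompilerServices;

    internal static class JsonFeatureSwitches
    {
        // Hypothetical switch name for illustration; the prototype's actual name may differ.
        internal static bool UseReflectionEmit
        {
            get
            {
                if (AppContext.TryGetSwitch("System.Text.Json.Serialization.EnableReflectionEmit", out bool enabled))
                {
                    return enabled;
                }

                // Default: only use Ref.Emit where dynamic code is supported at all.
                return RuntimeFeature.IsDynamicCodeSupported;
            }
        }
    }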

@eerhardt
Member Author

Now that the microsoft-net-sdk-blazorwebassembly-aot workload is functional, I was able to retest trimming out the Reflection.Emit code along with re-linking the dotnet.wasm binary (which also trims the native Reflection.Emit code).

Using rough numbers from summing the sizes of all the *.br files in the publish directory, here are the results I am seeing:

| | No Relinking | Relinking |
| --- | --- | --- |
| Include Ref.Emit code | 2,537 KB | 2,509 KB |
| Trim Ref.Emit code | 2,519 KB | 2,484 KB |

So paired with the re-linking feature, trimming away the Reflection.Emit code saves 25 KB compressed. And if you aren't re-linking, we'd save the estimated ~18 KB, as above.
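
For reference, a rough sketch of how the *.br sizes can be summed; the publish path below is a placeholder, not a path from this issue:

    using System;
    using System.IO;
    using System.Linq;

    // Sum the sizes of all Brotli-compressed files under the publish output.
    string publishDir = "bin/Release/net6.0/publish/wwwroot";
    long totalBytes = Directory.EnumerateFiles(publishDir, "*.br", SearchOption.AllDirectories)
                               .Sum(file => new FileInfo(file).Length);
    Console.WriteLine($"{totalBytes / 1024:N0} KB");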

However, it still comes at a tradeoff for throughput. I re-ran the above Program (1).zip application using the latest .NET 6 bits with this feature switch, and I'm seeing the following results:

Ref.Emit + re-linking:

| serialize | deserialize | writer |
| --- | --- | --- |
| 605 ms | 894 ms | 501 ms |
| 577 ms | 842 ms | 424 ms |
| 570 ms | 833 ms | 422 ms |
| 543 ms | 803 ms | 402 ms |
| 586 ms | 826 ms | 457 ms |

Averages: 576.2 ms, 839.6 ms, 441.2 ms

Trim Ref.Emit + re-linking:

| serialize | deserialize | writer |
| --- | --- | --- |
| 845 ms | 1463 ms | 414 ms |
| 859 ms | 1453 ms | 406 ms |
| 872 ms | 1493 ms | 411 ms |
| 820 ms | 1414 ms | 396 ms |
| 866 ms | 1432 ms | 386 ms |

Averages: 852.4 ms, 1451 ms, 402.6 ms

As you can see, the performance of JSON serialization has gotten drastically better than my original numbers (I may have been on a different machine as well). However, the gap between Ref.Emit and Reflection has gotten wider: "serialize" is ~48% worse and "deserialize" is ~73% worse.

For ahead-of-time (AOT) compilation, here are the differences:

Size difference: the only file that changes when using AOT is the dotnet.wasm file.

| | dotnet.wasm.br |
| --- | --- |
| Include Ref.Emit code | 2,938 KB |
| Trim Ref.Emit code | 2,876 KB |

So a ~62 KB .br compressed size savings.

The throughput ratio between Ref.Emit and Reflection is about the same with AOT as without:

Ref.Emit + AOT:

| serialize | deserialize | writer |
| --- | --- | --- |
| 443 ms | 457 ms | 200 ms |
| 421 ms | 476 ms | 178 ms |
| 432 ms | 452 ms | 182 ms |
| 418 ms | 454 ms | 180 ms |
| 440 ms | 457 ms | 179 ms |

Averages: 430.8 ms, 459.2 ms, 183.8 ms

Trim Ref.Emit + AOT:

| serialize | deserialize | writer |
| --- | --- | --- |
| 635 ms | 825 ms | 179 ms |
| 627 ms | 815 ms | 202 ms |
| 632 ms | 783 ms | 182 ms |
| 636 ms | 825 ms | 182 ms |
| 598 ms | 766 ms | 181 ms |

Averages: 623.3 ms, 802.8 ms, 185.2 ms

@marek-safar
Contributor

marek-safar commented May 18, 2021

@radekdoulik @BrzVlad could you work with Eric to investigate why the no-Ref.Emit AOT case is noticeably slower, and whether the code is fully AOTed or not?

@radekdoulik
Member

I tried to replicate it locally with the browser-bench sample, and I see similar results (interp/amd64/chrome):

| measurement | main | issue 38693 |
| --- | --- | --- |
| Json, non-ASCII text serialize | 7.8816 ms | 9.3470 ms |
| Json, non-ASCII text deserialize | 11.9814 ms | 14.3489 ms |
| Json, small serialize | 0.2291 ms | 0.3512 ms |
| Json, small deserialize | 0.3620 ms | 0.5877 ms |
| Json, large serialize | 67.3947 ms | 102.0784 ms |
| Json, large deserialize | 100.0189 ms | 162.3636 ms |

@marek-safar
Contributor

I think that still does not explain what is causing the AOT SRE-free version to be about 50% slower than the SRE version, which is mostly interpreted.

@radekdoulik
Member

> I think that still does not explain what is causing the AOT SRE-free version to be about 50% slower than the SRE version, which is mostly interpreted.

Indeed. I started looking into that and wanted to share that I can replicate it too, with a different simple app, without Blazor involved.

@eerhardt
Member Author

Moving to Future. I don't think this work can happen until the runtime performance impact is decreased. With the current numbers, the runtime performance impact is too great for the small size reduction.

@eerhardt eerhardt modified the milestones: 6.0.0, Future Jul 30, 2021
@eiriktsarpalis
Member

Appears to have been fixed by #54027.

@eerhardt
Member Author

eerhardt commented Sep 2, 2022

There are two different switches here:

  • IsDynamicCodeSupported - can I IL-emit at all?
  • IsDynamicCodeCompiled - if I can, will the code I emit be compiled to assembly-level code?

#54027 allows JsonSerializer to work at all when you can't IL-emit code. This issue tracks the second case: when you can IL-emit code, but the emitted code gets interpreted.
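
As a rough sketch (not the actual code in the repo), the two checks could drive the accessor choice like this:

    using System.Runtime.CompilerServices;

    internal static MemberAccessor CreateMemberAccessor()
    {
        // IsDynamicCodeSupported: can we emit IL at all?
        // IsDynamicCodeCompiled: if we can, will the emitted IL be compiled rather than interpreted?
        if (RuntimeFeature.IsDynamicCodeSupported && RuntimeFeature.IsDynamicCodeCompiled)
        {
            return new ReflectionEmitMemberAccessor();
        }

        // Emitted IL would either be unavailable or interpreted; plain reflection avoids carrying Ref.Emit.
        return new ReflectionMemberAccessor();
    }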
