
Event.message: accept String or Function to avoid String interpolation. #56454

Closed · gmpassos wants to merge 2 commits

Conversation

@gmpassos (Contributor) commented Aug 13, 2024

Since Event is a logging object, its construction should not allocate new String objects or perform String interpolation.

This also:


  • [✅] I’ve reviewed the contributor guide and applied the relevant portions to this PR.
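The lazy-message pattern the PR describes can be sketched as below. This is an illustrative `Event` shape, not the actual analyzer class: `message` accepts either a `String` or a zero-argument closure, so expensive interpolation is deferred until the text is actually requested.

```dart
// Sketch of the lazy-message idea (hypothetical names, not the real
// _fe_analyzer_shared API): `message` may be a String or a closure that
// builds the String on demand, so construction allocates nothing extra.
class Event {
  final Object /* String | String Function() */ _message;

  Event({required Object message}) : _message = message;

  String get message {
    final m = _message;
    // The closure is only invoked here, when the text is actually needed.
    return m is String Function() ? m() : m as String;
  }
}

void main() {
  var computed = 0;
  // Constructing the event does not run the closure...
  final e = Event(message: () {
    computed++;
    return 'expensive detail';
  });
  assert(computed == 0);
  // ...only reading `message` does.
  print(e.message);
  assert(computed == 1);
}
```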

Thank you for your contribution! This project uses Gerrit for code reviews. Your pull request has automatically been converted into a code review at:

https://dart-review.googlesource.com/c/sdk/+/380202

Please wait for a developer to review your code review at the above link; you can speed up the review if you sign into Gerrit and manually add a reviewer that has recently worked on the relevant code. See CONTRIBUTING.md to learn how to upload changes to Gerrit directly.

Additional commits pushed to this PR will update both the PR and the corresponding Gerrit CL. After the review is complete on the CL, your reviewer will merge the CL (automatically closing this PR).

https://dart-review.googlesource.com/c/sdk/+/380202 has been updated with the latest commits from this pull request.

@gmpassos gmpassos changed the title Event.message: accept String of Function to avoid String interpolation. Event.message: accept String or Function to avoid String interpolation. Aug 13, 2024

@gmpassos (Contributor, Author) commented Aug 14, 2024

@kevmoo

FYI: this has a significant improvement on GC and CPU.

@mraleph (Member) commented Aug 14, 2024

Is this infrastructure even turned on by default? That's strange - I would think that this should just be turned off when running in a default configuration cc @stereotype441

@gmpassos (Contributor, Author)

It's always creating the `Event` object. It would be faster if turned off, BTW.

@stereotype441 (Member)

This logging logic was never intended to execute as part of a normal run of the analyzer. I believe it only executes if either (a) assertions are enabled or (b) dump: true is passed to the function conditionallyStartInferenceLogging (which should only happen when the analyzer is invoked via analyzer/tool/type_inference_log.dart). If I'm right about that, then I don't think it's necessary to try to optimize this code.

If I'm wrong about that, then that's a bug, and so rather than try to optimize the logging logic, I'd prefer to fix the bug so that the logging logic doesn't get executed when it shouldn't be.

@gmpassos you say that this has a significant improvement on GC and CPU. I would be really interested to know the amount of improvement you measured, and what was the command you used to measure it. That would help me understand whether there's a bug here that needs to be addressed, and if so, the magnitude of it.

Thanks!
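The "only executes if assertions are enabled" gating described above typically relies on Dart's assert-expression idiom, where the assert body runs only in assert-enabled modes and is compiled away in production AOT builds. The sketch below uses hypothetical names (`maybeEnableLogging` stands in for the real `conditionallyStartInferenceLogging`); it is not the actual analyzer code.

```dart
bool _loggingEnabled = false;

// Common Dart idiom: the closure inside assert(...) only runs when
// assertions are enabled, so in a default production run this function
// is effectively a no-op unless `dump: true` is passed explicitly.
void maybeEnableLogging({bool dump = false}) {
  assert(() {
    _loggingEnabled = true;
    return true; // An assert expression must evaluate to true.
  }());
  if (dump) _loggingEnabled = true;
}

void log(String Function() message) {
  // Taking a closure keeps message construction lazy even when logging is on.
  if (_loggingEnabled) print(message());
}

void main() {
  maybeEnableLogging();
  log(() => 'inference step'); // Printed only if asserts are enabled.
}
```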

@gmpassos (Contributor, Author) commented Aug 14, 2024

I have this branch, forked from tag 3.5.0:

https://github.com/gmpassos/sdk/tree/optimize-analyzer

Analyzing the "analysis_server" package (compared to Dart 3.5.0):

"dart language-server": 28 sec (heap limit 820M)

"my branch": 19 sec (heap limit 725M)

VM Parameters:

"--no-enable-asserts"

Parameters:

"--train-using /path/to/analysis_server --protocol analyzer"

Note that IntelliJ uses the analyzer protocol.

(Once I reach my development computer I will send more details).

@gmpassos (Contributor, Author)

Note that the `Event` construction, as currently done (on the main branch), performs many String interpolations and makes many calls to the `describe` method, which is complex and also calls `replace`.

@gmpassos (Contributor, Author) commented Aug 14, 2024

@stereotype441 @mraleph @kevmoo

Here are some benchmarks:

$> dart --version
Dart SDK version: 3.5.0 (stable) (Tue Jul 30 02:17:59 2024 -0700) on "macos_x64"

Git repository:
https://github.com/dart-lang/sdk.git

Working path:
pkg/analysis_server


Branch main (tag 3.5.0):

Best result out of 5 consecutive attempts:

$> dart --no-enable-asserts language-server --protocol=analyzer --train-using=.
Analyzing with an empty driver cache:
Analyzing ....
Completed in 33.6s.

Analyzing with a populated driver cache:
Analyzing ....
Completed in 8.6s.

With only this patch applied on top of tag 3.5.0 (pkg/_fe_analyzer_shared/lib/src/type_inference/shared_inference_log.dart):

$> dart compile aot-snapshot bin/server.dart

Best result out of 5 consecutive attempts:

$> dartaotruntime --no-enable-asserts bin/server.aot --protocol=analyzer --train-using=.
Analyzing with an empty driver cache:
Analyzing ....
Completed in 20.9s.

Analyzing with a populated driver cache:
Analyzing ....
Completed in 5.8s.

@mraleph (Member) commented Aug 14, 2024

I think your benchmark is wrong. You need to run analysis server built in the same way, e.g. do dart compile aot-snapshot bin/server.dart on a clean stable checkout and benchmark that to establish the baseline. What you see here is comparison of JIT vs AOT. dart language-server runs from a JIT snapshot and you are benchmarking against AOT. AOT wins (which is a known thing - that's why we really want to move all tools to AOT snapshots).

I have tried adding throw 'aaaa'; into Event's constructor and it is never reached when running dart bin/server.dart --protocol=analyzer --train-using=., just as @stereotype441 says.

@stereotype441 (Member)

> I think your benchmark is wrong. You need to run analysis server built in the same way, e.g. do dart compile aot-snapshot bin/server.dart on a clean stable checkout and benchmark that to establish the baseline. What you see here is comparison of JIT vs AOT. dart language-server runs from a JIT snapshot and you are benchmarking against AOT. AOT wins (which is a known thing - that's why we really want to move all tools to AOT snapshots).
>
> I have tried adding throw 'aaaa'; into Event's constructor and it is never reached when running dart bin/server.dart --protocol=analyzer --train-using=., just as @stereotype441 says.

Yes, agreed. I just tried comparing five runs of 3.5.0 to five runs of (3.5.0 plus these changes), using both dart --no-enable-asserts language-server --protocol=analyzer --train-using=. and dartaotruntime --no-enable-asserts bin/server.aot --protocol=analyzer --train-using=.. I only noted the "Analyzing with an empty driver cache" time, because that's what we care about. Here's what I got:

3.5.0, with dart language-server: 32.0, 31.6, 31.4, 31.3, 30.9 (mean=31.44 +/- 0.18)
3.5.0 plus changes, with dart language-server: 31.2, 32.0, 31.6, 31.5, 31.4 (mean=31.54 +/- 0.13)
3.5.0, with dartaotruntime: 21.6, 21.2, 21.1, 21.3, 20.9 (mean=21.22 +/- 0.12)
3.5.0 plus changes, with dartaotruntime: 21.2, 21.3, 21.0, 21.2, 21.4 (mean=21.22 +/- 0.066)

So, it looks like there's no significant difference in runtime with and without these changes.
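For reference, the "mean=… +/- …" figures above read as mean plus or minus the standard error of the mean (sample standard deviation divided by the square root of the number of runs). A short sketch reproducing the first row:

```dart
import 'dart:math';

// Mean and standard error of the mean for a list of benchmark timings,
// matching the "mean=31.44 +/- 0.18" style figures quoted above.
(double, double) meanAndSem(List<double> xs) {
  final n = xs.length;
  final mean = xs.reduce((a, b) => a + b) / n;
  // Sample variance (divide by n - 1), then SEM = stddev / sqrt(n).
  final variance =
      xs.map((x) => pow(x - mean, 2)).reduce((a, b) => a + b) / (n - 1);
  return (mean, sqrt(variance) / sqrt(n));
}

void main() {
  final (m, sem) = meanAndSem([32.0, 31.6, 31.4, 31.3, 30.9]);
  print('mean=${m.toStringAsFixed(2)} +/- ${sem.toStringAsFixed(2)}');
  // mean=31.44 +/- 0.18
}
```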

However, I'm still really glad we're looking into this, because it appears that there is a very significant difference in runtime between dart language-server and dartaotruntime, with dartaotruntime being substantially faster. Considering that dart language-server is how we recommend that editors like VSCode run the analysis server, this seems suspicious. Perhaps there's something about how dart language-server invokes the analysis server that is leading to sub-optimal performance? Definitely worth looking into.

I'll do a little digging and see if I can figure anything out.

@mraleph (Member) commented Aug 14, 2024

@stereotype441 you should chat with @a-siva who is driving #53576

@stereotype441 (Member)

> @stereotype441 you should chat with @a-siva who is driving #53576

Thanks! I'll follow up with @a-siva

@gmpassos (Contributor, Author) commented Aug 14, 2024

> ...What you see here is comparison of JIT vs AOT. dart language-server runs from a JIT snapshot and you are benchmarking against AOT. (AOT also has a lower memory footprint compared to the Dart VM)

Yes, you are right. However, the performance difference between AOT and JIT is significant—JIT is about 47% slower, or AOT is on average 32% faster. Given the importance of the analyzer for every Dart/Flutter developer, I strongly recommend switching the analyzer to an AOT command (or have an option for that) to take advantage of the already improved performance. (AOT also has a lower memory footprint compared to the Dart VM).

When I run the analysis server using --observe to collect hot spots, it creates Event objects. It's unclear how to enable or disable this undocumented logging feature, and we might need an optional parameter to disable it.

@stereotype441 (Member) commented Aug 14, 2024

> ...What you see here is comparison of JIT vs AOT. dart language-server runs from a JIT snapshot and you are benchmarking against AOT. (AOT also has a lower memory footprint compared to the Dart VM)
>
> Yes, you are right. However, the performance difference between AOT and JIT is significant—JIT is about 47% slower, or AOT is on average 32% faster. Given the importance of the analyzer for every Dart/Flutter developer, I strongly recommend switching the analyzer to an AOT command (or have an option for that) to take advantage of the already improved performance. (AOT also has a lower memory footprint compared to the Dart VM).

Agreed. It has been our intention to switch all the tools invoked by dart <subcommand> to use AOT; see #53576. According to that issue (if I'm reading it correctly), the analysis server has already been switched over, but after looking at these results here, I think that's not actually true. It's possible that one of the subtasks on that issue got checked off by accident. I've reached out to @a-siva on the issue to see what's going on.

> When I run the analysis server using --observe to collect hot spots, it creates Event objects. It's unclear how to enable or disable this undocumented logging feature, and we might need an optional parameter to disable it.

My guess is that the reason this is happening is that when you're running using --observe, you have assertions enabled. Would you mind checking whether that is the case?

@gmpassos (Contributor, Author)

@mraleph I was wondering, in this case, what makes JIT run significantly slower than AOT?

@mraleph (Member) commented Aug 15, 2024

@gmpassos hard to say without detailed profiling, but AOT by now has a bunch of optimizations which are not in the JIT (for various reasons):

  • Non-devirtualized calls go through a global dispatch table, instead of using inline caches or inlined decision trees.
  • `int` and `double` values are passed unboxed.
  • Registers are used in calling conventions.
  • Call sequences are more direct (e.g. devirtualized call sites use direct calls instead of going through an indirection via a Code object).
  • Static type information is propagated globally through the program using TFA and is heavily used for optimizations, instead of relying on type feedback and runtime speculation.

I think some combination of the above is most likely responsible for the difference. It is likely possible to make the JIT much faster, but we don't currently have the bandwidth (nor the motivation) to invest in this.

@mraleph (Member) commented Aug 15, 2024

I am going to close this PR based on our current understanding that this is debug-only logging code.

@mraleph closed this Aug 15, 2024

@gmpassos (Contributor, Author)

@mraleph thanks for the response.

Regarding JIT, perhaps one of these "features" could be selected for implementation based on its overall impact on performance. What is clear to me is that there's still huge potential for improvement. Best regards.
