Source files are ending up on the large object heap #4881
Comments
I wonder if we could avoid the copies occurring in those two places. In the first location, we create a ... Is there a leaner way to get the char array from the stream? In the second location, we don't keep the char array around at the call site, so we could save the extra call to ...
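A minimal sketch of one leaner option, assuming a plain StreamReader is acceptable at that point (this is not the actual compiler code, and the function name is illustrative): pull the characters through a fixed-size, reusable buffer and hand each chunk to the caller, so no file-sized char[] is ever allocated.

```fsharp
open System.IO

// Sketch only: read a file's characters through a fixed-size, reusable
// buffer instead of materializing one file-sized char[]. 32,768 chars
// (64 KB) stays well under the 85,000-byte LOH threshold.
let readCharsInChunks (path: string) (onChunk: char[] -> int -> unit) =
    use reader = new StreamReader(path)
    let buffer = Array.zeroCreate<char> 32768
    let mutable read = reader.ReadBlock(buffer, 0, buffer.Length)
    while read > 0 do
        onChunk buffer read   // caller consumes buffer.[0 .. read - 1]
        read <- reader.ReadBlock(buffer, 0, buffer.Length)
```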
I made a test PR (#4882) that removes the call to ... I still have issues building the whole repository with the VS tooling, so I currently can't test the impact on IntelliSense.
This might be harder to fix than you think, or at least what you'll need to fix isn't what's described :) Among other things, we use the entire contents of source files as a key for some operations in the language-service caches. That can be changed, I'm sure (e.g. by using a hash of the contents), but until we do that we need to store the whole file; breaking it into chunks doesn't solve the fundamental problem.
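If the key only needs to identify the contents, a streaming hash could stand in for the text itself. A hedged sketch, not the actual cache API (the function name is illustrative):

```fsharp
open System.IO
open System.Security.Cryptography

// Sketch: compute a streaming content hash; the resulting 32-byte array
// could serve as a cache key in place of the full source text.
let hashFileContents (path: string) : byte[] =
    use sha = SHA256.Create()
    use stream = File.OpenRead(path)
    sha.ComputeHash(stream)   // the stream is consumed in internal chunks
```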
@dsyme - the Roslyn workspace uses the editor's types to back all the files in the workspace (sometimes privately for closed files, sometimes publicly for open files), but we should be able to use editor version types, or even instances, to manage these caches. In fact, if we just threaded the ...
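Purely as an illustration of that idea, a cache could be keyed on (file path, editor version number) rather than on the text itself. The key shape, the payload type, and where the version number comes from are all assumptions here:

```fsharp
open System.Collections.Concurrent

// Illustrative only: key a cache on (file path, editor version number)
// instead of the file's contents. Obtaining the version number from the
// editor's text snapshot is assumed, not shown.
type SourceKey = { FilePath: string; Version: int }

type ParseResult = { Diagnostics: string list }   // placeholder payload

let parseResults = ConcurrentDictionary<SourceKey, ParseResult>()

let getOrCompute (key: SourceKey) (compute: unit -> ParseResult) =
    parseResults.GetOrAdd(key, fun _ -> compute ())
```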
#4948 by @auduchinok also reduces some copying. With that and @smoothdeveloper's changes in place, it would be good to measure this again and see whether it has had a positive impact.
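One cheap sanity check alongside full traces: by default, arrays over 85,000 bytes go straight to the LOH, which is collected with generation 2, so GC.GetGeneration shows whether a given buffer size lands there. A small sketch:

```fsharp
open System

// Large allocations go to the LOH and are reported as generation 2,
// so GC.GetGeneration reveals where a fresh allocation landed.
let small = Array.zeroCreate<char> 1000    // ~2 KB   -> gen 0
let large = Array.zeroCreate<char> 50000   // ~100 KB -> gen 2 (LOH)
printfn "small: gen %d, large: gen %d" (GC.GetGeneration(box small)) (GC.GetGeneration(box large))
```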
This is likely lessened greatly by #6001. We closed #5944 because we'll need to incorporate a hash code (or some integer ID) for an ...
Looking through perf traces of IntelliSense, I can see that large source files are ending up on the large object heap because they are read as a single chunk.
This is going to put pressure on gen 2 collections, and I can see in one trace that opening a completion window in a large file causes a 77 ms gen 2 collection.
We should read these in chunks to avoid this.
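A hedged sketch of what reading in chunks could look like, while still retaining the whole file contents: hold the text as a list of char arrays, each sized below the 85,000-byte LOH threshold. The function name and chunk size are illustrative, not the compiler's actual representation.

```fsharp
open System.IO

// Sketch only: represent a large file as char[] chunks of at most
// 40,000 chars (80,000 bytes), so no single array crosses the
// 85,000-byte LOH threshold, yet the full contents remain available.
let [<Literal>] ChunkSize = 40000

let readFileChunked (path: string) : char[] list =
    use reader = new StreamReader(path)
    let rec loop acc =
        let buffer = Array.zeroCreate<char> ChunkSize
        let read = reader.ReadBlock(buffer, 0, ChunkSize)
        if read = 0 then List.rev acc
        elif read < ChunkSize then List.rev (Array.sub buffer 0 read :: acc)
        else loop (buffer :: acc)
    loop []
```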
Trace: [internalshare]public\wismith\PerfViewData5.etl.zip.