
10% time and 50% memory improvement #377

Merged 18 commits on Oct 15, 2019

Conversation

MihaZupan
Collaborator

A bunch of random optimizations.

The most interesting ones (and the ones responsible for the perf gains) to look at are:
253be5c afe4308 aefad21

I was mainly playing whack-a-mole with memory allocations; the time improvement comes mainly from 253be5c.

{
T[][] buffers = _buffers;
T[] buffer = null;
if (Interlocked.CompareExchange(ref _lock, 1, 0) == 0)
Owner

Are you sure that this code is thread safe?
If two threads enter this method at the exact same time, thread-1 enters this line (so _lock is then set to 1), so thread-2 can't enter (_lock is 1) and goes directly to return buffer, which will be null, no?

Owner

(this case is probably a better fit for a slim mutex)

Collaborator Author

This is heavily adapted from CoreFX's ConfigurableArrayPool; a SpinLock is used there instead.

I expect very little contention on this lock. Parsing is mostly done by one thread, and even then, dealing with the StringLine buffers isn't all that common.
As you said, if there is contention in this implementation, the second thread will return null. That falls back to a regular new T[length] allocation. Such a buffer can still be returned to the pool later on.

The reason I opted for this is that not all buffers will get released. In the case of StringLineGroup, only objects whose inlines are processed will release the buffer in the end. In other cases the Lines are still available on the AST, and adding a finalizer to reclaim them is way too expensive.

As such, only buffers from objects that have ProcessInlines set will get reclaimed. For other objects, buffers will only be returned when resizing; their final buffer will eventually be reclaimed by the GC instead of being returned to the pool.
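The rent-with-fallback pattern under discussion can be sketched as below. This is a minimal illustration, not Markdig's actual pool: the type name SimplePool, the fixed-size slot array, and the Return method are assumptions made for the sketch. The key point matches the thread: losing the CompareExchange race never blocks, it just costs one allocation, and that freshly allocated buffer can still be handed back to the pool later.

```csharp
using System.Threading;

// Minimal sketch (hypothetical names) of a pool whose Rent never blocks:
// on lock contention it simply allocates instead of spinning or waiting.
public sealed class SimplePool<T>
{
    private readonly T[][] _buffers = new T[4][]; // pooled buffers (null = empty slot)
    private int _lock; // 0 = free, 1 = taken

    public T[] Rent(int length)
    {
        T[] buffer = null;
        // Try to take the lock; if another thread holds it, skip the pool entirely.
        if (Interlocked.CompareExchange(ref _lock, 1, 0) == 0)
        {
            for (int i = 0; i < _buffers.Length; i++)
            {
                if (_buffers[i] != null && _buffers[i].Length >= length)
                {
                    buffer = _buffers[i];
                    _buffers[i] = null;
                    break;
                }
            }
            Volatile.Write(ref _lock, 0); // release
        }
        // Contention or empty pool: allocate. The new buffer can still be
        // returned via Return later, so losing the race costs one allocation.
        return buffer ?? new T[length];
    }

    public void Return(T[] buffer)
    {
        if (Interlocked.CompareExchange(ref _lock, 1, 0) == 0)
        {
            for (int i = 0; i < _buffers.Length; i++)
            {
                if (_buffers[i] == null) { _buffers[i] = buffer; break; }
            }
            Volatile.Write(ref _lock, 0);
        }
        // If contended or full, the buffer is simply dropped for the GC,
        // mirroring the "not all buffers get returned" point above.
    }
}
```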

Owner

Oh, ok, sorry, I didn't look further at the return SelectBucket(length)?.Rent() ?? new T[length]; line.
Yeah, that's ok then.

Collaborator Author

In other words, buffers aren't guaranteed to come from the actual bucket; they can also be allocated on the spot if the bucket is contended, at capacity, or of an incorrect size.
Over-allocating here is fine, since some buffers will never get returned anyway.

if (end - index + 1 < text.Length)
return false;

string sliceText = Text;
Owner

All the work you have done in this PR to move field accesses into single local variables is indeed helping the codegen a lot. Amazing job!
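The "field access to a single local variable" pattern praised here can be illustrated as follows. This is a hypothetical example (LineReader and both method names are invented for the sketch, not taken from the PR): the JIT generally cannot prove that a mutable field stays unchanged across loop iterations, so hoisting it into a local lets it keep the value in a register and elide bounds checks.

```csharp
// Hypothetical illustration of hoisting a field load into a local.
public sealed class LineReader
{
    private string _text;

    public LineReader(string text) => _text = text;

    public int CountSpacesSlow()
    {
        int count = 0;
        // _text is re-read from the field on every iteration; the JIT
        // cannot assume it is stable, which blocks bounds-check elimination.
        for (int i = 0; i < _text.Length; i++)
            if (_text[i] == ' ') count++;
        return count;
    }

    public int CountSpacesFast()
    {
        string text = _text; // single field load, hoisted before the loop
        int count = 0;
        // The local is provably invariant, so the JIT can keep it in a
        // register and remove the per-iteration bounds checks.
        for (int i = 0; i < text.Length; i++)
            if (text[i] == ' ') count++;
        return count;
    }
}
```

Both methods return the same result; only the generated code differs.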

@xoofx
Owner

xoofx commented Oct 15, 2019

Amazing work on perf @MihaZupan with this PR! 👍

@MihaZupan
Collaborator Author

MihaZupan commented Oct 15, 2019

I was a bit sad that there's no noticeable overall effect from #363, so I only included the simple optimizations on IsDigit and IsAlpha in this PR.

@xoofx xoofx merged commit 07a7714 into xoofx:master Oct 15, 2019