
profiling: don't carry kwargs around #179

Merged · 3 commits · Dec 17, 2020

Conversation

@matthiasdiener (Member) commented Dec 16, 2020

Fixes #177.
Approx. ~~10%~~ -2% overhead vs. profiling with the master branch.

@inducer (Contributor) left a comment:

LGTM, thanks for working on this. As to your comment regarding 10% overhead, what is the baseline? Does this represent an increase in overhead? If so, where's that coming from? I don't see any change that I would spontaneously blame for a cost increase.


@matthiasdiener (Member, Author) commented Dec 16, 2020

> LGTM, thanks for working on this. As to your comment regarding 10% overhead, what is the baseline? Does this represent an increase in overhead? If so, where's that coming from? I don't see any change that I would spontaneously blame for a cost increase.

The baseline is profiling without this PR, i.e., 10% additional overhead compared to the normal profiling overhead. I believe this is due to calling `_cache_kernel_stats()` (which transforms the kwargs into an args_tuple) on every kernel call. I have not found a way around that (or to cache the args_tuple).
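The per-call cost described above can be pictured with a small hypothetical sketch (the function and cache names below are illustrative, not the actual code in this repository): converting the kwargs of each invocation into a sorted, hashable tuple is exactly the work that ends up on the hot path of every kernel call.

```python
# Hypothetical sketch: cache per-kernel statistics keyed on the call's
# keyword arguments. The kwargs -> tuple conversion happens on every call,
# which is where the extra overhead comes from.

def kwargs_to_args_tuple(kwargs):
    """Turn a kwargs dict into a deterministic, hashable tuple.

    Sorting by argument name makes the key independent of keyword order.
    Assumes all argument values are themselves hashable.
    """
    return tuple(sorted(kwargs.items()))

_stats_cache = {}

def cache_kernel_stats(kernel_name, kwargs, compute_stats):
    """Look up (or compute and store) the stats for one kernel invocation."""
    # This key construction runs on *every* kernel call, cached or not.
    key = (kernel_name, kwargs_to_args_tuple(kwargs))
    if key not in _stats_cache:
        _stats_cache[key] = compute_stats()
    return _stats_cache[key]
```

Even on a cache hit, building the key still walks and sorts the kwargs, which matches the observation that the conversion cannot easily be avoided.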

As a side question: part of the overhead appears to come from the fact that `hash(program)` is quite slow - about 0.3 ms, two orders of magnitude slower than `hash(program.name)`. Is this expected?

@inducer (Contributor) commented Dec 16, 2020

> As a side question: part of the overhead appears to come from the fact that `hash(program)` is quite slow - about 0.3 ms, two orders of magnitude slower than `hash(program.name)`. Is this expected?

It's expected that it is slow the first time it happens. (A hash of the whole program needs to be computed, after all. Every expression, every variable definition, the whole thing.) It should however not stay slow. But I think we can make it fast the second and following times: inducer/loopy#185
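The fix in inducer/loopy#185 amounts to computing the expensive structural hash once and reusing it afterwards. A minimal sketch of that memoization pattern (the class and attribute names are illustrative, not loopy's actual API):

```python
# Sketch of a lazily memoized hash: the first hash() call does the expensive
# structural walk; subsequent calls return the cached value.

class ExpensiveToHash:
    def __init__(self, parts):
        self.parts = tuple(parts)   # must be treated as immutable
        self._hash_value = None     # filled in lazily on first hash()

    def _compute_hash(self):
        # Stand-in for walking every expression and variable definition.
        return hash(self.parts)

    def __hash__(self):
        if self._hash_value is None:
            self._hash_value = self._compute_hash()  # slow, first call only
        return self._hash_value
```

This is only safe because the object is effectively immutable; mutating `parts` after the first `hash()` call would leave a stale cached hash.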

@matthiasdiener (Member, Author) replied:

> > As a side question: part of the overhead appears to come from the fact that `hash(program)` is quite slow - about 0.3 ms, two orders of magnitude slower than `hash(program.name)`. Is this expected?
>
> It's expected that it is slow the first time it happens. (A hash of the whole program needs to be computed, after all. Every expression, every variable definition, the whole thing.) It should however not stay slow. But I think we can make it fast the second and following times: inducer/loopy#185

Nice, thanks! ;-)

@inducer (Contributor) commented Dec 17, 2020

inducer/loopy#185 is merged. Could you try again and report overhead?

@matthiasdiener (Member, Author) replied:

I ran these a few times but still cannot believe my lying eyes:

(Lassen/pocl/GPU; Vortex2D/450 elements/100 time steps)

| Wall time in seconds | Loopy without inducer/loopy#185 | Loopy with inducer/loopy#185 |
| --- | --- | --- |
| No profiling | 57.0s | 50.9s |
| Old profiling (current master) | 85.9s | 55.7s |
| New profiling (with #179) | 95.6s | 54.4s |
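For reference, the relative overheads implied by the table can be worked out with a few lines of Python (baseline: the "No profiling" row of the same column; the dictionary layout below is just for this calculation):

```python
# Wall times (seconds) from the table above.
times = {
    "without_185": {"none": 57.0, "old": 85.9, "new": 95.6},
    "with_185":    {"none": 50.9, "old": 55.7, "new": 54.4},
}

def overhead_pct(col, variant):
    """Profiling overhead in percent, relative to the no-profiling baseline."""
    base = times[col]["none"]
    return 100.0 * (times[col][variant] - base) / base

for col in times:
    for variant in ("old", "new"):
        print(f"{col:12s} {variant}: {overhead_pct(col, variant):+.1f}%")
```

With loopy#185 merged, the new profiling costs about 6.9% over no profiling, versus about 9.4% for the old profiling; relative to old profiling that is roughly a 2% reduction, consistent with the updated "-2%" figure in the PR description.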

@inducer (Contributor) commented Dec 17, 2020

Huh, nice. I'm glad we dug into the overhead a bit more. This looks good to merge then, especially given that it seems to have negative overhead now. :P

@matthiasdiener matthiasdiener merged commit c81cfab into master Dec 17, 2020
@matthiasdiener matthiasdiener deleted the profiling-kwargs branch December 17, 2020 04:37

Successfully merging this pull request may close these issues.

Memory-allocation related crash with profiling on GPU