`inspect` causing big performance regression (#2908)
This is a huge performance hit in some cases (e.g. repeated solving of the SPM is now 10x slower), so we should fix it immediately and make a 23.4.1 release. Maybe this helps: https://stackoverflow.com/questions/17407119/python-inspect-stack-is-slow |
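The linked Stack Overflow thread points at why `inspect.stack()` is so costly: it builds a `FrameInfo` (including source context) for every frame on the stack. A minimal sketch, not PyBaMM's actual code, contrasting it with the CPython-specific `sys._getframe`, which returns the caller's frame object directly with no file I/O:

```python
import inspect
import sys

def caller_via_inspect():
    # inspect.stack() builds a FrameInfo for every frame on the stack,
    # reading source files to fill in context - this is the slow path
    return inspect.stack()[1].function

def caller_via_getframe():
    # sys._getframe(1) returns the caller's frame object directly;
    # CPython-specific, but much cheaper than inspect.stack()
    return sys._getframe(1).f_code.co_name

def probe():
    # both approaches should agree on who the caller is
    return caller_via_inspect(), caller_via_getframe()

print(probe())  # → ('probe', 'probe')
```

Timing the two with `timeit` shows the `sys._getframe` variant is orders of magnitude faster, which matches the regression discussed here.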
Also weird that the benchmarks never caught this; are they working now, @Saransh-cpp? |
Since this is immediate I am happy to revert, though I will get back to this and try some other, faster implementation in the next week; I wasn't aware of such a potential slowdown 😅 Where can I run the above profiler? |
It's snakeviz; you can `pip install` it, then run it in an IPython shell. |
Potential solutions are: … |
The functionality this provides is not very critical compared to the performance cost, so I agree that just removing it and doing a quick patch release is the best way forward until we figure out a better approach. |
I agree with removing it altogether and releasing a patch for now. I am just finishing some end-of-semester assignments and submissions, so it will take me a few days to try these approaches. Please go ahead and reopen #1182 for the time being. |
The benchmarks were indeed failing on develop, but the benchmark workflow has since been fixed, and now they're passing! |
We should allow it to run on PRs from forks |
@tinosulzer, I tried to profile a faster implementation (it turns out that using … is faster):

```python
# In the ipython shell
import pybamm
%load_ext snakeviz
%snakeviz pybamm.lithium_ion.SPM()
```

Before: (profiler screenshot)

and the results for the new implementation:

Now: (profiler screenshot)

which seems to be at least 2000x faster if we compare cumulative times (the earlier version takes ~27% of the time taken to call SPM, which dropped down to 0.02% in the latter), assuming I am reading the results right. This can be sped up even more if we use … Could you perhaps write the code for a longer and more complex PyBaMM experiment that I can try to profile (such as the ones behind your screenshots above), just to see whether the results scale properly and do not cause any performance regressions? |
```python
import pybamm
%load_ext snakeviz
%snakeviz model = pybamm.lithium_ion.DFN()
sim = pybamm.Simulation(model)  # should be very fast, no need to profile
%snakeviz sim.solve([0, 3600])
%snakeviz sim.solve([0, 3600])
```

The final solve is repeated because several things get cached the first time. Ideally, the second time, almost all of the time should be spent in the solver itself. Also repeat the whole thing with the SPM model. It looks like using … |
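For reference, the same kind of profile can also be collected outside IPython with the stdlib `cProfile`, then opened in the snakeviz viewer with `snakeviz profile.out`. A self-contained sketch; the `workload` function here is a stand-in, since running the real `sim.solve([0, 3600])` requires a full PyBaMM install:

```python
import cProfile
import io
import pstats

def workload():
    # stand-in for sim.solve([0, 3600]); swap in the real call when profiling
    return sum(i * i for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# dump stats to a file that snakeviz can open ...
profiler.dump_stats("profile.out")

# ... and print the top entries by cumulative time to the console
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Sorting by cumulative time is what makes a hot spot like `inspect.stack` visible: it shows up high in the listing even though the time is spent in its callees.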
Thanks; for initialising the DFN model, the performance hit with the new implementation: (profiler screenshot) The second time: (profiler screenshot) Similar results arise for the SPM model; the performance hit: (profiler screenshot) which means registering citations for the SPM model impacts it just slightly more than doing so for the DFN model. Is this performance reasonable enough to open a PR, or should I try to improve upon it more? |
Sorry, I forgot to reply to the previous discussion. The speedup is great, but it's surprising that it's being called 14 times on the second solve. Can you find a way to cache the citation tags? |
Yes, sure, I'll give that a try and convert this to a draft. Would you suggest using …? |
Either is fine |
Just using … |
This is due to #2862
@agriyakhetarpal, can you look into a way to make this faster? Otherwise we should revert; it's not worth the functionality gain.