Inline bytecode caches #90997
...as discussed in faster-cpython/ideas#263. My plan is for this initial PR to lay the groundwork, then to work on porting over the existing opcode caches one-by-one. Once that's done, we can clean up lots of the "old" machinery.
We need to decide what to do about dis. I don't think we should have a […]. Instead we should have a […]. That way, we can present the cache information as extra data on the quickened form, rather than junk instructions.
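For context on how this question was eventually resolved in the released interpreter: the dis module grew a `show_caches` flag in Python 3.11 that presents cache entries as extra rows instead of hiding them. A minimal sketch (the `add` function is just an arbitrary example):

```python
import sys
import dis

def add(a, b):
    return a + b

# By default, dis hides the inline CACHE entries that follow
# specializable instructions.
dis.dis(add)

# Since Python 3.11, show_caches=True displays the CACHE slots as
# extra rows instead of hiding them.
if sys.version_info >= (3, 11):
    dis.dis(add, show_caches=True)
```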
Making this a release blocker, as we really cannot leave this half finished for the release. Shouldn't be a problem, as we'll have it done in a week or so.
Is the UNPACK_SEQUENCE slowdown already filed? I hit the gap at 424ecab on Windows.
This is marked as a release blocker so I am holding the alpha release on this. Is there anything we can do to unblock this issue?
Is there some way to mark something as not blocking an alpha release, but blocking a beta release? Everything is working at the moment, but not so efficiently.
We should be done with this by early next week, if you can wait.
"Deferred blocker"
Good to know, although "deferred blocker" is somewhat vague about when it is deferred until. OOI, does it become a "blocker" again once you've done the alpha release, or what stops it being deferred past the beta or even the final release?
It's not an UNPACK_SEQUENCE slowdown, it's a silly benchmark ;) What I *think* is happening is that the inline cache takes the size of the function (in code units) from about 4800 to about 5200, crossing our threshold for quickening (currently set to 5000). When we quicken in-place, there will be no need for a threshold and this issue will disappear. We should probably up the threshold for now, just to keep the charts looking good.
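The quantity being compared here is easy to inspect from Python: code objects store bytecode as 2-byte code units, so a function's size in code units is `len(co_code) // 2`. A small sketch (`QUICKEN_THRESHOLD` just restates the ~5000 figure quoted above; it is an internal tuning knob, not a public API):

```python
def code_units(func):
    """Size of a function's bytecode in 2-byte code units."""
    return len(func.__code__.co_code) // 2

# The ~5000 figure quoted in the comment above; not a public API.
QUICKEN_THRESHOLD = 5000

def tiny():
    return 1

# A tiny function is only a handful of code units, far below the threshold.
print(code_units(tiny), code_units(tiny) < QUICKEN_THRESHOLD)
```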
Check out the devguide: https://devguide.python.org/triaging/#priority
But in any case, I normally promote them to release blockers by hand and all of them become full blockers in the beta. |
* Move CACHE handling into _unpack_opargs
* Remove auto-added import
* blurb add
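The idea behind moving CACHE handling into `_unpack_opargs` can be sketched from the outside: when walking raw bytecode two bytes at a time, CACHE slots should be skipped rather than reported as real instructions. A simplified illustration, not the actual dis internals:

```python
import dis

# CACHE only exists as an opcode from Python 3.11 onward.
CACHE = dis.opmap.get("CACHE")

def opnames(func):
    """Yield opcode names from raw bytecode, skipping inline CACHE slots."""
    code = func.__code__.co_code
    for i in range(0, len(code), 2):  # one (opcode, oparg) pair per 2 bytes
        op = code[i]
        if op == CACHE:
            continue  # inline cache slot, not a real instruction
        yield dis.opname[op]

def add(a, b):
    return a + b

print(list(opnames(add)))
```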
(cherry picked from commit 5f3c9fd) Co-authored-by: Brandt Bucher <[email protected]>
Is there anything left to do here?
No. This is done.
Related PRs (titles truncated during migration):
* BINARY_OP #31543
* LOAD_GLOBAL … inline. #31575
* UNPACK_SEQUENCE #31591
* BINARY_SUBSCR. #31618
* COMPARE_OP #31622
* COMPARE_OP #31663
* GET_AWAITABLE #31664
* BINARY_OP's handling of inline caches #31671
* STORE_SUBSCR #31742
* _Py_SET_OPCODE macro #31780
* throw() #31968
* bytes object for _co_code_adaptive #32205
Linked PRs
* LOAD_GLOBAL caches #102569