Improve inference for context sensitive functions in object and array literal arguments #48538
Conversation
@typescript-bot test this
Heya @ahejlsberg, I've started to run the parallelized Definitely Typed test suite on this PR at 900ef91. You can monitor the build here.
Heya @ahejlsberg, I've started to run the abridged perf test suite on this PR at 900ef91. You can monitor the build here. Update: The results are in!
Heya @ahejlsberg, I've started to run the diff-based community code test suite on this PR at 900ef91. You can monitor the build here. Update: The results are in!
Heya @ahejlsberg, I've started to run the extended test suite on this PR at 900ef91. You can monitor the build here.
@ahejlsberg Here they are: Comparison Report - main..48538 (perf report sections: System, Hosts, Scenarios, Developer Information)
@ahejlsberg This one looks like a very nice improvement! Could you pack this? I would love to test this out easily with XState to see if it fixes some of our long-standing issues :)
@typescript-bot pack this
Heya @RyanCavanaugh, I've started to run the tarball bundle task on this PR at 900ef91. You can monitor the build here.
Hey @RyanCavanaugh, I've packed this into an installable tgz. You can install it for testing by referencing it in your … and then running … There is also a playground for this build and an npm module you can use via …
```diff
@@ -29768,7 +29817,7 @@ namespace ts {
             }

             const thisType = getThisTypeOfSignature(signature);
-            if (thisType) {
+            if (thisType && couldContainTypeVariables(thisType)) {
```
Are these new guards on `couldContainTypeVariables` necessary for this to work, or are they an optimization?
Purely an optimization.
Okay, I think I understand a little better. The current behavior in inference is that as soon as we find a contextually sensitive expression which grabs a type parameter for a contextual type, we would typically need to "fix" the inferences made so far in place. Those inferences come from inferring the entire given argument's type to an entire expected parameter's type. That has too large of a granularity. This PR grabs contextually sensitive functions/methods encountered in array and object arguments as we're checking them and stashes them away. Then, if we ever run into a case where we're about to fix inferences for a type parameter, we try to make inferences from the stashed expressions to their (uninstantiated) contextual type, just in case we can learn a bit more at a higher granularity. Did I get that right?
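To make the "fixing" step concrete: this is a minimal sketch (the generic `f` below is hypothetical, not code from the PR) of the point where the checker must fix a type parameter because a context sensitive arrow needs a contextual type for its parameter.

```typescript
// Hypothetical generic function; `use` is a context sensitive argument.
function f<T>(value: T, use: (x: T) => void): void {
  use(value);
}

// Checking `x => x.toFixed(1)` requires a contextual type for `x`, so T must
// be fixed using the inferences made so far -- here T = number, inferred from
// the context insensitive argument `42`.
f(42, x => console.log(x.toFixed(1)));
```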
Is there a reason we need to stash them and can't just infer immediately? I assume you end up making worse inferences around
```ts
            getContextualTypeForObjectLiteralMethod(node as MethodDeclaration, ContextFlags.NoConstraints) :
            getContextualType(node, ContextFlags.NoConstraints);
        if (contextualType) {
            inferTypes(context.inferences, type, contextualType);
```
So this makes me think that we'll be redoing a lot of inferences - for a given inference set, we'll infer from each of the inner properties to their matching contextual type, then we'll do the outer inference for the object as a whole, which, in so doing, will redo these inferences again (and hopefully come to the same results). It's making me think that maybe we should have some kind of visited list for inferences in a given inference context - something that says "we've already inferred from A to B in this context, no need to do it again" - essentially `invokeOnce`, but shared across the entire inference pass.
So this makes me think that we'll be redoing a lot of inferences - for a given inference set, we'll infer from each of the inner properties to their matching contextual type, then we'll do the outer inference for the object as a whole, which, in so doing, will redo these inferences again
I think (see my comment here) that that's why this is only done when we're about to fix (which are cases that are broken today). So in the cases which don't work today, I think we'll effectively do inference twice in the worst case (cases where you need to fix a type parameter on every context-sensitive expression - should be rare). Otherwise, it should be the same amount of inference as before; the only new work is collecting the context-sensitive inner expressions.
I mean, storing these locations and doing the inference later will always result in inference occurring - that inner inference occurring simply changes the order of inferences (and allows some inferences to be locked in before sibling member inferences begin). We'll still ultimately back out to the expression as a whole and do inferences on its type, which will now sometimes have more inferences locked in from these inner inferences, and still do inference on that type and all its members, which includes all these inferences that we've already done.
There will be some amount of redoing of inferences, but deferring the work at least ensures we only do it if there's a chance it's needed and the whole scenario is generally pretty rare. I'm not too concerned with the repetition that occurs when we infer from the entire expression, nor am I too concerned with the ordering (we already have out-of-sequence ordering because we first infer from context insensitive arguments).
```diff
@@ -21746,6 +21749,37 @@ namespace ts {
            }
        }

+        function addIntraExpressionInferenceSite(context: InferenceContext, node: Expression | MethodDeclaration, type: Type) {
+            (context.intraExpressionInferenceSites ??= []).push({ node, type });
```
If there are multiple nested active inference contexts (eg, `foo(..., bar(..., baz(..., {...})))`), might we not want to add the expression to all inference contexts? This way, if a type parameter flows from `foo` into `bar` and then `baz`, but doesn't flow back out on construction (eg, because it's conditional'd away), we could still pick up the inner inference site.
No, context sensitive arguments to an inner call might cause an outer type parameter to become fixed, but inferences from the inner argument expressions only occur through the return type of the inner argument expressions.
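A sketch of the nesting being discussed (the `outer`/`inner` functions below are illustrative, not from the PR): the outer type parameter is inferred only through the inner call's return type, so expressions inside the inner call are not inference sites for the outer context.

```typescript
// Hypothetical nested generic calls.
function inner<U>(x: U): U {
  return x;
}

function outer<T>(arg: { make: () => T; use: (x: T) => void }): void {
  arg.use(arg.make());
}

// T for `outer` is inferred from the *return type* of `inner(42)` (number);
// the argument expressions inside the inner call don't feed `outer`'s
// inference context directly.
outer({ make: () => inner(42), use: x => console.log(x.toFixed()) });
```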
I think we should get this in for 4.7.
The inference limit of TypeScript has been improved in TypeScript 4.7. MR: microsoft/TypeScript#48538
When contextually typing parameters of arrow functions, function expressions, and object literal methods in generic function argument lists, we infer types from context insensitive function arguments anywhere in the argument list and from context sensitive function arguments in preceding positions in the argument list. For example:
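The example code block did not survive extraction; the following is a reconstruction of the kind of call being described (the name `f1` and its signature are illustrative). The failures are compile-time inference failures, shown here as commented-out calls.

```typescript
// Illustrative reconstruction: the context sensitive consumer comes *before*
// the producer in the argument list.
function f1<T>(consume: (x: T) => void, produce: (n: number) => T): void {
  consume(produce(0));
}

f1(x => console.log(x), () => 42);          // ok: `() => 42` is context insensitive
f1(x => console.log(x), (n: number) => n);  // ok: fully annotated, context insensitive
// The next two fail to check: T is fixed while checking `consume` before any
// inference can flow from the context sensitive producer to its right.
// f1(x => console.log(x.toFixed()), n => n);
// f1(x => console.log(x.toFixed()), function () { return 42 });
```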
Above, `() => 42` and `(n: number) => n` are context insensitive because they have no contextually typed parameters, but `n => n` and `function() { return 42 }` are context sensitive because they have at least one contextually typed parameter (in the function expression case, the implicit `this` parameter is contextually typed). The errors in the last two calls occur because inferred type information only flows from left to right between context sensitive arguments. This is a long-standing limitation of our inference algorithm, and one that isn't likely to change.

However, in this PR we are removing another long-standing, and arguably more confusing, limitation. So far it hasn't been possible to flow inferences between context sensitive functions in object and array literals:
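The `f3` example referenced below was lost in extraction; this is a reconstruction of its likely shape (the exact declarations are an assumption, but the pattern -- five calls of which only the first two checked before this PR -- follows the surrounding text).

```typescript
// Reconstruction: a producer and a consumer as *properties* of one object argument.
function f3<T>(arg: { produce: (n: number) => T; consume: (x: T) => void }): void {
  arg.consume(arg.produce(0));
}

f3({ produce: () => 42, consume: x => console.log(x.toFixed()) });                  // checked before this PR
f3({ produce: (n: number) => n, consume: x => console.log(x.toFixed()) });          // checked before this PR
f3({ produce: n => n, consume: x => console.log(x.toFixed()) });                    // error before, ok now
f3({ produce: function () { return 42 }, consume: x => console.log(x.toFixed()) }); // error before, ok now
f3({ produce() { return 42 }, consume: x => console.log(x.toFixed()) });            // error before, ok now
```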
With this PR, all of the above calls to `f3` successfully check, where previously only the first two would. Effectively, we now have the same left-to-right rules for information flow between context sensitive contextually typed functions regardless of whether the functions occur as discrete arguments or as properties in object or array literals within the same argument.

Fixes #25092.
Fixes #38623.
Fixes #38845.
Fixes #38872.
Fixes #41712.
Fixes #47599.
Fixes #48279.
Fixes #48466.
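The same left-to-right flow applies inside array literals. A hypothetical tuple-typed variant (`f4` is not from the PR, just a sketch of the array-literal case the title mentions):

```typescript
// Context sensitive functions as *elements* of one array (tuple) argument.
function f4<T>(arg: [(n: number) => T, (x: T) => void]): void {
  arg[1](arg[0](0));
}

// The producer element's inference now flows to the consumer element
// within the same array literal (error before this PR, ok now).
f4([n => n, x => console.log(x.toFixed())]);
```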