Generic inference regression from #5699 (#5738)
OK, so here is the diagnosis: I remember that I checked that our repos are clean with that PR, but later @JukkaL proposed to only use the invariant instances from the outer context, which I didn't check. The point is that tuple is covariant. It looks, however, like there is another bug this uncovered, because it should have inferred a different type here. I believe this may be a manifestation of #1317, which is another high-priority issue I always wanted to fix.
Yes, I am pretty sure this is a duplicate of #1317; here is a similar repro:
I can try looking into this more today and tomorrow, but I can't guarantee it is easy to fix.
Continuing to think about this and #1317, it seems there are actually two aspects here:
Fixing the first one could be easy, while the second point may be a bit tricky. Currently mypy gets almost there:

```python
from typing import TypeVar, Callable, List

T = TypeVar('T')
S = TypeVar('S')
U = TypeVar('U')

def dec(f: Callable[[S], T]) -> Callable[[S], List[T]]: ...
def f(x: U) -> U: ...

g = dec(f)
reveal_type(g)  # Revealed type is 'def (U`-1) -> builtins.list[U`-1]'
```

we just need to make it a valid generic callable instead of one with a leaked type variable. @JukkaL what do you think?
I agree that leaking type variables probably isn't hard to fix, once we figure out what the correct behavior would be. I can't say how hard it would be to infer generic types. This would need some research.
I'm not sure this needs to be ready for the release even if it technically is a regression -- IIUC the code that used to pass was questionable at best, right?
It is not incorrect, though. I will try a bit more to see if there is a simple fix; if I have nothing by the end of the day, I would propose to just not worry about this for now.
This also affects a common usage of attrs:
Unsurprisingly,
However
is ok.
I spent a couple of days thinking about and investigating this and related issues (in total I think we have around a couple dozen issues that are duplicates of, or related to, either this or #1317). Here are the results of the analysis.

**Preliminary note on terminology.** Just to avoid confusion about what is meant by "generic type", I want to recall that there are two different things called this way: type constructors and polymorphic types. It is easiest to illustrate this on an example of callback protocols:

```python
from typing import Protocol, TypeVar

T = TypeVar('T')
α = TypeVar('α')

class Constructor(Protocol[T]):
    def __call__(self, x: T) -> T: ...

class Poly(Protocol):
    def __call__(self, x: α) -> α: ...
```

Type constructors expect type arguments, while polymorphic types don't (I am not sure if these are the "official" terms, but let's use them). The type parameters of the latter are expected to be inferred when such a type appears as a callee; for the former they are given explicitly.

**Whole expression type inference.** Here I want to briefly describe how an "ideal" (see below for why it isn't actually ideal) type inference scheme/algorithm could look. Imagine we have a nested function call: collecting constraints for the whole expression at once yields a system of linear constraints (variable against variable), lower/upper bound constraints (variable against a concrete type), and non-linear constraints.
I tried to search the literature for "standard" algorithms for solving such systems, but didn't find anything satisfying, so here is how I see this. First we notice that the linear constraints plus the lower and upper bound constraints form a directed graph whose nodes are type variables and whose edges are the linear constraints, so bounds can be propagated along the edges. Then we meet all upper bounds and join all lower bounds for each node. If there are both a lower bound and an upper bound for a variable, we choose the lower bound (if the lower is not a subtype of the upper, it is an error); otherwise the upper bound; otherwise the variable is unconstrained (`<nothing>`). Next, we substitute the inferred values into the non-linear constraints, so that some of them can become linear, solve the new linear subset by constraint propagation, and so on until we see no progress.
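To make the propagation step concrete, here is a toy, runnable sketch of the idea (my illustration, not mypy's implementation): a nominal class hierarchy stands in for the type lattice, and non-linear constraints are omitted.

```python
# Toy sketch of bound propagation: type variables are nodes, a linear
# constraint T <: S is an edge, concrete types attach as lower/upper bounds.
# Lower bounds flow forward along edges, upper bounds flow backward.

def join(a, b):  # least common supertype (very simplified)
    if issubclass(b, a):
        return a
    if issubclass(a, b):
        return b
    return object

def meet(a, b):  # greatest common subtype (very simplified)
    if issubclass(a, b):
        return a
    if issubclass(b, a):
        return b
    raise TypeError(f"no meet of {a.__name__} and {b.__name__}")

def solve(variables, edges, lowers, uppers):
    changed = True
    while changed:  # fixpoint: propagate bounds along T <: S edges
        changed = False
        for t, s in edges:
            if t in lowers:  # a lower bound of T is also a lower bound of S
                new = join(lowers[t], lowers[s]) if s in lowers else lowers[t]
                if lowers.get(s) is not new:
                    lowers[s], changed = new, True
            if s in uppers:  # an upper bound of S is also an upper bound of T
                new = meet(uppers[s], uppers[t]) if t in uppers else uppers[s]
                if uppers.get(t) is not new:
                    uppers[t], changed = new, True
    solution = {}
    for v in variables:
        if v in lowers:
            if v in uppers and not issubclass(lowers[v], uppers[v]):
                raise TypeError(f"{v}: lower bound is not a subtype of upper bound")
            solution[v] = lowers[v]  # prefer the lower bound
        else:
            solution[v] = uppers.get(v)  # None plays the role of <nothing>
    return solution

# Constraints from something like: T <: S, int <: T, S <: object
print(solve(["T", "S"], [("T", "S")], {"T": int}, {"S": object}))
# -> {'T': <class 'int'>, 'S': <class 'int'>}
```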
The main downside of this inference scheme (and btw of the other schemes I have found in the literature) is that any type error made by a user will result in a cryptic message.

**Current state of type inference in mypy.** Here is a brief summary of the current logic of the "Hanoi Tower" type inference in mypy:

Note that this scheme is more limited than full expression inference, mostly because of the type erasure that happens in step 1 for an argument that is a function call itself, while performing step 4 of the caller. Basically we "cut" all linear constraints and are only left with the upper- and lower-bound kind of constraint (the first in the list above).

**Issues in current logic and implementation.** Unfortunately both the inference scheme itself and its implementation have certain issues. There are four big ones:
**Possible solution plan.** I think fixing these issues can mostly be done independently. However, since they are all related, it is better to do this in an organized manner. Here are some ideas that I would like to propose.

Issue 1: This would be solved by the whole expression inference scheme, but as I mentioned, the latter results in bad error messages, and IMO this is a very important argument against it. But I think with just a couple of improvements we can make the existing inference logic almost as powerful as whole expression inference without deteriorating error messages (and maybe even make them better). First, we need to collect and solve constraints from the outer context and from the first-pass arguments together (steps 2 and 5). Second, don't apply them (i.e. skip step 3 altogether); instead, record the gathered information about bounds in the erased types when erasing meta-variables (so that for a nested function call we can use the inferred constraints and the recorded ones together). Infer the types of second-pass arguments using this augmented erased context. And finally, infer the type arguments using both the external and the full internal context. Importantly, if there is a type inference error in this last step, then we fall back to a solution without the extra constraints from erased parts of the outer context, and then to one without the argument types, so that the user still gets an understandable error message.

Issue 2: In some sense this should be "just fixed", i.e. we should clean up the implementation and docs here. Importantly, if we go with the above solution, I think we should bias the selection to only put rock-solid arguments in the first pass, i.e. those that work well in an empty context.

Issue 3: The fix for this issue is straightforward -- type variables of polymorphic callables should participate in the inference as intermediate nodes for linear constraint propagation. (Caveat: we should be careful not to confuse type variables bound by an enclosed function with polymorphic variables; they currently can have the same name and id.) This is better illustrated on an example:

```python
from typing import Callable, List, TypeVar

X = TypeVar('X')
T = TypeVar('T')

def foo(x: Callable[[int], X]) -> List[X]:
    ...

def id(x: T) -> T:
    ...

y = foo(id)
```

Here the inferred type of `y` should be `List[int]`, with the polymorphic variable `T` acting as an intermediate node that links `int` to `X`.

Issue 4: This issue can be fixed by allowing "instantiation" with fewer variables instead of inferring a type with leaked variables:

```python
from typing import Callable, List, TypeVar

T = TypeVar('T')
S = TypeVar('S')
U = TypeVar('U')

def dec(f: Callable[[S], T]) -> Callable[[S], List[T]]:
    ...

def f(x: U) -> U:
    ...

g = dec(f)
```

Here the inferred type of `g` should itself be a generic callable. Note that if we instead had

```python
def dec(f: Callable[[S], T]) -> Dict[S, T]:
    ...
```

we would end up inferring a type with a free variable that cannot be represented as a valid generic type. I am not sure what is the best order to fix these; 1 and 2 are better fixed together and in total can take a week to fix, while 3 and 4 are smaller. Any thoughts or comments? cc @Michael0x2a
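For concreteness, here is a self-contained sketch of how the two examples above would behave under the proposed fixes (the revealed types in the comments are the expected results argued for in this comment, not mypy's output at the time):

```python
from typing import Callable, Dict, List, TypeVar

X = TypeVar('X')
T = TypeVar('T')
S = TypeVar('S')
U = TypeVar('U')

def foo(x: Callable[[int], X]) -> List[X]: ...
def ident(x: T) -> T: ...

y = foo(ident)  # Issue 3: T links int to X, so X = int
reveal_type(y)  # expected: builtins.list[builtins.int]

def dec(f: Callable[[S], T]) -> Callable[[S], List[T]]: ...
def f(x: U) -> U: ...

g = dec(f)      # Issue 4: instantiate with fewer variables, re-bind the rest
reveal_type(g)  # expected: roughly def [S] (S) -> builtins.list[S]

def dec2(f: Callable[[S], T]) -> Dict[S, T]: ...
d = dec2(f)     # would need "forall U. Dict[U, U]", which is not representable
```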
Hmm, I wonder if it might be possible to improve the error handling of the "whole expression inference" algorithm. Basically, it seems to me that there are only four ways the whole expression inference algorithm can fail to produce a valid mapping of nodes to types:
I think we might be able to handle cases 1 through 3 by making the following changes:
I think the only case where it's difficult to construct a good error message is case 4 (the graph is not a DAG). But tbh I think such cases are pretty rare, maybe even impossible, so just failing with a bland "Could not infer type" error might be good enough in this case. Caveats: I haven't really thought about this problem too deeply, so there could be some flaws with the idea that I'm overlooking. And even if this modified algorithm does work, I think it could still be reasonable to go with your proposal -- iterative changes to a known algorithm are generally less risky than a complete rewrite, etc etc...
@Michael0x2a Thanks for the ideas! Here are a few comments:
Such a constraint can't get into the list initially; I didn't clarify this, sorry. I think we should "dig through" the type components until we get a bare variable on either side of the constraint. IOW I don't have problems with how constraints are found currently (well, except for inferring constraints against unions, but that is a totally different story).
This is not actually an error. It is totally fine to have cycles, i.e. to have a DG that is not a DAG. Moreover, this is a quite typical situation: invariant containers produce tiny loops like `T <: S` together with `S <: T`. To summarize, I think the problems/situations 2-4 in your list are non-issues, and problem 1 is really the root of all evil. I am however not sure I understand how you propose to use your solution to give a better error message. Could you please explain it on an example?
Oh, so given such a constraint we keep digging through the type components until a bare variable surfaces -- that makes sense. What happens, though, if we have a constraint that can never be satisfied? Would that still produce constraints?
Sure! Assuming the type signature of the callee is known, we start by collecting constraints for the whole expression. At some point, these constraints get refined down to concrete bounds. After performing propagation, we end up with a mapping from type variables to types.

Once the caller receives this mapping, it realizes there must be a type error of some kind, since the mapping contains an unsatisfiable entry. In this case, when we try applying these two mappings, we get an error localized to the offending argument.

This error is actually different from the error mypy produces today. I think both errors are fine tbh, but I don't think it'll be too hard to find a way to continue biasing towards "pushing up" errors to the return type. For example, we could maybe try running this "solve constraints then apply" step twice -- first without including the return type context as a constraint, then, if that doesn't return any errors, trying again with it. (Or alternatively, we could apply these two mappings without doing any error handling to produce the callee and caller signatures.)
No, this will infer no constraints, because such a constraint can never be satisfied.
Thanks for the explanation. But it looks like this will not work well; I should have been more explicit here again. The point is that a list literal is internally translated to a generic function call (see my explanation for how errors would be generated in the compromise inference scheme), so that a display like `['a', 'b', 'c']` behaves as a call:

```python
from typing import List, TypeVar

T = TypeVar('T')

def _list(*args: T) -> List[T]: ...

_list('a', 'b', 'c')
```

This is actually why the two behave identically. Taking this into account and renaming the type variable of `_list`, we are back to the same situation as before.
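Checking that equivalence directly (a sketch; `_list` is the hypothetical helper from this comment, not a real mypy function; the revealed types may carry inferred-type markers in older mypy versions):

```python
from typing import List, TypeVar

T = TypeVar('T')

def _list(*args: T) -> List[T]: ...

reveal_type(['a', 'b', 'c'])       # builtins.list[builtins.str]
reveal_type(_list('a', 'b', 'c'))  # the same: builtins.list[builtins.str]
```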
@ilevkivskyi I like your incremental approach to improving type inference. The general scheme looks good, though I have a few questions. It would be good to have a few more worked examples, such as some already-supported tricky cases (such as cases involving lambdas and/or overloads), and open bugs that this would fix. Beyond the difficulty of generating good error messages, the "whole expression inference" approach also seems like much more work -- especially more work needed for any incremental improvement, as it seems basically all-or-nothing to me.
Does this mean using the new
How would this deal with cases where there is both a generic function call (say,
We briefly discussed Jukka's questions privately; here is a summary.
We can re-use the existing
It looks like we might need to introduce priorities that will determine the order of processing arguments, to make this kind of thing robust.
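A hypothetical sketch of what such argument-processing priorities could look like (the names and buckets are illustrative, not mypy internals):

```python
from enum import IntEnum
from typing import Dict, List

class ArgPriority(IntEnum):
    PLAIN = 0         # literals and variables: infer fine with no context
    GENERIC_CALL = 1  # nested calls to generic functions
    OVERLOAD = 2      # overloaded callees: need some context to pick an item
    LAMBDA = 3        # need parameter types from context, so process last

def plan_passes(priorities: List[ArgPriority]) -> List[List[int]]:
    """Group argument indices into inference passes, lowest priority first."""
    buckets: Dict[ArgPriority, List[int]] = {}
    for i, p in enumerate(priorities):
        buckets.setdefault(p, []).append(i)
    return [buckets[p] for p in sorted(buckets)]

# A call like f(42, lambda: x) would process the literal first and the
# lambda last, in a separate pass:
print(plan_passes([ArgPriority.PLAIN, ArgPriority.LAMBDA]))  # [[0], [1]]
```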
An example of questionable behaviour that stems from the current logic of splitting arguments into two passes (appeared in a discussion on typeshed):

```python
from typing import Callable, TypeVar

T = TypeVar('T')

def g(x: T, y: Callable[[], T]) -> None: ...
def not_a_lambda() -> str: ...

g(42, not_a_lambda)  # this currently fails, but should infer T = object
```
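The expected result follows from joining the two lower bounds on `T`; in the constraint notation used earlier in the thread (my sketch, not mypy output):

```
42            =>  int <: T    (lower bound from the first argument)
not_a_lambda  =>  str <: T    (Callable[[], str] vs Callable[[], T])
join(int, str) = object       =>  T = object
```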
We sometimes get this from mypy:

```
Traceback (most recent call last):
  [...]
  File "mypy/main.py", line 89, in main
  File "mypy/build.py", line 166, in build
  File "mypy/build.py", line 235, in _build
  File "mypy/build.py", line 2611, in dispatch
  File "mypy/build.py", line 2911, in process_graph
  File "mypy/build.py", line 2989, in process_fresh_modules
  File "mypy/build.py", line 1919, in fix_cross_refs
  File "mypy/fixup.py", line 25, in fixup_module
  File "mypy/fixup.py", line 89, in visit_symbol_table
  File "mypy/fixup.py", line 45, in visit_type_info
  File "mypy/fixup.py", line 91, in visit_symbol_table
  File "mypy/nodes.py", line 874, in accept
  File "mypy/fixup.py", line 136, in visit_var
  File "mypy/types.py", line 234, in accept
    cr_running = gi_running
  File "mypy/fixup.py", line 169, in visit_type_alias_type
  File "mypy/fixup.py", line 276, in lookup_qualified_alias
  File "mypy/fixup.py", line 290, in lookup_qualified
  File "mypy/fixup.py", line 299, in lookup_qualified_stnode
  File "mypy/lookup.py", line 47, in lookup_fully_qualified
AssertionError: Cannot find component 'pending_download_type' for 'qutebrowser.browser.webkit.mhtml._Downloader.pending_download_type'
```

This seems to help... See: python/mypy#5738 python/mypy#7281
I'm not sure, but I suspect this is another instance of this issue, which just bit me today:

```python
from typing import Optional, TypeVar, Callable, Tuple

A = TypeVar('A')
B = TypeVar('B')
L = TypeVar('L')
R = TypeVar('R')
Ret = TypeVar('Ret')

def decorate(f):
    # type: (Callable[[L, R], Ret]) -> Callable[[L, R], Ret]
    ''' in real life, the type of the decorator is a little more involved, so I can't just have it be `(T) -> T` '''
    assert False

def function(l, foo):
    # type: (Optional[A], Optional[B]) -> Optional[Tuple[A, B]]
    assert False

x: Optional[Tuple[str, int]] = function('s', 1)
y: Optional[Tuple[str, int]] = decorate(function)('s', 1)

reveal_type(function)
reveal_type(decorate)
reveal_type(decorate(function))
reveal_type(x)
reveal_type(y)
```

Typechecking this file gives:
I assume this is also caused by this issue:

```python
from typing import List, Optional
from dataclasses import field, dataclass

@dataclass
class Foo:
    x: Optional[List[str]] = field(default_factory=list)
```

which gives a spurious error on the assignment.
For reference, I'm encountering this exact scenario too, with Tuple.
The way mypy does type inference also breaks calling a generic function on the right side of `&`:

```python
from typing import TypeVar

T = TypeVar('T')

def identity(x: set[T]) -> set[T]:
    return x

x = {0, 1}

# error: Argument 1 to "identity" has incompatible type "Set[int]"; expected "Set[object]"  [arg-type]
print({0} & identity(x))
```
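My reading (an assumption, not stated in the thread): the failure comes from the outer context, since the expected type pushed down onto `identity(x)` by `set.__and__` is roughly `AbstractSet[object]`, forcing `T = object`. If so, keeping the generic call out of that context sidesteps the error:

```python
from typing import TypeVar

T = TypeVar('T')

def identity(x: set[T]) -> set[T]:
    return x

x = {0, 1}
y = identity(x)  # inferred on its own: T = int, so y has type set[int]
print({0} & y)   # no error: set[int] is acceptable to set.__and__
```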
Fixes #1317
Fixes #5738
Fixes #12919
(also fixes a `FIX` comment that is more than 10 years old according to git blame)

Note: although this PR fixes most typical use-cases for type inference against generic functions, it is intentionally incomplete, and it is made in a way to limit implications to a small scope. This PR has essentially three components (better infer, better solve, better apply -- all three are needed for this MVP to work):

* A "tiny" change to `constraints.py`: if the actual function is generic, we unify it with the template before inferring constraints. This prevents leaking generic type variables of the actual in the solutions (which makes no sense), but also introduces a new kind of constraint, `T <: F[S]`, where the type variables we solve for appear in the target type. These are much harder to solve, but this is also a great opportunity to play with them to prepare for single-bin inference (if we switch to it in some form later). Note unifying is not the best solution, but a good first approximation (see below on what the best solution is).
* A new, more sophisticated constraint solver in `solve.py`. The full algorithm is outlined in the docstring for `solve_non_linear()`. It looks like it should be able to solve arbitrary constraints that don't (indirectly) contain "F-bounded" things like `T <: list[T]`. Very briefly, the idea is to compute the transitive closure, then organize constraints by topologically sorted SCCs.
* Polymorphic type argument application in `checkexpr.py`. In cases where the solver identifies that there are free variables (e.g. we have just one constraint `S <: list[T]`, so `T` is free, and the solution for `S` is `list[T]`), it will apply the solutions while creating new generic functions. For example, if we have a function `def [S, T] (fn: Callable[[S], T]) -> Callable[[S], T]` applied to a function `def [U] (x: U) -> U`, this will result in `def [T] (T) -> T` as the return.

I want to put here some thoughts on the last ingredient, since it may be mysterious, but now it seems to me it is actually a very well defined procedure. The key point here is thinking about generic functions as infinite intersections or infinite overloads. Reducing these infinite overloads/intersections to finite ones makes it easy to understand what is actually going on. For example, imagine we live in a world with just two types, `int` and `str`. Now we have two functions:

```python
T = TypeVar("T")
S = TypeVar("S")
U = TypeVar("U")

def dec(fn: Callable[[T], S]) -> Callable[[T], S]: ...
def id(x: U) -> U: ...
```

The first one can be seen as an overload over

```
((int) -> int) -> ((int) -> int)  # 1
((int) -> str) -> ((int) -> str)  # 2
((str) -> int) -> ((str) -> int)  # 3
((str) -> str) -> ((str) -> str)  # 4
```

and the second as an overload over

```
(int) -> int
(str) -> str
```

Now what happens when I apply `dec(id)`? We need to choose an overload that matches the argument (this is what we call type inference), but here is a trick: in this case two overloads of `dec` match the argument type. So (and btw I think we are missing this for real overloads) we construct a new overload that returns the intersection of the matching overloads `# 1` and `# 4`. If we generalize this intuition to the general case, the inference is the selection of an (infinite) parametrized subset among the bigger parametrized set of intersecting types. The only question is whether the resulting infinite intersection is representable in our type system. For example, `forall T. dict[T, T]` can make sense but is not representable, while `forall T. (T) -> T` is a well defined type. And finally, there is a very easy way to find out whether a type is representable or not: we are already doing this during semantic analysis. I use the same logic (which I used to view as ad hoc because of the lack of good syntax for callables) to bind type variables in the inferred type.

OK, so here is the list of missing features, and some comments on them:

1. Instead of unifying the actual with the template we should include the actual's variables in the variable set we solve for, as explained in #5738 (comment). Note however, this will work only together with the next item.
2. We need to (iteratively) infer secondary constraints after linear propagation, e.g. `Sequence[T] <: S <: Sequence[U] => T <: U`.
3. Support `ParamSpec` (and probably `TypeVarTuple`). Current support for applying callables with `ParamSpec` to generics is hacky, and kind of a dead end. Although `(Callable[P, T]) -> Callable[P, List[T]]` works when applied to `id`, even a slight variation like `(Callable[P, List[T]]) -> Callable[P, T]` fails. I think it needs to be re-worked in the framework I propose (the tests I added are just to be sure I don't break existing code).
4. Support actual types that are generic in type variables with upper bounds or values (likely we just need to be careful when propagating constraints and choosing the free variable within an SCC).
5. Add backtracking for the upper/lower bound choice. In general, in the current "Hanoi Tower" inference scheme it is very hard to backtrack, but for this specific choice in the new solver it should be totally possible to switch from the lower to the upper bound on a previous step, if we found no solution (or `<nothing>`/`object`).
6. After we polish it, we can use the new solver in more situations, e.g. for return type context, and for unification during callable subtyping.
7. Long term we may want to allow instances to bind type variables, at least for things like `LRUCache[[x: T], T]`. Btw note that I apply forced expansion to type aliases and callback protocols, since I can't transform e.g. `A = Callable[[T], T]` into a generic callable without getting a proper type.
8. We need to figure out a solution for scenarios where non-linear targets with free variables and constant targets mix without secondary constraints, like `T <: List[int], T <: List[S]`.

I am planning to address at least the majority of the above items, but I think we should move slowly, since in my experience type inference is a really fragile topic with hard-to-predict, long-reaching consequences. Please play with this PR if you want to and have time, and please suggest tests to add.
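Putting the three ingredients together on the PR's running example (the revealed type in the comment is the one stated in the PR description for the new behavior, not older mypy output):

```python
from typing import Callable, TypeVar

T = TypeVar("T")
S = TypeVar("S")
U = TypeVar("U")

def dec(fn: Callable[[T], S]) -> Callable[[T], S]: ...
def id(x: U) -> U: ...

g = dec(id)
reveal_type(g)  # with this PR: "def [T] (T) -> T"
```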
Since #5699, mypy now rejects the following program (minimized from a few failures at dropbox):
with the error message

```
Incompatible types in assignment (expression has type "Tuple[_T_co, ...]", variable has type "Tuple[str, ...]")
```
@ilevkivskyi thoughts?
To be honest I'm not totally sure how we used to get this right, since my understanding was that we don't do the sort of instantiation of all type variables and unification that I would expect to be necessary?