Proposal: infer obvious parameters and return types #10149
Comments
I think I want the same thing, but with use in The main reason I won't have untyped defs is due to return values. I'd like them to be inferred, like what would happen if I specify arguments and not a return. I have some functions with no arguments, but with a return, and there's no way to tell mypy that it is typed now other than specifying the return value.
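To make the point in the comment above concrete, here is a minimal sketch (illustrative names, not from the thread): with current mypy, a zero-argument function only counts as typed once its return type is annotated; otherwise it is treated as an untyped def.

```python
# Minimal sketch (illustrative names; Python 3.9+).
# With mypy's default settings an unannotated def is "untyped": its body
# is not checked and callers see Any for the result.

def load_config():           # no parameters, yet still an untyped def
    return {"debug": True}

# The only way today to mark a zero-argument function as typed is to
# annotate its return type explicitly:
def load_config_typed() -> dict[str, bool]:
    return {"debug": True}

reveal_type(load_config())        # mypy: Any (untyped def)
reveal_type(load_config_typed())  # mypy: dict[str, bool]
```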
@JukkaL Yes, I believe that would be sufficient. It'd handle at least most of the situations I want. Though as noted there, having an explicit "inferred" return might also be helpful when integrating with existing code, but I will venture to use check-untyped-defs for all my code anyway.
@JukkaL, here are some additional ideas that might be worth considering... In pyright, we always attempt to infer the return type of functions even if the parameters and return types are unannotated. We do this by substituting an "Unknown" type (a special alias of "Any") for the unannotated input parameters. More than 50% of the time, we're able to infer a known return type. In cases where the inferred return type is still unknown, we apply a technique we call "call-site type inference", whereby we re-analyze the called function using the argument types passed to it by the caller. We apply this analysis up to three levels deep. We find that we reach diminishing "returns" (pun intended) if we go any deeper. This technique allows us to infer useful return types a majority of the time. Here's an example:

```python
def add(val1, val2):
    if val2 is None:
        return val1
    else:
        return val1 + val2

w = add(3j, None)
reveal_type(w)  # complex

x = add("1", "2")
reveal_type(x)  # str

y = add(1, 2)
reveal_type(y)  # int

z = add([1], [2])
reveal_type(z)  # List[int]
```

Pyright also performs a special trick for unannotated `__init__` methods. Here's an example:

```python
class Foo:
    def __init__(self, val1, val2):
        self._val1 = val1
        self._val2 = val2

    @property
    def val1(self):
        return self._val1

    @property
    def val2(self):
        return self._val2

foo_int_str = Foo(1, "1")
reveal_type(foo_int_str.val1)  # int
reveal_type(foo_int_str.val2)  # str

foo_list_complex = Foo([2], 3j)
reveal_type(foo_list_complex.val1)  # list[int]
reveal_type(foo_list_complex.val2)  # complex
```

Mypy currently outputs `Any` in all of these cases.
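To spell out what "outputs `Any`" means here, a brief sketch of current mypy behaviour on the first example above (default settings; diagnostic wording paraphrased): because `add()` is unannotated, its parameters and return type are implicitly `Any`, so no call-site inference happens.

```python
# Sketch of what mypy reports today for the unannotated add() function
# (default settings). The parameters and return type are implicitly Any,
# so the result of every call is Any.
def add(val1, val2):
    if val2 is None:
        return val1
    return val1 + val2

w = add(3j, None)
reveal_type(w)  # mypy: Any  (pyright: complex)
```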
@erictraut Some of the early prototypes of a predecessor of mypy did implement clever type inference similar to what you are describing. I gave up on the idea, since often when type inference went wrong (and it's never 100% reliable), the bad types could propagate far away from the original code, making it hard to reason about what was going on. It's possible that with more effort this can be made to work without too many false positives, but it still feels like this is too magical for mypy.

One of the primary objectives of mypy is to make code easier to understand, and inferring more types may actually work against this goal. If mypy could find, say, 50% of possible errors in unannotated code, I see a risk that many users wouldn't bother to annotate their code, since mypy would work with unannotated code "well enough". This way they would miss out on arguably one of the main benefits of static type checking -- making code easier to understand via type annotations. That's why mypy may never infer anything other than fairly obvious types in unannotated code, even if this would be technically fairly easy to support. The cost/benefit of adding more type annotations is simply too favorable.

Also, if we can infer 50% of types automatically and the others are implicitly Any/unknown, it will be hard to see which parts of a module are being type checked properly and which are not. I think that here "explicit is better than implicit".

If types can be used by editors to perform code completion etc., providing some inferred types for unannotated code seems much more valuable, however. This is not an important use case for mypy, though.
@JukkaL If mypy could infer return types, mypyc would benefit too: more code could be compiled and optimized without further modifications.
I see a lot of value in enabling (optional) type inference for unannotated functions: namely, helping with incremental typing of legacy codebases. If mypy, when passed

As a datapoint, in a codebase I am working on, we currently have many thousands of errors, and the majority of those are
Feature
Similar to #4409, but broader: allow mypy to infer the parameter and return types of a function that is a passthrough to other typed functions.
Pitch
Consider the case in keyring, where the `get_password` function is a convenience accessor that (a) resolves the backend and (b) invokes the method of the same name on that backend with the same parameters:
https://github.com/jaraco/keyring/blob/db6896acea942a86d3fbee7e2a556fffb38055ba/keyring/core.py#L54-L56

`get_keyring()` is annotated and always returns a `KeyringBackend`. `KeyringBackend` is annotated, and its `get_password` always demands `str` parameters and declares its return type. As a result, it's unambiguous what the required parameters and return type for `core.get_password` must be.

In pypa/twine#733, we learned that if a downstream consumer of the library enables `disallow_untyped_calls`, it will fail on `core.get_password` unless that function is redundantly annotated with the same parameters and return types as `KeyringBackend.get_password`.

It would be nice if mypy could infer the types in unambiguous cases like passthrough functions, possibly gated by a feature flag or a decorator on the function (e.g. `@typing.passthrough`), and avoid the somewhat messy redundancy that results from hand-copying the types.
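For illustration, a simplified sketch of the passthrough pattern described above (paraphrased, not copied verbatim from the linked keyring source): the body of `get_password` forwards to fully typed code, yet under `disallow_untyped_calls` it must currently repeat the annotations by hand.

```python
# Simplified sketch of the keyring passthrough pattern (paraphrased).
from typing import Optional

class KeyringBackend:
    def get_password(self, service_name: str, username: str) -> Optional[str]:
        raise NotImplementedError

def get_keyring() -> KeyringBackend:
    return KeyringBackend()

# The passthrough as it exists today: unannotated, so callers that enable
# disallow_untyped_calls get an error even though every type is knowable
# from get_keyring() and KeyringBackend.get_password.
def get_password(service_name, username):
    return get_keyring().get_password(service_name, username)

# The redundant, hand-copied annotations that the proposal wants to avoid:
def get_password_annotated(service_name: str, username: str) -> Optional[str]:
    return get_keyring().get_password(service_name, username)
```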