We need Not to properly express some parts of Intersections #17
I assume @mikeshardmind can fill in the rest based on other discussion; please do not wait on me feeling like explaining this.
@mniip and @DiscordLiz both had things that arose from abstract logic on Discord. I understand them, but I'm still wrangling with how to best succinctly express them. There has been a lot of discussion today, so I may not get to explain all of this today/tonight. If people would like to read through the details on Discord, they can, but it is a lot to parse through again, and I do not mind taking the time to come up with an accurate formulation of the ideas requiring this, but I need some time to do so.
I'm not putting the whole thing up tonight; I'm exhausted.
I will not elaborate with python examples or with a formal proof of this tonight. There has already been prior art on this done in Rust in an accepted RFC https://github.com/rust-lang/rfcs/blob/master/text/1210-impl-specialization.md#permitting-overlap For those who do not want to read the whole thing, this is the opening to the section I have linked:
If you believe that you can show a simpler method than this, you may want to reach out to the Rust team as well 😃 While overloads and intersections do not fully map to Rust's impls, and dispatch on type to an implementation is different from overloads, the static typing concerns are highly related, and it can still help us ensure we pick the most accurate return type from available implementations.
One additional quote from the Rust RFC:
Which means we have a path towards helpful error messages with this as well. This may be more a concern of type checkers than this PEP, but the fact that the approach provides such a nice path is a big plus.
I don't know if this was clear to everyone else from the partial summary.
I have not composed all of my thoughts on this into one post yet. I can show that type negation is necessary to remove the ordering via both type theory and topology, but I would prefer to stick to type theory given the problem domain. I would not mind being proven to have missed an alternative option for doing this, but it isn't as simple as "intersect the first on one side with the first on the other", and I believe it will become more clear why later. A hint as to why this is the case: it is possible to have a "smaller" compatible minimum bound than this in some cases, and this also needs to interact with comparing which implementations can be seen as specializations of others afterwards.

I would highly appreciate that people coming into this discussion be aware that a lot of this has been discussed elsewhere already, and, at least for why type negation is necessary to remove the ordering of overloads, it is only a matter of forming a "to the point" post explaining all of what has already been explored and its ramifications. Independently of what I have done, there are at least two other people who have each started some kind of machine-verifiable formal proof of some of these conclusions.

I do not want to discourage other contributors from voicing issues with what is later shown; to the contrary, if I and others have been wrong on some of this, show us. I am happy to engage in constructive contradictions. But I want them to actually be constructive, and not a chore to respond to because a claimed refutation did not include enough detail to actually refute anything. When this happens, it can feel like people aren't even reading what each other are saying. I would also encourage other people to come talk with us less formally about these things.
@mikeshardmind has already committed to ensuring all of what is relevant gets summarized for at minimum an appendix to the PEP, so you will not be missing out after the fact if you do not, but people should be empowered to be a part of the precursor discussions as well. The Discord link has been shared multiple times already, but here it is again
Can I request this be expanded on? It seems untrue to me, or more specifically, it seems to presuppose the need to synthesise the interface in a concrete way, and if that is the case I'd request that be expanded on.
I agree with @diabolo-dan here, in python
Again, checking that a type is consistent with A & B (assignment) is different from checking what we concretely know about a use when all we know is A & B, and not the actual type that was assigned to it. The difference is here:

def f(x: A & B):
    return x.foo(...)  # Method foo
    # or
    return x(...)      # Callables A & B
    # We only know foo by the minimum bound provided by A & B here.
    # We need a procedure to correctly determine the minimum bound.

This isn't about whether some type is A & B, but whether some use is A & B. It needs to be part of the spec as well, not just "well, leave the details to type checkers", because you can return the result. That makes it part of a library's typed API; if type checkers diverge here, it would be pure chaos. When it comes to methods of classes or callables in general, the way parameter lists and return types interact with substitution is slightly more complicated. While parameters may be more specific, return types must not be for the same original parameter types. (Yes, I spelled out the variance of callables without naming each one; this is intentional, to avoid another point of confusion.) Because of this, if we only know Intersection.foo, we need to be able to determine a few things.
A few things about 4
If I need to figure out some more contrived examples to show it is necessary to do more than check consistency with each individually, I will later. The need for it was not shown by a direct example originally, and I'm more focused on summarizing correctly what was shown first as I have time.
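A runtime-only sketch of what the "minimum bound" of a member on an intersection means (the class names A, B, and AB are illustrative, not from any library): a concrete inhabitant of A & B must satisfy both foo signatures, so the usable foo behaves like an overload over both domains rather than a naive intersection of the two signatures.

```python
class A:
    def foo(self, x: int) -> int:
        return x * 2

class B:
    def foo(self, x: str) -> str:
        return x.upper()

class AB(A, B):
    # A concrete member of A & B: to honour both bounds, foo must accept
    # int (returning int, per A) and str (returning str, per B). The
    # minimum bound is therefore overload-shaped, not a naive signature
    # intersection, which would have an empty parameter domain here.
    def foo(self, x):
        return x * 2 if isinstance(x, int) else x.upper()

ab = AB()
assert ab.foo(3) == 6
assert ab.foo("ab") == "AB"
```

The point of the sketch is only that both behaviours must coexist on one member; how a checker derives that combined signature is exactly what the comment argues needs specifying.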
Some use is A & B iff the use is A or the use is B. A type checker can check those independently to validate the use is valid at all.
Yes, and how do you propose we resolve this without a concrete type (in the body accepting an intersection), and without determining what the minimum bound of that intersection is when it comes to callables that are not directly compatible but can be made compatible? That is what this issue is about. It isn't just the intersection of the return types. A simple example of this was already shown yesterday in the long thread.
If people don't have anything additional to add that wasn't already argued, wait until this is fully explained. You're just recreating the same arguments that end up wasting time having to be addressed, when that time could instead be spent on detailing more of the problem space.
Generalized specialization from known type information

This is single, position-only, required parameters, for the sake of being building blocks for later. There are also some cases retained that only result in Never. These are not directly Never alone, as they may still intersect with other things in the chaining of intersections.

General forms (Table A)
General form, with B as a subtype of A and D as a subtype of C (Table B)
General form, with B as a subtype of A and D is not a subtype of C (Table C)
General forms, A, B, C, D are mutually incompatible (Table D)
These are some of the simplest examples here, to use as a baseline reference for the more complex things discussed later; this set follows entirely from basic subtyping rules. The specializations here could be seen as type-checker implementation details, if not for how we need to build on them to resolve more complex cases. Edit: Removed things in the form (T) -> T, T: A | B. These were intended as template shorthand, not TypeVars, but even with a definition of this included, it was determined more likely to cause confusion.
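A hedged way to sanity-check the baseline subtyping rule these tables build on is to model types as finite sets of values (the concrete sets below are arbitrary stand-ins): with B <: A, intersecting (A) -> A with (B) -> B must, on the shared domain B, return something that is both an A and a B.

```python
# Types modelled as sets of values; subtyping is the subset relation.
A = frozenset({1, 2, 3, 4})
B = frozenset({1, 2})   # B <: A

assert B <= A           # the assumed subtype relationship
assert A & B == B       # on B inputs, the combined return bound A & B is just B
assert A | B == A       # the combined input domain is still A
```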
How to decouple Ordering from Overloads

Note: This is the how portion, not the why portion. Given a set of overloads, the overloads can be decoupled from having an ordering concern by finding those that are disjoint and those which are specializations of each other, then adding type negations to prevent the less specialized ones from being used for more specialized cases. Note that we only add the negations for parameters, not for return types. An example, given the subtyping relationship B <: A:
In this case, (B) -> B is more specific than (A) -> A. We can remove the ordering between these two by negating the more specific from the less specific
In the 2-ary cases of (B, A) -> B and (A, B) -> A, we have
This makes the full set
but we can now freely re-order these, make other simplifications on them, and intersect them without an ordering concern, as they have been made fully disjoint by specificity. Now, let's explore the same, but without a subtype relationship
In this case, we need to ensure that (A, B) -> A takes priority over (B, A) -> A, and that (B) -> B takes priority over (A) -> A, without the ordering mattering. You may notice that earlier the subtype relation gave us a naturally higher specificity, and this wasn't required. This can be accomplished in a similar manner by assuming a total ordering between parameter lists of the same arity representing an equivalent relation: for the lower priority, we negate all higher-priority types. So we keep (A, B) -> A and (B) -> B untouched as the highest priority in each arity. This makes the full set
but these are now freely reorderable, and do not need to be treated with the care of an overload. For one more case of this, showing how this resolves with more types to order:
Absent a subtype relationship to order on most specific, we can allow this to become unordered while retaining the same priority with:
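The decoupling described above can be simulated at runtime by modelling each type as a set of values and each overload as a (domain, result) pair (a sketch; the sets are arbitrary stand-ins for A and B with B <: A). The ordered form relies on first-match-wins, while the negated form, (B) -> B together with (A & ~B) -> A, has disjoint domains, so every ordering of it dispatches identically.

```python
A = frozenset({1, 2, 3, 4})
B = frozenset({1, 2})                 # B <: A

def dispatch(overloads, value):
    # First-match-wins, as with ordered overloads.
    for domain, label in overloads:
        if value in domain:
            return label
    raise TypeError("no matching overload")

ordered   = [(B, "B"), (A, "A")]      # (B) -> B before (A) -> A: order matters
deordered = [(A - B, "A"), (B, "B")]  # (A & ~B) -> A and (B) -> B: disjoint

for v in A:
    assert dispatch(ordered, v) == dispatch(deordered, v)
    # Disjoint domains: reversing the de-ordered list changes nothing.
    assert dispatch(list(reversed(deordered)), v) == dispatch(ordered, v)
```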
The cases that simplify to Never are worth keeping. An example of a slightly more interesting case, for types A, B, C and an unbound TypeVar T:

(((A) -> A & (A) -> B) | ((C) -> C)) & ((T) -> T)
(((A) -> Never) | ((C) -> C)) & ((T) -> T)

This is now (T) -> T, where T is not A.

Now it should be noted that (A) -> Never is compatible with (T) -> T. This one is more philosophical, and not born of necessity; the necessity is the (de)ordering of overloads in parameters. Once the ordering of overloads is removed, they can be treated as if they are just an intersection. This comes into play with correctly making other simplifications and selections, and with allowing the same logic to be applied to multiple parts of the problem. The ability for users to negate types also allows them to escape any ordering rules that exist currently. It also means type checkers can have extremely efficient checking, but I've been convinced to save that part for an appendix.
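The collapse to Never above can be checked in the same sets-as-types model (the universe and sets are illustrative): a callable required to return both an A and a B for the same input, where A and B share no values, can return nothing, and intersecting the remainder with the identity carves the A portion out of T's domain.

```python
U = frozenset(range(6))      # stand-in universe of values for an unbound T
A = frozenset({0, 1})
B = frozenset({"x", "y"})    # A and B are incompatible: no shared values
C = frozenset({2, 3})

# (A) -> A & (A) -> B: the return bound is A & B, which is empty (Never).
assert A & B == frozenset()

# After | (C) -> C and & (T) -> T, the A part contributes only Never,
# so the usable domain is "T where T is not A", i.e. U & ~A.
usable = U - A
assert C <= usable
assert usable & A == frozenset()
```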
I definitely see the usefulness of being able to write overloads with pairwise disjoint arguments, as they make overloads order-independent. However, the necessity is still not clearly established. In #16 (comment) it was alluded to that the interface This would be nice-to-have since it would aid users in implementing correct overloads. However, the claim that without this spec inconsistencies between type-checkers could arise still seems unsubstantiated to me, since all the type-checker needs to know in order to do its job is:
None of which require knowledge of how to establish a list of overloads that correctly implements
Quoting myself:
There has been a lot to type up, and as much as I would like to get to all of this in a timely manner, life has had other plans.
You can decide if you want to take someone else at face value on this or wait until someone who has done the logic on this has time to explain it in full.
Where do you think the return type comes from? From A, and from B. Now, if we don't have this in the spec, when one type checker actually calculates the minimum bound and another naively intersects, you have divergent behavior between type checkers. If a library uses a type checker that is more specific and exposes the return type to users expecting the more specific result, and some of their users have a type checker that opted for the naive approach, the exposed types diverge. This needs specification, and Not is necessary for this specification to be consistent. I will provide further details and examples as my time allows, including an efficient checking procedure which follows from this.
So can we specify that the behaviour is to naively intersect? And does that still require Not?
No. Absolutely not. Why in the world would we say a type checker must give less specific and less useful information? We have to have this in the spec, and we should do it right or not at all.
And there's an even more basic answer: can you prove that the naive intersection is the minimum bound in all cases? You may find there are surprising edge cases if you don't have negation.
Two answers, one in terms of technical correctness and one of not forbidding type checkers to work with the type information they have because you find it inconvenient.
Because it keeps the spec for intersections much simpler. And is unnecessary. It would maybe become necessary to have it if one wants to compile python code that has intersections (e.g. with
Yes, it follows immediately from the definition. Definition: We say
Lemma: If a greatest lower bound to
Proof: Assume
The question of whether a greatest lower bound always exists should be clarified in #5, but I'm pretty sure the answer is yes, and at worst it is
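For reference, the standard lattice-theoretic statement being appealed to can be written out as follows (a sketch; the original comment's definition and proof were lost in transcription):

```latex
\text{Definition: } G \text{ is a greatest lower bound of } A, B \iff
  G <: A,\quad G <: B,\quad \text{and}\quad
  \forall X\, \bigl( X <: A \wedge X <: B \implies X <: G \bigr).

\text{Lemma: if any greatest lower bound of } A, B \text{ exists, then } A \,\&\, B
\text{ is one: by construction } A \,\&\, B <: A \text{ and } A \,\&\, B <: B,
\text{ and any } X \text{ with } X <: A \wedge X <: B \text{ has } X <: A \,\&\, B.
```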
Given the choice between type checkers being mandated to provide more accurate information or less, and a means to do both, more information.
It was asked that people who don't have more to say that wasn't already hashed out wait on all the details here
This is basically never going to get done, because people are badgering with the same points without actually waiting on the additional details that were promised above, and it is absolutely draining to have to deal with that. If you're so impatient and rushing this to a worse outcome, I'll supply a quicker one: "Just don't add intersections, the community clearly doesn't care enough about the quality of them."

So, with very little left to say constructively at this moment, I don't want to see another notification from this thread that repeats things people have said on the matter before without further elaboration that has value towards a correct specification. We can work on ensuring it is simple and easy to understand starting from correctness.

I'm at the point where it seems appealing to pick all of this up, independently work on a PEP for intersections without all of the "process transparency", and just drop a full PEP when it's actually ready for review, with proof of correctness. As it is, I'm trying to actually detail this step by step in digestible parts, with examples at each chunk, because this isn't actually simple, as much as people want it to be, while at the same time dealing with other obligations.
Okay, I think we may have a disagreement on terms that caused the "We can just reduce to the intersection, or not quite". The problem here is that while Never is a valid substitute for a runtime type, the other way around is not true. This makes detectable Never a more specific known, where we can determine it as the only option. I think if we resolve this, we can leave the proof aside for the formalization efforts, because it really comes down to "is being more specific important". To me the answer here is "While being more specific would be nice, it should be left to type checkers". However, the problem with this is that it is a cause for divergent behavior if we don't mandate one or the other. So let's make the case for the extra complexity here:
Let's focus on this question for deciding what's next for this issue.
Sorry if this has been answered somewhere in this thread, but are you sure that negation needs to be in the intersection PEP? This last comment sounds a lot like the motivation for a new PEP?
I think it does personally, mostly because of this point:
- If we have to pick one of the two, the latter is more specific, and there are other benefits to it. If we stick to what is simpler, we lock off the ability for type checkers to do more. Since I view static analysis as the tool that checks if what you are doing makes sense and catches errors before they happen, I want it to be able to use the info available.
- I think the two are rather closely related from the perspective of theory and use, and we already have clear answers on what the behavior of Not should be if it is added. The overloads case, in the way we are resolving it here, is simple conceptually but will take some computation. The good news is that it only needs to be done to callables at use and not prior. (Intersecting 3 classes with 30 methods each, but only using 1 method of the intersection, you're only checking that singular method to this detail.)
This is still slightly incorrect. There are cases where Not is needed arising from intersections. I'll try and formulate a Python example later.
There are also cases where adding & (T) -> Never is different from negating T. Not might need its own PEP, but intersection should then depend on it. Sequence[str] & ~str is not the same as List[str]. An API allowing a sequence of string-like objects, but not treating a singular string as a sequence of strings, has no way to express this except by adding a runtime check.
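A minimal runnable sketch of the API shape being described (the function name tokens is hypothetical): without being able to annotate Sequence[str] & ~str, the exclusion of a bare str can only live in a runtime check plus prose documentation.

```python
from collections.abc import Sequence

def tokens(token_stream: Sequence[str]) -> list[str]:
    # The annotation cannot say "any Sequence[str] except str itself"
    # (str is itself a Sequence[str]), so the exclusion is enforced
    # at runtime instead of statically.
    if isinstance(token_stream, str):
        raise TypeError("a bare str is not accepted as a token sequence")
    return list(token_stream)

assert tokens(["ab", "cd"]) == ["ab", "cd"]
assert tokens(("x",)) == ["x"]
```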
Could you please clarify this? I'm struggling to understand how an intersection type can cause escape in a way that was not already possible by re-exposing, aliasing, unioning, etc.
But it'll still return a
In general there are lots of things that can't be expressed in the type system, including most of the actual behaviour of code, and currently some unsupported input that matches the type. I similarly can't express that an index is within the length of a Sequence (at least not yet). When an intermediary library author exposes the intersection type, presumably as part of some exposed function, then they should absolutely be expected to document any limitations, in the same way the original authors documented those expectations. Alternatively, they could handle the performance-limiting assertion themselves (though I'd still expect that behaviour to be documented). To be clear, I'm not saying that a Not type wouldn't be useful in this context to better express the interface. Having a Not type operator would be useful in general (although probably only really if there is an intersection operator too). Just that I don't think it's a prerequisite for an intersection type. But again, I'm not convinced I've fully understood your example, so if you can elaborate on how the intersection exposes the function I may find it more convincing.
The

@overload
def process_token_stream(self, token_stream: str) -> Never: ...
@overload
def process_token_stream(self, token_stream: Sequence[str]) -> Sentiment: ...

Which will raise
I agree with @diabolo-dan that I do not really see how this shows
Since I forgot to unsubscribe from the thread, you can have a singular additional reply.

@overload
def process_token_stream(self, token_stream: str) -> Never: ...
@overload
def process_token_stream(self, token_stream: Sequence[str]) -> Sentiment: ...

is not the same as

def process_token_stream(self, token_stream: Sequence[str] & ~str) -> Sentiment: ...

@randolf-scholz If you can't understand that the type not being handled as a parameter and the type always erroring as a parameter are two distinct things, and only appear compatible in a single direction, this might be too hard a problem for you to be working on. The second example, using type negation of the type it doesn't expect to need to handle, would be compatible with a subclass that handled str. You can't treat things that aren't equivalent as equivalent and think they suddenly solve all the same problems. @diabolo-dan extrapolate from the difference between having the type marked as erroring when it doesn't, and not marking the type as handled, and you should find why it's a needed thing here. I lack any inclination to keep working on this.
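A runtime sketch of the distinction being drawn (the class names Base and HandlesStr are hypothetical): a (str) -> Never overload is a promise that str always errors, which a subclass cannot honestly widen, whereas a negated parameter type merely leaves str outside the declared domain, so a subclass remains free to start handling it.

```python
from collections.abc import Sequence

class Base:
    # Conceptually: process(token_stream: Sequence[str] & ~str) -> str.
    # str is simply outside the declared domain, not promised to error.
    def process(self, token_stream: Sequence[str]) -> str:
        if isinstance(token_stream, str):
            raise TypeError("str is outside this implementation's domain")
        return "sentiment"

class HandlesStr(Base):
    # Widening the domain to include str is fine under the negation
    # reading, but would contradict a declared (str) -> Never overload,
    # which asserts str can never produce a value.
    def process(self, token_stream: Sequence[str]) -> str:
        if isinstance(token_stream, str):
            return "sentiment-from-str"
        return super().process(token_stream)

assert HandlesStr().process("abc") == "sentiment-from-str"
assert HandlesStr().process(["a", "b"]) == "sentiment"
```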
I've linked this discussion to python/typing so that it can be used in a future PEP. Is there any objection to closing this?
Other than that the concerns raised here (about this needing to exist prior to intersections, not the other way around) have not been suitably addressed yet? No, but that's reason enough not to. Don't assume that just because people have stepped away as active participants that the concerns they raised suddenly aren't concerns.
I'm not trying to assume anything. But how are we supposed to move forward with a proposal if people don't want to participate in discussion, but still want to insist that the proposal reflect their perspective? How can this be resolved?
The perspective has already been expressed, including concrete examples that show that overloads are not a substitute for intersections with negation, and why intersections without Not would be harmful. As long as those concerns are addressed rather than just swept to the side as if they aren't important points that stand with or without someone actively arguing for them, the argument has already been shown. You might also want to look at what caused several contributors to step back from active involvement.
Sorry, but I really don't understand this comment. The people in this discussion have been extremely patient with an enormous amount of disrespectful language. Never in my professional life have I been talked to this way:
(And what makes this even worse was that in the end, the argument I made seems to be what we've accepted.) And I don't know how @randolf-scholz feels, but I certainly wouldn't like to be talked to like this either:
I know that's not you, but I think that someone does need to call this out. There is nothing that kills open source participation like negativity and condescension. I know you're also getting frustrated. We all know what it feels like. Maybe you're right, and people don't have the right background to make sense of your good arguments; or maybe other people have good counterpoints, and they don't convince you, or you don't want to invest the energy to make sense of them. All of these are perfectly fair, and are understandable reasons to walk away.

But we all want to see intersections in Python. And for that to happen, we need to put a PEP together, which means deciding what goes into the PEP. Do you still want to participate in making the PEP? If so, let's try to figure out what goes in the PEP together. If you're not participating, we can't have a discussion with your ghost. Even if your arguments are right, we haven't yet found them convincing, and this issue is already consuming an enormous amount of energy that could be spent writing the PEP. @CarliJoy @randolf-scholz what do you think we should do?
Sorry, I thought you had seen some of the frustration that had been expressed in the discord about non-productive discussion, and ignoring what others had said. I also commented on not thinking this was resulting in a productive process here
I'm not going to speak on behalf of anyone else, but it was observed in Discord that the person you quoted as being "disrespectful" had been ignored multiple times. I don't think that excuses it, but it wasn't a one-off occurrence and I can see some of that occurring as a result of frustration. I've personally gotten frustrated with the direction of some of the discussions where people seem more concerned about "well what if people lie to the type checker" than "What's the correct behavior". Frankly, what happens if someone lies to the type checker should be between them and their type checker, and the PEP shouldn't care except to try and minimize the number of cases where people feel that is necessary to do by providing good tools. People are often going to lie to the type-checker when they encounter shortcomings in typing; That shouldn't be our concern, instead, our concern should be creating a type system that as more and more is added to it, requires fewer lies as we gain the ability to express things more accurately about code.
I don't want to place blame on anyone for the frustration, if I had infinite time available, I'd not find this frustrating in the same manner, but there's a large time commitment in explaining things and reconciling differences when we are all starting from a different set of perspectives and experience, and I don't think there's been constructive discussion on certain topics. It can feel like a no-win situation when there's no progress on something like this because it's "spend time that could be spent on other tasks" vs "risk a feature getting accepted in a way that compromises its long-term usefulness"
If the choice right now was to have intersections without type negation or to not have intersections, I'd actually favor waiting to have intersections. In my mind, it's better not to add features that aren't properly explored for all of their potential issues and interactions with existing and predictable future cases. I wholeheartedly believe that without type negation, intersections involving anything other than protocols will be problematic long-term. Maybe that's a better argument for saying we should start by supporting protocols only, then look at what is left for a follow-up, maybe it's an argument not to add intersections, or maybe we can figure out how to best argue the benefits of type negation and why this prevents issues long-term; Which of the three it is probably requires more discussion.
If we can find a way to have a constructive discussion that doesn't circle the same arguments without anything new being added, and to keep the discussion in-bounds for what is useful to the PEP (that is, defining the specification, not the type-checker behavior), then yes. However, it really is conditional on that for me, and I'm disappointed to even feel the need to say this.
No. There is no excuse for it, and I think it's inappropriate for you to make allowances for the disrespect of others. (And you have the chronology backwards anyhow. The disrespect came first.) |
Considering I did call it out in the referenced thread in an immediate response, you're barking up the wrong tree here. I'm not making allowances for anyone here, only noting the observed points of frustration. You're right, that did happen first, but this point of clarification isn't even productive, as the person you quoted walked out of all discussion on this, and left the Discord server for it as well. Now can we find a way to get to the part where we have a productive discussion on a technical level about the PEP?
There is a new PEP in draft about TypeGuards. It has a paragraph related to Intersections
I guess we really shouldn't hand in a PEP without Not
I'm only interpreting this as "doesn't exist yet", not "cannot exist", so any PEP that included Not could just work within existing definitions or clarify them more clearly. But I think that the need for Not goes away in 2 possible outcomes of the discussion we needed to export to python/typing with regard to Never, so in either of those outcomes I would split off the work here for any future Not PEP, and possibly just have a note in the intersections PEP on why intersections were deemed not to need it (conversely, if we determine we do need it, we'll have language summarizing what we came to for that too).

I think every major controversial remaining part of how intersections should be specified is hinging on that discussion and on how much this PEP needs to be responsible for. I'll try to find some time to start summaries of the non-controversial, or controversial but resolved, discussions soon, since we can't do much with the others yet.
Preemptively closing this; other decision-making outcomes that weren't necessarily clear before appear to remove the need, and only make this something that would be nice and can be done separately. Will reopen only if we discover this isn't actually the case anymore.
This needs reopening, the reasons I thought we no longer needed
Type checkers already need to be able to handle this now, so this is not a new burden.
I didn't get into all of the details when re-opening this, but we need Not to handle Callables. It may also help with other aspects of intersections with Any, especially with constructors, which was a major concern that had previously been attempted to be handled with some special-cased ordering. https://elixir-lang.org/blog/2023/09/20/strong-arrows-gradual-typing/ covers the theory here, as well as another gradually typed language with an Any-like construct.

When evaluating an intersection of callables, the callables are only comparable where they have an overlapping input domain. For instance, the callables:

(int) -> int
(str) -> str

have what appear to be entirely disjoint* domains; the resulting intersection would end up equivalent to

((int & ~str) -> int) & ((str & ~int) -> str) & ((int & str) -> int & str)

*The return types are only intersected when the input types are, and while we can say "int and str can't ever intersect", the same isn't true of all types like this.

However, this also gives us an answer for gradual callables and Any, which strikes a much better balance between some of our original options without compromising on being theoretically sound. Considering this intersection:

((int) -> int) & Callable[..., Any]

we get:

((int) -> int & Any) & ((Any & ~int) -> Any) & ((...) & ~(Any | int) -> Any)

The last part of this expanded form is a bit messy and might indicate a need to find some nicer ways to handle this, but essentially it expresses all the non-single-positional-only-argument forms. This is possible to retain because any callable consistent with ((int) -> int), even if it can also handle other things, must return something at least as specific as int when given just an int.

However, for users to be able to denote function domains, and for type checkers to be able to display meaningful errors, diagnostic messages, or informational messages (reveal_type) relating to function domains, as well as for this to be testable (assert_type), these domains must be denotable.
There are multiple ways to express the function domains; I believe negation to be the appropriate one for Python at the current point in time.

Note: I think not expanding this intersection to show the resulting domains may be fine in some displays for simplicity, and the resulting expanded form might only be necessary for complex cases, diagnostics, and tests. The two are equivalent, but expanding it helps with figuring out what we are supposed to have behavior-wise in each case.

The one major theoretical issue we can run into is: a negated gradual type must either be treated as equivalent to Never; any other interpretations would violate either consistency or the principle of gradual typing that a gradual type should not result in false positive errors when standing in for another valid type. There is no fully consistent interpretation for a negated gradual type (see this issue comment for details), but the only places where a negated gradual type should ever show up would be synthetic, in the internals of type checkers (users won't write ~Any, and if they do, Never is a reasonable interpretation in a vacuum). Type checkers warning on an annotation that is a negation of a gradual type directly in source form, and not as a result of type variables, is a reasonable opinionated option available to type checkers here.

~Never should likely also be warned against if ever written directly; this has two possible interpretations (Any and object). I believe the correct option for Python here is object, and that the consequences of a double negation effectively casting Any to object are acceptable, but this would need further exploration to be sure.

@mark-todd I believe this may help address your concerns with the few things you thought that retaining ordering would help with, but I believe we still probably have more to work through on reaching a consensus toward something more closely following pure subtyping rules, as @superbobry and @DiscordLiz both expressed a preference for.
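The domain-split reading of ((int) -> int) & ((str) -> str) can be sketched as a single runtime dispatcher (a toy model, not a type-checker algorithm): each branch corresponds to one component of the expanded intersection, and the int & str branch is unreachable because those two types share no values.

```python
def combined(x):
    # (int & ~str) -> int
    if isinstance(x, int) and not isinstance(x, str):
        return x + 1
    # (str & ~int) -> str
    if isinstance(x, str) and not isinstance(x, int):
        return x.upper()
    # (int & str) -> int & str: empty domain for these two types,
    # so this branch is never reached.
    raise TypeError("outside the intersection's domain")

assert isinstance(combined(1), int)
assert isinstance(combined("a"), str)
```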
To check that I'm understanding the point you are getting at here: suppose I have input parameters P1, P2 and returns R1, R2. Am I right that this generalizes so that `Callable[P1, R1] & Callable[P2, R2]` can be expanded out to:

```
Callable[P1 & P2, R1 & R2] & Callable[P1 & ~P2, R1] & Callable[P2 & ~P1, R2]
```

And in the case where P2 is a more specific subtype of P1 (this language sidesteps the fact that `Any` currently has bidirectional subtyping behavior and that we don't have a consistent subtyping rule applying to `Any`; I think we need to address that elephant in the room for these rules, and specificity should be the criterion used), we can simplify further to:

```
Callable[P2, R1 & R2] & Callable[P1 & ~P2, R1]
```

because `P2 & P1` is trivially `P2`, and `P2 & ~P1` is trivially empty?

If this is all correct, can it be extended to handle constructors by treating a type `T` as having a constructor `(P) -> T` that is consistent with the type checker's understanding of the successive underlying functions that would be chained (`type.__call__`, `__new__`, and `__init__`), or does this still not work because of the allowed LSP violations for these? I don't fully understand the current rules for what type checkers actually check for constructors.
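The set algebra behind the simplification being asked about can be checked with finite sets standing in for input domains. This is purely illustrative (sets are not types, and the values are arbitrary), but it mirrors the claim that when P2 is a subset of P1 the third case vanishes and the first collapses to P2.

```python
# Model domains as finite sets: P2 is a "more specific subtype" of P1.
P1 = {1, 2, 3, 4}
P2 = {3, 4}

# The three cases from the proposed expansion of
# Callable[P1, R1] & Callable[P2, R2]:
cases = {
    "P1 & P2": P1 & P2,    # inputs both callables accept -> R1 & R2
    "P1 & ~P2": P1 - P2,   # inputs only the first accepts -> R1
    "P2 & ~P1": P2 - P1,   # inputs only the second accepts -> R2
}

# With P2 <= P1 the last case is empty and the first is just P2, matching
# the simplified form Callable[P2, R1 & R2] & Callable[P1 & ~P2, R1].
print(cases)
```

The same computation with overlapping-but-incomparable domains (say `P1 = {1, 2}`, `P2 = {2, 3}`) keeps all three cases nonempty, which is why the general expansion needs all three conjuncts.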
The examples there all look fine to me, yeah. That's projecting in -> out types based on function domain.
Yeah...
Yes, but only because of a recent clarification to constructors that was made. This should also help clear up the current rules there. |
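The constructor question above hinges on a class being usable where a callable `(P) -> T` is expected. A minimal sketch of that view (the `Point` class and names are hypothetical):

```python
from typing import Callable


class Point:
    def __init__(self, x: int, y: int) -> None:
        self.x, self.y = x, y


# Treat the class itself as its constructor callable (int, int) -> Point.
make_point: Callable[[int, int], Point] = Point

p = make_point(1, 2)
print(p.x, p.y)
```

Under this framing, intersecting a class type with a callable type would reduce to the same domain-splitting rules discussed above, applied to the synthesized constructor signature.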
Copying some stuff that was discussed elsewhere that will need resolution of some kind.

```python
class Sequence(Protocol[T]):
    ...

class SequenceMutationMethods(Protocol):
    ...

type MutableSequence[T] = Sequence[T] & SequenceMutationMethods
type NeverMutableSequence[T] = Sequence[T] & ~SequenceMutationMethods
```

This would create a problem where negated types can't ever be guaranteed unless the requirement for type checkers becomes totally consistent use, evaluated across the entire visible code graph. The short version: erasing type information by assigning to a less specific structural type prevents the negated methods from being checked, and indirection through function calls makes it possible to avoid a type checker that only checks local use.
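The erasure problem described here can be shown concretely at runtime. This is a hedged sketch (the protocol and function names are illustrative, not a proposed API): widening to a read-only structural view hides the mutation methods from the checker, but nothing stops the underlying object from being mutated behind an indirection.

```python
from typing import Protocol, TypeVar

T = TypeVar("T", covariant=True)


class Sequence(Protocol[T]):
    def __getitem__(self, index: int) -> T: ...
    def __len__(self) -> int: ...


def mutate_if_possible(seq: Sequence[int]) -> None:
    # Statically, seq exposes no mutation methods, but the underlying
    # object may still have them; a local-only checker cannot see this.
    if hasattr(seq, "append"):
        seq.append(99)  # type: ignore[attr-defined]


xs: list[int] = [1, 2, 3]
view: Sequence[int] = xs  # widening erases the mutability information
mutate_if_possible(view)
print(xs)  # mutated despite only a Sequence[int] view being passed
```

This is exactly why a `~SequenceMutationMethods` negation can only be trusted if consistency is checked across the whole visible code graph, not per call site.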
|
The above interacts with TypeIs's negation as well; see microsoft/pyright#8109 |
My understanding of negation types was that they required positive proof to use correctly, so assigning a Sequence to a NeverMutableSequence would result in a type error (absent other locally available proofs, such as an assertion or a branch on SequenceMutationMethods). |
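The "positive proof" pattern described here can be approximated today with a runtime-checkable protocol: branching on the presence of the mutation methods supplies the proof for each arm. This is a sketch with hypothetical names, not the proposed `~` semantics themselves.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SequenceMutationMethods(Protocol):
    def append(self, value: object) -> None: ...


def classify(seq: object) -> str:
    if isinstance(seq, SequenceMutationMethods):
        return "mutable"        # proof of the positive case
    return "never-mutable"      # reached only with proof of absence


print(classify([1, 2]))  # list has append, so "mutable"
print(classify((1, 2)))  # tuple lacks append, so "never-mutable"
```

Under a negation-aware checker, the `else` branch is where a value could soundly be narrowed to `Sequence[T] & ~SequenceMutationMethods`, since the branch condition provides the required proof.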
Requiring positive proof for something that should be possible to analyze statically is rather ugly, and results in worse code being written. I'd rather find a better way here if possible. The obvious one that removes this requirement is that type checkers should check that the total use is consistent across the entire visible code graph. |
|
I'm gonna open a discourse thread about typeis's negation, maybe we can put the issue to rest by just fixing negation where it exists. |
I don't know how we're supposed to continue. Many people in that discussion were fine with the idea of TypeIs remaining unsound while also keeping the definitions it has. Those definitions say that TypeIs should behave as theoretic Not and Intersections would, but the unsound nature of TypeIs, which has no limitations grounded in theory, combined with people not wanting to change that, puts this in a bind. |
The way forward is clear; it just precludes the use cases you and I had discussed. The broader community doesn't care about soundness, so:
Number 4 above involves a related type soundness hole that isn't actually fixable without breaking more users than it seems anyone is going to go for, so it can be ignored as a constraint too. I'd suggest closing this unless someone else decides to actually fix the issue with TypeIs and the impact its accepted definitions have here. |
TBD, see #16
Edit: This issue is currently incomplete and is being used to transparently post components of it for review. Please hold comments on the following topics for now:
Please do comment if you can directly find fault in any labeled resource, where that fault arises from an internal self-consistency problem.
Please hold comments suggesting something isn't relevant. Many of these items are there to help illustrate later details or to provide helpful resources, and may not become part of the PEP itself. The goal is to start by being correct, then to refine our definitions to be as simple and succinct as possible from there, while keeping the proof of that correctness available as a resource. We can cut sections for irrelevance once we have the full picture, not before.
If there is an issue with this after all is presented, please point it out then. I am not spending my time translating abstract logic to Python's type system and to a form that people are more familiar with as simply an exercise in doing so. I'm doing so because Python doesn't have a formalized type system, so we need to put it into the language people here are all familiar with and use so that we can look for any fault in it from that shared point of understanding.
Intersections get extremely into the weeds of type theory when considering a language with no formalized type system, only a set of not fully internally consistent rules. Please do not expect that proofs be simple when the situation itself is not. We can come up with simple procedures after we have correctness shown.