Mark Shannon's presentation at the 2017 Language Summit #432

Closed · gvanrossum opened this issue May 17, 2017 · 53 comments

@gvanrossum (Member) commented May 17, 2017

@ilevkivskyi @markshannon.

Mark observed that the typing module uses classes to represent types. This can be expensive, since e.g. the type List[int] really ought to be the tuple (List, int) but it's actually a class object which has a fair amount of overhead (though not as much as early versions of typing.py, since we now cache these class objects).

If we changed to tuples (or at least to objects simpler than class objects), we'd have a problem: the simpler object couldn't be subclassed. But who subclasses List[int]? Then again, maybe simpler objects aren't the point?

Mark also pointed out that after

from typing import List
class C(List[int]): pass
print(C.__mro__)

We find that C.__mro__ has 17 items!

I confirmed this. The roughly equivalent code using collections.abc

from collections.abc import MutableMapping
class C(MutableMapping): pass
print(C.__mro__)

has only 7 items. And subclassing builtins.list

class C(list): pass
print(C.__mro__)

has only three.

This affects performance, e.g. which is faster?

from typing import Sequence

class C(list, Sequence[int]): pass
C().append(1)
class D(Sequence[int], list): pass
D().append(1)

One append() call is 10% faster than the other.
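
(For anyone who wants to reproduce this kind of measurement, here is a rough timeit sketch; it compares a plain list subclass with a List[int] subclass rather than the exact C/D pair above, so the numbers will differ from the 10% figure and depend heavily on the typing version in use.)

import timeit
from typing import List

class PlainList(list): pass
class TypedList(List[int]): pass

p, t = PlainList(), TypedList()
print("plain list subclass:", timeit.timeit(lambda: p.append(1)))
print("List[int] subclass: ", timeit.timeit(lambda: t.append(1)))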

@JukkaL (Contributor) commented May 18, 2017

I agree with Mark that this is a problem and can be a blocker for adopting type annotations for some projects. We are mostly shielded from the problem at Dropbox because we use comment annotations, but once we migrate to Python 3 we'd potentially have problems with the current approach as well.

Inheriting from List[int] (or, say, Dict[str, Foo]) is occasionally useful and probably worth supporting, but Mark's suggestion to use a different syntax for this seems like a reasonable compromise. I think his suggestion was to do something like this, since List[int] is not a class:

from typing import implements, List

@implements(List[int])
class MyList(list): ...

It's unclear whether this would also affect user-defined generic classes. For example, consider this code that works currently:

class Box(Generic[T]): ...

class SpecialBox(Box[int]): ...

class AnotherBox(Box[T]): ...

def do_stuff(box: Box[int], box2: AnotherBox[str]) -> None: ...

If I understood things right, this would be written like this based on Mark's proposal:

class Box(Generic[T]): ...

@implements(Box[int])
class SpecialBox(Box): ...

@implements(Box[T])
class AnotherBox(Box): ...

def do_stuff(box: Box[int], box2: AnotherBox[str]) -> None: ... # No change

This would still mean that user-defined classes may trigger metaclass conflicts due to Generic. This would mostly happen with user-defined generic classes -- just inheriting from Iterable[str], for example, would no longer imply any metaclass restrictions.

These things should probably also work:

@implements(Tuple[int, str])
class MyTuple(tuple): ...

@implements(Tuple[int, ...])
class MyUniformTuple(tuple): ...

@implements('List[int]')  # Still slightly more efficient
class MyList(list): ...
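
(implements never existed in typing; purely for concreteness, a minimal sketch of what such a decorator might do at runtime could look like the following, with the __implements__ attribute being equally hypothetical.)

from typing import List

def implements(*types):
    # Hypothetical: record the declared types on the class and return it
    # unchanged, so the runtime MRO stays free of typing classes.
    def decorate(cls):
        cls.__implements__ = types  # informational only; a checker would read this statically
        return cls
    return decorate

@implements(List[int])
class MyList(list): ...

assert MyList.__mro__ == (MyList, list, object)  # no typing classes in the runtime MRO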

@gvanrossum (Member Author)

Personally I think this part of the proposal is not workable. E.g. inheriting from Sequence (an existing ABC repurposed as a type) also adds some concrete default implementations, such as __iter__ and __contains__. You'd have to write

@implements(typing.Sequence[T])
class MyList(collections.abc.Sequence):
    ...

@JukkaL (Contributor) commented May 18, 2017

Maybe that could be written like this:

@implements(typing.Sequence[T])
class MyList(typing.Sequence):
    ...

There would still be a duality -- Sequence and other ABCs would be both classes (for things like inheritance, isinstance, etc.) and types. However, indexing a generic class would result in an object that is only a type, not a class. So these would all be true:

  • typing.Sequence is both a class and a type (the latter would be equivalent to typing.Sequence[Any] in a type context).
  • typing.Sequence[int] is a type but not a class.
  • An annotation, a cast and @implements would work with arbitrary types.
  • Base classes must all be classes. They can only be types if the type is also a class.
  • A custom generic class MyList would be both a class and a type, but MyList[int] would only be a type.
  • Generic type aliases like List[int] and also bare List are only types, not classes.
  • Each context where a type or a class is valid would always be either a type or a class context, not both. We'd have to document all of these. For example, the second argument to isinstance would be a class context, whereas the first argument to cast would be a type context.
  • String literal escaping is only valid in a type context.
  • Any, Union[...], None, Optional[...] and Callable[...] are types but not classes.

This is still a usability regression and a backwards compatibility break, as defining subclasses of generic classes would become different and slightly harder. However, maybe we can live with this, since after we have protocols, using Iterable[x] and other common ABCs as base classes would no longer be necessary. Also, by making the distinction between types and classes explicit, things may be less confusing for users.

Finally, generic types such as Dict[int, T] would have to be real objects, since we need to be able to index them, due to generic type aliases.
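
(A small illustration of that last point: a generic alias with a free type variable can itself be subscripted, so whatever object represents it has to support indexing.)

from typing import Dict, TypeVar

T = TypeVar("T")
IntKeyed = Dict[int, T]                  # generic type alias with one free variable
assert IntKeyed[str] == Dict[int, str]   # aliases therefore need to be indexable objects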

@markshannon (Member)

Does anyone actually define their own generics, or even inherit from a type?

As a data point, Zulip makes no use of inheritance to define a class and a type at the same time. In other words, code of the form class C(T): ..., where T is a class in the typing module, does not exist in the Zulip code base.

https://lgtm.com/query/1957210066/project:140750185/lang:python/

@dmoisset (Contributor)

I've done both things (defining my own generics and inheriting from something like Dict[str, str]) in the context of writing stubs for popular libraries, and I've done the second (inheriting from a specific realization of a generic) in application code.

@markshannon (Member)

How often, though? I expect that it is a very rare thing to do.
For the first case (stub files):
Stub files are special, we never run them, so it might be OK to allow a type as a base class in stub files.
For the second (application code):
Is the code open source? It would be good to have some real world examples.

@JukkaL (Contributor) commented May 18, 2017

Mypy itself has three examples:

$ ag 'class.*\(Dict' mypy
mypy/binder.py
18:class Frame(Dict[Key, Type]):

mypy/nodes.py
2351:class SymbolTable(Dict[str, SymbolTableNode]):

mypy/test/testsemanal.py
215:class TypeInfoMap(Dict[str, TypeInfo]):

Also our internal Dropbox codebases have dozens of examples of dict subclasses. Subclassing list seems pretty rare though.

@JelleZijlstra (Member)

As a user, I'm not too bothered with the current state of things, since it's rare that I need to subclass a generic. (I found only a single inheritance from Generic[T] in my codebase.) I hope we won't switch to suboptimal APIs like @implements just to improve performance in rare cases.

@markshannon (Member)

@JelleZijlstra
It isn't just performance. Less code means fewer bugs.
Also, keeping types and classes distinct in the implementation helps users maintain a mental separation.

@markshannon (Member)

@JukkaL
class Frame(Dict[Key, Type]): would be better (IMO) as

@implements(Mapping[Key, Type])
class Frame(dict): ...

That way the type (mapping) can be kept separate from the implementation (dict).

Similarly for the other two examples.

@JukkaL (Contributor) commented May 18, 2017

@markshannon What would then happen to methods defined in dict but not in Mapping, such as __init__? In particular:

  1. Are they available through Frame (I suppose so, as otherwise there's no way to construct an instance)?
  2. What will the signature of __init__ etc. be (with respect to type variables of dict)?

@gvanrossum I wonder if there would be a way to simplify the MROs even if we inherit from List[int] etc.? Could we filter out the cruft, and maybe have a separate __typed_mro__ attribute or something that would have all the generic base classes? Then we could use an inspection API to access the full MRO. This wouldn't directly help with the space usage of generic type objects, though, but maybe this would help with the slower method calls.

@ilevkivskyi (Member)

Here are my comments:

  • Is there an actual proposal, what exactly is proposed to change and why? This is completely unclear from the current discussion.
  • It is probably too late for any backward incompatible changes. I just checked and FWIW typing is downloaded from PyPI at around 580k downloads/month, and searching for typing imports on GitHub gives more than 25k files. Also my experience is that people complained even about small incompatible changes in internal API in 3.6.0.
  • I already mentioned a few times in February that I did some profiling and have some ideas about how to speed up typing. Unfortunately, I didn't have time to implement them.
  • It looks like protocols will resolve several problems mentioned above. For example, it will not be necessary to subclass Mapping[int, str] if a class implements it.

@JukkaL (Contributor) commented May 18, 2017

Yeah, migration would already be tricky. On the other hand, two years from now it would be much harder still, so if we are going to change something fundamental, we should do it as soon as possible.

@markshannon (Member)

First of all a bit of history. In the run up to accepting PEP 484, in an email thread with @gvanrossum and @JukkaL I specifically requested that all isinstance() and issubclass() implementations be removed. Looking back, however, it looks as if I didn't make that public. My bad.

I have consistently been of the opinion that equating types and classes is a bad idea.
What has changed is that we now have evidence that doing so is complex and slow and that inheriting from a type is very rare.

@ilevkivskyi The specific proposal is that no classes representing types should inherit from type.
This would simplify the typing module and reduce its performance impact by a large amount.
But performance is not the only problem; coupling types to classes impairs understanding of an already subtle topic.

As the use of type hints spreads, I expect that applications that declare types will remain a (small?) minority, but that applications that use at least one module that uses type hints will become common.
Therefore, most applications will pay the performance cost of loading the typing module plus the cost of creating types in their library code. We should keep that cost as small as possible, ideally zero.

@gvanrossum (Member Author)

FWIW isinstance() was indeed removed, per your request. issubclass() remains because there were problems with removing it. There's still an issue open about removing it.

@ilevkivskyi (Member)

issubclass() remains because there were problems with removing it. There's still an issue open about removing it.

issubclass() was removed in September (by me).

@dmoisset (Contributor)

@markshannon :

How often, though? I expect that it is a very rare thing to do.
For the first case (stub files):
Stub files are special, we never run them, so it might be OK to allow a type as a base class in stub files.

The problem here is that it gets harder in practice to have a generic in a stub and a non-generic in the library. For example, working on the numpy stubs, I defined ndarray as generic (on the element type) so you can write x: ndarray[float], but to run that code you are forced to use comment or quoted annotations, otherwise the x declaration will fail at runtime.

The examples of inheriting from things like Dict are used in my django stubs for stuff like the QueryDict (to represent vars in HTTP requests), which inherits a dict with specific keys/values, and the cookie jar (also a dict with string... this is essentially the same code as in the stdlib's http.cookies module).

For the second (application code):
Is the code open source? It would be good to have some real world examples.

This code is not open source, but the use case is essentially some kind of generic container (to represent results of API calls that are collections, but not necessarily a pythonic container).

@markshannon (Member)

@dmoisset How is ndarray any different from list? Presumably for numpy we would need a NdArray type analogous to List.

@JukkaL (Contributor) commented May 22, 2017

I talked about this with Mark in person, and he had another idea that would not break compatibility (at least not as much). Not sure if I understood it fully but I'm attempting to capture the idea here so that we don't completely lose track of it. @markshannon please let me know in which ways I misunderstood you :-)

We could make types like List[int] regular (non-type) objects but make them valid in base class lists in a future Python version by adding a __type__ magic method to type-like objects that returns the corresponding normal type object; it would take effect when the type is used as a base class. For example, List[int].__type__() could return list. Thus this would still be okay:

class MyList(List[int]): ...

However, the MRO of MyList would only include regular type objects like list and object, not generic type objects. Maybe we could preserve the full MRO, which may include generic types and such, in a separate type object attribute.
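
(The __type__ hook is only a proposal at this point; the following sketch just simulates by hand what class creation might do with it. ListInt and resolve_type_bases are made-up names, not typing API.)

class ListInt:
    # Hypothetical lightweight stand-in for List[int]; its instances are not class objects.
    __args__ = (int,)
    def __type__(self):
        return list  # the real class to use when this appears as a base

def resolve_type_bases(bases):
    # What class creation would conceptually do: swap pseudo-types for real classes.
    return tuple(b.__type__() if hasattr(b, "__type__") else b for b in bases)

# Simulating "class MyList(List[int]): ..." with the three-argument type() call:
MyList = type("MyList", resolve_type_bases((ListInt(),)), {})
assert MyList.__mro__ == (MyList, list, object)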

@ilevkivskyi (Member)

Here are some more comments:

One append() call is 10% faster than the other.

This situation (list and Sequence[int] in different orders) looks rather like an extreme case. For typical things like:

class C(List[int]):
    ...

list is third in C.__mro__. This is the same for other types (Dict, Set, etc.): __extra__ is typically inserted near the start. Moreover, such situations are quite rare, so the actual overhead in real code could be very small (less than 1%). Are there any realistic benchmarks?

I looked at my old profiling results; the things I found really slow are instantiation of user-defined generic classes (up to 10x slower) and valid isinstance checks (3-6x slower), like this:

from typing import Generic, TypeVar
T = TypeVar("T")

class C(Generic[T]):
    ...
c: C[int] = C()
isinstance(c, C)

The first situation (instantiation) can be made a few times faster by "inlining" _gorg() and _geqv() (these two turn out to be super-hot, although there is no point in re-calculating them). The second one (the instance check) is slow because all generics are ABCs, and instance and class checks for ABCs are in principle much slower than for normal classes.
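
(A rough sketch of how one might time those two paths; absolute and relative numbers vary a lot between the 2017 typing module and later versions.)

import timeit
from typing import Generic, TypeVar

T = TypeVar("T")

class Plain: ...
class Gen(Generic[T]): ...

g = Gen()
print("plain instantiation:  ", timeit.timeit(Plain))
print("generic instantiation:", timeit.timeit(Gen))
print("isinstance on generic:", timeit.timeit(lambda: isinstance(g, Gen)))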

@JukkaL

We are mostly shielded from the problem at Dropbox because we use comment annotations

I don't understand this point. As I understand it, you are still heavily subclassing generic classes even in Python 2, and Mark claims that this is the main problem. Do you see any slow-down in Dropbox code when you make classes generic (or inherit from List[int] etc.)?

We could make types like List[int] be regular (non-type) objects but make them valid in base class lists in a future Python version by adding a __type__ magic method

Interesting. This is exactly what I first thought of when this thread started. This approach, however, should be well thought out. For example, when should the fallback to __type__ happen? There are four possible scenarios:

  1. metaclass conflict:
class C(int, 1): # This fails with metaclass conflict
    pass
  2. special TypeError:
class C(int, object()): # This fails with TypeError: bases must be types
    pass
  3. bad signature:
class A:
    pass
class C(A()): # This fails with TypeError: object() takes no parameters
    pass
  4. good signature:
class A:
    def __init__(*args, **kwargs):
        pass
class C(A()): # This actually works!
    pass

However, I think if we figure this out, it will fix many performance problems (including the original one and the two I mentioned above) while preserving full backwards compatibility.

@JukkaL (Contributor) commented May 22, 2017

I don't understand this point. As I understand, you are still heavily using subclassing generic classes even in Python 2, and Mark claims that this is the main problem.

Subclassing is just one of the problems that I've heard being talked about. Another potential issue is the memory use of annotations, as generic types take some space. I'm aware that identical types are shared, so it's much better now than it used to be. It can still be a problem for memory-constrained environments such as microcontrollers (MicroPython) and very large applications, perhaps. Startup overhead is another worry some users have. This should be easy to measure, though.

@ilevkivskyi (Member)

It can still be a problem for memory-constrained environments such as microcontrollers (MicroPython) and very large applications, perhaps. Startup overhead is another worry some users have. This should be easy to measure, though.

This makes sense, I just tried this on my laptop, here is what I get:

  • Start-up time: no typing 0.031s, with typing 0.043s (+39%)
  • Memory: before importing typing 2.6 MB, after importing 3.1 MB (+19%)
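
(One way to reproduce rough numbers like these; exact values depend on the machine and on the Python/typing versions, and the subprocess spawn cost is included in both startup figures.)

import subprocess, sys, time, tracemalloc

def startup(code):
    t0 = time.perf_counter()
    subprocess.run([sys.executable, "-c", code], check=True)
    return time.perf_counter() - t0

print("startup, no typing:   %.3fs" % startup("pass"))
print("startup, with typing: %.3fs" % startup("import typing"))

tracemalloc.start()
import typing  # noqa: F401
print("typing import allocates ~%.2f MB" % (tracemalloc.get_traced_memory()[0] / 1e6))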

I was thinking a bit more about __type__ and I think it should work as an override, i.e. the Python runtime should prefer it when present, even if the class definition would succeed without it. Maybe we should then call it __type_override__?

Another question is whether this should be a function or just an attribute pointing to the actual class object. At the time a generic class is subscripted we already know that it should just point to the original class. (Btw, this attribute would also allow us to completely remove _gorg and _geqv, mentioned in my previous comment.)

There is another thing that would help speed things up: a flag that ignores all annotations on modules, classes, and functions (like what we currently have for docstrings). Both this flag and __type_override__ are easy to implement. It is now just a question of making a decision.

@gvanrossum (Member Author)

Lots of people seem interested in processing annotations at runtime. E.g. https://github.com/ericvsmith/dataclasses

@ilevkivskyi (Member)

Lots of people seem interested in processing annotations at runtime.

Good point! An ignore-annotations flag would break NamedTuple, TypedDict, and all similar constructs.

@JukkaL (Contributor) commented May 23, 2017

An ignore-annotations flag would break NamedTuple, TypedDict, and all similar constructs.

It's still possible to use the functional forms of these even if annotations are ignored. Also, we could perhaps populate annotations dictionaries in classes but use None instead of the actual types, for example.
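
(For reference, the two NamedTuple spellings in question; only the class form depends on annotations being evaluated.)

from typing import NamedTuple

class Point(NamedTuple):      # class form: needs the annotations at class-creation time
    x: int
    y: int

Point2 = NamedTuple("Point2", [("x", int), ("y", int)])  # functional form: no annotations involved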

@gvanrossum (Member Author)

A global flag would break libraries that use the non-functional notation (which is much nicer anyway).

@ilevkivskyi (Member)

@gvanrossum

A global flag would break libraries that use the non-functional notation (which is much nicer anyway).

But what do you think about the idea of having this flag set all annotations to None? This would keep NamedTuple etc. working and would only affect runtime type checkers (but I expect that the set of people who use runtime type checkers and the set who would use the optimization flag barely overlap).

@gvanrossum (Member Author)

I worry about library developers that use type annotations, e.g. to construct JSON schemas. We already have problems with -O and -OO.

What problem are we really trying to solve, other than Mark complaining?

@JukkaL (Contributor) commented May 23, 2017

I guess we are not doing things in a logical order -- first we should do benchmarking to determine if there is a significant problem right now. In addition to @ilevkivskyi's results, these data points would be interesting and hopefully not difficult to generate (though they are still a significant amount of work):

  • How much memory does an import of typing use on MicroPython (relative to not importing any modules)? How long does the import take (relative to normal startup cost)?
  • How much memory does a List[int] annotation use on MicroPython? How much would a simple instance that encodes the same information use instead?
  • How much extra memory do all annotations in the mypy codebase use when running mypy (both in absolute and relative terms)? How would this change if List[int] etc. were simple objects?
  • How much do the annotations in the mypy codebase affect the runtime startup overhead, relative to ignoring annotations? How would this change if List[int] etc. were simple objects?

Perhaps we could extrapolate that the relative overhead for mypy would be similar for other fully annotated, medium-sized programs. Not sure if we can extrapolate to much larger codebases, but at least the extrapolation could give a rough estimate for larger codebases as well.

@markshannon (Member)

@gvanrossum
What we are trying to solve is:

  • Slow startup when importing the typing module
  • Excessive memory use of the runtime representation of types
  • Lack of separation of types and classes. They should be separate, both for clarity and to allow other metaclasses to be used when defining classes with type hints.

All the above apply to CPython as well as MicroPython.

@ilevkivskyi (Member)

What problem are we really trying to solve, other than Mark complaining?

Good question!

first we should do benchmarking to determine if there is a significant problem right now

I did many benchmarks, but I don't think any of them are reliable, since they are micro-benchmarks focused on the speed of specific things like generic class instantiation, isinstance calls, generic class subscription, generic class subclassing, etc. What we really need is to benchmark some medium-size real programs. I agree mypy is a good candidate for this. In addition to the items mentioned by Jukka I would add:

  • Overhead (memory and time) of making classes generic.

Here are some speculations: implementing __type_override__ in CPython should be simple. The corresponding backward-compatible changes in typing are also not difficult to implement (e.g. Union was very easy to convert from a class to an object). My expectation for the speed gain for a typical project like mypy would be 3-5%.

@markshannon (Member)

Can we flip this discussion? Instead of demanding justification for types not being metaclasses, can someone justify why types need to be metaclasses? Making a class inherit from (the class) type is hardly standard.

As to maintaining backwards compatibility, @JukkaL misinterpreted what I meant, but I prefer his approach, so let's go with that 😄
I'm not sure about the name __type__, though; as a method that a type implements to return a class, I would prefer __runtime__ or similar.

@JimJJewett (Contributor)

Why is micropython a concern? They already make plenty of other changes to support a radically smaller memory footprint, including leaving out most of the standard library by default.

Since the primary use case of typing is for static checks, it would presumably be run only during development (on a more powerful machine), and NOT on the small devices. I think the default would be to not even make the annotations available on the small device; if they do choose to support some sort of typing for run-time checks, a lighter-weight version than CPython uses would be less of a compatibility issue than some of the other changes already are.

(Making typing more efficient, particularly when not used, is still a good goal, but I don't think micropython in particular is an important use case.)

@ilevkivskyi (Member)

Why is micropython a concern?

It is not the only concern. There are several aspects where typing can be made significantly faster (making types and classes more distinct also makes sense). TBH, I am interested in playing with this, but I don't have much time for it now. Also there is one "conceptual" problem: most probably this will require the __type_override__ mentioned above to allow subclassing non-classes, but that can be done only in Python 3.7, so we will end up with two versions of typing: one slow "backport" version, and one fast version for Python 3.7+. Not everyone will be happy with that prospect.

@ilevkivskyi (Member)

Here is PR #439 with some short term solutions I mentioned above (as I predicted this gives around 5% speed-up for real applications that use typing).

@ncoghlan

The idea of a __subclass_base__ method to allow an object to specify a different base class that gets used when it appears as a nominal base class in a class definition seems interesting. Should that be filed separately as an RFE on bugs.python.org?

Or do you want to experiment with mutating the bases list in typing.TypingMeta.__new__ first? (That will presumably be necessary anyway if you want to ensure consistent behaviour across Python versions)

@ilevkivskyi (Member) commented Jul 18, 2017

@ncoghlan The idea is indeed interesting, since it would not only remove the performance concerns but would also avoid metaclass conflicts such as #449 (by using __init_subclass__), and it would probably make the typing.py code simpler, while preserving the public API.

Should that be filed separately as an RFE on bugs.python.org?

I am quite sure now that this is a reasonable idea. We have tried different variations of bases substitution in GenericMeta.__new__, and although they give some speed wins, this still looks suboptimal. I don't have much time now, but since these issues keep coming up, I think this is a priority, so I will come up with a POC implementation soon (my idea is to just patch __build_class__, so this should be quite simple).

@ncoghlan commented Jul 18, 2017

OK, cool. In that case, I think the most important part of my comment is the suggested method name: __subclass_base__. My rationale for that:

  1. We want the word class in there somewhere, since the initial purpose is to convert a conceptual type into a concrete runtime class
  2. I'd like to have the word base in there, since our main known use case is to be able to include conceptual types in a list of base classes without having them actually appear in the runtime MRO
  3. I think calling it __subclass_base__ aligns nicely with __subclass_init__: where __subclass_init__ lets a base class do something when a new subclass is defined, __subclass_base__ is instead a way to say "Nope, you don't want me, you want this other class instead"

@JelleZijlstra (Member)

Minor point: it's __init_subclass__, not __subclass_init__ (https://docs.python.org/3/reference/datamodel.html#object.__init_subclass__).

@ilevkivskyi (Member) commented Jul 19, 2017

OK, I have a POC implementation. Here are observations:

  • I use PyType_Check() so that __base_subclass__ is searched only on bases that are not class objects. Searching on every base gives a speed penalty for normal classes (up to 20% for an empty class with four bases); searching only on non-classes makes the penalty negligible.
  • When at least one base has __base_subclass__, I save the original unmodified bases in the namespace under the name __orig_bases__ before class creation (the same thing we do now with the help of the metaclass, but much faster).
  • I can't get rid of GenericMeta completely for one simple reason: __getitem__ (like other special methods) is looked up directly on the type, i.e. on the metaclass in our case.

Concerning the last point, there are two possible options:

a) go with a simple solution (it will already give a great speed-up) and keep GenericMeta (I will document it then). There is a problem with this solution: many libraries use metaclasses, which means that users who want generic classes that subclass library classes will need to manually pass metaclass=... to all such classes. I can imagine this is annoying, and I have already seen this complaint several times.

b) We could modify PyObject_GetItem, inserting a fallback for classes right before the "object is not subscriptable" error. For example, something like this (plus some safety checks):

...
PyObject_GetItem(PyObject *o, PyObject *key)
{
    ...
    if (PyType_Check(o)) {
        PyObject *fallback = PyObject_GetAttrString(o, "__class_getitem__");
        if (fallback == NULL) {
            goto error;
        }
        else {
            /* pack 'o' and 'key' into 'args' */
            PyObject *args[2] = {o, key};
            return _PyObject_FastCall(fallback, args, 2);
        }
    }
    ...
}

My idea is that people rarely subscript random things inside try: ... except TypeError: ..., so the speed penalty will be negligible.

@gvanrossum @ncoghlan what do you think? Should we go with option (a) or (b)?
(I like (b) a bit more since it is quite simple; however, it introduces a new dunder.)
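
(For readers on Python 3.7+, where a hook along these lines was eventually added by PEP 560, the Python-visible effect of option (b) can be sketched like this; MyGeneric is just an illustrative name.)

class MyGeneric:
    @classmethod
    def __class_getitem__(cls, item):
        # Subscripting the class dispatches here instead of requiring a metaclass
        # __getitem__; return something cheap rather than a new class object.
        return (cls, item)

assert MyGeneric[int] == (MyGeneric, int)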

@ncoghlan commented Jul 19, 2017

I'm not sure I'm entirely following the problem:

  • having typing.List be both a type & a class seems OK, and for backwards compatibility, you want the class AlsoGeneric(BaseGeneric): case to continue to work. So in that case, keeping the metaclass intact is fine.
  • the case to be changed is typing.List[int], and that already has a check in __getitem__ to throw TypeError, so dropping the metaclass just makes that more efficient (since GenericMeta.__getitem__ never gets called in the first place)
  • given the change, if folks want to type a subclass as a list via inheritance without making it a generic type, they can inherit from typing.List[Any] (and similarly for any other generic type, filling in as many 'Any's as are needed)

@ilevkivskyi (Member) commented Jul 19, 2017

@ncoghlan
GenericMeta.__getitem__ is needed to make this work:

class Custom(Generic[T]): ...
Custom[int]  # Should be OK
class Another(Custom[T]): ...
Another[int]  # Should be also OK

Everything else seems to be possible without a metaclass (only with __init_subclass__).

Concerning performance, there are two major slow-downs currently:

  • On subscription: Custom[int] creates a new class object (very expensive), this is necessary to make Custom[int] subclassable.
  • On member access: instantiation and all method calls on instances are slower for generic types because of complex MROs.

Both problems above will be fixed by __base_subclass__. The metaclass problem is orthogonal, but the point is that if we go with __base_subclass__ then avoiding the metaclass is possible (and easy); otherwise it would be a non-starter.

@ilevkivskyi (Member)

Two additional notes:

  • The old sys._getframe hack could easily be removed with the help of __base_subclass__.
  • If we get rid of GenericMeta, then in definitions like class Mapping(Iterable[KT], Generic[KT, VT]): ... the Generic[...] base should always come last; otherwise a consistent MRO is not always possible.

@ncoghlan

Regarding the name, while Jelle's right that the implemented name is __init_subclass__ (we went back and forth enough times during the design process that I often forget where we ended up), the new API should still be __subclass_base__, as __base_subclass__ sounds like we're requesting a subclass of the base class, which isn't what's happening.

The TypeVar case is an interesting one, but it seems to me that it could potentially be addressed by:

  1. Always calling __subclass_base__ on GenericMeta instances
  2. Duplicating the current _check_generic call from __getitem__, and returning the class itself from __subclass_base__ if it's actually still generic

If the isinstance check also proves to be too slow (or otherwise impractical), then I'd suggest we look at ways of optimising that before going down the __class_getitem__ path.

@ilevkivskyi (Member)

@ncoghlan

Regarding the name ...

OK

The TypeVar case is an interesting one, but it seems to me that it could potentially be addressed by...

Yes, it works perfectly if we keep GenericMeta; the only problem is that keeping GenericMeta will cause metaclass conflicts, which is why I am not 100% happy with it. Anyway, it seems to me the best strategy is to do this in two steps:

  • First add __subclass_base__, which will fix all major performance issues (and lets us remove the old sys._getframe hack). At the same time keep GenericMeta but simplify its code significantly.
  • If people continue complaining about metaclass conflicts, then consider adding __class_getitem__.

@gvanrossum (Member Author)

I'm a little lost. Ivan, if you have an implementation somewhere, can you link to it? I presume it's modifications to the C code of CPython? If we're going that route, what will happen on Python 3.5 and before? (Or on 3.6 and before if we decide this is too invasive to go into CPython 3.6.3.) I suppose you can fall back to metaclasses.

If we want Custom[int] without a metaclass, and we're changing C code anyways, could we add a __getitem__ implementation to type itself that defers to __class_getitem__?

Another solution to the 3rd party metaclass problem (which is real) could be to just recommend people inherit their metaclass from abc.ABCMeta instead of directly from type -- would that work in most cases? (I realize it would slow things down.)

Why is this still in the "Mark Shannon" thread? I think I missed a part of the conversation.

@ilevkivskyi (Member)

@gvanrossum

I'm a little lost

Sorry, we probably went too fast. Here is a short summary:

  • Some time ago Mark complained about several performance issues with typing
  • One of the possible solutions is to make a small change to CPython allowing non-classes to be present in the base classes list (so that List[int] will not be a class). This solution also has several other pluses like removing an old sys._getframe hack.
  • My POC is here: ilevkivskyi/cpython#2 ("Reference implementation of __class_getitem__ and __mro_entries__"; only the C part).
  • An important conclusion from the POC implementation is that it will not cause any visible slow-downs for normal (non-generic) classes.
  • Then the idea appeared that we could also fix the metaclass conflicts with a slightly extended version of the same change plus __class_getitem__.

If we're going that route, what will happen on Python 3.5 and before? (Or on 3.6 and before if we decide this is too invasive to go into CPython 3.6.3.) I suppose you can fall back to metaclasses.

Most probably we will need a separate source file for newer versions (like we now have for Python 2 and 3); otherwise there will be so many fallbacks that the code will be hard to read. I expect that __subclass_base__ will really simplify the code. I am going to invest more time to show how it will look.

...could we add a __getitem__ implementation to type itself that defers to __class_getitem__?

This is actually another possibility that I was thinking about. Maybe it is even better (it is a more "local" change anyway).

recommend people inherit their metaclass from abc.ABCMeta instead of directly from type -- would that work in most cases?

It will probably fix the vast majority of metaclass conflicts. I think we should start with a simple solution (i.e. very minimal changes to CPython, keeping GenericMeta); this will already fix most performance issues. Then (if people continue to complain about metaclass conflicts) we may remove GenericMeta; that is a fairly independent problem.

@gvanrossum (Member Author) commented Jul 19, 2017 via email

@ilevkivskyi (Member)

it'll still be a metaclass conflict

Yes, sorry, you are right, I was confused by the fact that this works:

class C(typing.List[int], collections.abc.Sequence): ...

Anyway, I am still not sure what to do. If you think we might go with __class_getitem__, then I will come up with an extended POC implementation.

@ncoghlan commented Jul 20, 2017

(It probably makes sense to break this tangent out into a new issue, but I'll continue here for now)

I think it makes sense to break exploration of this idea into 3 phases:

  1. See how far you can get by doing something like this in typing.TypingMeta.__new__ before calling super().__new__:
    # (sketch; assumes this runs inside TypingMeta.__new__(cls, name, bases, namespace, **kwds))
    new_bases = tuple(
        base.__subclass_base__() if hasattr(base, "__subclass_base__") else base
        for base in bases
    )
    if new_bases != bases:
        # Bases list changed, check if that changes the metaclass
        orig_meta = cls  # inside TypingMeta.__new__, cls is the metaclass being invoked
        unhinted_meta, __, __ = types.prepare_class(name, bases)
        if orig_meta is unhinted_meta:
            # Original metaclass matched the one derived from the bases list, so recalculate it
            new_meta, __, __ = types.prepare_class(name, new_bases)
            if new_meta is not orig_meta:
                # Start the class creation over again with the new metaclass
                # and no keyword arguments (disallowing `typing.TypingMeta` subclasses)
                return new_meta(name, new_bases, namespace)

That is, allowing non-classes in a subclass bases list would be a feature of typing.TypingMeta, not a generally available Python level feature. As a result, it can't have a performance impact on standard class definitions, at the price of making derivation from typing.TypingMeta a bit slower.

1a. Potentially look at exposing better building blocks (e.g. a types.recalculate_metaclass function) for metaclasses wanting to get up to these kinds of tricks (OTOH, it's not exactly the sort of thing we want to encourage, since it can break in all sorts of interesting and exciting ways if you're not careful with it)

  2. Look at how feasible it would be to make __subclass_base__ a supported standard feature of the type system in 3.7+, rather than something specific to typing.TypingMeta. This would avoid the triple calculation of the derived metaclass from the list of bases, the double execution of parts of the metaclass instantiation process, and the incompatibility between the use of __subclass_base__ and keyword arguments in class definitions.

  3. Look at how feasible it would be to add a type.__getitem__ implementation in 3.7+ that delegates to __class_getitem__ on the instance (potentially eliminating the need for typing.GenericMeta entirely).

@ilevkivskyi (Member)

@ncoghlan

See how far you can get by doing something like this in typing.TypingMeta.__new__ before calling super().__new__

The problem is to get TypingMeta.__new__ called in the first place: cases like this

class C(<a class>, <not a class>):
    ...

will fail early in _PyType_CalculateMetaclass, so I don't think we can do this without modifying the C code.

@ncoghlan

@ilevkivskyi Ah, you're right, I completely forgot that the initial metaclass determination step itself would fail. D'oh :(

@ilevkivskyi (Member)

The original performance issue is now addressed by PEP 560.
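
(For reference, a minimal sketch of the __mro_entries__ hook that PEP 560 added in Python 3.7; Alias and ListInt are made-up names, not typing API.)

class Alias:
    def __init__(self, origin, *args):
        self.origin, self.args = origin, args
    def __mro_entries__(self, bases):
        # Tell class creation which real class(es) to substitute for this object.
        return (self.origin,)

ListInt = Alias(list, int)

class MyList(ListInt):  # allowed on 3.7+ even though ListInt is not a class
    pass

assert MyList.__mro__ == (MyList, list, object)
assert MyList.__orig_bases__ == (ListInt,)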
