RFC: TensorFlow Canonical Type System #208

Merged · 46 commits · Mar 25, 2020
Conversation


@mdanatg commented Feb 20, 2020

TensorFlow Canonical Type System

Status: Draft
RFC #: https://github.com/tensorflow/community/pull/208
Author(s): Dan Moldovan ([email protected])
Sponsor: Gaurav Jain ([email protected])
Updated: 2020-02-19

Objective

This RFC proposes a new TensorFlow module and namespace (tf.types) dedicated to storing implementation-free type definitions, similar to C++ header files. This module has no other dependencies inside TensorFlow, so any other internal module can depend on it to ensure interoperability without the risk of creating circular dependencies. These definitions can also be used by external users, for example in pytype annotations.
The RFC focuses on the Python API; however, the design should be reviewed with cross-language consistency in mind.
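
As a rough illustration of the intended usage (assuming tf.types exposes a Tensor interface as proposed; the function below is hypothetical), user code could annotate against the lightweight interface module without importing concrete implementation classes:

```
import tensorflow as tf

# Hypothetical sketch: annotate against the proposed tf.types interfaces.
# String annotations are used because tf.types does not exist yet.
def normalize(x: "tf.types.Tensor") -> "tf.types.Tensor":
    return x / tf.reduce_max(x)
```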


gabrieldemarmiesse commented Feb 20, 2020

Very nice proposal! In TensorFlow Addons, we've typed the public API, and without this RFC it's quite a challenge. We've had to define custom types that we would be happy to get rid of. See https://github.com/tensorflow/addons/blob/master/tensorflow_addons/utils/types.py

Could we also have types which correspond to all valid inputs for common TensorFlow functions? For example, we have things like TensorLike, which refers to anything that can be converted to a tensor, and we use this type a lot in the public API. If it could be taken care of on the TF side, that would be nice.
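
For concreteness, here is a rough sketch of the kind of alias meant here (the exact member list is an assumption, loosely modeled on tensorflow_addons/utils/types.py):

```
from typing import Union

import numpy as np
import tensorflow as tf

# Assumed membership, for illustration only.
TensorLike = Union[tf.Tensor, tf.Variable, np.ndarray, float, int, list]

def scale(x: TensorLike, factor: float) -> tf.Tensor:
    # Anything matching TensorLike should be accepted by convert_to_tensor.
    return tf.convert_to_tensor(x, dtype=tf.float32) * factor
```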

A good way to ensure that users find all the types they need is to add type hints to all the public functions and methods in TF, but it's quite some work :)


mdanatg commented Feb 20, 2020

@gabrieldemarmiesse Yes, TensorLike sounds like a broadly useful definition - I'll add it to the list. The aim is indeed to use these type hints in all public declarations, over time.

* N/A

### Performance Implications
* No performance implications expected. At most, we are adding a small number of levels to the class tree of some objects.
Member

One caveat here is that isinstance checks involving ABCs are significantly slower than those against non-abstract classes. This is because abc.ABCMeta implements __instancecheck__, so isinstance has to jump back to pure Python instead of doing MRO traversal in C:

```
>>> import abc
>>> class A(abc.ABC): pass
>>> class B: pass
>>> class SA(A): pass
>>> class SB(B): pass
>>> sa = SA()
>>> sb = SB()
>>> %timeit isinstance(sa, A)
237 ns ± 0.45 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
>>> %timeit isinstance(sb, B)
51.2 ns ± 0.245 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
```

While this is of course an artificial benchmark, and a single isinstance check is still "just" ~200ns (on my fairly slow laptop), TensorFlow does a lot of such checks, and the slowdown could well be noticeable in more realistic workloads.

Author

Very interesting. It sounds like we may want to avoid adding ABCMeta superclasses in cases where performance is critical, like eager tensors.

Member

One option is to define these types without abc.ABCMeta as a metaclass, i.e. make them abstract by convention. The way abc.ABCMeta enforces an implementation (at instantiation time instead of definition time) is likely not relevant/useful for tf.types. Note, however, that this also makes these types non-extensible, which could be an issue for TensorLike.

Author

Agreed. Just to be clear, the types would be non-extensible via the ABC registration mechanism; one could still extend them using traditional subclassing.
I added a preliminary note - PTAL.


The initial type hierarchy is focused on V2 symbols. We expect to encounter places where these symbols would not be compatible with V1 code; in such cases, the V1 symbols will not be affected.

#### Types created by this RFC
Member

Are the specifics of each type out of scope for this RFC? i.e. which methods will be available on a Shape or Tensor?

Author

I was hoping so, although I think we'll need to define them sooner rather than later if type annotations are to be useful.


* `Composite` - low-level static structure (opaque to GraphDef/IR)
* `Module` - builder for structures of `Variables` (invisible to GraphDef/IR)
* `Optional` - basic programming construct
Member

Is this an alias to typing.Optional or a separate type? If the latter, could you give an example when one might need a tf.types.Optional and not typing.Optional?

Author

I'll add a clarification - tf.types.Optional is an alias of tf.data.experimental.Optional, so a separate type from typing.Optional. I think the main differences are that it cannot be None and that it exposes has_value() and get_value().
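
For reference, a short usage sketch of that API (TF 2.x; behavior as described in the public docs, and later releases also expose it as tf.experimental.Optional):

```
import tensorflow as tf

opt = tf.data.experimental.Optional.from_value(42)
print(opt.has_value())   # tf.Tensor(True, shape=(), dtype=bool)
print(opt.get_value())   # tf.Tensor(42, shape=(), dtype=int32)

# Unlike typing.Optional, the "empty" case is still an Optional object, not None.
empty = tf.data.experimental.Optional.empty(tf.TensorSpec([], tf.int32))
print(empty.has_value())  # tf.Tensor(False, shape=(), dtype=bool)
```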

* `Symbol` - the regular graph tensor
* `Value` - eager tensors

* `TensorLike` - any type that can be implicitly converted to `Tensor` (see for example https://github.com/tensorflow/addons/blob/master/tensorflow_addons/utils/types.py)
Member

How will TensorLike interact with custom ->Tensor conversion machinery? If a user registers a new conversion function, should they also register a "virtual" subclass of TensorLike?

Author

Good question. Since using ABCs may have performance implications, as you pointed out, we may need to think of a different mechanism - for example a concrete TensorLike superclass (instead of a Union generic) that also defines to_tensor, or a dunder method similar to __ndarray__.

Member

I think that regardless of the implementation, the relationship between TensorLike, tf.convert_to_tensor and tf.register_tensor_conversion_function should be specified to avoid confusion.

Personally, I'm fine with TensorLike being non-extensible, i.e. if you want your custom type to be TensorLike, convert it to such explicitly.

Member

My initial expectation is that isinstance(x, tf.types.TensorLike) would work for anything registered with tf.register_tensor_conversion_function. I think we cannot do that while requiring TensorLike to be a protocol (e.g. np.ndarray doesn't define to_tensor, but can be converted to one).

Author

Added a section about this - PTAL.

It turns out that protocols do support static type checking, and can support isinstance in newer Python versions too.

I'm not sure that register_tensor_conversion_function can be made compatible with static type checking. ABCMeta.register might be the equivalent of it, but it lacks the means to specify the conversion logic if you don't control the class (e.g. how do you specify the conversion logic for list with ABCMeta.register?). And besides, if we replace register_tensor_conversion_function, we might as well opt for the leaner protocols (hasattr seems slightly faster than a bare isinstance). So my preference would be to use protocols as an alternative to register_tensor_conversion_function, and eventually deprecate it.
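
For illustration, a minimal sketch of what a protocol-based approach could look like; the protocol name and the __tf_tensor__ method below are assumptions for the example, not part of the RFC:

```
from typing import Protocol, runtime_checkable  # Python 3.8+

@runtime_checkable
class SupportsTensorConversion(Protocol):
    """Hypothetical protocol: anything defining __tf_tensor__ is convertible."""
    def __tf_tensor__(self):
        ...

class Box:
    def __init__(self, value):
        self.value = value

    def __tf_tensor__(self):
        # A real implementation would return tf.constant(self.value).
        return self.value

# Static checkers accept Box wherever SupportsTensorConversion is expected;
# @runtime_checkable additionally enables isinstance checks at runtime.
assert isinstance(Box(3), SupportsTensorConversion)
```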


Changes to definitions inside `tf.types` must be approved by TensorFlow leads, and typically should be accompanied by an RFC.

All type declarations are compatible with [pytype](https://github.com/google/pytype).
Member

Do you expect them to also be compatible with other type checkers, e.g. mypy or pyre?

Author

Updated the text - the hope is that so long as we rely on typing, static definitions and compliance with the relevant PEPs, they should be compatible with most (all?) static type checkers.

* `FunctionGraph` - IR of a single concrete function

### Alternatives Considered
* N/A
@gabrieldemarmiesse commented Feb 21, 2020

We should consider the alternative where not everything is in the same module and where we don't use the trick of making this:

variables are instances of tf.Variable, which is now a subclass of tf.types.Variable

I like the idea of having types in TensorFlow, but this whole ABC black magic for avoiding dependencies is done because, internally, TensorFlow has many, many circular dependencies (see here). In Python, the more black magic we do, the more it will come back to bite us; see https://github.com/tensorflow/community/pull/208/files#r382246511 .

Python types are not supposed to be used in this way, and I'm afraid that we're going to have many problems in the future just for the sake of supporting circular dependencies in the TF codebase - for example, some type system being confused about our types, or realizing that we need to redeclare all the classes' interfaces in tf.types, breaking the DRY rule.

The alternative is to declare types like AnyTensor = Union[tf.Tensor, tf.sparse.SparseTensor] and to fix TensorFlow's circular dependencies internally. Types can be declared in their corresponding files (e.g. AnyTensor is defined in the file defining the Tensor class), as sketched below.

In short, defining TensorFlow types without any ABC tricks shouldn't create new circular dependencies; it will just bring the existing ones to light.
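
A sketch of that alternative, with the alias as a plain typing Union declared next to the class it describes (the module layout is assumed here for illustration):

```
from typing import Union

import tensorflow as tf

# Declared alongside the Tensor class it describes; no extra base classes needed.
AnyTensor = Union[tf.Tensor, tf.sparse.SparseTensor]

def densify(x: AnyTensor) -> tf.Tensor:
    if isinstance(x, tf.sparse.SparseTensor):
        return tf.sparse.to_dense(x)
    return x
```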

Author

I think I agree with the broader comment: we want to avoid, rather than facilitate, circular dependencies. This RFC specifically aims to support the extract interface pattern for removing such circular dependencies.

That said, I fear the pattern is not a universal solution. For example, in the Keras instance you mentioned, the circular dependency cannot be broken by extracting types and requires moving some code around.

I'm not sure I fully follow the Variable example, though - could you elaborate on it? The main idea is to have the types defined in a module separated from all others, so for example if you want to recognize a Variable you can import just tf.types, rather than the entire tf module.

By "ABC tricks" I suspect you are referring to ABCMeta.register? There are no plans currently to use it (in fact we are considering not using abc at all, see one of the comments above); I hope it will not be needed.

@gabrieldemarmiesse commented Feb 21, 2020

Here I'm referring to the trick of having all public classes of TensorFlow subclass a class defined in tf.types; I think it's a brittle trick to enable the use of isinstance and avoid using the typing module.

if you want to recognize a Variable you can import just tf.types, rather than the entire tf module.

I see, but what is the benefit of that? As a user, what bonus do I get by doing from tensorflow.types import Variable rather than from tensorflow import Variable? I'm sure there is a reason, but it's not clear in this RFC and we would benefit from a small paragraph detailing the benefits.

A side note on user experience: a user will first try to use tf.Variable as a type hint before reading the docs. It's not very intuitive to put the type hint in a separate module.

Author

I added this as a discussion topic. I'm inclined to agree - we don't need both tf.types.Variable and tf.Variable, even if it were safe to use either. In that case, tf.types would only have entries for types which do not already belong elsewhere.

The Python types module uses a similar pattern.

Related, if you have additional thoughts on how typing could enable better decoupling, both internally and externally, please do suggest them.



## Questions and Discussion Topics

Member

How would the new type system be unified with https://www.tensorflow.org/api_docs/python/tf/TypeSpec?

I think having both namespaces might confuse users about which one to use, and we probably want to unify them if possible.

Author

Good question, and I agree it would be great to either unify them, or think of ways to make the differences clearer and more intuitive - perhaps we should begin to refer to them as [TF] Python types and TF [native] types.
The distinctions blur when using type annotations (e.g. if a tf.function had both an input signature and type annotations, we need to at least make sure they're consistent).
The most intuitive path toward unification would be to use Python type annotations to specify a TypeSpec, although the type annotation system is not yet powerful enough for that.
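
To make the comparison concrete, this is how the same contract is expressed today via input_signature; the annotated form quoted below is the aspirational equivalent from the RFC, not working code:

```
import tensorflow as tf

# Today: dtype and shape constraints are carried by the input signature.
@tf.function(input_signature=[tf.TensorSpec(shape=[3], dtype=tf.int32)])
def f(x):
    return x * 2

# Aspirational, annotation-based equivalent (not currently supported):
# @tf.function
# def f(x: Tensor[int32, [3]]): ...
```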


```
>>> @function
... def f(x: Tensor[int32, [3]]):
```
Contributor

How can we allow Tensor[int32, [3]]? Does this mean Tensor has to have a __getitem__ classmethod?

Author

That example was incorrect, sorry about that. Updated to a tested version.

```
>>> class BatchSize(Dimension):
...   value = 32
>>> @function
... def f(x: Tensor[Int32, Shape1D[BatchSize]]):
```
Member

What would be the recommended way to type functions which accept multiple dtypes? Would Union work? For example: def f(x: Tensor[Union[tf.float32, tf.float64], Shape[32]]).

Author

Although I haven't tested it, I'd expect Union to work because Tensor is a simple Generic here. If that doesn't work, we could fall back to a less-powerful AnyDType identifier.
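
A quick standalone check of the mechanism (the Tensor and dtype classes below are stand-ins for illustration, not TF symbols):

```
from typing import Generic, TypeVar, Union

DType = TypeVar("DType")
Shape = TypeVar("Shape")

class Tensor(Generic[DType, Shape]):
    """Stand-in generic tensor type, for illustration only."""

class Float32: ...
class Float64: ...
class Batch32: ...

# Union is a valid type argument to a Generic, so a checker can express
# "a float32 or float64 tensor of this shape".
def f(x: Tensor[Union[Float32, Float64], Batch32]) -> None:
    ...
```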

```
...   value = 32
>>> @function
... def f(x: Tensor[Int32, Shape1D[BatchSize]]):
...   ...
```
Member

Why Shape1D rather than Shape? Could we also just use a tuple to specify the shape? It would lead to clearer signatures.

Author

I updated the RFC text to avoid prescribing a particular scheme for type annotations; we can use whatever works best when we move to implementing it. It would be great to be able to use a tuple, and we should do it if it's possible. In the few tests that I'm aware of [1], certain limitations in the type annotation system precluded the use of tuples and forced using different types for different tensor ranks (e.g. Tensor1D, Tensor2D, etc.).

[1] tensorflow/tensorflow#31579

Contributor

A plain tuple would be much clearer indeed. Perhaps an abstract rank as well, in case the rank is unknown (e.g. Shape(rank=None) would be a dynamic-rank shape).

@fchollet commented Mar 24, 2020

The problem with Tensor1D, Tensor2D is that they hard-code the rank, which isn't always a safe assumption. Could there be a type to specify "rank is dynamic but the last dimension is N"? (That's what a Dense layer would need, for instance.)

Author

The inability to mix different ranks is an unfortunate limitation of the type annotation system, as far as I can tell. But specifying an unknown rank or partial shape should be possible.

Author

The RFC definitely doesn't finalize the design for this, so we have time - I'll clarify that in the doc.

I'm not too worried about the input_signature - we can add whatever functionality is missing to align it with type annotations, as long as it remains backward compatible, and backward compatibility isn't hard to achieve, I think. The hard limitation is how much expressivity the type annotations allow us, because they're not generic expressions and do have significant limitations. That's why I keep asking whether proposed interfaces are actually realizable in practice.

Member

I'm not familiar enough with the usage of typing.Generic to affirm anything concerning the feasibility of this :( I think it's possible based on what's available in the standard library, but I'm not positive.


With regards to syntax for shape typing, the broader Python ecosystem is very interested in this topic. Hopefully we can figure out a design/syntax that works across a range of array libraries (TensorFlow/NumPy/JAX/PyTorch) and type checkers (mypy/pytype/...).

Here are links to a few related discussions in other projects:
python/typing#516
python/typing#513
numpy/numpy#7370

I would also suggest reaching out to the Python typing-sig mailing list, where this has also been a topic of repeated interest.

Author

Thanks for the pointers - yes, these are interesting discussions! I think only a few pieces are missing - happy to join the choir.

