
Update tensorflow requirement from !=2.6.0,!=2.6.1,<2.11.0,>=2.2.0 to >=2.2.0,!=2.6.0,!=2.6.1,<2.12.0 #678

Conversation

@dependabot dependabot bot commented on behalf of github Nov 18, 2022

Updates the requirements on tensorflow to permit the latest version.

Release notes

Sourced from tensorflow's releases.

TensorFlow 2.11.0

Release 2.11.0

Breaking Changes

  • The tf.keras.optimizers.Optimizer base class now points to the new Keras optimizer, while the old optimizers have been moved to the tf.keras.optimizers.legacy namespace.

    If you find your workflow failing due to this change, you may be facing one of the following issues:

    • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
    • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, no longer supports TF1, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
    • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
    • Learning rate schedule access. When using a tf.keras.optimizers.schedules.LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
    • If you implemented a custom optimizer based on the old optimizer. Please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
    • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
    • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

    The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on the new tf.keras.optimizers.Optimizer base class.
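    As a sketch of the two most common fixes above (switching to the legacy optimizer, and building optimizer variables up front for multi-stage training) — the model and layer sizes here are illustrative, not from the release notes:

    ```python
    import tensorflow as tf

    # Illustrative model; the layer sizes are arbitrary.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

    # Fix for checkpoint-loading / TF1 issues: switch to the legacy optimizer.
    legacy_opt = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)

    # Fix for "Cannot recognize variable..." in multi-stage workflows: create
    # all optimizer variables before the training loop with an explicit build().
    new_opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
    new_opt.build(model.trainable_variables)
    ```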

  • tensorflow/python/keras code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any import of tensorflow.python.keras and use the public API with from tensorflow import keras or import tensorflow as tf; tf.keras.
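    The import migration above is mechanical; a minimal before/after sketch:

    ```python
    # Before (private API, deleted in the TF 2.12 release):
    #   from tensorflow.python.keras import layers

    # After: import only through the public API.
    import tensorflow as tf
    from tensorflow import keras

    layer = keras.layers.Dense(8)
    ```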

Major Features and Improvements

  • tf.lite:

    • New operations supported: tf.math.unsorted_segment_sum, tf.atan2 and tf.sign.
    • Updates to existing operations:
      • tfl.mul now supports complex32 inputs.
  • tf.experimental.StructuredTensor:

    • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
  • tf.keras:

    • Added a new get_metrics_result() method to tf.keras.models.Model.
      • Returns the current metrics values of the model as a dict.
    • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
    • Added weight decay support for all Keras optimizers via the weight_decay argument.
    • Added the Adafactor optimizer - tf.keras.optimizers.Adafactor.
    • Added warmstart_embedding_matrix to tf.keras.utils.
      • This utility can be used to warmstart an embedding matrix, so you can reuse previously learned word embeddings when working with a new vocabulary that may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
  • tf.Variable:

    • Added CompositeTensor as a base class to ResourceVariable.
      • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
    • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
      • When it's set to False, the variable won't be lifted out of tf.function; thus it can be used as a tf.function-local variable: during each execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).
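    A minimal sketch of such a tf.function-local variable (TF >= 2.11; the function name and values are illustrative):

    ```python
    import tensorflow as tf

    @tf.function  # non-XLA by default; this feature does not work under jit_compile=True
    def running_total(xs):
        # With lifting disabled, this variable is created and disposed on every
        # call, like a stack-allocated local in C/C++.
        total = tf.Variable(0.0, experimental_enable_variable_lifting=False)
        total.assign_add(tf.reduce_sum(xs))
        return total.read_value()

    result = running_total(tf.constant([1.0, 2.0, 3.0]))  # → 6.0
    ```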

... (truncated)

Changelog

Sourced from tensorflow's changelog.

Release 2.11.0

Breaking Changes

  • tf.keras.optimizers.Optimizer now points to the new Keras optimizer, and old optimizers have moved to the tf.keras.optimizers.legacy namespace. If you find your workflow failing due to this change, you may be facing one of the following issues:

    • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
    • TF1 compatibility. The new optimizer no longer supports TF1, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
    • API not found. The new optimizer has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
    • Learning rate schedule access. When using a LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
    • You implemented a custom optimizer based on the old optimizer. Please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
    • Errors such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
    • Performance regression on ParameterServerStrategy. This could be significant if you have many PS servers. We are aware of this issue and working on fixes, for now we suggest using the legacy optimizers when using ParameterServerStrategy.
    • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

    The old Keras optimizer will never be deleted, but will not see any

... (truncated)

Commits

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Updates the requirements on [tensorflow](https://github.com/tensorflow/tensorflow) to permit the latest version.
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](tensorflow/tensorflow@v2.2.0...v2.11.0)

---
updated-dependencies:
- dependency-name: tensorflow
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies Pull requests that update a dependency file label Nov 18, 2022
jklaise commented Nov 18, 2022

Main thing to be careful about is the breaking change to tf.keras.optimizers.Optimizer.

Also, this error in the CI seems strange: E AttributeError: module 'tensorflow' has no attribute 'Tensor'

@ascillitoe

@jklaise good spot. I can look into the Optimizer change.

This one also seems significant:

TF pip:

Windows CPU-builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the windows-native pip packages for tensorflow or tensorflow-cpu would install Intel's tensorflow-intel package. These packages are provided as-is. Tensorflow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. For using TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.

I'm wondering if this, and the complexity in our CI install process wrt alibi-testing, might be the cause of E AttributeError: module 'tensorflow' has no attribute 'Tensor' on Windows? i.e. tensorflow 2.10 is pulled in by the alibi-testing install, and then it is uninstalled and replaced with tensorflow 2.11, which actually happens to be an alias for tensorflow-intel.

This might not be the cause at all, but it is making me think we need to rethink our CI deps wrt alibi-testing. At the very least, we might want to make sure deps like tensorflow are bumped ASAP in alibi-testing. Or maybe we should just remove all deps from alibi-testing, thus saying "it is only intended to be used with alibi/alibi-detect, and not as a standalone package".

@ascillitoe

Also, wrt this Windows issue, we probably need to add a note to our docs/README, since this effectively means pip install alibi-detect[tensorflow] is not valid for Windows + GPUs?

jklaise commented Nov 18, 2022

Good spot. Wrt alibi-testing I would be in favour of making it as simple as possible, or even removing it; the original motivation was to use it as a kind of installable collection of model artifacts to test some functionality on real, pre-trained models, but perhaps we can do it in a different way, or even use untrained models for tests.

ascillitoe commented Nov 22, 2022

@jklaise I've rerun now that we've stripped the deps out of alibi-testing. Looks like this has fixed the Windows issue. Now CI is failing due to the optimizers changes as expected.

Edit: Also, test_saving.py is failing since transformers is not yet compatible with tensorflow 2.11 huggingface/transformers#20329 (comment)

@ascillitoe

Leaving the bump to tensorflow 2.11 until later so that alibi-detect v0.11.0 syncs with alibi v0.9.0.

@ascillitoe ascillitoe modified the milestones: v0.11.0, v0.11.1 Jan 12, 2023
@ascillitoe

Blocked by #739

dependabot bot commented on behalf of github Mar 21, 2023

A newer version of tensorflow exists, but since this PR has been edited by someone other than Dependabot I haven't updated it. You'll get a PR for the updated version as normal once this PR is merged.

jklaise commented Apr 5, 2023

Closing as done in #739.

@jklaise jklaise closed this Apr 5, 2023
dependabot bot commented on behalf of github Apr 5, 2023

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.

@dependabot dependabot bot deleted the dependabot/pip/tensorflow-gte-2.2.0-and-neq-2.6.0-and-neq-2.6.1-and-lt-2.12.0 branch April 5, 2023 12:42
@ascillitoe ascillitoe removed the Blocked Issue is blocked by some bigger issue label Apr 5, 2023
@ascillitoe

@dependabot recreate

dependabot bot commented on behalf of github Apr 5, 2023

Looks like this PR is closed. If you re-open it I'll rebase it as long as no-one else has edited it (you can use @dependabot reopen if the branch has been deleted).

@ascillitoe

@dependabot reopen

@dependabot dependabot bot reopened this Apr 5, 2023
@dependabot dependabot bot restored the dependabot/pip/tensorflow-gte-2.2.0-and-neq-2.6.0-and-neq-2.6.1-and-lt-2.12.0 branch April 5, 2023 13:37
@ascillitoe

@dependabot close

@dependabot dependabot bot closed this Apr 5, 2023
@dependabot dependabot bot deleted the dependabot/pip/tensorflow-gte-2.2.0-and-neq-2.6.0-and-neq-2.6.1-and-lt-2.12.0 branch April 5, 2023 13:38