Release 3.0.0

Released by @drasmuss on 17 Dec 18:28

Compatible with Nengo 3.0.0

Compatible with TensorFlow 2.0.0

There are a lot of breaking changes in NengoDL 3.0. See the migration guide for all the details.

Added

  • Keras Layer classes can now be used with nengo_dl.Layer/tensor_layer.
  • TensorGraph can now be used as a Keras Layer.
  • Added Simulator.predict/evaluate/fit functions, which implement the Keras Model API (see the sketch at the end of this list).
  • Added a warning that changing the TensorFlow seed (e.g. on Simulator.reset) will not affect any existing TensorFlow operations (this was always true in TensorFlow, the warning is just to help avoid confusion).
  • Added TensorGraph.build_inputs, which will return a set of Keras Input layers that can be used as input to the TensorGraph layer itself.
  • Added nengo_dl.callbacks.TensorBoard. This is identical to tf.keras.callbacks.TensorBoard, except it will also perform profiling during inference (rather than only during training).
  • Added a stateful option to Simulator.run, which can be set to False to avoid updating the saved simulation state at the end of a run.
  • Added nengo_dl.configure_settings(stateful=False) option to avoid building the parts of the model responsible for preserving state between executions (this will override any stateful=True arguments in individual functions).
  • Added nengo_dl.configure_settings(use_loop=False) option to avoid building the simulation inside a symbolic TensorFlow loop. This may improve simulation speed, but the simulation can only run for exactly unroll_simulation timesteps.
  • NengoDL now requires jinja2 (used to template some of the docstrings).
  • Added an inputs argument to Simulator.check_gradients, which can be used to control the initial value of input Nodes during the gradient calculations.
  • Added nengo_dl.Converter for automatically converting Keras models to native Nengo networks. See the documentation for more details.
  • Added Legendre Memory Unit RNN example.
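
As a quick illustration of the new Keras-style workflow, here is a minimal sketch (the network, data, and hyperparameters are illustrative, not taken from the release):

    import numpy as np
    import tensorflow as tf
    import nengo
    import nengo_dl

    # a trivial network whose decoded output should learn to track 2x its input
    with nengo.Network() as net:
        inp = nengo.Node(np.zeros(1))
        ens = nengo.Ensemble(50, 1)
        nengo.Connection(inp, ens)
        probe = nengo.Probe(ens)

    with nengo_dl.Simulator(net, minibatch_size=4) as sim:
        # compile/fit/evaluate/predict mirror the tf.keras.Model API
        sim.compile(optimizer=tf.optimizers.Adam(0.01), loss=tf.losses.mse)
        x = np.random.uniform(-1, 1, size=(4, 10, 1))  # (batch, steps, dims)
        sim.fit({inp: x}, {probe: 2 * x}, epochs=2)
        print(sim.evaluate({inp: x}, {probe: 2 * x}))
        outputs = sim.predict({inp: x})  # dict mapping probes to output arrays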

Changed

  • Minimum TensorFlow version is now 2.0.0.
  • Simulator.save/load_params now uses a single include_non_trainable=True/False argument (equivalent to the previous include_local). Trainable parameters are always saved, so the include_global argument has been removed.
  • Standardized all signals/operations in a simulation to be batch-first.
  • The dtype option is now specified as a string (e.g. "float32" rather than tf.float32).
  • If the requested number of simulation steps is not evenly divisible by Simulator.unroll_simulation then probe values and sim.time/n_steps will be updated based on the number of steps actually run (rather than the requested number of steps). Note that these extra steps were also run previously, but their results were hidden from the user.
  • Renamed TensorGraph.input_ph to TensorGraph.node_inputs.
  • Simulator.time/n_steps are now read-only.
  • Simulator.n_steps/time are now managed as part of the op graph, rather than manually in the Simulator.
  • Renamed nengo_dl.objectives to nengo_dl.losses (to align with tf.losses).
  • nengo_dl.objectives.Regularize now takes two arguments (y_true and y_pred) in order to be compatible with the tf.losses.Loss API (y_true is ignored).
  • The remove_constant_copies simplification step is now disabled by default. In certain situations this could be an unsafe manipulation (specifically, when using Simulator.save/load_params it could change which parameters are saved). It can be manually re-enabled through the simplifications configuration option.
  • Simulator.check_gradients now only accepts an optional list of Probes (no longer accepts arbitrary Tensors).
  • Eager execution is no longer disabled on import (it is still disabled within the Simulator context, for performance reasons; see tensorflow/tensorflow#33052).
  • nengo_dl.tensor_layer(x, func, ...) now passes any extra kwargs to the nengo_dl.TensorNode constructor (rather than to func). If you need to pass information to func, consider using a partial function (e.g., tensor_layer(x, functools.partial(func, arg=5), ...)) or a callable class (e.g., tensor_layer(x, MyFunc(arg=5), ...)). When using Keras Layers with nengo_dl.tensor_layer, a fully instantiated Layer object should be passed rather than a Layer class (e.g., use tensor_layer(x, tf.keras.layers.Dense(units=10), ...) instead of tensor_layer(x, tf.keras.layers.Dense, units=10)).
  • benchmarks.run_profile now uses the TensorBoard format when profiling; see the documentation for instructions on how to view this information (the information is the same, it is just accessed through TensorBoard rather than requiring that it be loaded directly in a Chrome browser).
  • nengo_dl.TensorNode now takes shape_in and shape_out arguments (which specify a possibly multidimensional shape), rather than the scalar size_in and size_out.
  • TensorNode functions no longer use the pre_build/post_build functionality. If you need to implement more complex behaviour in a TensorNode, use a custom Keras Layer subclass instead. For example, Layers can create new parameter Variables inside the Layer build method.
  • TensorNode now has an optional pass_time parameter which can be set to False to disable passing the current simulation time to the TensorNode function.
  • Added nengo_dl.Layer. Similar to the old nengo_dl.tensor_layer, this is a wrapper for constructing TensorNodes, but it mimics the new tf.keras.layers.Layer API rather than the old tf.layers (see the sketch after this list).
  • TensorFlow's "control flow v2" is disabled on import, for performance reasons; see tensorflow/tensorflow#33052.
  • Renamed nengo_dl.objectives.mse to nengo_dl.losses.nan_mse (to emphasize the special logic it provides for nan targets).
  • Connections created by nengo_dl.Layer/tensor_layer will be marked as non-trainable by default.
  • Updated all documentation and examples for the new syntax (in particular, see the updated Coming from TensorFlow tutorial and TensorFlow/Keras integration example, and the new Tips and tricks page).
  • The training/inference build logic (e.g., swapping spiking neurons with rate implementations) can be overridden by setting the global Keras learning phase (tf.keras.backend.set_learning_phase) before the Simulator is constructed.
  • Increased minimum Nengo core version to 3.0.0.
  • Reduced size of TensorFlow constants created by Reset ops.
  • DotInc operators with different signal sizes will no longer be merged (these merged operators had to use a less efficient sparse matrix multiplication, and in general this cost outweighed the benefit of merging).
  • Trainability can now be configured in the config of subnetworks. This replaces the ability to mark Networks as (non)trainable. See the updated documentation for details.
  • Training/evaluation target data can now have a different number of timesteps than input data (as long as it aligns with the number of timesteps expected by the loss function).
  • Whether or not to display progress bars in Simulator.run and Simulator.run_steps now defaults to the value of the Simulator(..., progress_bar=...) argument.
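
For example, the new Layer syntax looks like this (a minimal sketch; the shapes and layer choices are illustrative):

    import numpy as np
    import tensorflow as tf
    import nengo
    import nengo_dl

    with nengo.Network():
        inp = nengo.Node(np.zeros(28 * 28))
        # pass a fully instantiated Keras Layer, not the Layer class
        dense = nengo_dl.Layer(tf.keras.layers.Dense(units=64))(inp)
        # plain TensorFlow functions can be wrapped as well
        activations = nengo_dl.Layer(tf.nn.relu)(dense)
        # shape_in/shape_out replace the old scalar size_in/size_out
        conv = nengo_dl.Layer(tf.keras.layers.Conv2D(filters=4, kernel_size=3))(
            inp, shape_in=(28, 28, 1)
        )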

Fixed

  • Fixed bug due to non-determinism of Process state ordering in Python 3.5.
  • Nested Keras layers passed to TensorNode will be rebuilt correctly if necessary.

Deprecated

  • nengo_dl.tensor_layer has been deprecated. Use nengo_dl.Layer instead; tensor_layer(x, func, **kwargs) is equivalent to Layer(func)(x, **kwargs). A migration sketch follows below.
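
A migration sketch (the layer sizes are illustrative):

    import numpy as np
    import tensorflow as tf
    import nengo
    import nengo_dl

    with nengo.Network():
        inp = nengo.Node(np.zeros(10))

        # deprecated style
        out_old = nengo_dl.tensor_layer(inp, tf.keras.layers.Dense(units=10))

        # equivalent new style
        out_new = nengo_dl.Layer(tf.keras.layers.Dense(units=10))(inp)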

Removed

  • Removed the session_config configuration option. Use the updated TensorFlow config system instead.
  • Removed the deprecated nengo_dl.Simulator(..., dtype=...) argument. Use nengo_dl.configure_settings(dtype=...) instead.
  • Removed the deprecated Simulator.run(..., input_feeds=...) argument. Use Simulator.run(..., data=...) instead.
  • Removed the Simulator.sess attribute (Sessions are no longer used in TensorFlow 2.0). The underlying Keras model (Simulator.keras_model) should be used as the entry point into the engine underlying a Simulator instead.
  • Removed the Simulator.loss function (use Simulator.compile and Simulator.evaluate to compute loss values instead).
  • Removed the Simulator.train function (use Simulator.compile and Simulator.fit to optimize a network instead).
  • Removed the nengo_dl.objectives.Regularize(weight=x, ...) argument. Use the Simulator.compile(loss_weights=...) functionality instead.
  • Removed the Simulator.run(..., extra_feeds=...) argument. TensorFlow 2.0 no longer uses the Session/feed execution model.
  • Removed Simulator.run_batch. This functionality is now managed by the underlying Simulator.keras_model.
  • Removed TensorGraph.training_step. The training step is now managed by Keras.
  • Removed TensorGraph.build_outputs and TensorGraph.build_optimizer_func. Building loss functions/optimizers is now managed by Keras.
  • Removed nengo_dl.utils.find_non_differentiable (this no longer works in TF2.0's eager mode).
  • Removed the Simulator(..., tensorboard=...) argument. Use the Keras TensorBoard callback approach for TensorBoard logging instead (see tf.keras.callbacks.TensorBoard or nengo_dl.callbacks.NengoSummaries, and the sketch after this list).
  • NengoDL no longer monkeypatches a fix for the tf.dynamic_stitch gradients on import. The gradients are still incorrect (see tensorflow/tensorflow#7397), but we no longer use this operation within NengoDL, so we leave it up to the user to fix it in their own code if needed.
  • Removed benchmarks.matmul_vs_reduce. We use matmul for everything now, so this comparison is no longer necessary.
  • Removed utils.minibatch_generator (training/inference loops are now managed by Keras).
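
For the removals that moved functionality into the Keras compile/callback system, the replacement pattern looks like this sketch (the network, data, and log directories are illustrative):

    import numpy as np
    import tensorflow as tf
    import nengo
    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node(np.zeros(1))
        ens = nengo.Ensemble(10, 1)
        nengo.Connection(inp, ens)
        probe = nengo.Probe(ens)

    with nengo_dl.Simulator(net, minibatch_size=2) as sim:
        # replaces the removed Simulator.train/Simulator.loss functions
        sim.compile(optimizer=tf.optimizers.SGD(0.1), loss=tf.losses.mse)
        sim.fit(
            {inp: np.ones((2, 5, 1))},
            {probe: np.ones((2, 5, 1))},
            epochs=1,
            # replaces the removed Simulator(..., tensorboard=...) argument
            callbacks=[
                tf.keras.callbacks.TensorBoard(log_dir="./logs"),
                nengo_dl.callbacks.NengoSummaries("./logs/nengo", sim, [ens]),
            ],
        )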