Release 3.0.0
Compatible with Nengo 3.0.0
Compatible with TensorFlow 2.0.0
There are a lot of breaking changes in NengoDL 3.0. See the migration guide for all the details.
Added
- Keras Layer classes can now be used with nengo_dl.Layer / nengo_dl.tensor_layer.
- TensorGraph can now be used as a Keras Layer.
- Added Simulator.predict / Simulator.evaluate / Simulator.fit functions, which implement the Keras Model API.
- Added a warning that changing the TensorFlow seed (e.g. on Simulator.reset) will not affect any existing TensorFlow operations (this was always true in TensorFlow; the warning just helps avoid confusion).
- Added TensorGraph.build_inputs, which returns a set of Keras Input layers that can be used as input to the TensorGraph layer itself.
- Added nengo_dl.callbacks.TensorBoard. This is identical to tf.keras.callbacks.TensorBoard, except that it also performs profiling during inference (rather than only during training).
- Added a stateful option to Simulator.run, which can be set to False to avoid updating the saved simulation state at the end of a run.
- Added the nengo_dl.configure_settings(stateful=False) option to avoid building the parts of the model responsible for preserving state between executions (this will override any stateful=True arguments in individual functions).
- Added the nengo_dl.configure_settings(use_loop=False) option to avoid building the simulation inside a symbolic TensorFlow loop. This may improve simulation speed, but the simulation can then only run for exactly unroll_simulation timesteps.
- NengoDL now requires jinja2 (used to template some of the docstrings).
- Added an inputs argument to Simulator.check_gradients, which can be used to control the initial value of input Nodes during the gradient calculations.
- Added nengo_dl.Converter for automatically converting Keras models to native Nengo networks. See the documentation for more details.
- Added a Legendre Memory Unit RNN example.
Changed
- The minimum TensorFlow version is now 2.0.0.
- Simulator.save_params / Simulator.load_params now use a single include_non_trainable=True/False argument (equivalent to the previous include_local). Trainable parameters will always be saved, so the include_global argument has been removed.
- Standardized all signals/operations in a simulation to be batch-first.
- The dtype option is now specified as a string (e.g. "float32" rather than tf.float32).
- If the requested number of simulation steps is not evenly divisible by Simulator.unroll_simulation, then probe values and sim.time / sim.n_steps will be updated based on the number of steps actually run (rather than the requested number of steps). Note that these extra steps were also run previously, but their results were hidden from the user.
- Renamed TensorGraph.input_ph to TensorGraph.node_inputs.
- Simulator.time / Simulator.n_steps are now read-only.
- Simulator.n_steps / Simulator.time are now managed as part of the op graph, rather than manually in the Simulator.
- Renamed nengo_dl.objectives to nengo_dl.losses (to align with tf.losses).
- nengo_dl.objectives.Regularize now takes two arguments (y_true and y_pred) in order to be compatible with the tf.losses.Loss API (y_true is ignored).
- The remove_constant_copies simplification step is now disabled by default. In certain situations this could be an unsafe manipulation (specifically, when using Simulator.save_params / Simulator.load_params it could change which parameters are saved). It can be manually re-enabled through the simplifications configuration option.
- Simulator.check_gradients now only accepts an optional list of Probes (it no longer accepts arbitrary Tensors).
- Eager execution is no longer disabled on import (it is still disabled within the Simulator context, for performance reasons; see tensorflow/tensorflow#33052).
- nengo_dl.tensor_layer(x, func, ...) now passes any extra kwargs to the nengo_dl.TensorNode constructor (rather than to func). If you need to pass information to func, consider using a partial function (e.g. tensor_layer(x, functools.partial(func, arg=5), ...)) or a callable class (e.g. tensor_layer(x, MyFunc(arg=5), ...)).
- When using Keras Layers with nengo_dl.tensor_layer, a fully instantiated Layer object should be passed rather than a Layer class (e.g., use tensor_layer(x, tf.keras.layers.Dense(units=10), ...) instead of tensor_layer(x, tf.keras.layers.Dense, units=10)).
- benchmarks.run_profile now uses the TensorBoard format when profiling; see the documentation for instructions on how to view this information (the information is the same, it is just accessed through TensorBoard rather than being loaded directly in a Chrome browser).
- nengo_dl.TensorNode now takes shape_in and shape_out arguments (which specify a possibly multidimensional shape), rather than the scalar size_in and size_out.
- TensorNode functions no longer use the pre_build / post_build functionality. If you need to implement more complex behaviour in a TensorNode, use a custom Keras Layer subclass instead. For example, Layers can create new parameter Variables inside the Layer's build method.
- TensorNode now has an optional pass_time parameter, which can be set to False to disable passing the current simulation time to the TensorNode function.
- Added nengo_dl.Layer. Similar to the old nengo_dl.tensor_layer, this is a wrapper for constructing TensorNodes, but it mimics the new tf.keras.layers.Layer API rather than the old tf.layers API.
- TensorFlow's "control flow v2" is disabled on import, for performance reasons; see tensorflow/tensorflow#33052.
- Renamed nengo_dl.objectives.mse to nengo_dl.losses.nan_mse (to emphasize the special logic it provides for nan targets).
- Connections created by nengo_dl.Layer / nengo_dl.tensor_layer will be marked as non-trainable by default.
- Updated all documentation and examples for the new syntax (in particular, see the updated "Coming from TensorFlow" tutorial and "TensorFlow/Keras integration" example, and the new "Tips and tricks" page).
- The training/inference build logic (e.g., swapping spiking neurons with rate implementations) can be overridden by setting the global Keras learning phase (tf.keras.backend.set_learning_phase) before the Simulator is constructed.
- Increased the minimum Nengo core version to 3.0.0.
- Reduced the size of TensorFlow constants created by Reset ops.
- DotInc operators with different signal sizes will no longer be merged (the merged operators had to use a less efficient sparse matrix multiplication, and in general this cost outweighed the benefit of merging).
- Trainability can now be configured in the config of subnetworks. This replaces the ability to mark Networks as (non)trainable. See the updated documentation for details.
- Training/evaluation target data can now have a different number of timesteps than the input data (as long as it aligns with the number of timesteps expected by the loss function).
- Whether or not to display progress bars in Simulator.run and Simulator.run_steps now defaults to the value of Simulator(..., progress_bar=x).
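The partial-function workaround mentioned above for passing extra arguments to func (now that tensor_layer forwards its kwargs to the TensorNode constructor instead) is standard functools.partial usage, sketched here outside of NengoDL with a toy function:

```python
import functools

def scale(x, factor=1.0):
    # stand-in for a TensorNode function that needs an extra argument
    return x * factor

# bind factor ahead of time, so the resulting callable only takes x,
# matching the single-argument signature the layer machinery expects
scaled = functools.partial(scale, factor=5)
print(scaled(2))  # 10
```

A callable class with the extra state stored in __init__ (the MyFunc(arg=5) form above) achieves the same thing when the bound state needs to be more complex.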
Fixed
- Fixed a bug caused by the non-determinism of Process state ordering in Python 3.5.
- Nested Keras layers passed to a TensorNode will now be rebuilt correctly if necessary.
Deprecated
- nengo_dl.tensor_layer has been deprecated. Use nengo_dl.Layer instead; tensor_layer(x, func, **kwargs) is equivalent to Layer(func)(x, **kwargs).
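The tensor_layer(x, func) to Layer(func)(x) equivalence is just a reordering of the same call; a minimal pure-Python sketch with hypothetical stand-in classes (not the real NengoDL implementations):

```python
class Layer:
    # hypothetical stand-in: wraps a function, applies it when called
    def __init__(self, func):
        self.func = func

    def __call__(self, x, **kwargs):
        return self.func(x, **kwargs)

def tensor_layer(x, func, **kwargs):
    # the old-style call expressed in terms of the new one
    return Layer(func)(x, **kwargs)

double = lambda x: x * 2
assert tensor_layer(3, double) == Layer(double)(3) == 6
```

The new form is what makes layer construction composable in the Keras functional style: Layer(func) is a reusable layer object that can be applied to multiple inputs.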
Removed
- Removed the session_config configuration option. Use the updated TensorFlow config system instead.
- Removed the deprecated nengo_dl.Simulator(..., dtype=...) argument. Use nengo_dl.configure_settings(dtype=...) instead.
- Removed the deprecated Simulator.run(..., input_feeds=...) argument. Use Simulator.run(..., data=...) instead.
- Removed the Simulator.sess attribute (Sessions are no longer used in TensorFlow 2.0). The underlying Keras model (Simulator.keras_model) should be used as the entry point into the engine underlying a Simulator instead.
- Removed the Simulator.loss function (use Simulator.compile and Simulator.evaluate to compute loss values instead).
- Removed the Simulator.train function (use Simulator.compile and Simulator.fit to optimize a network instead).
- Removed the nengo_dl.objectives.Regularize(weight=x, ...) argument. Use the Simulator.compile(loss_weights=...) functionality instead.
- Removed the Simulator.run(..., extra_feeds=...) argument. TensorFlow 2.0 no longer uses the Session/feed execution model.
- Removed Simulator.run_batch. This functionality is now managed by the underlying Simulator.keras_model.
- Removed TensorGraph.training_step. The training step is now managed by Keras.
- Removed TensorGraph.build_outputs and TensorGraph.build_optimizer_func. Building loss functions/optimizers is now managed by Keras.
- Removed nengo_dl.utils.find_non_differentiable (this no longer works in TF2.0's eager mode).
- Removed the Simulator(..., tensorboard=...) argument. Use the Keras TensorBoard callback approach for logging instead (see tf.keras.callbacks.TensorBoard or nengo_dl.callbacks.NengoSummaries).
- NengoDL will no longer monkeypatch the tf.dynamic_stitch gradients on import. The gradients are still incorrect (see tensorflow/tensorflow#7397), but we no longer use this operation within NengoDL, so we leave it up to the user to fix it in their own code if needed.
- Removed benchmarks.matmul_vs_reduce. We use matmul for everything now, so this comparison is no longer necessary.
- Removed utils.minibatch_generator (training/inference loops are now managed by Keras).
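As context for the loss_weights replacement of the Regularize(weight=x) argument: Keras combines multiple loss terms as a weighted sum, which in plain Python amounts to the following (a conceptual sketch, not the Keras implementation):

```python
def total_loss(losses, loss_weights):
    # Keras-style combination: each loss term is scaled by its weight
    # and the results are summed into a single scalar objective
    return sum(w * l for l, w in zip(losses, loss_weights))

print(total_loss([2.0, 4.0], [0.5, 1.0]))  # 0.5*2.0 + 1.0*4.0 = 5.0
```

So a regularization term that previously carried its own weight can instead be registered as an ordinary loss and scaled through compile's loss_weights.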