Releases: nengo/nengo-dl
Release 3.0.0
Compatible with Nengo 3.0.0
Compatible with TensorFlow 2.0.0
There are a lot of breaking changes in NengoDL 3.0. See the migration guide for all the details.
Added
- Keras `Layer` classes can now be used with `nengo_dl.Layer`/`tensor_layer`.
- `TensorGraph` can now be used as a Keras `Layer`.
- Added `Simulator.predict/evaluate/fit` functions, which implement the Keras Model API (see the sketch after this list).
- Added a warning that changing the TensorFlow seed (e.g. on `Simulator.reset`) will not affect any existing TensorFlow operations (this was always true in TensorFlow; the warning is just to help avoid confusion).
- Added `TensorGraph.build_inputs`, which will return a set of Keras `Input` layers that can be used as input to the TensorGraph layer itself.
- Added `nengo_dl.callbacks.TensorBoard`. This is identical to `tf.keras.callbacks.TensorBoard`, except it will also perform profiling during inference (rather than only during training).
- Added `stateful` option to `Simulator.run` which can be set to False to avoid updating the saved simulation state at the end of a run.
- Added `nengo_dl.configure_settings(stateful=False)` option to avoid building the parts of the model responsible for preserving state between executions (this will override any `stateful=True` arguments in individual functions).
- Added `nengo_dl.configure_settings(use_loop=False)` option to avoid building the simulation inside a symbolic TensorFlow loop. This may improve simulation speed, but the simulation can only run for exactly `unroll_simulation` timesteps.
- NengoDL now requires `jinja2` (used to template some of the docstrings).
- Added an `inputs` argument to `Simulator.check_gradients`, which can be used to control the initial value of input Nodes during the gradient calculations.
- Added `nengo_dl.Converter` for automatically converting Keras models to native Nengo networks. See the documentation for more details.
- Added Legendre Memory Unit RNN example.
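For reference, here is a rough sketch of the new Keras-style training workflow; the network, layer sizes, and data below are purely illustrative, not part of the release notes.

```python
import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

# a toy network (purely illustrative)
with nengo.Network() as net:
    inp = nengo.Node(np.zeros(1))
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.RectifiedLinear())
    nengo.Connection(inp, ens, synapse=None)
    probe = nengo.Probe(ens)

with nengo_dl.Simulator(net, minibatch_size=4) as sim:
    # compile/fit/evaluate mirror the tf.keras.Model API
    sim.compile(optimizer=tf.optimizers.Adam(0.01),
                loss={probe: tf.losses.MeanSquaredError()})
    # data is shaped (batch, timesteps, dimensions)
    x = np.random.uniform(-1, 1, size=(16, 1, 1))
    sim.fit({inp: x}, {probe: 2 * x}, epochs=2)
    print(sim.evaluate({inp: x}, {probe: 2 * x}))
```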
Changed
- Minimum TensorFlow version is now 2.0.0.
- `Simulator.save/load_params` now uses a single `include_non_trainable=True/False` argument (equivalent to the previous `include_local`). Trainable parameters will always be saved, so the `include_global` argument is removed.
- Standardized all signals/operations in a simulation to be batch-first.
- The dtype option is now specified as a string (e.g. `"float32"` rather than `tf.float32`).
- If the requested number of simulation steps is not evenly divisible by `Simulator.unroll_simulation` then probe values and `sim.time/n_steps` will be updated based on the number of steps actually run (rather than the requested number of steps). Note that these extra steps were also run previously, but their results were hidden from the user.
- Renamed `TensorGraph.input_ph` to `TensorGraph.node_inputs`.
- `Simulator.time/n_steps` are now read-only.
- `Simulator.n_steps/time` are now managed as part of the op graph, rather than manually in the Simulator.
- Renamed `nengo_dl.objectives` to `nengo_dl.losses` (to align with `tf.losses`).
- `nengo_dl.objectives.Regularize` now takes two arguments (`y_true` and `y_pred`) in order to be compatible with the `tf.losses.Loss` API (`y_true` is ignored).
- The `remove_constant_copies` simplification step is now disabled by default. In certain situations this could be an unsafe manipulation (specifically, when using `Simulator.save/load_params` it could change which parameters are saved). It can be manually re-enabled through the `simplifications` configuration option.
- `Simulator.check_gradients` now only accepts an optional list of Probes (it no longer accepts arbitrary Tensors).
- Eager execution is no longer disabled on import (it is still disabled within the Simulator context, for performance reasons; see tensorflow/tensorflow#33052).
- `nengo_dl.tensor_layer(x, func, ...)` now passes any extra kwargs to the `nengo_dl.TensorNode` constructor (rather than to `func`). If you need to pass information to `func`, consider using a partial function (e.g. `tensor_layer(x, functools.partial(func, arg=5), ...)`) or a callable class (e.g. `tensor_layer(x, MyFunc(arg=5), ...)`).
- When using Keras Layers with `nengo_dl.tensor_layer`, a fully instantiated Layer object should be passed rather than a Layer class (e.g., use `tensor_layer(x, tf.keras.layers.Dense(units=10), ...)` instead of `tensor_layer(x, tf.keras.layers.Dense, units=10)`).
- `benchmarks.run_profile` now uses the TensorBoard format when profiling; see the documentation for instructions on how to view this information (the information is the same, it is just accessed through TensorBoard rather than requiring that it be loaded directly in a Chrome browser).
- `nengo_dl.TensorNode` now takes `shape_in` and `shape_out` arguments (which specify a possibly multidimensional shape), rather than the scalar `size_in` and `size_out`.
- `TensorNode` functions no longer use the `pre_build`/`post_build` functionality. If you need to implement more complex behaviour in a TensorNode, use a custom Keras Layer subclass instead. For example, Layers can create new parameter Variables inside the Layer `build` method.
- `TensorNode` now has an optional `pass_time` parameter which can be set to `False` to disable passing the current simulation time to the TensorNode function.
- Added `nengo_dl.Layer`. Similar to the old `nengo_dl.tensor_layer`, this is a wrapper for constructing TensorNodes, but it mimics the new `tf.keras.layers.Layer` API rather than the old `tf.layers` API (see the sketch after this list).
- TensorFlow's "control flow v2" is disabled on import, for performance reasons; see tensorflow/tensorflow#33052.
- Renamed `nengo_dl.objectives.mse` to `nengo_dl.losses.nan_mse` (to emphasize the special logic it provides for `nan` targets).
- Connections created by `nengo_dl.Layer`/`tensor_layer` will be marked as non-trainable by default.
- Updated all documentation and examples for the new syntax (in particular, see the updated Coming from TensorFlow tutorial and TensorFlow/Keras integration example, and the new Tips and tricks page).
- The training/inference build logic (e.g., swapping spiking neurons with rate implementations) can be overridden by setting the global Keras learning phase (`tf.keras.backend.set_learning_phase`) before the Simulator is constructed.
- Increased minimum Nengo core version to 3.0.0.
- Reduced size of TensorFlow constants created by Reset ops.
- DotInc operators with different signal sizes will no longer be merged (these merged operators had to use a less efficient sparse matrix multiplication, and in general this cost outweighed the benefit of merging).
- Trainability can now be configured in the config of subnetworks. This replaces the ability to mark Networks as (non)trainable. See the updated documentation for details.
- Training/evaluation target data can now have a different number of timesteps than input data (as long as it aligns with the number of timesteps expected by the loss function).
- Whether or not to display progress bars in `Simulator.run` and `Simulator.run_steps` now defaults to the value of `Simulator(..., progress_bar=x)`.
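As a rough illustration of the new `nengo_dl.Layer` syntax and of passing fully instantiated Keras Layer objects (the layer sizes and shapes here are arbitrary):

```python
import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(28 * 28))

    # Keras Layer *instances* (not classes) are wrapped in nengo_dl.Layer;
    # extra kwargs such as shape_in go to the underlying TensorNode
    x = nengo_dl.Layer(tf.keras.layers.Conv2D(filters=4, kernel_size=3))(
        inp, shape_in=(28, 28, 1))
    x = nengo_dl.Layer(tf.keras.layers.Dense(units=10))(x)

    probe = nengo.Probe(x)
```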
Fixed
- Fixed bug due to non-determinism of Process state ordering in Python 3.5.
- Nested Keras layers passed to a TensorNode will now be rebuilt correctly if necessary.
Deprecated
- `nengo_dl.tensor_layer` has been deprecated. Use `nengo_dl.Layer` instead; `tensor_layer(x, func, **kwargs)` is equivalent to `Layer(func)(x, **kwargs)`.
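As a rough sketch of the migration (the `tf.nn.relu` function and node size are just for illustration):

```python
import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

with nengo.Network():
    x = nengo.Node(np.zeros(10))

    # deprecated style
    y_old = nengo_dl.tensor_layer(x, tf.nn.relu)

    # equivalent new style
    y_new = nengo_dl.Layer(tf.nn.relu)(x)
```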
Removed
- Removed the session_config configuration option. Use the updated TensorFlow config system instead.
- Removed the deprecated `nengo_dl.Simulator(..., dtype=...)` argument. Use `nengo_dl.configure_settings(dtype=...)` instead.
- Removed the deprecated `Simulator.run(..., input_feeds=...)` argument. Use `Simulator.run(..., data=...)` instead.
- Removed the `Simulator.sess` attribute (Sessions are no longer used in TensorFlow 2.0). The underlying Keras model (`Simulator.keras_model`) should be used as the entrypoint into the engine underlying a Simulator instead.
- Removed the `Simulator.loss` function (use `Simulator.compile` and `Simulator.evaluate` to compute loss values instead).
- Removed the `Simulator.train` function (use `Simulator.compile` and `Simulator.fit` to optimize a network instead).
- Removed the `nengo_dl.objectives.Regularize(weight=x, ...)` argument. Use the `Simulator.compile(loss_weights=...)` functionality instead.
- Removed the `Simulator.run(..., extra_feeds=...)` argument. TensorFlow 2.0 no longer uses the Session/feed execution model.
- Removed `Simulator.run_batch`. This functionality is now managed by the underlying `Simulator.keras_model`.
- Removed `TensorGraph.training_step`. The training step is now managed by Keras.
- Removed `TensorGraph.build_outputs` and `TensorGraph.build_optimizer_func`. Building loss functions/optimizers is now managed by Keras.
- Removed `neng...
Release 2.2.2
Compatible with Nengo 2.8.0 - 3.0.0
Compatible with TensorFlow 1.4.0 - 2.0.0
Fixed
- Compatibility with Nengo 3.0 release
Release 2.2.1
Compatible with Nengo 2.8.0
Compatible with TensorFlow 1.4.0 - 2.0.0
Changed
- Update testing framework to use new nengo pytest ecosystem (`pytest-rng`, `pytest-allclose`, and `pytest-nengo`)
- Disable TensorFlow 2.0 behaviour (e.g. control flow v2) by default. This will be re-enabled when full TensorFlow 2.0 support is added.
Fixed
- Fixed `tensorflow-gpu` installation check in pep517-style isolated build environments.
Release 2.2.0
Compatible with Nengo 2.8.0
Compatible with TensorFlow 1.4.0 - 2.0.0
Added
- Added a new example demonstrating how to integrate a Keras model with NengoDL (thanks to new contributor @NickleDave).
- Added support for TensorFlow 2.0 (pre-release).
- Added support for sparse transforms (see nengo/nengo#1532).
- Added support for stateful Processes (see nengo/nengo#1387).
Changed
- The default session will now be set to the NengoDL session before calling TensorNodes' `post_build` function.
- Renamed the pytest `unroll_simulation` argument to `unroll-simulation`.
- Switched to nengo-bones templating system for TravisCI config/scripts.
- NengoDL will disable eager execution on import (and will probably not work properly if it is manually re-enabled).
- Increased minimum numpy version to 1.14.5 (required by TensorFlow 1.14).
- Minimum Nengo version is now 2.8.0.
- Update LinearFilter synapse implementation to match recent changes in Nengo core (see nengo/nengo#1535).
Fixed
- Fixed TensorFlow seeding so that randomness can be reliably controlled by setting the Simulator seed.
- Improved robustness of `tensorflow-gpu` installation check (in particular, it will now correctly detect GPU dists installed through `conda`).
- Fixed inspection of `TensorNode.tensor_func` arguments for partial functions.
- Simulator seed will now be deterministic for a given top-level Network seed.
- Raise a more informative error if user attempts to pickle a Simulator (this is not possible to do with TensorFlow sessions; see the documentation for other methods of saving/loading a NengoDL model).
Removed
- NengoDL no longer supports Python 3.4 (official support for 3.4 ended in March 2019).
Release 2.1.1
Added
- Added `nengo_dl.obj` as a shortcut alias for `nengo_dl.objectives`.
- Added tutorial for Nengo users coming to NengoDL
- Added tutorial for TensorFlow users coming to NengoDL
Changed
- Increased minimum `progressbar2` version to 3.39.0.
- We now only provide `sdist` releases, not `bdist_wheel`. Due to the way the TensorFlow packages are organized, `bdist_wheel` forces any existing TensorFlow installations (e.g. `tensorflow-gpu` or `tf-nightly`) to be overwritten by `tensorflow`, which we don't want to do.
Removed
- Removed the `nef-init` tutorial (replaced by the new `from-nengo` tutorial).
Release 2.1.0
Added
- Added a built-in objective to assist in applying regularization during training.
- Added `keep_history` config option, which can be set to `False` on Probes if only the data from the most recent simulation step is desired (as opposed to the default behaviour of keeping the data from all steps).
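A rough sketch of how this option might be used, assuming it follows the same per-object config pattern as other NengoDL settings (the network itself is hypothetical):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # register the keep_history option so it can be set on Probes
    nengo_dl.configure_settings(keep_history=True)

    ens = nengo.Ensemble(10, 1)
    probe = nengo.Probe(ens)

    # keep only the most recent simulation step for this probe
    net.config[probe].keep_history = False
```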
Changed
- Moved `utils.mse` to `objectives.mse`.
- `sim.loss` will now apply `nengo_dl.objectives.mse` to all probes in `data` if no explicit `objective` is given (mirroring the default behaviour in `sim.train`).
- The Spaun benchmark network will now be installed through pip rather than manually cloning and importing the repo.
Fixed
- Fixed objective argument parsing if objective is a callable class or method.
- Fixed bug in `sim.train` 1-step synapse warning when explicitly specifying `n_steps` (rather than passing in `data`).
Deprecated
- Passing `"mse"` as the objective in `sim.train`/`sim.loss` is no longer supported. Use the function `nengo_dl.objectives.mse` instead.
Release 2.0.0
Breaking API changes
- `sim.train` and `sim.loss` now accept a single `data` argument, which combines the previous `inputs` and `targets` arguments. For example, `sim.train({my_node: x}, {my_probe: y}, ...)` is now equivalent to `sim.train({my_node: x, my_probe: y}, ...)`. The motivation for this change is that not all objective functions require target values. Switching to the more generic `data` argument simplifies the API and makes it more flexible, allowing users to specify whatever training/loss data is actually required.
- The `objective` argument in `sim.train`/`sim.loss` is now always specified as a dictionary mapping probes to objective functions. Note that this was available but optional previously; it was also possible to pass a single value for the objective function, which would be applied to all probes in `targets`. The latter is no longer supported. For example, `sim.train(..., objective="mse")` must now be explicitly specified as `sim.train(..., objective={my_probe: "mse"})`. The motivation for this change is that, especially with the other new features introduced in the 2.0 update, there were a lot of different ways to specify the `objective` argument. This made it somewhat unclear how exactly this argument worked, and the automatic "broadcasting" was also ambiguous (e.g., should the single objective be applied to each probe individually, or to all of them together?). Making the argument explicit helps clarify the mental model.
Added
- An integer number of steps can now be passed for the `sim.loss`/`sim.train` data argument, if no input/target data is required.
- The `objective` dict in `sim.train`/`sim.loss` can now contain tuples of probes as the keys, in which case the objective function will be called with a corresponding tuple of probe/target values as each argument.
- Added the `sim.run_batch` function. This exposes all the functionality that the `sim.run`/`sim.train`/`sim.loss` functions are based on, allowing advanced users full control over how to run a NengoDL simulation.
- Added option to disable progress bar in `sim.train` and `sim.loss`.
- Added `training` argument to `sim.loss` to control whether the loss is evaluated in training or inference mode.
- Added support for the new Nengo `Transform` API (see nengo/nengo#1481).
Changed
- Custom objective functions passed to `sim.train`/`sim.loss` can now accept a single argument (`my_objective(outputs): ...` instead of `my_objective(outputs, targets): ...`) if no target values are required.
- `utils.minibatch_generator` now accepts a single `data` argument rather than `inputs` and `targets` (see discussion in "Breaking API changes").
- `sim.training_step` is now the same as `tf.train.get_or_create_global_step()`.
- Switched documentation to new nengo-sphinx-theme.
- Reorganized documentation into "User guide" and "API reference" sections.
- Improved build speed of models with large constants (#69)
- Moved op-specific merge logic into the `OpBuilder` classes.
Fixed
- Ensure that training step is always updated before TensorBoard events are added (previously it could update before or after depending on the platform).
Deprecated
- The `sim.run` `input_feeds` argument has been renamed to `data` (for consistency with other simulator functions).
Removed
- NengoDL no longer supports Python 2 (see https://python3statement.org/ for more information)
Release 1.2.1
Added
- Added a warning if users run one-timestep training with a network containing synaptic filters.
Changed
- Test Simulator parameters are now controlled through pytest arguments, rather than environment variables.
- Disable INFO-level TensorFlow logging (from C side) on import. Added a NengoDL log message indicating the device the simulation will run on, as a more concise replacement.
- Boolean signals are now supported (#61)
Fixed
- Avoid backpropagating NaN gradients from spiking neurons.
- Fixed an error that was thrown when calling `get_tensor` on a `Signal` that was first initialized inside the Simulation while loop (#56)
- Allow TensorNodes to run in Nengo GUI.
- Avoid bug in TensorFlow 1.11.0 that prevents certain models from running (see tensorflow/tensorflow#23383). Note that this doesn't prevent this from occurring in user models, as we cannot control the model structure there. If your model hangs indefinitely when you call `sim.train`, try downgrading to TensorFlow 1.10.0.
- Ensure that `sim.training_step` is always updated after the optimization step (in certain race conditions it would sometimes update part-way through the optimization step).
Release 1.2.0
Added
- NengoDL will now automatically use a rate-based approximation to compute the gradient for spiking neuron types, if one is known (no more need to manually swap neuron types for training and inference).
- Added `nengo_dl.configure_settings(inference_only=True)` option, which will build the network in inference-only mode. This will slightly improve the inference speed of the simulation, but the network will not be trainable (see the sketch after this list).
- Added `nengo_dl.configure_settings(lif_smoothing=x)` option, which will control how much smoothing is applied to the LIF function during gradient calculations (if any).
- Added documentation on the various NengoDL config options.
- Added better validation for TensorNode output when `size_out != None` (#51)
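A minimal sketch of setting one of these config options (the network is just a placeholder):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # build the network in inference-only mode (faster, but not trainable)
    nengo_dl.configure_settings(inference_only=True)

    inp = nengo.Node([0])
    ens = nengo.Ensemble(10, 1)
    nengo.Connection(inp, ens)
    probe = nengo.Probe(ens)

with nengo_dl.Simulator(net) as sim:
    sim.run_steps(10)
```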
Changed
- More informative error message if the user tries to pass target values for a probe that isn't used in the objective function.
- Switched to ADD_N gradient accumulation (from TREE); this will increase the memory usage during training, but improve performance.
- Revert to `Timeline` profiling method. `tf.profiler` can produce incorrect output, and isn't maintained any more (tensorflow/tensorflow#15214 (comment))
- Reduce memory usage during training by caching temporary variables used when computing `ScatterUpdate` gradient.
- Increase minimum TensorFlow version to 1.4.0.
- Increased minimum NumPy version to 1.12.1 (required by TensorFlow)
- Sort write signals as well as reads during graph optimization (encourages tighter partitioning, which can improve training/inference speed).
- Moved `configure_settings` from `utils.py` to `config.py`.
Fixed
- Fixed a bug where `nengo_dl.dists.VarianceScaling(..., distribution="normal")` did not respect the seed if one was given.
Deprecated
- The `Simulator(dtype=...)` argument has been deprecated; use `nengo_dl.configure_settings(dtype=...)` instead. Will be removed in 1.3.0.
Release 1.1.0
Added
- The default TensorFlow Session is now set to the underlying Simulator session within the Simulator context.
- Added CLI for benchmarks.py
- Added `sim.freeze_params` tool, to more easily extract model parameters for reuse in different Simulators (see the sketch after this list).
- Added documentation on saving and loading model parameters.
- Added Spaun example in `benchmarks.py`
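A rough sketch of the intended `sim.freeze_params` workflow (the network and training step are hypothetical):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    inp = nengo.Node([0])
    ens = nengo.Ensemble(10, 1)
    nengo.Connection(inp, ens)
    probe = nengo.Probe(ens)

with nengo_dl.Simulator(net) as sim:
    # ... train or otherwise modify parameters here ...
    # copy the simulator's current parameter values back into `net`
    sim.freeze_params(net)

# `net` can now be rebuilt in a different Simulator with those parameters
with nengo_dl.Simulator(net) as sim2:
    sim2.run_steps(10)
```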
Changed
- Move `tensorflow-gpu` installation check to Simulator init, and only apply if `device=None`.
- Switched to `pylint` for style checks.
- TensorFlow INFO-level log messages are now disabled by default on import
- All previous releases now tracked in documentation
- Updated spiking MNIST example to simplify and improve performance.
- Passing unknown configuration options to `nengo_dl.configure_settings` will now give a more explicit error message.
- Improved speed of parameter fetching through `get_nengo_params`
- Raise a warning if user tries to train a network with non-differentiable elements (requires `tensorflow>=1.9.0`)
- Improved accuracy of `SoftLIFRate` implementation for small values (#45)
- Simplified how `TensorSignals` are loaded into the TensorFlow graph
Fixed
- Better handling of Simulator errors not associated with a specific op (fixes #41)
- Fixed node outputs changing after simulator is built (fixes #4)
- Fixed some broken cross references in the documentation
- Fixed several edge cases for `get_nengo_params`: trained gains are no longer used for direct neuron connections, and an error is raised if `get_nengo_params` is applied to an Ensemble with Direct neurons
- Compatible with `tensorflow==1.9.0` release
- Fixed bug in `nengo_dl.configure_settings(session_config=...)` when passing a pre-built model to the Simulator instead of a Network
- Fixed TensorFlow version comparisons for 1.10.0
Deprecated
- `Simulator.trange` argument `dt` has been deprecated (replaced with `sample_every`, see nengo/nengo#1384)
Removed
- Removed `nengo_dl.DATA_DIR` constant
- Removed `benchmarks.compare_backends` (use `whitepaper2018_plots.py:compare_backends` instead)
- Removed `ghp-import` dependency