
Proper implementation of kernel weights #478

Merged: 27 commits merged into master on Nov 18, 2021

Conversation

@neworderofjamie (Contributor) commented on Nov 11, 2021

Previously, kernel weights were implemented as a bit of a hack: you used procedural connectivity with a special variable initialisation snippet which pulled the weights through from an extra global parameter. This had a number of disadvantages:

  • As the kernels are extra global parameters, GeNN has no idea how large they should be.
  • There's no way to support learning in this configuration as, from the point of view of the weight update model, the weights are produced by an opaque initialisation snippet.
  • You can't take advantage of the standard initialisation syntax for the kernels as they are extra global parameters.
  • If you wanted different kernels to be used across batches, you'd need to figure out the structure in memory yourself within a custom variable initialisation snippet.

This all stands in the way of both learning in convolutions and a more efficient convolution implementation. This change hopefully solves all of these problems via a new PROCEDURAL_KERNELG matrix mode. When this mode is used, all weight update model variables are allocated at the kernel size specified by the sparse connectivity initialisation snippet and can be initialised using the full functionality of the variable initialisation system. The only downside is that slightly different code paths need to be used for initialising sparse connectivity from a kernel versus using the kernel directly (genn-team/ml_genn@03fda90).
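To illustrate, here is a minimal sketch of how the new mode might be used from a C++ model definition. The population names, layer shapes and initialiser values are hypothetical, and the exact Conv2D parameter list may differ between GeNN versions (some include strides); treat the values as placeholders:

```cpp
// Hypothetical model definition sketch (GeNN 4.x C++ API) - population
// names, layer shapes and initialiser values are illustrative only
#include "modelSpec.h"

void modelDefinition(ModelSpec &model)
{
    // Kernel weights drawn from N(0, 0.1) using a standard variable
    // initialisation snippet - not possible with the old EGP-based approach
    InitVarSnippet::Normal::ParamValues gDist(0.0, 0.1);    // mean, sd
    WeightUpdateModels::StaticPulse::VarValues convVars(
        initVar<InitVarSnippet::Normal>(gDist));

    // 3x3 kernel, no padding, 32x32x1 input -> 30x30x5 output (parameter
    // order assumed - check InitSparseConnectivitySnippet::Conv2D)
    InitSparseConnectivitySnippet::Conv2D::ParamValues convParams(
        3, 3,        // conv_kh, conv_kw
        0, 0,        // conv_padh, conv_padw
        32, 32, 1,   // conv_ih, conv_iw, conv_ic
        30, 30, 5);  // conv_oh, conv_ow, conv_oc

    // With PROCEDURAL_KERNELG, "g" is allocated at the 3x3x1x5 kernel size
    // implied by the connectivity snippet rather than as an opaque EGP
    // ("Input" and "Output" populations assumed to be added elsewhere)
    model.addSynapsePopulation<WeightUpdateModels::StaticPulse, PostsynapticModels::DeltaCurr>(
        "Conv1", SynapseMatrixType::PROCEDURAL_KERNELG, NO_DELAY,
        "Input", "Output",
        {}, convVars, {}, {},
        initConnectivity<InitSparseConnectivitySnippet::Conv2D>(convParams));
}
```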

codecov bot commented on Nov 11, 2021

Codecov Report

Merging #478 (0234b7e) into master (33b81dc) will decrease coverage by 0.01%.
The diff coverage is 88.75%.


@@            Coverage Diff             @@
##           master     #478      +/-   ##
==========================================
- Coverage   87.46%   87.45%   -0.02%     
==========================================
  Files          82       82              
  Lines       17266    17524     +258     
==========================================
+ Hits        15102    15325     +223     
- Misses       2164     2199      +35     
Impacted Files Coverage Δ
...nclude/genn/backends/single_threaded_cpu/backend.h 82.35% <ø> (ø)
include/genn/genn/code_generator/backendBase.h 91.52% <ø> (ø)
include/genn/genn/code_generator/backendSIMT.h 96.51% <ø> (ø)
include/genn/genn/code_generator/codeGenUtils.h 100.00% <ø> (+1.92%) ⬆️
include/genn/genn/code_generator/groupMerged.h 84.18% <ø> (ø)
include/genn/genn/synapseGroup.h 91.83% <ø> (ø)
include/genn/genn/synapseMatrixType.h 75.00% <ø> (ø)
src/genn/backends/single_threaded_cpu/backend.cc 56.89% <55.35%> (-0.07%) ⬇️
src/genn/genn/synapseGroup.cc 85.21% <88.88%> (+0.06%) ⬆️
src/genn/genn/code_generator/backendSIMT.cc 96.16% <89.65%> (-0.42%) ⬇️
... and 11 more


@tnowotny (Member) left a comment

Looking pretty good.
It would probably be better to correct the typo in the variable name (see comments).
What is the reason behind the empty "skip_SingleThreadedCPU" files?

 const bool anyConnectivityInitGroups = !modelMerged.getMergedSynapseConnectivityInitGroups().empty();
 genMergedGroupKernelParams(initializeKernels, modelMerged.getMergedNeuronInitGroups(),
-                           anyCustomUpdateInitGroups || anyCustomWUUpdateDenseInitGroups || anyDenseInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
+                           anyCustomUpdateInitGroups || anyCustomWUUpdateDenseInitGroups || anyDenseInitGroups || anyKerneInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
@tnowotny: should anyKerneInitGroups be anyKernelInitGroups?

 genMergedGroupKernelParams(initializeKernels, modelMerged.getMergedCustomUpdateInitGroups(),
-                           anyCustomWUUpdateDenseInitGroups || anyDenseInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
+                           anyCustomWUUpdateDenseInitGroups || anyDenseInitGroups || anyKerneInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
@tnowotny: here, too

 genMergedGroupKernelParams(initializeKernels, modelMerged.getMergedCustomWUUpdateDenseInitGroups(),
-                           anyDenseInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
+                           anyDenseInitGroups || anyKerneInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
 genMergedGroupKernelParams(initializeKernels, modelMerged.getMergedSynapseDenseInitGroups(), anyConnectivityInitGroups || additionalParamsRequired);
@tnowotny: and here ...

@neworderofjamie (Contributor, Author) replied:

So the skip_BACKEND_NAME files are just used to tell the test runner that a feature test shouldn't be run on a given backend, e.g. the single-threaded CPU backend doesn't support procedural connectivity, and there are a few CUDA-specific tests as well.
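For context, a minimal sketch of how such a marker-file convention can work; this is hypothetical (not GeNN's actual test runner), and the directory layout and backend name are made up:

```cpp
// Hypothetical marker-file check: if a feature test's directory contains
// an (empty) file named skip_<BACKEND>, the runner skips that test for
// that backend. Requires C++17 for std::filesystem.
#include <filesystem>
#include <iostream>
#include <string>

bool shouldSkip(const std::filesystem::path &testDir, const std::string &backend)
{
    // e.g. testDir / "skip_SingleThreadedCPU"
    return std::filesystem::exists(testDir / ("skip_" + backend));
}

int main()
{
    // Prints 1 if the marker file exists, 0 otherwise
    std::cout << shouldSkip("features/procedural_connectivity", "SingleThreadedCPU")
              << std::endl;
}
```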

@tnowotny (Member) left a comment

all good.

@neworderofjamie merged commit dcfd43f into master on Nov 18, 2021 and deleted the procedural_kernelg branch on November 18, 2021 at 09:23.
@neworderofjamie mentioned this pull request on Nov 18, 2021.