Proper implementation of kernel weights #478
Conversation
# Conflicts:
#	src/genn/genn/code_generator/generateRunner.cc
* expose kernel weights to PyGeNN
…eGroupMergedBase``
Codecov Report
@@            Coverage Diff             @@
##           master     #478      +/-   ##
==========================================
- Coverage   87.46%   87.45%   -0.02%
==========================================
  Files          82       82
  Lines       17266    17524     +258
==========================================
+ Hits        15102    15325     +223
- Misses       2164     2199      +35
Continue to review full report at Codecov.
* fixed bug in substitutions
Looking pretty good.
Probably better to correct the typo in the variable name (see comments).
What is the reason behind the empty "skip_SingleThreadedCPU" files?
src/genn/backends/opencl/backend.cc
Outdated
const bool anyConnectivityInitGroups = !modelMerged.getMergedSynapseConnectivityInitGroups().empty();
genMergedGroupKernelParams(initializeKernels, modelMerged.getMergedNeuronInitGroups(),
-                          anyCustomUpdateInitGroups || anyCustomWUUpdateDenseInitGroups || anyDenseInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
+                          anyCustomUpdateInitGroups || anyCustomWUUpdateDenseInitGroups || anyDenseInitGroups || anyKerneInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
Should `anyKerneInitGroups` be `anyKernelInitGroups`?
src/genn/backends/opencl/backend.cc
Outdated
genMergedGroupKernelParams(initializeKernels, modelMerged.getMergedCustomUpdateInitGroups(),
-                          anyCustomWUUpdateDenseInitGroups || anyDenseInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
+                          anyCustomWUUpdateDenseInitGroups || anyDenseInitGroups || anyKerneInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
here, too
src/genn/backends/opencl/backend.cc
Outdated
genMergedGroupKernelParams(initializeKernels, modelMerged.getMergedCustomWUUpdateDenseInitGroups(),
-                          anyDenseInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
+                          anyDenseInitGroups || anyKerneInitGroups || anyConnectivityInitGroups || additionalParamsRequired);
genMergedGroupKernelParams(initializeKernels, modelMerged.getMergedSynapseDenseInitGroups(), anyConnectivityInitGroups || additionalParamsRequired);
and here ...
All good.
Previously, the way kernel weights were implemented was a bit of a hack: procedural connectivity was used with a special variable initialisation snippet which pulled the weights through from an extra global parameter. This has a number of disadvantages:
This all stands in the way of both learning in convolutions and a more efficient convolution implementation. This change hopefully solves all of these problems via a new ``PROCEDURAL_KERNELG`` matrix mode. When this mode is used, all weight update model variables are allocated to the kernel size specified by the sparse connectivity initialisation model and can be initialised using the full functionality of the variable initialisation system. The only downside is that slightly different code paths need to be used for initialising sparse connectivity using a kernel versus using the kernel directly (genn-team/ml_genn@03fda90)
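To illustrate why allocating weight variables to the kernel shape (rather than one weight per synapse) is attractive, here is a hedged, conceptual sketch in Python/numpy. It is not GeNN or PyGeNN code; the toy setup (a 5x5 input layer, a 3x3 kernel, 'valid' convolution giving a 3x3 output layer) is purely illustrative:

```python
import numpy as np

# Conceptual sketch, NOT GeNN code: a convolution's weights are shared,
# so storing them at kernel size is much cheaper than storing one weight
# per synapse. Assumed toy setup: 5x5 input, 3x3 kernel, 3x3 output.
IN, K = 5, 3
OUT = IN - K + 1  # 'valid' convolution output size

rng = np.random.default_rng(0)
kernel = rng.standard_normal((K, K))  # only K*K weights are allocated

# Expand the shared kernel into the equivalent dense weight matrix
# (one row per presynaptic neuron, one column per postsynaptic neuron)
# to show how much redundancy per-synapse storage would carry.
dense = np.zeros((IN * IN, OUT * OUT))
for oy in range(OUT):
    for ox in range(OUT):
        for ky in range(K):
            for kx in range(K):
                pre = (oy + ky) * IN + (ox + kx)
                dense[pre, oy * OUT + ox] = kernel[ky, kx]

print(kernel.size)              # 9 shared kernel weights
print(dense.size)               # 225 entries in the dense matrix
print(np.count_nonzero(dense))  # 81 synapses, all aliases of the 9 weights
```

Because every synapse's weight is just an alias into the kernel, a kernel-sized variable can also be initialised once with any variable initialisation snippet and, in principle, updated during learning, which is exactly what per-synapse procedural weights prevented.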