Fix precedence with PointerSymbols #268

Merged · 4 commits · Oct 7, 2020
6 changes: 6 additions & 0 deletions CHANGES.rst
@@ -23,6 +23,12 @@ Release History
1.1.1 (unreleased)
==================

**Fixed**

- ``PointerSymbol`` precedence handling has been fixed. For example,
``(sym.A + sym.B) * sym.C`` will be evaluated correctly now.
(`#267 <https://github.com/nengo/nengo_spa/issues/267>`__,
`#268 <https://github.com/nengo/nengo_spa/pull/268>`__)
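The identity this fix restores can be checked outside of nengo_spa with a minimal NumPy sketch. The `cconv` helper below is hypothetical and only illustrates the algebra: binding in nengo_spa is circular convolution, which is linear, so correct precedence means `(A + B) * C` must equal `A*C + B*C`.

```python
import numpy as np


def cconv(a, b):
    # Circular convolution via FFT -- the binding operation nengo_spa uses.
    # (Illustrative helper, not part of the nengo_spa API.)
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))


rng = np.random.RandomState(0)
a, b, c = rng.randn(3, 64)

# Binding distributes over superposition, so with correct precedence
# (A + B) * C must equal A*C + B*C.
lhs = cconv(a + b, c)
rhs = cconv(a, c) + cconv(b, c)
assert np.allclose(lhs, rhs)
```

Before the fix, an expression like `(sym.A + sym.B) * sym.C` could be parsed with the wrong operator grouping, which would violate this distributivity.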



309 changes: 192 additions & 117 deletions docs/examples/associative-memory.ipynb

Large diffs are not rendered by default.

78 changes: 43 additions & 35 deletions docs/examples/convolution.ipynb
@@ -39,7 +39,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Our model is going to compute the convolution (or binding) of two semantic pointers `A` and `B`. We can use the `nengo_spa.Vocabulary` class to create the two random semantic pointers, and compute their exact convolution `C = A * B`."
"Our model is going to compute the convolution (or binding) of two semantic pointers `A`\n",
"and `B`. We can use the `nengo_spa.Vocabulary` class to create the two random semantic\n",
"pointers, and compute their exact convolution `C = A * B`."
]
},
{
@@ -53,16 +55,20 @@
"\n",
"vocab = Vocabulary(dimensions=dimensions, pointer_gen=rng)\n",
"\n",
"# Set `C` to equal the convolution of `A` with `B`. This will be \n",
"# Set `C` to equal the convolution of `A` with `B`. This will be\n",
"# our ground-truth to test the accuracy of the neural network.\n",
"vocab.populate('A; B; C=A*B')"
"vocab.populate(\"A; B; C=A*B\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Our network will then use neurons to compute this same convolution. We use the `nengo.networks.CircularConvolution` class, which performs circular convolution by taking the Fourier transform of both vectors, performing element-wise complex-number multiplication in the Fourier domain, and finally taking the inverse Fourier transform to get the result."
"Our network will then use neurons to compute this same convolution. We use the\n",
"`nengo.networks.CircularConvolution` class, which performs circular convolution by\n",
"taking the Fourier transform of both vectors, performing element-wise complex-number\n",
"multiplication in the Fourier domain, and finally taking the inverse Fourier transform\n",
"to get the result."
]
},
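The Fourier-domain procedure described in that cell can be sketched in plain NumPy. The helper names below are illustrative, not the `nengo.networks.CircularConvolution` implementation; the sketch verifies that the FFT route agrees with the direct definition of circular convolution.

```python
import numpy as np


def cconv_fft(a, b):
    # Fourier route: transform both vectors, multiply element-wise
    # in the Fourier domain, then invert the transform.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))


def cconv_direct(a, b):
    # Direct O(n^2) definition: c_i = sum_j a_j * b_{(i - j) mod n}
    n = len(a)
    return np.array(
        [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]
    )


rng = np.random.RandomState(42)
a = rng.randn(16)
b = rng.randn(16)
assert np.allclose(cconv_fft(a, b), cconv_direct(a, b))
```

The FFT route is what makes circular convolution cheap to compute for high-dimensional semantic pointers (O(n log n) instead of O(n²)).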
{
@@ -74,16 +80,16 @@
"model = nengo.Network(seed=1)\n",
"with model:\n",
" # Get the raw vectors for the pointers using `vocab['A'].v`\n",
" a = nengo.Node(output=vocab['A'].v)\n",
" b = nengo.Node(output=vocab['B'].v)\n",
" \n",
" a = nengo.Node(output=vocab[\"A\"].v)\n",
" b = nengo.Node(output=vocab[\"B\"].v)\n",
"\n",
" # Make the circular convolution network with 200 neurons\n",
" cconv = nengo.networks.CircularConvolution(200, dimensions=dimensions)\n",
" \n",
"\n",
" # Connect the input nodes to the input slots `A` and `B` on the network\n",
" nengo.Connection(a, cconv.input_a)\n",
" nengo.Connection(b, cconv.input_b)\n",
" \n",
"\n",
" # Probe the output\n",
" out = nengo.Probe(cconv.output, synapse=0.03)"
]
@@ -95,7 +101,7 @@
"outputs": [],
"source": [
"with nengo.Simulator(model) as sim:\n",
" sim.run(1.)"
" sim.run(1.0)"
]
},
{
Expand All @@ -104,11 +110,18 @@
"source": [
"## Analyze the results\n",
"\n",
"We plot the dot product between the exact convolution of `A` and `B` (given by `C = A * B`), and the result of the neural convolution (given by `sim.data[out]`).\n",
"We plot the dot product between the exact convolution of `A` and `B` (given by `C = A *\n",
"B`), and the result of the neural convolution (given by `sim.data[out]`).\n",
"\n",
"The dot product is a common measure of similarity between semantic pointers, since it approximates the cosine similarity when the semantic pointer lengths are close to one. The cosine similarity is a common similarity measure for vectors; it is simply the cosine of the angle between the vectors.\n",
"The dot product is a common measure of similarity between semantic pointers, since it\n",
"approximates the cosine similarity when the semantic pointer lengths are close to one.\n",
"The cosine similarity is a common similarity measure for vectors; it is simply the\n",
"cosine of the angle between the vectors.\n",
"\n",
"Both the dot product and the exact cosine similarity can be computed with `nengo_spa.similarity`. Normally, this function will compute the dot products between each data vector and each vocabulary vector, but setting `normalize=True` normalizes all vectors so that the exact cosine similarity is computed instead."
"Both the dot product and the exact cosine similarity can be computed with\n",
"`nengo_spa.similarity`. Normally, this function will compute the dot products between\n",
"each data vector and each vocabulary vector, but setting `normalize=True` normalizes all\n",
"vectors so that the exact cosine similarity is computed instead."
]
},
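The relationship between the two measures described in that cell can be illustrated with plain NumPy (variable names are illustrative): after normalizing both vectors to unit length, the dot product equals the cosine similarity exactly.

```python
import numpy as np

rng = np.random.RandomState(7)
u = rng.randn(64)
v = rng.randn(64)

# Dot product and cosine similarity of the raw vectors
dot = np.dot(u, v)
cosine = dot / (np.linalg.norm(u) * np.linalg.norm(v))

# Normalizing both vectors makes the dot product equal the cosine similarity
u_hat = u / np.linalg.norm(u)
v_hat = v / np.linalg.norm(v)
assert np.isclose(np.dot(u_hat, v_hat), cosine)
```

This is the same normalization that `nengo_spa.similarity(..., normalize=True)` performs, as described in the cell above.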
{
@@ -120,16 +133,23 @@
"plt.plot(sim.trange(), nengo_spa.similarity(sim.data[out], vocab))\n",
"plt.legend(list(vocab.keys()), loc=4)\n",
"plt.xlabel(\"t [s]\")\n",
"plt.ylabel(\"dot product\");"
"plt.ylabel(\"dot product\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above plot shows that the neural output is much closer to `C = A * B` than to either `A` or `B`, suggesting that our network is correctly computing the convolution. It also highlights an important property of circular convolution: The circular convolution of two vectors is dissimilar to both of the vectors.\n",
"The above plot shows that the neural output is much closer to `C = A * B` than to either\n",
"`A` or `B`, suggesting that our network is correctly computing the convolution. It also\n",
"highlights an important property of circular convolution: The circular convolution of\n",
"two vectors is dissimilar to both of the vectors.\n",
"\n",
"The dot product between the neural output and `C` is not exactly one due in large part to the fact that the length of `C` is not exactly one (see below). To actually measure the cosine similarity between the vectors (that is, the cosine of the angle between the vectors), we have to divide the dot product by the lengths of both `C` and the neural output vector approximating `C`."
"The dot product between the neural output and `C` is not exactly one due in large part\n",
"to the fact that the length of `C` is not exactly one (see below). To actually measure\n",
"the cosine similarity between the vectors (that is, the cosine of the angle between the\n",
"vectors), we have to divide the dot product by the lengths of both `C` and the neural\n",
"output vector approximating `C`."
]
},
{
Expand All @@ -139,14 +159,16 @@
"outputs": [],
"source": [
"# The length of `C` is not exactly one\n",
"print(vocab['C'].length())"
"print(vocab[\"C\"].length())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Performing this normalization, we can see that the cosine similarity between the neural output vectors and `C` is almost exactly one, demonstrating that the neural population is quite accurate in computing the convolution."
"Performing this normalization, we can see that the cosine similarity between the neural\n",
"output vectors and `C` is almost exactly one, demonstrating that the neural population\n",
"is quite accurate in computing the convolution."
]
},
{
Expand All @@ -155,31 +177,17 @@
"metadata": {},
"outputs": [],
"source": [
"plt.plot(sim.trange(), nengo_spa.similarity(\n",
" sim.data[out], vocab, normalize=True))\n",
"plt.plot(sim.trange(), nengo_spa.similarity(sim.data[out], vocab, normalize=True))\n",
"plt.legend(list(vocab.keys()), loc=4)\n",
"plt.xlabel(\"t [s]\")\n",
"plt.ylabel(\"cosine similarity\");"
"plt.ylabel(\"cosine similarity\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
"pygments_lexer": "ipython3"
}
},
"nbformat": 4,
58 changes: 27 additions & 31 deletions docs/examples/custom-module.ipynb
@@ -6,7 +6,9 @@
"source": [
"# Implementing a custom module\n",
"\n",
"This example demonstrates how custom SPA modules can be created that can take advantage of all the features of the SPA syntax. We will adapt the `InputGatedMemory` from `nengo`."
"This example demonstrates how custom SPA modules can be created that can take advantage\n",
"of all the features of the SPA syntax. We will adapt the `InputGatedMemory` from\n",
"`nengo`."
]
},
{
@@ -26,10 +28,13 @@
"Implementing a SPA module requires a few steps:\n",
"\n",
"1. Implement a class inheriting from `spa.Network`.\n",
"2. Use `VocabularyOrDimParam` to declare class variables for storing vocublary parameters. This will allow the usage of integer dimensions instead of vocabularies without any further additions.\n",
"2. Use `VocabularyOrDimParam` to declare class variables for storing vocabulary\n",
"parameters. This will allow the usage of integer dimensions instead of vocabularies\n",
"without any further additions.\n",
"3. Declare inputs and outputs with their respective vocabularies.\n",
"\n",
"Note that parameters in SPA modules should usually be defined as readonly because\n",
"Not that parameters in SPA modules should usually be defined as readonly because\n",
"changing them will usually not update the network accordingly."
]
},
{
@@ -39,41 +44,45 @@
"outputs": [],
"source": [
"class GatedMemory(spa.Network):\n",
" \n",
"\n",
" # The vocabulary parameter.\n",
" vocab = spa.vocabulary.VocabularyOrDimParam(\n",
" 'vocab', default=None, readonly=True)\n",
" vocab = spa.vocabulary.VocabularyOrDimParam(\"vocab\", default=None, readonly=True)\n",
" # The number of neurons per dimensions.\n",
" neurons_per_dimension = nengo.params.IntParam(\n",
" 'neurons_per_dimension', default=200, low=1, readonly=True)\n",
" \n",
" \"neurons_per_dimension\", default=200, low=1, readonly=True\n",
" )\n",
"\n",
" # Arguments assigned to parameters should be assigned\n",
" # nengo.params.Default as default value. This makes sure they work\n",
" # properly with nengo.Config. It is a good idea to pass on the keyword\n",
" # arguments **kwargs to the spa.Network constructor to allow the user to\n",
" # set the network label etc.\n",
" def __init__(\n",
" self, vocab=nengo.params.Default,\n",
" neurons_per_dimension=nengo.params.Default, **kwargs):\n",
" self,\n",
" vocab=nengo.params.Default,\n",
" neurons_per_dimension=nengo.params.Default,\n",
" **kwargs\n",
" ):\n",
" super(GatedMemory, self).__init__(**kwargs)\n",
" \n",
"\n",
" # Assign parameter values\n",
" # If vocab is an integer dimension, the appropriate Vocabulary\n",
" # instance will be assigned to self.vocab.\n",
" self.vocab = vocab\n",
" self.neurons_per_dimension = neurons_per_dimension\n",
" \n",
"\n",
" # Construct the network\n",
" with self:\n",
" self.mem = nengo.networks.InputGatedMemory(\n",
" self.neurons_per_dimension, self.vocab.dimensions)\n",
" \n",
" self.neurons_per_dimension, self.vocab.dimensions\n",
" )\n",
"\n",
" # Assign inputs to root object for easier referencing\n",
" self.input = self.mem.input\n",
" self.input_gate = self.mem.gate\n",
" self.input_reset = self.mem.reset\n",
" self.output = self.mem.output\n",
" \n",
"\n",
" # Declare inputs and outputs\n",
" # Use None as vocabulary for scalar inputs/outputs\n",
" self.declare_input(self.input, self.vocab)\n",
@@ -100,35 +109,22 @@
"with spa.Network() as model:\n",
" # The module can be configured\n",
" model.config[GatedMemory].neurons_per_dimension = 150\n",
" \n",
"\n",
" spa_in = spa.Transcode(\"OKAY\", output_vocab=dimensions)\n",
" gate_in = nengo.Node(lambda t: 1 if t < 0.1 else 0)\n",
"\n",
" g_mem = GatedMemory(dimensions)\n",
" \n",
"\n",
" # It can be in routing rules\n",
" spa_in >> g_mem\n",
" gate_in >> g_mem.input_gate"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
"pygments_lexer": "ipython3"
}
},
"nbformat": 4,