Qgrnn updates #207
Conversation
Thanks @albi3ro! Do the changes allow the demo to run in tape-mode?
It ran in tape mode with no changes and still does. But executing it got me reading it, and reading it gave me ideas of things to change.
Awesome 🙂 I assumed it was a required change for the demo to build with the new PL version. Since it is not required for supporting the new PL version, no need to prioritize review until after release -- I'll remove the requests for review I made.
@albi3ro out of curiosity, did you record execution times for tape mode vs. non-tape mode? This is one of the demos that's currently non-executable because of its long runtime, but I noticed while testing some of the other demos that using tape mode resulted in some speedup.
Ok! That was significantly faster in tape mode, basically an order of magnitude. I ran for ten steps and timed both. Non-tape mode was: Tape mode:
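A comparison like the one described can be reproduced with a small timing helper. This is only a sketch: the workload below is a stand-in for one optimization step of the demo, not actual PennyLane code.

```python
import time

def average_time(fn, repeats=10):
    """Average wall-clock time of fn over a few repeats."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# Stand-in workload; in the demo this would be one optimization step,
# run once with tape mode enabled and once without.
workload = lambda: sum(i * i for i in range(10_000))
t = average_time(workload)
```

Running the same helper on both configurations gives directly comparable per-step times.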
Incredible!! Thanks for looking into that. When we tackle #201 then we may be able to make this one executable 🎉 |
@albi3ro if you have the bandwidth, did you want to take a crack at making this one executable? (I am also happy to do so) |
@albi3ro changes look great, what an amazing speedup! I just left a few minor comments, but otherwise this looks good to go.
demonstrations/tutorial_qgrnn.py
Outdated
:property="og:description": Using a quantum graph recurrent neural network to learn quantum dynamics.
:property="og:image": https://pennylane.ai/qml/_images/qgrnn_thumbnail.png

*Author: Jack Ceroni. Posted: 27 July 2020. Last updated: 26 Oct 2020.*
Reminder to change the "last updated" to the day this gets merged in :)
z_term = x_term = 1
for j in range(0, n_qubits):
    if j == i:
        z_term = np.kron(z_term, qml.PauliZ.matrix)
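The loop above builds a register-wide Pauli-Z operator via Kronecker products. A self-contained NumPy sketch of the same idea, with an explicit 2×2 Z matrix standing in for `qml.PauliZ.matrix`:

```python
import numpy as np

I = np.eye(2)
Z = np.array([[1, 0], [0, -1]])  # Pauli-Z matrix

def z_on_qubit(i, n_qubits):
    """Kronecker product placing Z on qubit i and identity elsewhere."""
    term = 1
    for j in range(n_qubits):
        term = np.kron(term, Z if j == i else I)
    return term
```

For example, `z_on_qubit(1, 2)` gives the 4×4 diagonal matrix diag(1, -1, 1, -1), i.e. I ⊗ Z.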
Nice, renaming the variables in this part makes things much clearer!
Thanks!
demonstrations/tutorial_qgrnn.py
Outdated
#

print("Target parameters \tLearned parameters")
print("\t Weights:")
Would remove the `\t` here and left-justify, then add a row of `------` and some spacing underneath to help with separating the two types of parameters. Something like this maybe:
Target parameters Learned parameters
Weights
------------------------------------------------
0.56 0.5782895244479018
1.24 1.335028329676283
1.67 1.804480439985849
-0.79 -0.8395497395039528
Bias
------------------------------------------------
-1.44 -1.352958693194635
-1.43 -1.313891802560214
1.18 0.9811173767093425
-0.93 -1.0579331212003393
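The suggested layout can be produced with plain f-string field widths. A sketch using the weight values from the sample output above (variable names are illustrative, not from the demo):

```python
target_weights = [0.56, 1.24, 1.67, -0.79]
learned_weights = [0.5782895244479018, 1.335028329676283,
                   1.804480439985849, -0.8395497395039528]

# Left-justify the first column in a fixed-width field instead of using \t.
print(f"{'Target parameters':<20}Learned parameters")
print("Weights")
print("-" * 48)
for t, l in zip(target_weights, learned_weights):
    print(f"{t:<20}{l}")
```

The `:<20` format spec pads each target value to a 20-character column, which keeps the learned values aligned regardless of sign or length.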
Added a bar "|" too.
Co-authored-by: Olivia Di Matteo <[email protected]>
…to qgrnn_updates
Just two small additional changes, but once those are done, go ahead and merge 🎉
:property="og:description": Using a quantum graph recurrent neural network to learn quantum dynamics.
:property="og:image": https://pennylane.ai/qml/_images/qgrnn_thumbnail.png

*Author: Jack Ceroni. Posted: 27 July 2020. Last updated: 25 March 2021.*
- *Author: Jack Ceroni. Posted: 27 July 2020. Last updated: 25 March 2021.*
+ *Author: Jack Ceroni. Posted: 27 July 2020. Last updated: 25 Feb 2021.*
I have no idea how that happened...
Thinking ahead 😀
axes[2].set_title("Learned", y=1.13)

plt.subplots_adjust(wspace=0.3, hspace=0.3)
plt.show()
The rendered plot here is quite small, any way to increase it? (Sorry didn't catch this earlier, I wasn't sure if in the previous iteration it was the old static image or the rendered image that was small.)
This seems to be less an issue with the image and more the amount of padding given to the image. Is there a way to reduce the padding given to the generated figure? I'm not too familiar with the procedure used to generate the website.
I could also switch it to have "initial" in its own row and then "target" and "learned" sharing a row. That would widen each plot.
Hmm... maybe try increasing the `figsize`, and/or throwing a `plt.tight_layout()` in there rather than `plt.subplots_adjust`? I know it's a pain to test things out because the runtime is too long to iterate quickly. If you can't get something working after a try or two we should ask Josh since he's the most knowledgeable about the demo build process.
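The suggested tweak can be sketched as follows. The three panel titles are assumptions inferred from the discussion ("initial", "target", "learned" are mentioned, but only "Learned" appears in the quoted code), and the `figsize` value is illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# A larger figsize gives each of the three panels more room.
fig, axes = plt.subplots(1, 3, figsize=(14, 5))
for ax, title in zip(axes, ["Initial", "Target", "Learned"]):
    ax.set_title(title, y=1.13)

# tight_layout recomputes subplot spacing automatically,
# replacing the manual plt.subplots_adjust(wspace=0.3, hspace=0.3).
plt.tight_layout()
```

Iterating on this in a notebook (as suggested below in the thread) avoids rerunning the long optimization each time.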
> If you can't get something working after a try or two we should ask Josh since he's the most knowledgeable about the demo build process.
Unfortunately I don't have any good solutions here! Normally I would download the Jupyter notebook version of the demo (or just copy the demo into a jupyter notebook), run all the cells, and then that allows you to iterate on the final plotting without performing the optimization every time.
The matplotlib image created in the notebook should match exactly the one on the website; Sphinx does not do any post-processing (I believe).
Given that any updates to the plot-generating code would probably make it more complicated, and improvements to how Sphinx displays it are outside our technical depth, I think at this point it's better to leave it as is. I'm merging it in.
@@ -100,7 +100,7 @@ Demos
     </a>
   </li>
   <li>
-    <a href="demos/qgrnn.html">
+    <a href="demos/tutorial_qgrnn.html">
Nice catch!
Some minor updates to the QGRNN tutorial. The updates include:
- using `nx.complete_graph`
- using `step_and_cost` instead of just `step`

More complete work could transform the tutorial to use `ApproxTimeEvolution` instead of a custom implementation of it.
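The `step` vs. `step_and_cost` distinction can be illustrated with a toy gradient-descent optimizer. This is a sketch mimicking the shape of PennyLane's optimizer interface, not its actual implementation:

```python
import numpy as np

def cost(x):
    return np.sum(x ** 2)

def grad(x):
    return 2 * x

class ToyGD:
    """Minimal gradient-descent optimizer with both update styles."""
    def __init__(self, lr=0.1):
        self.lr = lr

    def step(self, x):
        return x - self.lr * grad(x)

    def step_and_cost(self, x):
        # Also returns the cost at the current point, so the training
        # loop doesn't need a separate cost(x) call just for logging.
        return x - self.lr * grad(x), cost(x)

x_new, c = ToyGD().step_and_cost(np.array([1.0, -2.0]))
```

With plain `step`, logging the loss each iteration costs an extra cost-function evaluation, which is significant when each evaluation runs a quantum circuit.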