
Qgrnn updates #207

Merged
merged 14 commits into master from qgrnn_updates
Mar 5, 2021

Conversation

albi3ro
Contributor

@albi3ro albi3ro commented Jan 25, 2021

Some minor updates to the QGRNN tutorial. The updates include:

  • separating the weights and bias into two distinct variables
  • using numpy's random Generator API (np.random.default_rng)
  • using nx.complete_graph
  • moving the optimization printing out of the cost function
  • using step_and_cost instead of just step
  • renaming some variables for readability

More complete work could convert the tutorial to use ApproxTimeEvolution instead of a custom implementation of it.
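For readers unfamiliar with those changes, here is a hypothetical NumPy-only sketch of three of them: keeping weights and bias as separate variables, seeding with the `np.random.default_rng` Generator, and a `step_and_cost`-style optimizer step (mimicking the behavior of PennyLane's `GradientDescentOptimizer.step_and_cost`, which returns the updated parameters together with the pre-step cost). The cost function and sizes below are made up for illustration, not taken from the demo.

```python
import numpy as np

# Modern Generator API instead of the legacy global seed (assumption: this
# is what "use of numpy.rng" in the PR description refers to).
rng = np.random.default_rng(42)

n_edges, n_qubits = 6, 4
weights = rng.normal(0, 0.5, size=n_edges)  # one coupling per graph edge
bias = rng.normal(0, 0.5, size=n_qubits)    # one bias per qubit

def cost(w, b):
    # Toy quadratic stand-in for the QGRNN fidelity-based cost.
    return float(np.sum(w ** 2) + np.sum(b ** 2))

def step_and_cost(w, b, lr=0.1):
    # Returns updated parameters plus the cost *before* the step, so
    # optimization progress can be printed outside the cost function.
    return w - lr * 2 * w, b - lr * 2 * b, cost(w, b)

costs = []
for step in range(3):
    weights, bias, c = step_and_cost(weights, bias)
    costs.append(c)
    print(f"step {step}: cost = {c:.4f}")
```

The point of the `step_and_cost` pattern is that the caller, not the cost function, owns the printing, which keeps the cost function pure.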

@josh146
Member

josh146 commented Jan 26, 2021

Thanks @albi3ro! Do the changes allow the demo to run in tape-mode?

@albi3ro
Contributor Author

albi3ro commented Jan 26, 2021

It ran in tape mode with no changes and still does.

But executing it got me reading it, and reading it gave me ideas for things to change.

@josh146
Member

josh146 commented Jan 26, 2021

Awesome 🙂 I assumed it was a required change for the demo to build with the new PL version. Since it is not required for supporting the new PL version, no need to prioritize review until after release -- I'll remove the requests for review I made.

@glassnotes
Contributor

@albi3ro out of curiosity, did you record execution times for tape mode vs. non-tape mode? This is one of the demos that's currently non-executable because of its long runtime; however, while testing some of the other demos I noticed that using tape mode resulted in some speedup.

@albi3ro
Contributor Author

albi3ro commented Jan 26, 2021

Ok! That was significantly faster in tape mode. Basically an order of magnitude faster.

I ran for ten steps and timed using /usr/bin/time from the command line.

Non-tape mode was:
real 242.65
user 243.89
sys 30.96

Tape mode:
real 19.06
user 22.03
sys 11.96
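For reference, an equivalent in-script measurement could be sketched with Python's `time` module rather than the command-line `/usr/bin/time` used above (a generic sketch with a hypothetical stand-in for the demo's optimization loop, not the PR's actual method):

```python
import time

def run_demo_steps(n_steps=10):
    # Hypothetical stand-in for the demo's 10-step optimization loop.
    total = 0.0
    for _ in range(n_steps):
        total += sum(i * i for i in range(10_000))
    return total

start = time.perf_counter()
run_demo_steps()
elapsed = time.perf_counter() - start
print(f"wall time: {elapsed:.3f} s")
```

`time.perf_counter` measures wall-clock time with the highest available resolution, so it is the usual choice for this kind of before/after comparison.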

@glassnotes
Contributor

> Ok! That was significantly faster in tape mode. Basically an order of magnitude faster.

Incredible!! Thanks for looking into that. When we tackle #201 then we may be able to make this one executable 🎉

@glassnotes
Contributor

@albi3ro if you have the bandwidth, did you want to take a crack at making this one executable? (I am also happy to do so)

Contributor

@glassnotes glassnotes left a comment


@albi3ro changes look great, what an amazing speedup! I just left a few minor comments, but otherwise this looks good to go.

:property="og:description": Using a quantum graph recurrent neural network to learn quantum dynamics.
:property="og:image": https://pennylane.ai/qml/_images/qgrnn_thumbnail.png

*Author: Jack Ceroni. Posted: 27 July 2020. Last updated: 26 Oct 2020.*
Contributor


Reminder to change the "last updated" to the day this gets merged in :)

demonstrations/tutorial_qgrnn.py: three resolved review threads (outdated)
z_term = x_term = 1
for j in range(0, n_qubits):
if j == i:
z_term = np.kron(z_term, qml.PauliZ.matrix)
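For context, the fragment above is part of building a single-qubit Pauli term as a full 2^n matrix. The complete pattern, sketched here with the `else` branch filled in under the usual Kronecker-product construction and with the Pauli matrices written out so the sketch does not depend on PennyLane (in the demo they come from `qml.PauliZ.matrix`), looks like:

```python
import numpy as np

pauli_z = np.array([[1.0, 0.0], [0.0, -1.0]])
pauli_x = np.array([[0.0, 1.0], [1.0, 0.0]])

def single_qubit_terms(i, n_qubits):
    # Build Z_i and X_i as 2**n x 2**n matrices: the Pauli on wire i,
    # the 2x2 identity on every other wire, combined via np.kron.
    z_term = x_term = 1  # np.kron treats the scalar 1 as a 1x1 identity
    for j in range(n_qubits):
        if j == i:
            z_term = np.kron(z_term, pauli_z)
            x_term = np.kron(x_term, pauli_x)
        else:
            z_term = np.kron(z_term, np.eye(2))
            x_term = np.kron(x_term, np.eye(2))
    return z_term, x_term
```

The helper name `single_qubit_terms` is made up for this sketch; the demo inlines the loop.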
Contributor


Nice, renaming the variables in this part makes things much clearer!

Contributor Author


Thanks!

demonstrations/tutorial_qgrnn.py: two resolved review threads (outdated)
#

print("Target parameters \tLearned parameters")
print("\t Weights:")
Contributor


Would remove the \t here and left-justify, then add a row of ------ and some spacing underneath to help with separating the two types of parameters. Something like this maybe:

Target parameters       Learned parameters

Weights
------------------------------------------------
0.56                    0.5782895244479018
1.24                    1.335028329676283
1.67                    1.804480439985849
-0.79                   -0.8395497395039528

Bias
------------------------------------------------
-1.44                   -1.352958693194635
-1.43                   -1.313891802560214
1.18                    0.9811173767093425
-0.93                   -1.0579331212003393
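The suggested layout can be produced with left-justified format specifiers along these lines (a sketch with made-up parameter values, not the demo's actual results):

```python
# Illustrative values only -- not the demo's learned parameters.
target_weights = [0.56, 1.24, 1.67, -0.79]
learned_weights = [0.578, 1.335, 1.804, -0.840]

lines = [
    f"{'Target parameters':<24}{'Learned parameters'}",
    "",
    "Weights",
    "-" * 48,
]
for t, l in zip(target_weights, learned_weights):
    # :<24 left-justifies each target value in a 24-character column.
    lines.append(f"{t:<24}{l}")
print("\n".join(lines))
```

The same pattern repeats for the bias block; a fixed column width avoids the alignment drift that tab characters can cause.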

Contributor Author


Added a bar "|" too.

Contributor

@glassnotes glassnotes left a comment


Just two small additional changes, but once those are done, go ahead and merge 🎉

:property="og:description": Using a quantum graph recurrent neural network to learn quantum dynamics.
:property="og:image": https://pennylane.ai/qml/_images/qgrnn_thumbnail.png

*Author: Jack Ceroni. Posted: 27 July 2020. Last updated: 25 March 2021.*
Contributor


Suggested change
*Author: Jack Ceroni. Posted: 27 July 2020. Last updated: 25 March 2021.*
*Author: Jack Ceroni. Posted: 27 July 2020. Last updated: 25 Feb 2021.*

Contributor Author


I have no idea how that happened...

Contributor


Thinking ahead 😀

axes[2].set_title("Learned", y=1.13)

plt.subplots_adjust(wspace=0.3, hspace=0.3)
plt.show()
Contributor


The rendered plot here is quite small; is there any way to increase it? (Sorry I didn't catch this earlier; I wasn't sure whether in the previous iteration it was the old static image or the rendered image that was small.)

Contributor Author


This seems to be less an issue with the image and more the amount of padding given to the image. Is there a way to reduce the padding given to the generated figure? I'm not too familiar with the procedure used to generate the website.

I could also switch it to have "initial" in its own row and then "target" and "learned" sharing a row. That would widen each plot.

Contributor


Hmm... maybe try increasing the figsize, and/or throwing a plt.tight_layout() in there rather than plt.subplots_adjust? I know it's a pain to test things out because the runtime is too long to iterate quickly. If you can't get something working after a try or two we should ask Josh since he's the most knowledgeable about the demo build process.
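Those two suggestions (a larger `figsize` and `plt.tight_layout()` in place of `plt.subplots_adjust`) could be sketched as follows. The matrix data and filename here are dummy placeholders; the demo's actual plotting code differs.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Dummy Hamiltonian-matrix data standing in for the demo's three panels.
data = np.random.default_rng(0).random((4, 4))
fig, axes = plt.subplots(1, 3, figsize=(12, 4))  # wider than the default
for ax, title in zip(axes, ["Initial", "Target", "Learned"]):
    ax.matshow(data)
    ax.set_title(title, y=1.13)
plt.tight_layout()  # trims excess padding around the panels
fig.savefig("qgrnn_matrices.png")
```

`tight_layout` recomputes subplot spacing from the actual label and title extents, which usually reclaims more of the figure area than hand-tuned `subplots_adjust` values.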

Member


> If you can't get something working after a try or two we should ask Josh since he's the most knowledgeable about the demo build process.

Unfortunately I don't have any good solutions here! Normally I would download the Jupyter notebook version of the demo (or just copy the demo into a jupyter notebook), run all the cells, and then that allows you to iterate on the final plotting without performing the optimization every time.

The matplotlib image created in the notebook should match exactly the one on the website; Sphinx does not do any post-processing (I believe).

Contributor Author


Given that any updates to the plot-generating code would probably make it more complicated, and any improvements to how Sphinx displays the figure are beyond our technical depth, I think it's best at this point to leave it as is. I'm merging this in.

@@ -100,7 +100,7 @@ Demos
         </a>
       </li>
       <li>
-        <a href="demos/qgrnn.html">
+        <a href="demos/tutorial_qgrnn.html">
Contributor


Nice catch!

@albi3ro albi3ro merged commit 50627ce into master Mar 5, 2021
@albi3ro albi3ro deleted the qgrnn_updates branch March 5, 2021 16:15