QuTiPv5 Paper Notebook: QOC #112
Conversation
- added all demonstrated algorithms
- added summarized description
    },
    tlist=times,
    algorithm_kwargs={"alg": "GRAPE", "fid_err_targ": 0.01},
)
Maybe it would be nice to show the optimization result a bit here.
I tried this and got some weird results.
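For instance, something along these lines; this is only a sketch, assuming the object returned by `optimize_pulses` is a qutip-qoc `Result` exposing `infidelity` and `optimized_controls` (names should be checked against the installed version), and reusing the notebook's `times`:

```python
import matplotlib.pyplot as plt

# `res` is the Result returned by the optimize_pulses(...) call above.
print(res)                                # summary: message, iterations, ...
print("final infidelity:", res.infidelity)

# Plot the optimized pulse amplitude(s) over the time grid; the slicing
# guards against GRAPE storing one amplitude per time *interval* rather
# than per time point.
for i, amp in enumerate(res.optimized_controls):
    plt.plot(times[: len(amp)], amp, label=f"control {i}")
plt.xlabel("time")
plt.ylabel("amplitude")
plt.legend()
plt.show()
```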
I am mostly confused by the predefined control parameters here for the GRAPE method. I think these initial guesses are not used at all in the GRAPE algorithm.
Correction: I found that the initial pulse is directly updated in `optimize_pulses`.
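The following illustrates that convention; a minimal sketch, assuming qutip-qoc's `{"guess": ..., "bounds": ...}` layout for `control_parameters`, where GRAPE takes the `guess` array as the starting pulse and `optimize_pulses` then updates it (the key `"ctrl_x"` is purely illustrative):

```python
import numpy as np

# One amplitude per time slot; whether GRAPE wants len(times) or
# len(times) - 1 entries should be checked against the notebook setup.
n_ts = len(times)

control_parameters = {
    "ctrl_x": {  # illustrative key, matched to one control term
        "guess": np.sin(np.linspace(0, np.pi, n_ts)),  # starting pulse shape
        "bounds": [-1.0, 1.0],                         # amplitude limits
    },
}
```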
Can you tell me what you mean by weird results? In my case, the plots from the paper are reproduced and shown at the end of the notebook in the comparison section.
I think it is not entirely surprising that one gets different results on different machines. Optimization results depend strongly on the initial values and the optimization method used. As long as the final fidelity is good, the result should be correct.
Could you paste the fidelity you get on your machine? I got a very odd 0.75 instead of the expected 99%.
The first run today also gave me a fidelity of around 80%, and I reproduced my plot from earlier. But now, after running it about ten times, I always arrive at >99% and get exactly the same plot as in the paper.
I also tried various initial conditions, but it always reaches the fidelity target.
Would it be a good idea to add the (in)fidelity check as a test?
> The first run today also gave me a fidelity of around 80%, and I reproduced my plot from earlier. But now, after running it about ten times, I always arrive at >99% and get exactly the same plot as in the paper.
This is very confusing: why would running it multiple times change the result? That sounds dangerous, as if some parameters are not correctly reset between executions. What happens if one resets the NumPy random number generator before each repeat?
Yes, adding the infidelity check as a test would be very nice.
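A sketch of that experiment, reusing the notebook's existing `objectives`, `control_parameters`, and `times`, and again assuming the result exposes `infidelity`:

```python
import numpy as np

for repeat in range(3):
    np.random.seed(0)  # reset the legacy global NumPy RNG before each run
    res = optimize_pulses(
        objectives,
        control_parameters,
        tlist=times,
        algorithm_kwargs={"alg": "GRAPE", "fid_err_targ": 0.01},
    )
    # With a fixed seed, differing infidelities here would point to state
    # leaking between executions rather than to random initial guesses.
    print(f"repeat {repeat}: infidelity = {res.infidelity}")
```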
@flowerthrower @ajgpitch any thoughts on this?
> This is very confusing: why would running it multiple times change the result? That sounds dangerous, as if some parameters are not correctly reset between executions. What happens if one resets the NumPy random number generator before each repeat?
After retrying this a couple of times today, I always arrive at the result from the paper.
> Yes, adding the infidelity check as a test would be very nice.
Added ✅
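For reference, a minimal form of such a check, assuming `res` holds the GRAPE result and reusing the `fid_err_targ` of 0.01 from the call above:

```python
# Fail the notebook test if the optimization misses its fidelity target.
assert res.infidelity < 0.01, f"GRAPE infidelity too high: {res.infidelity}"
```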
- added references to text
- corrected typos
- less text taken directly from the paper now
- added tests
- corrected Python version name
This PR adds the QOC examples from the v5 paper to the tutorials.
What is left to do: