
QuTiPv5 Paper Notebook: QOC #112

Open · wants to merge 4 commits into base: main
Conversation

Langhaarzombie (Contributor) commented Nov 14, 2024

This PR adds the quantum optimal control (QOC) examples from the v5 paper to the tutorials.

What is left to do:

  • Tests
  • Include references and format like this

- added all demonstrated algorithms
- added summarized description
Truncated excerpt of the optimize_pulses call this thread refers to:

    },
    tlist=times,
    algorithm_kwargs={"alg": "GRAPE", "fid_err_targ": 0.01},
)
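For readers without the full diff context, here is a hedged, self-contained toy of what a GRAPE-style optimizer does: piecewise-constant control amplitudes are updated along the gradient of the infidelity until it drops below a target (cf. fid_err_targ in the snippet). This is plain NumPy with finite-difference gradients, not the qutip-qoc API, and the single-control, drift-free qubit is chosen only to keep the sketch short:

```python
import numpy as np

# Toy GRAPE-style state transfer |0> -> |1> on a single qubit.
# Deliberately minimal: one sigma_x control, no drift Hamiltonian,
# and finite-difference gradients instead of the analytic GRAPE gradient.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)    # initial state |0>
target = np.array([0, 1], dtype=complex)  # target state |1>

times = np.linspace(0, 1, 20)
dt = times[1] - times[0]

def evolve(amps):
    """Piecewise-constant evolution under H_k = amps[k] * sigma_x."""
    psi = psi0
    for u in amps:
        # matrix exponential of a small Hermitian H via eigendecomposition
        w, v = np.linalg.eigh(u * sx)
        psi = (v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T) @ psi
    return psi

def infidelity(amps):
    return 1 - abs(np.vdot(target, evolve(amps))) ** 2

amps = np.ones(len(times))  # constant initial guess for the pulse
eps, lr = 1e-6, 2.0
for _ in range(200):
    base = infidelity(amps)
    if base < 0.01:  # stop once below the error target, cf. fid_err_targ
        break
    grad = np.array([(infidelity(amps + eps * np.eye(len(amps))[k]) - base) / eps
                     for k in range(len(amps))])
    amps -= lr * grad

print(f"final infidelity: {infidelity(amps):.4f}")
```

In the real package the gradient is computed analytically from the propagators, which is what makes GRAPE efficient; the finite-difference version above only illustrates the update loop.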
Member

Maybe here it would be nice to show the optimization result a bit.

I was trying this and got some weird results.

@BoxiLi (Member) commented Nov 22, 2024

I am mostly confused by the predefined control parameters here for the GRAPE method.

I think these initial guesses are not used at all in the GRAPE algorithm.

https://github.com/qutip/qutip-qoc/blob/881fb3939a606bdf544a78f4f4ffeb8340af71b2/src/qutip_qoc/objective.py#L98

Correction: I found that the initial pulse is directly updated in optimize_pulses

Contributor Author

Can you tell me what you mean by weird results? In my case, the plots from the paper are reproduced and then shown at the end of the notebook in the comparison section.

Contributor Author

Ok, let me update my comment... After rerunning this, I also get a different result than in the paper, but it seems to be limited to the CRAB algorithm. This is what I get:

[output plot]

Member

I think it is not entirely surprising that one gets different results on different machines. The optimization result depends strongly on the initial values and the optimization method used. As long as the resulting fidelity is good, it should be correct.

Could you post the fidelity you got on your machine? I got a very weird 0.75 instead of 99%.
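The sensitivity to the initial value discussed here can be seen even in a one-dimensional toy: gradient descent on a double-well objective lands in different minima, with different final values, depending on where it starts. All names below are illustrative, not qutip-qoc code:

```python
import numpy as np

def f(x):
    # double-well objective: minima near x = -1.77 and x = +1.69;
    # the +x term makes the left well deeper (the global minimum)
    return (x**2 - 3) ** 2 + x

def grad_descent(x0, lr=0.01, steps=2000):
    """Plain gradient descent from a given starting point."""
    x = x0
    for _ in range(steps):
        g = 4 * x * (x**2 - 3) + 1  # analytic derivative of f
        x -= lr * g
    return x

left = grad_descent(-2.0)
right = grad_descent(+2.0)
print(f(left), f(right))  # two different local minima, two different values
```

A stochastic initial guess, as used by some QOC methods, therefore changes which basin the optimizer falls into, which is one plausible source of the 0.75 vs. 99% discrepancy.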

Contributor Author

The first run today also gave me a fidelity around 80% and I reproduced my plot from earlier. But now, after running it about ten times, I always arrive at >99% and get the exact same plot as in the paper.
I also tried with various initial conditions, but it always reaches the fidelity target.

Contributor Author

Would it be a good idea to add the (in)fidelity check as a test, maybe?

@BoxiLi (Member) commented Nov 28, 2024

> The first run today also gave me a fidelity around 80% and I reproduced my plot from earlier. But now, after running it about ten times, I always arrive at >99% and get the exact same plot as in the paper.

This is very confusing. Why does running this multiple times change the result? That sounds very dangerous, as if some parameters are not correctly reset between executions. What happens if one resets the NumPy random number generator before each repeat?

Yes, adding the infidelity check as a test would be very nice.
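A minimal sketch of the reseeding experiment suggested above, assuming the stochastic part of a run is a random initial guess (run_once is a hypothetical stand-in, not qutip-qoc code). Constructing a freshly seeded NumPy generator before each repeat makes the repeats identical:

```python
import numpy as np

def run_once(rng):
    # stand-in for one optimization run: the only stochastic part here
    # is the random initial guess (hypothetical, for illustration)
    guess = rng.uniform(-1, 1, size=5)
    return guess.sum()

# without reseeding, repeats draw different random initial guesses
rng = np.random.default_rng()
a, b = run_once(rng), run_once(rng)

# reseeding before each repeat makes every run identical
c = run_once(np.random.default_rng(seed=42))
d = run_once(np.random.default_rng(seed=42))
print(c, d)
assert c == d  # reproducible across repeats
```

If the notebook's results still drift between runs after such reseeding, the cause is likely state carried over between cells rather than the random number generator.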

Member

@flowerthrower @ajgpitch any thoughts on this?

Contributor Author

> This is very confusing. Why does running this multiple times change the result? That sounds very dangerous, as if some parameters are not correctly reset between executions. What happens if one resets the NumPy random number generator before each repeat?

After retrying this a couple of times today, I always arrive at the result of the paper.

> Yes, adding the infidelity check as a test would be very nice.

Added ✅
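For reference, a hedged sketch of what such an infidelity check could look like as a notebook test. The state vectors and the error target below are stand-ins, not the notebook's actual objects:

```python
import numpy as np

FID_ERR_TARG = 0.01  # matches fid_err_targ in the snippet under review

def state_infidelity(psi, phi):
    """1 - |<psi|phi>|^2 for normalized state vectors."""
    return 1 - abs(np.vdot(psi, phi)) ** 2

# hypothetical optimized final state vs. target (illustrative values)
target = np.array([0, 1], dtype=complex)
final = np.array([0.05, np.sqrt(1 - 0.05**2)], dtype=complex)

def test_infidelity_below_target():
    # the assertion a pytest-style check would make on the optimization result
    assert state_infidelity(final, target) < FID_ERR_TARG

test_infidelity_below_target()
```

Such a check guards exactly the failure mode discussed above: a run that silently converges to a ~0.75 fidelity local optimum would fail the test instead of producing a misleading plot.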

- added references to text
- corrected typos
- less text directly taken from paper now
- added tests
- corrected python version name
@Langhaarzombie Langhaarzombie marked this pull request as ready for review December 2, 2024 04:57