
[BUG] qml.math.fidelity is not differentiable for 2 or more wires #4373

Closed
1 task done
ludmilaasb opened this issue Jul 19, 2023 · 2 comments · Fixed by #4380
Assignees
Labels
bug 🐛 Something isn't working

Comments

@ludmilaasb
Contributor

ludmilaasb commented Jul 19, 2023

Expected behavior

The fidelity should be differentiable. For QNodes with a single wire, the gradient of fidelity is computed:

import pennylane as qml
import pennylane.numpy as np

wires = 1
dev = qml.device('default.mixed', wires=wires)

@qml.qnode(dev)
def circuit1(x):
    qml.RX(x, wires=0)
    return qml.density_matrix(wires=list(range(wires)))

@qml.qnode(dev)
def circuit2():
    return qml.density_matrix(wires=list(range(wires)))

def cost(x):
    return qml.math.fidelity(circuit1(x), circuit2(), check_state=True)

grad_fn = qml.grad(cost)

x = np.tensor(0.5, requires_grad=True)
print("gradient: ", grad_fn(x))
gradient:  -0.6666666666666667

but some warnings are emitted, such as:

/Users/ludmila.botelho/anaconda3/envs/qml/lib/python3.11/site-packages/autograd/numpy/numpy_vjps.py:99: RuntimeWarning: divide by zero encountered in power
  defvjp(anp.sqrt,    lambda ans, x : lambda g: g * 0.5 * x**-0.5)
/Users/ludmila.botelho/anaconda3/envs/qml/lib/python3.11/site-packages/autograd/numpy/numpy_wrapper.py:156: ComplexWarning: Casting complex values to real discards the imaginary part
  return A.astype(dtype, order, casting, subok, copy)
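For context (my reading of the warnings, not something stated elsewhere in the thread): the quoted line is autograd's VJP for sqrt, which divides by zero whenever an eigenvalue is exactly 0, as happens for low-rank density matrices. A minimal NumPy sketch of the failing term:

```python
import numpy as np

# The VJP of sqrt that autograd registers (the line quoted in the warning):
#     lambda g: g * 0.5 * x**-0.5
# A zero eigenvalue makes this divide by zero, yielding inf, and any
# subsequent inf * 0 product downstream becomes nan.
x = np.float64(0.0)
vjp = 1.0 * 0.5 * x ** -0.5
print(vjp)        # inf (with a RuntimeWarning)
print(vjp * 0.0)  # nan
```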

Actual behavior

For 2 or more wires, the gradient fails and returns nan.

Additional information

This problem seems to occur particularly when computing the gradient of the fidelity between matrices that are low rank, block-structured, or otherwise sparse. I also tried computing the gradient with jax, but it didn't work:

grad_fn = jax.grad(cost)
print(grad_fn(0.3))

and the output:

Array(nan, dtype=float32, weak_type=True)

Source code

import pennylane as qml
import pennylane.numpy as np

wires = 2
dev = qml.device('default.mixed', wires=wires)

@qml.qnode(dev)
def circuit1(x):
    qml.RX(x, wires=0)
    return qml.density_matrix(wires=list(range(wires)))

@qml.qnode(dev)
def circuit2():
    return qml.density_matrix(wires=list(range(wires)))

def cost(x):
    return qml.math.fidelity(circuit1(x), circuit2(), check_state=True)

x = np.tensor(0.3, requires_grad=True)
grad_fn = qml.grad(cost)
print(grad_fn(x))
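To see the low-rank structure at play (my illustration, not from the thread): circuit2 prepares |00⟩, so its density matrix has three eigenvalues that are exactly zero, which is where the derivative of sqrt inside the fidelity's eigendecomposition blows up.

```python
import numpy as np

# circuit2 above prepares |00>, so its 4x4 density matrix is rank 1:
# three of its eigenvalues are exactly 0.
rho2 = np.zeros((4, 4))
rho2[0, 0] = 1.0
print(np.linalg.eigvalsh(rho2))  # [0. 0. 0. 1.]
```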

Tracebacks

~/anaconda3/envs/qml/lib/python3.11/site-packages/pennylane/math/quantum.py:46: UserWarning: Argument passed to fidelity has shape (4, 4) and will be interpreted as a density matrix. If a batched state vector was intended, please call qml.math.dm_from_state_vector first, as passing state vectors to fidelity is deprecated.
  warnings.warn(
~/anaconda3/envs/qml/lib/python3.11/site-packages/autograd/numpy/numpy_vjps.py:99: RuntimeWarning: invalid value encountered in power
  defvjp(anp.sqrt,    lambda ans, x : lambda g: g * 0.5 * x**-0.5)
~/anaconda3/envs/qml/lib/python3.11/site-packages/autograd/numpy/linalg.py:180: RuntimeWarning: divide by zero encountered in divide
  F = off_diag / (T(w_repeated) - w_repeated + anp.eye(N))
~/anaconda3/envs/qml/lib/python3.11/site-packages/autograd/numpy/linalg.py:181: RuntimeWarning: invalid value encountered in multiply
  vjp_temp += _dot(_dot(vc, F * _dot(T(v), vg)), T(v))

System information

Name: PennyLane
Version: 0.31.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: ~/anaconda3/envs/qml/lib/python3.11/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml
Required-by: PennyLane-Lightning

Platform info:           macOS-13.4.1-arm64-arm-64bit
Python version:          3.11.3
Numpy version:           1.23.5
Scipy version:           1.10.0
Installed devices:
- default.gaussian (PennyLane-0.31.0)
- default.mixed (PennyLane-0.31.0)
- default.qubit (PennyLane-0.31.0)
- default.qubit.autograd (PennyLane-0.31.0)
- default.qubit.jax (PennyLane-0.31.0)
- default.qubit.tf (PennyLane-0.31.0)
- default.qubit.torch (PennyLane-0.31.0)
- default.qutrit (PennyLane-0.31.0)
- null.qubit (PennyLane-0.31.0)
- lightning.qubit (PennyLane-Lightning-0.31.0)

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
@ludmilaasb ludmilaasb added the bug 🐛 Something isn't working label Jul 19, 2023
@ludmilaasb
Contributor Author

ludmilaasb commented Jul 19, 2023

One more thing: unfortunately, tests for the gradient of the fidelity between QNodes with $\geq 2$ wires appear to be lacking; I only spotted tests that use a single wire 😢

fid_grad = qml.grad(qml.qinfo.fidelity(circuit0, circuit1, wires0=[0], wires1=[0]))(
    qml.numpy.array(param, requires_grad=True)
)

fid_grad = qml.grad(qml.qinfo.fidelity(circuit0, circuit1, wires0=[0], wires1=[0]))(
    None, qml.numpy.array(param, requires_grad=True)
)

fid_grad = qml.grad(qml.qinfo.fidelity(circuit, circuit, wires0=[0], wires1=[0]))(
    qml.numpy.array(param, requires_grad=True)
)

fid_grad = qml.grad(qml.qinfo.fidelity(circuit0, circuit1, wires0=[0], wires1=[0]))(
    qml.numpy.array(param, requires_grad=True), qml.numpy.array(2.0, requires_grad=True)
)
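For what it's worth, a $\geq 2$-wire test could compare against a closed form: for the reproduction circuits above, the Uhlmann fidelity is F(x) = cos²(x/2) regardless of the number of wires (assuming that convention), so dF/dx = -sin(x)/2. A self-contained finite-difference sketch of such a check, deliberately not using PennyLane APIs:

```python
import numpy as np

def fidelity_closed_form(x):
    # F(x) = |<0...0| (RX(x) on wire 0) |0...0>|^2 = cos(x/2)**2
    return np.cos(x / 2) ** 2

def grad_fd(f, x, eps=1e-6):
    # Central finite difference as a reference gradient.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 0.3
print(grad_fd(fidelity_closed_form, x))  # ~ -sin(0.3)/2 ≈ -0.1478
```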

@albi3ro
Contributor

albi3ro commented Nov 28, 2023

Is this issue now closed?

@albi3ro albi3ro closed this as completed Nov 29, 2023
astralcai added a commit that referenced this issue Jun 17, 2024
**Context:**

It would be nice to have a function that calculates the expectation value
of an operator $A$ given a state vector $\vert\psi\rangle$. This can also
be used to compute the fidelity between a mixed and a pure state in a
simple way, avoiding eigendecomposition problems.

**Description of the Change:**

Added an overlap calculation between state vectors and density matrices.

**Benefits:**

It is faster than computing $\text{Tr}(A
\vert\psi\rangle\langle\psi\vert)$. No eigendecomposition is needed,
so it avoids the differentiation issues pointed out in issue #4373.
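A minimal sketch of the overlap idea (illustrative only, not the actual PennyLane implementation; the function name is mine): for a mixed state $\rho$ and a pure state $\vert\psi\rangle$, the fidelity reduces to $\langle\psi\vert\rho\vert\psi\rangle$, which needs only a matrix-vector product.

```python
import numpy as np

def fidelity_mixed_pure(rho, psi):
    # F(rho, |psi>) = <psi| rho |psi>; no eigendecomposition, hence
    # no sqrt-at-zero issue when differentiating.
    return np.real(np.conj(psi) @ rho @ psi)

# Example: rho = |00><00|, psi = (|00> + |11>)/sqrt(2)  ->  F = 0.5
rho = np.zeros((4, 4))
rho[0, 0] = 1.0
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)
print(fidelity_mixed_pure(rho, psi))  # ≈ 0.5
```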

**Possible Drawbacks:**

It does not work if the user passes two state vectors or two density
matrices as input. It is tailored to (batched) density matrices as the
first input and (batched) state vectors as the second.

**Related GitHub Issues:**
#4373

---------

Co-authored-by: Frederik Wilde <[email protected]>
Co-authored-by: Christina Lee <[email protected]>
Co-authored-by: Matthew Silverman <[email protected]>
Co-authored-by: Romain Moyard <[email protected]>
Co-authored-by: Mudit Pandey <[email protected]>
Co-authored-by: Olivia Di Matteo <[email protected]>
Co-authored-by: Olivia Di Matteo <[email protected]>
Co-authored-by: Jay Soni <[email protected]>
Co-authored-by: Christina Lee <[email protected]>
Co-authored-by: Matthew Silverman <[email protected]>
Co-authored-by: David Wierichs <[email protected]>
Co-authored-by: Edward Jiang <[email protected]>
Co-authored-by: Utkarsh <[email protected]>
Co-authored-by: Korbinian Kottmann <[email protected]>
Co-authored-by: Thomas R. Bromley <[email protected]>
Co-authored-by: Astral Cai <[email protected]>
Co-authored-by: Astral Cai <[email protected]>