[Feature] HamiltonianEvolution supported in GPSR with fixed generators #255

Merged: 92 commits, Aug 9, 2024.

Commits:
- 0ee2a50 [Feature] Single gap GPSR (dominikandreasseitz, Jul 3, 2024)
- 5cbe904 rm file (dominikandreasseitz, Jul 3, 2024)
- 621fe48 Merge branch 'main' into ds/singlegap_psr (dominikandreasseitz, Jul 3, 2024)
- 608bd30 activate (dominikandreasseitz, Jul 3, 2024)
- a8ab9b6 fix missing grad tol (Jul 3, 2024)
- 45363e2 [Feature] SingleGap GPSR (dominikandreasseitz, Jul 4, 2024)
- 4b25446 Merge remote-tracking branch 'origin/main' into ds/singlegapgpsr_2 (Jul 4, 2024)
- 68a16a0 Merge remote-tracking branch 'origin/main' into ds/singlegapgpsr_2 (Jul 4, 2024)
- ab632a0 fix call grad grad gpsr (Jul 4, 2024)
- 61b9b90 fix order call gpsr state obs and tests (Jul 4, 2024)
- 4dee7b4 fix grads not giving same shape when using repeated params (Jul 4, 2024)
- 37f9a45 use None for param dict (Jul 4, 2024)
- ed2a042 Merge remote-tracking branch 'origin/main' into ds/singlegapgpsr_2 (Jul 4, 2024)
- 639639a separate diff tests + check psr for sequences (Jul 4, 2024)
- ce8e155 handling cases with repeated params in vjp (Jul 4, 2024)
- d5a1c85 docstrings for psr (Jul 4, 2024)
- 92c4e80 add layer to etst_diff_adjoint (Jul 4, 2024)
- 4027a60 use uuid instead of temp (Jul 4, 2024)
- 3edff45 use uuid only as temp (Jul 4, 2024)
- cd1d40a adding example of higher order failing (Jul 5, 2024)
- 72750a8 removing changing op name in gpsr as compiler will handle it (Jul 5, 2024)
- 4f1b904 Merge remote-tracking branch 'origin/main' into ds/singlegapgpsr_2 (Jul 5, 2024)
- f676183 only str param_name to check support gpsr (Jul 5, 2024)
- 9961e5a add multigap (Jul 5, 2024)
- 6e8acae change diff circuit (Jul 5, 2024)
- 81f47e2 test multigap gpsr (Jul 5, 2024)
- 3dd768a Update pyqtorch/gpsr.py (chMoussa, Jul 5, 2024)
- 2c84821 Update pyqtorch/gpsr.py (chMoussa, Jul 5, 2024)
- 256ad81 Update pyqtorch/gpsr.py (chMoussa, Jul 5, 2024)
- 20cb871 Update pyqtorch/gpsr.py (chMoussa, Jul 5, 2024)
- ca36051 Update pyqtorch/gpsr.py (chMoussa, Jul 5, 2024)
- 2051d2f Update pyqtorch/gpsr.py (chMoussa, Jul 5, 2024)
- da287ff Update pyqtorch/primitive.py (chMoussa, Jul 5, 2024)
- aa9e83d Update pyqtorch/primitive.py (chMoussa, Jul 5, 2024)
- b885b3d fix f_minus and typings (Jul 5, 2024)
- b2665ce Merge remote-tracking branch 'origin/ds/singlegapgpsr_2' into cm/mult… (Jul 5, 2024)
- 5a8fd4a change docstrings and rm multi gap raises (Jul 5, 2024)
- 2ee851b Merge remote-tracking branch 'origin/main' into cm/multigapgpsr (Jul 5, 2024)
- f47d601 Update pyqtorch/gpsr.py (chMoussa, Jul 5, 2024)
- 00b0f87 Update pyqtorch/gpsr.py (chMoussa, Jul 5, 2024)
- 85e9bc0 Update pyqtorch/gpsr.py (chMoussa, Jul 5, 2024)
- 5761385 minor changes and paper ref (Jul 5, 2024)
- 5395250 changing warning (Jul 5, 2024)
- 9cd3fab adding gpu support (Jul 11, 2024)
- 09a4ebd adding GPSR tests (vytautas-a, Jul 11, 2024)
- 2c4b520 moving tests gpsr (Jul 11, 2024)
- a62a1a4 filter non str op name in gpsr (Jul 11, 2024)
- 06f1bb8 more circuits and ops to test in gpsr (Jul 11, 2024)
- 2c0f832 change if for grads (Jul 11, 2024)
- aea1301 lint (Jul 11, 2024)
- b28ceab Merge remote-tracking branch 'origin/main' into cm/multigapgpsr (Jul 15, 2024)
- e75c3c3 tests working for first param first order grad (Jul 16, 2024)
- e67117c Merge remote-tracking branch 'origin/main' into cm/multigapgpsr (Jul 16, 2024)
- 72e6ce4 move psr support test (Jul 16, 2024)
- a99596b change crx eigen_vals_gen (Jul 16, 2024)
- 4e27019 do promote types and reput full tests gradients (Jul 17, 2024)
- 20f82c5 rm bad autograd test_diff (Jul 17, 2024)
- d4a04a2 gpsr working with good conversions (Jul 17, 2024)
- d50188a Merge remote-tracking branch 'origin/main' into cm/multigapgpsr (Jul 17, 2024)
- 3c2021f bump version (Jul 17, 2024)
- 222ab47 Merge remote-tracking branch 'origin/main' into cm/multigapgpsr (Jul 18, 2024)
- 1a9e25b change support (Jul 18, 2024)
- 4b261d6 update check support (Jul 18, 2024)
- f0ab1ae test using sequence with check support (Jul 18, 2024)
- 2a8f29c Merge remote-tracking branch 'origin/cm/multigapgpsr' into cm/hamevo_… (Jul 18, 2024)
- 6a0903f fix gpsr support lint (Jul 18, 2024)
- 160b31a fix lint long lines (Jul 18, 2024)
- c29d3f1 Merge remote-tracking branch 'origin/cm/multigapgpsr' into cm/hamevo_… (Jul 18, 2024)
- ef9ee83 Merge remote-tracking branch 'origin' into cm/hamevo_diff (Aug 5, 2024)
- 64ab105 adapt compatibility gpsr with hamevo (Aug 5, 2024)
- 4731332 not allow parametric op hamevo (Aug 5, 2024)
- 9c3fddd fix eigenvalues generator by adding eigenvalues to primitive (Aug 5, 2024)
- 264ef97 use eigenvalues in quantum_ops (Aug 5, 2024)
- a35fe04 changing gpsr for using prefactors in spectral gap (Aug 6, 2024)
- ee99843 adding round complex to reduce n_eqs with precision (Aug 6, 2024)
- 6a861c4 make gpsr hamevo its own test (Aug 6, 2024)
- a9d796c modify flatten hamevo for fix circuit flatten (Aug 7, 2024)
- fd78c70 fix supported gpsr hamevo (Aug 7, 2024)
- 04ba5fc rm second derivative etst (Aug 7, 2024)
- 4d4a7d0 lint (Aug 7, 2024)
- 13c1682 Merge remote-tracking branch 'origin/main' into cm/hamevo_diff (Aug 8, 2024)
- a6a8cb4 fix back gpsr (Aug 8, 2024)
- e5fa8bb using rtol for hamevo gpsr (Aug 8, 2024)
- 474d7c6 reduce rtol (Aug 8, 2024)
- d957514 reput atol 0.1 (Aug 8, 2024)
- c9dcf5d if in round fct and adding different test fct with hamevo (Aug 8, 2024)
- 030afd7 rm round for general quantum_ops (Aug 8, 2024)
- f330dbc multiply sg by 2 for hamevo (Aug 8, 2024)
- e866414 rm tols gpsr (Aug 8, 2024)
- d2354d6 reduce tol for second order hamevo gpsr (Aug 8, 2024)
- 9576202 update usage restrictions (Aug 9, 2024)
- fe3f323 rm primitive file from merge conflict (Aug 9, 2024)
docs/differentiation.md (2 changes: 1 addition & 1 deletion)

@@ -13,7 +13,7 @@ The [adjoint differentiation mode](https://arxiv.org/abs/2009.02823) computes fi
The Generalized parameter shift rule (GPSR mode) is an extension of the well-known [parameter shift rule (PSR)](https://arxiv.org/abs/1811.11184) algorithm [to arbitrary quantum operations](https://arxiv.org/abs/2108.01218). Indeed, PSR only works for quantum operations whose generator has a single gap in its eigenvalue spectrum, whereas GPSR extends to the multi-gap case.

!!! warning "Usage restrictions"
At the moment, circuits with one or more Scale or HamiltonianEvolution operations are not supported.
At the moment, circuits containing one or more Scale operations, or HamiltonianEvolution operations with parametric generators, are not supported.
They should be handled differently as GPSR requires operations to be of the form presented below.
Circuits with operations sharing the same parameter name are not supported either, as such cases are handled by Qadence, our other Python package for differentiable digital-analog quantum programs.
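To see what the GPSR rule computes, here is a minimal, self-contained sketch of the single-gap case in plain PyTorch (it does not use pyqtorch's API; all names are illustrative). It builds RX(theta) = exp(-i theta X / 2) by hand, whose generator has a single spectral gap of 2, and checks the shift-rule derivative against autograd:

```python
import torch

X = torch.tensor([[0.0, 1.0], [1.0, 0.0]], dtype=torch.complex128)
I2 = torch.eye(2, dtype=torch.complex128)
Z = torch.tensor([[1.0, 0.0], [0.0, -1.0]], dtype=torch.complex128)

def rx(theta: torch.Tensor) -> torch.Tensor:
    # RX(theta) = exp(-i theta X / 2): generator X/2, spectral gap 2.
    return torch.cos(theta / 2) * I2 - 1j * torch.sin(theta / 2) * X

def expectation(theta: torch.Tensor) -> torch.Tensor:
    # <0| RX(theta)^dag Z RX(theta) |0> = cos(theta)
    psi = rx(theta) @ torch.tensor([1.0, 0.0], dtype=torch.complex128)
    return (psi.conj() @ (Z @ psi)).real

theta = torch.tensor(0.7, dtype=torch.float64, requires_grad=True)
gap = torch.tensor(2.0, dtype=torch.float64)
shift = torch.tensor(torch.pi / 2, dtype=torch.float64)

# Single-gap GPSR: gap * (f(x + s) - f(x - s)) / (4 sin(gap * s / 2)).
with torch.no_grad():
    f_plus = expectation(theta + shift)
    f_minus = expectation(theta - shift)
    dfdx_psr = gap * (f_plus - f_minus) / (4 * torch.sin(gap * shift / 2))

dfdx_ad = torch.autograd.grad(expectation(theta), theta)[0]
assert torch.allclose(dfdx_psr, dfdx_ad)  # both equal -sin(0.7)
```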
pyqtorch/differentiation/gpsr.py (81 changes: 45 additions & 36 deletions)

@@ -8,9 +8,9 @@
from torch.autograd import Function

from pyqtorch.circuit import QuantumCircuit
from pyqtorch.composite import Scale, Sequence
from pyqtorch.composite import Scale
from pyqtorch.embed import Embedding
from pyqtorch.hamiltonians import HamiltonianEvolution, Observable
from pyqtorch.hamiltonians import GeneratorType, HamiltonianEvolution, Observable
from pyqtorch.matrices import DEFAULT_REAL_DTYPE
from pyqtorch.primitives import Parametric
from pyqtorch.utils import param_dict
@@ -123,16 +123,16 @@ def backward(ctx: Any, grad_out: Tensor) -> Tuple[None, ...]:
"""

values = param_dict(ctx.param_names, ctx.saved_tensors)
shift_pi2 = torch.tensor(torch.pi) / 2.0
shift_multi = 0.5

dtype_values = DEFAULT_REAL_DTYPE
device = torch.device("cpu")
try:
dtype_values, device = [(v.dtype, v.device) for v in values.values()][0]
except Exception:
pass

shift_pi2 = torch.tensor(torch.pi, dtype=dtype_values) / 2.0
shift_multi = 0.5

def expectation_fn(values: dict[str, Tensor]) -> Tensor:
"""Use the PSRExpectation for nested grad calls.

@@ -156,7 +156,7 @@ def single_gap_shift(
param_name: str,
values: dict[str, Tensor],
spectral_gap: Tensor,
shift: Tensor = torch.tensor(torch.pi) / 2.0,
shift: Tensor = shift_pi2,
) -> Tensor:
"""Implements single gap PSR rule.

@@ -183,14 +183,14 @@
return (
spectral_gap
* (f_plus - f_minus)
/ (4 * torch.sin(spectral_gap * shift / 2))
/ (4.0 * torch.sin(spectral_gap * shift / 2.0))
)

def multi_gap_shift(
param_name: str,
values: dict[str, Tensor],
spectral_gaps: Tensor,
shift_prefac: float = 0.5,
shift_prefac: float = shift_multi,
) -> Tensor:
"""Implement multi gap PSR rule.

@@ -244,38 +244,46 @@ def multi_gap_shift(
F = torch.stack(F).reshape(n_eqs, -1)
R = torch.linalg.solve(M, F)
dfdx = torch.sum(spectral_gaps * R, dim=0).reshape(batch_size)

return dfdx

def vjp(operation: Parametric, values: dict[str, Tensor]) -> Tensor:
def vjp(
param_name: str, spectral_gap: Tensor, values: dict[str, Tensor]
) -> Tensor:
"""Vector-jacobian product between `grad_out` and jacobians of parameters.

Args:
operation: Parametric operation to compute PSR.
param_name: Parameter name to compute gradient over.
spectral_gap: Spectral gap of the corresponding operation.
values: Dictionary with parameter values.

Returns:
Updated jacobian by PSR.
"""
psr_fn, shift = (
(multi_gap_shift, shift_multi)
if len(operation.spectral_gap) > 1
else (single_gap_shift, shift_pi2)
)
psr_fn = multi_gap_shift if len(spectral_gap) > 1 else single_gap_shift

return grad_out * psr_fn( # type: ignore[operator]
operation.param_name, # type: ignore
param_name, # type: ignore
values,
operation.spectral_gap,
shift,
spectral_gap,
)

grads = {p: None for p in ctx.param_names}

def update_gradient(param_name: str, spectral_gap: Tensor):
if values[param_name].requires_grad:
if grads[param_name] is not None:
grads[param_name] += vjp(param_name, spectral_gap, values)
else:
grads[param_name] = vjp(param_name, spectral_gap, values)

for op in ctx.circuit.flatten():
if isinstance(op, Parametric) and isinstance(op.param_name, str):
if values[op.param_name].requires_grad:
if grads[op.param_name] is not None:
grads[op.param_name] += vjp(op, values)
else:
grads[op.param_name] = vjp(op, values)

if isinstance(op, (Parametric, HamiltonianEvolution)) and isinstance(
op.param_name, str
):
factor = 1.0 if isinstance(op, Parametric) else 2.0
update_gradient(op.param_name, factor * op.spectral_gap)

return (
None,
@@ -300,24 +308,25 @@ def check_support_psr(circuit: QuantumCircuit):
"""

param_names = list()
for op in circuit.operations:
if isinstance(op, Scale) or isinstance(op, HamiltonianEvolution):
for op in circuit.flatten():
if isinstance(op, Scale):
raise ValueError(
f"PSR is not applicable as circuit contains an operation of type: {type(op)}."
)
if isinstance(op, Sequence):
for subop in op.flatten():
if isinstance(subop, Scale) or isinstance(subop, HamiltonianEvolution):
raise ValueError(
f"PSR is not applicable as circuit contains \
an operation of type: {type(subop)}."
)
if isinstance(subop, Parametric):
if isinstance(subop.param_name, str):
param_names.append(subop.param_name)
if isinstance(op, HamiltonianEvolution) and op.generator_type in [
GeneratorType.SYMBOL,
GeneratorType.PARAMETRIC_OPERATION,
]:
raise ValueError(
f"PSR is not applicable as circuit contains an operation of type: {type(op)} \
whose generator type is {op.generator_type}."
)
elif isinstance(op, Parametric):
if isinstance(op.param_name, str):
param_names.append(op.param_name)
elif isinstance(op, HamiltonianEvolution):
if isinstance(op.time, str):
param_names.append(op.time)
else:
continue

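The multi-gap branch above recovers the derivative by solving a small linear system rather than applying a single shift. The standalone sketch below reproduces the same construction on a hand-written trigonometric stand-in for an expectation value (the function, gaps, and shifts are illustrative, not pyqtorch code): F stacks the shifted differences f(x + shift_i) - f(x - shift_i), M has entries 4 sin(shift_i * gap_s / 2), and the derivative is the gap-weighted sum of the solution R of M R = F. Note also that the factor of 2.0 applied to HamiltonianEvolution spectral gaps in `update_gradient` is consistent with its exp(-i t H) convention, versus exp(-i theta G / 2) for Parametric gates.

```python
import torch

# Stand-in "expectation" with two known gaps in the exp(-i x G / 2) convention:
# f(x) = cos(x) + 0.5 sin(3x) oscillates at frequencies {1, 3} = gaps / 2.
gaps = torch.tensor([2.0, 6.0], dtype=torch.float64)

def f(x: torch.Tensor) -> torch.Tensor:
    return torch.cos(x) + 0.5 * torch.sin(3.0 * x)

x = torch.tensor(0.4, dtype=torch.float64)
n_eqs = len(gaps)
# Equidistant shifts, scaled by the same 0.5 prefactor used above.
shifts = 0.5 * torch.linspace(torch.pi / 2, n_eqs * torch.pi / 2, n_eqs, dtype=torch.float64)

# Solve M R = F with M[i, s] = 4 sin(shift_i * gap_s / 2)
# and F[i] = f(x + shift_i) - f(x - shift_i).
M = 4.0 * torch.sin(torch.outer(shifts, gaps) / 2.0)
F = torch.stack([f(x + s) - f(x - s) for s in shifts])
R = torch.linalg.solve(M, F)

dfdx = torch.sum(gaps * R)  # sum_s gap_s * R_s
assert torch.allclose(dfdx, -torch.sin(x) + 1.5 * torch.cos(3.0 * x))
```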
pyqtorch/hamiltonians/evolution.py (35 changes: 35 additions & 0 deletions)

@@ -2,6 +2,7 @@

import logging
from collections import OrderedDict
from functools import cached_property
from logging import getLogger
from typing import Callable, Tuple, Union

@@ -18,6 +19,7 @@
Operator,
State,
StrEnum,
_round_operator,
expand_operator,
is_diag,
)
@@ -223,6 +225,13 @@ def generator(self) -> ModuleList:
"""
return self.operations

def flatten(self) -> ModuleList:
return ModuleList([self])

@property
def param_name(self) -> Tensor | str:
return self.time

def _symbolic_generator(
self,
values: dict,
@@ -273,6 +282,32 @@ def create_hamiltonian(self) -> Callable[[dict], Operator]:
"""
return self._generator_map[self.generator_type]

@cached_property
def eigenvals_generator(self) -> Tensor:
"""Get eigenvalues of the underlying hamiltonian.

Note: Only works for GeneratorType.TENSOR
or GeneratorType.OPERATION.

Returns:
Eigenvalues of the operation.
"""

return self.generator[0].eigenvalues

@cached_property
def spectral_gap(self) -> Tensor:
"""Difference between the moduli of the two largest eigenvalues of the generator.

Returns:
Tensor: Spectral gap value.
"""
spectrum = self.eigenvals_generator
diffs = spectrum - spectrum.T
diffs = _round_operator(diffs)
spectral_gap = torch.unique(torch.abs(torch.tril(diffs)))
return spectral_gap[spectral_gap.nonzero()]

def forward(
self,
state: Tensor,
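The new `spectral_gap` property reduces to a few tensor operations on the generator's spectrum. A standalone sketch of the same computation for the Hamiltonian Z tensor Z, whose eigenvalues of plus/minus 1 produce a single gap of 2 (`_round_operator` is omitted here since the spectrum is exact):

```python
import torch

# Eigenvalues of Z ⊗ Z as a column vector: [-1, -1, 1, 1].
Z = torch.tensor([[1.0, 0.0], [0.0, -1.0]])
spectrum = torch.linalg.eigvalsh(torch.kron(Z, Z)).reshape(-1, 1)

# All pairwise eigenvalue differences; the lower triangle avoids
# counting each unordered pair twice.
diffs = spectrum - spectrum.T
spectral_gap = torch.unique(torch.abs(torch.tril(diffs)))
print(spectral_gap[spectral_gap.nonzero()])  # tensor([[2.]])
```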
pyqtorch/matrices.py (32 changes: 28 additions & 4 deletions)

@@ -67,14 +67,38 @@ def PROJMAT(ket: Tensor, bra: Tensor) -> Tensor:


def parametric_unitary(
theta: torch.Tensor, P: torch.Tensor, I: torch.Tensor, batch_size: int # noqa: E741
theta: torch.Tensor,
P: torch.Tensor,
identity_mat: torch.Tensor,
batch_size: int,
a: float = 0.5, # noqa: E741
) -> torch.Tensor:
cos_t = torch.cos(theta / 2).unsqueeze(0).unsqueeze(1)
"""Compute the exponentiation of a Pauli matrix :math:`P`

The exponentiation is given by:
:math:`exp(-i a \\theta P ) = I cos(r \\theta) - i a P sin(r \\theta) / r`

where :math:`a` is a prefactor
and :math:`r = a * sg / 2`, :math:`sg` corresponding to the spectral gap.

Here, we assume :math:`sg = 2`.

Args:
theta (torch.Tensor): Parameter values.
P (torch.Tensor): Pauli matrix to exponentiate.
identity_mat (torch.Tensor): Identity matrix.
batch_size (int): Batch size of parameters.
a (float): Prefactor.

Returns:
torch.Tensor: The exponentiation of P.
"""
cos_t = torch.cos(theta * a).unsqueeze(0).unsqueeze(1)
cos_t = cos_t.repeat((2, 2, 1))
sin_t = torch.sin(theta / 2).unsqueeze(0).unsqueeze(1)
sin_t = torch.sin(theta * a).unsqueeze(0).unsqueeze(1)
sin_t = sin_t.repeat((2, 2, 1))

batch_imat = I.unsqueeze(2).repeat(1, 1, batch_size)
batch_imat = identity_mat.unsqueeze(2).repeat(1, 1, batch_size)
batch_operation_mat = P.unsqueeze(2).repeat(1, 1, batch_size)

return cos_t * batch_imat - 1j * sin_t * batch_operation_mat
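The closed form used by `parametric_unitary` can be cross-checked against a dense matrix exponential. A quick unbatched sketch with the default prefactor a = 0.5, i.e. the RX convention exp(-i theta X / 2):

```python
import torch

X = torch.tensor([[0.0, 1.0], [1.0, 0.0]], dtype=torch.complex128)
I2 = torch.eye(2, dtype=torch.complex128)
theta = torch.tensor(1.3, dtype=torch.float64)
a = 0.5  # prefactor: U = exp(-i * a * theta * P)

# Closed form for a Pauli P (P @ P == I): I cos(a theta) - i P sin(a theta).
closed_form = torch.cos(a * theta) * I2 - 1j * torch.sin(a * theta) * X

# Reference: dense matrix exponential of -i a theta X.
reference = torch.linalg.matrix_exp(-1j * a * theta * X)
assert torch.allclose(closed_form, reference)
```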
pyqtorch/quantum_operation.py (21 changes: 20 additions & 1 deletion)

@@ -245,6 +245,24 @@ def eigenvals_generator(self) -> Tensor:
"""
return torch.linalg.eigvalsh(self.operation).reshape(-1, 1)

@cached_property
def eigenvalues(
self,
values: dict[str, Tensor] | Tensor = dict(),
embedding: Embedding | None = None,
) -> Tensor:
"""Get eigenvalues of the tensor of QuantumOperation.

Args:
values (dict[str, Tensor], optional): Parameter values. Defaults to dict().
embedding (Embedding | None, optional): Optional embedding. Defaults to None.

Returns:
Eigenvalues of the related tensor.
"""
blockmat = self.tensor(values, embedding)
return torch.linalg.eigvals(blockmat.permute((2, 0, 1))).reshape(-1, 1)

@cached_property
def spectral_gap(self) -> Tensor:
"""Difference between the moduli of the two largest eigenvalues of the generator.
@@ -253,7 +271,8 @@ def spectral_gap(self) -> Tensor:
Tensor: Spectral gap value.
"""
spectrum = self.eigenvals_generator
spectral_gap = torch.unique(torch.abs(torch.tril(spectrum - spectrum.T)))
diffs = spectrum - spectrum.T
spectral_gap = torch.unique(torch.abs(torch.tril(diffs)))
return spectral_gap[spectral_gap.nonzero()]

def _default_operator_function(
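The dense tensor of a pyqtorch QuantumOperation carries a trailing batch dimension, while torch.linalg.eigvals diagonalizes over the last two dimensions, hence the `permute((2, 0, 1))` before the eigensolve in `eigenvalues`. A small illustration with a batch of three diagonal operators (the operator is illustrative):

```python
import torch

# A (2, 2, batch) operator: RZ(theta) = diag(exp(-i theta / 2), exp(i theta / 2)).
thetas = torch.tensor([0.1, 0.5, 1.0], dtype=torch.float64)
rz = torch.zeros(2, 2, 3, dtype=torch.complex128)
rz[0, 0, :] = torch.exp(-1j * thetas / 2)
rz[1, 1, :] = torch.exp(1j * thetas / 2)

# eigvals works on (..., n, n): move the batch axis to the front first.
eigs = torch.linalg.eigvals(rz.permute((2, 0, 1)))  # shape (3, 2)
print(eigs.reshape(-1, 1).shape)  # torch.Size([6, 1]), as in `eigenvalues`
```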
pyqtorch/utils.py (18 changes: 17 additions & 1 deletion)

@@ -27,7 +27,6 @@
GRADCHECK_ATOL = 1e-05
GRADCHECK_sampling_ATOL = 1e-01
PSR_ACCEPTANCE = 1e-05
GPSR_ACCEPTANCE = 1e-05
ABC_ARRAY: NDArray = array(list(ABC))

logger = getLogger(__name__)
@@ -48,6 +47,23 @@ def qubit_support_as_tuple(support: int | tuple[int, ...]) -> tuple[int, ...]:
return qubit_support


def _round_operator(t: Tensor, decimals: int = 4) -> Tensor:
if torch.is_complex(t):

def _round(_t: Tensor) -> Tensor:
r = _t.real.round(decimals=decimals)
i = _t.imag.round(decimals=decimals)
return torch.complex(r, i)

else:

def _round(_t: Tensor) -> Tensor:
return _t.round(decimals=decimals)

fn = torch.vmap(_round)
return fn(t)


def inner_prod(bra: Tensor, ket: Tensor) -> Tensor:
"""
Compute the inner product :math:`\\langle\\bra|\\ket\\rangle`
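The real/imaginary split in `_round_operator` is needed because Tensor.round is not implemented for complex dtypes; GPSR uses this rounding to merge spectral gaps that differ only by numerical noise, keeping the number of equations n_eqs small. A quick sketch of the complex branch (values illustrative):

```python
import torch

t = torch.tensor([2.0 + 3e-5j, 1.99998 + 0.0j])
# t.round(decimals=4) would raise a RuntimeError for complex input,
# so round the real and imaginary parts separately, as _round_operator does.
rounded = torch.complex(t.real.round(decimals=4), t.imag.round(decimals=4))
print(torch.unique(rounded.abs()))  # tensor([2.])
```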