[Feature] Add semi-local addressing #184

Merged 7 commits on Nov 28, 2023
172 changes: 172 additions & 0 deletions docs/digital_analog_qc/semi-local-addressing.md
@@ -0,0 +1,172 @@


## Physics behind semi-local addressing patterns

Recall that in Qadence the general neutral-atom Hamiltonian for a set of $n$ interacting qubits is given by the expression

$$
\mathcal{H} = \mathcal{H}_{\rm drive} + \mathcal{H}_{\rm int} = \sum_{i=0}^{n-1}\left(\mathcal{H}^\text{d}_{i}(t) + \sum_{j<i}\mathcal{H}^\text{int}_{ij}\right)
$$

as described in detail in the [analog interface basics](analog-basics.md) documentation.

The driving Hamiltonian term can in principle model any local single-qubit rotation by addressing each qubit individually. However, on the current generation of neutral-atom devices full local addressing is not yet achievable. Fortunately, devices called spatial light modulators (SLMs) make it possible to implement semi-local addressing, where the pattern of targeted qubits is fixed during the execution of the quantum circuit. More formally, the addressing pattern appears as an additional term in the neutral-atom Hamiltonian:

$$
\mathcal{H} = \mathcal{H}_{\rm drive} + \mathcal{H}_{\rm int} + \mathcal{H}_{\rm pattern},
$$

where $\mathcal{H}_{\rm pattern}$ is given by

$$
\mathcal{H}_{\rm pattern} = \sum_{i=0}^{n-1}\left(-\Delta w_i^{\rm det} N_i + \Gamma w_i^{\rm drive} X_i\right).
$$

Here $\Delta$ specifies the maximal **negative** detuning that each qubit in the register can be exposed to. The weight $w_i^{\rm det}\in [0, 1]$ determines the actual detuning that the $i$-th qubit experiences, and in this way the detuning pattern is emulated. Similarly, for the amplitude pattern, $\Gamma$ determines the maximal additional **positive** drive that acts on the qubits, with the corresponding weights $w_i^{\rm drive}$ again varying in the interval $[0, 1]$.

Using the detuning and amplitude patterns described above, one can modify the behavior of a selected set of qubits and thus achieve semi-local addressing.
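As a quick numeric illustration of the formula above, the effective field felt by each qubit is simply the maximal value scaled by the per-qubit weight. The following is a plain Python sketch, not part of the Qadence API; the numbers are arbitrary:

```python
# Sketch: effective per-qubit fields produced by an addressing pattern.
# delta and gamma play the roles of the maximal detuning and drive above.
delta = 9.0                # maximal negative detuning
gamma = 6.5                # maximal additional positive drive
w_det = [0.9, 0.5, 1.0]    # detuning weights, one per qubit
w_drive = [0.1, 0.4, 0.8]  # drive weights, one per qubit

eff_det = [-delta * w for w in w_det]      # detuning felt by each qubit
eff_drive = [gamma * w for w in w_drive]   # extra drive on each qubit
print(eff_det)    # [-8.1, -4.5, -9.0]
print(eff_drive)  # [0.65, 2.6, 5.2]
```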

## Creating semi-local addressing patterns

In Qadence, semi-local addressing patterns can be created either by specifying fixed values for the weights of the qubits to be addressed, or by defining the weights as trainable parameters that can be optimized later in a training loop. Semi-local addressing patterns can be used only with analog blocks.

### Fixed weights

With fixed weights, detuning and amplitude addressing patterns can be defined in the following way.

```python exec="on" source="material-block" session="emu"
import torch
from qadence.analog.addressing import AddressingPattern

n_qubits = 3

w_det = {0: 0.9, 1: 0.5, 2: 1.0}
w_amp = {0: 0.1, 1: 0.4, 2: 0.8}
det = 9.0
amp = 6.5
p = AddressingPattern(
    n_qubits=n_qubits,
    det=det,
    amp=amp,
    weights_det=w_det,
    weights_amp=w_amp,
)
```

If only a detuning or only an amplitude pattern is needed, the corresponding weights of the other pattern can be set to 0 for all qubits.
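For instance, a detuning-only pattern can be built by zeroing all amplitude weights. This is a sketch reusing the variables defined in the block above:

```python
# Sketch: detuning-only pattern - all amplitude weights are zero,
# so only the detuning term of the pattern Hamiltonian is active.
p_det_only = AddressingPattern(
    n_qubits=n_qubits,
    det=det,
    amp=amp,
    weights_det=w_det,
    weights_amp={i: 0.0 for i in range(n_qubits)},
)
```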

The created addressing pattern can now be passed to a `QuantumModel` to perform the actual simulation. With the `pyqtorch` backend, the pattern object is passed to the `add_interaction` function, which governs the specifics of the interaction between qubits.

```python exec="on" source="material-block" session="emu"
from qadence import (
    AnalogRX,
    AnalogRY,
    BackendName,
    DiffMode,
    Parameter,
    QuantumCircuit,
    QuantumModel,
    Register,
    chain,
    total_magnetization,
)
from qadence.analog.interaction import add_interaction

# define register and circuit
spacing = 8.0
x = Parameter("x")
block = chain(AnalogRX(3 * x), AnalogRY(0.5 * x))
reg = Register(support=n_qubits, spacing=spacing)
circ = QuantumCircuit(reg, block)

# define parameter values and observable
values = {"x": torch.linspace(0.5, 2 * torch.pi, 50)}
obs = total_magnetization(n_qubits)

# define pyq backend
int_circ = add_interaction(circ, pattern=p)
model = QuantumModel(
    circuit=int_circ, observable=obs, backend=BackendName.PYQTORCH, diff_mode=DiffMode.AD
)

# calculate expectation value of the circuit
expval_pyq = model.expectation(values=values)
```
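To see the effect of the pattern, one can build the same model without it and compare the two expectation curves. The following sketch assumes that `add_interaction` also accepts a circuit without the `pattern` argument:

```python
# Sketch: same circuit with the plain interaction only (no addressing
# pattern), assuming add_interaction can be called without a pattern.
model_plain = QuantumModel(
    circuit=add_interaction(circ),
    observable=obs,
    backend=BackendName.PYQTORCH,
    diff_mode=DiffMode.AD,
)
expval_plain = model_plain.expectation(values=values)
# The curves differ wherever the pattern terms modify the dynamics.
print(torch.allclose(expval_pyq, expval_plain))
```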

When using the Pulser backend, the addressing pattern object is instead passed to the backend configuration.

```python exec="on" source="material-block" session="emu"
from qadence.backends.pulser.config import Configuration

# create Pulser configuration object
conf = Configuration(addressing_pattern=p)

# define pulser backend
model = QuantumModel(
    circuit=circ,
    observable=obs,
    backend=BackendName.PULSER,
    diff_mode=DiffMode.GPSR,
    configuration=conf,
)

# calculate expectation value of the circuit
expval_pulser = model.expectation(values=values)
```
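Since both backends emulate the same pattern Hamiltonian, the two expectation curves should agree up to emulator-level numerical differences. A quick sanity check, added here as a sketch rather than part of the original example:

```python
# Sketch: compare the pyqtorch and Pulser results; the maximal deviation
# is expected to be small since both emulate the same Hamiltonian.
print(torch.max(torch.abs(expval_pyq - expval_pulser)))
```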

### Trainable weights

Since the user can specify both the maximal detuning/amplitude values of the addressing pattern and the corresponding weights, it is natural to make these parameters variational in order to use them in a QML setting. This can be achieved by defining the pattern weights as trainable `Parameter` instances.

```python exec="on" source="material-block" session="emu"
n_qubits = 3
reg = Register(support=n_qubits, spacing=8)
f_value = torch.rand(1)

# define training parameters
w_amp = {i: Parameter(f"w_amp{i}", trainable=True) for i in range(n_qubits)}
w_det = {i: Parameter(f"w_det{i}", trainable=True) for i in range(n_qubits)}
amp = Parameter("amp", trainable=True)
det = Parameter("det", trainable=True)
p = AddressingPattern(
    n_qubits=n_qubits,
    det=det,
    amp=amp,
    weights_det=w_det,  # type: ignore [arg-type]
    weights_amp=w_amp,  # type: ignore [arg-type]
)

# define training circuit
block = chain(AnalogRX(1 + torch.rand(1).item()), AnalogRY(1 + torch.rand(1).item()))
circ = QuantumCircuit(reg, block)
circ = add_interaction(circ, pattern=p)

# define quantum model
obs = total_magnetization(n_qubits)
model = QuantumModel(circuit=circ, observable=obs, backend=BackendName.PYQTORCH)

# prepare for training
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_criterion = torch.nn.MSELoss()
n_epochs = 200
loss_save = []

# train model
for _ in range(n_epochs):
    optimizer.zero_grad()
    out = model.expectation({})
    loss = loss_criterion(f_value, out)
    loss.backward()
    optimizer.step()
    loss_save.append(loss.item())

# get final results
f_value_model = model.expectation({}).detach()

assert torch.isclose(f_value, f_value_model, atol=0.01)
```

Here the expectation value of the circuit is fitted to a predefined random value by varying only the parameters of the addressing pattern.
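To inspect convergence, the loss values recorded in `loss_save` can be plotted. This is an optional sketch that assumes matplotlib is available in the environment:

```python
import matplotlib.pyplot as plt

# Sketch: visualize the training convergence recorded above.
plt.plot(loss_save)
plt.xlabel("epoch")
plt.ylabel("MSE loss")
plt.title("Fitting a target value via addressing-pattern parameters")
plt.show()
```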

!!! note
    Trainable parameters are currently supported only by the `pyqtorch` backend.
163 changes: 163 additions & 0 deletions qadence/analog/addressing.py
@@ -0,0 +1,163 @@
from __future__ import annotations

from dataclasses import dataclass
from warnings import warn

import sympy
import torch
from numpy import pi

from qadence.parameters import Parameter, evaluate
from qadence.types import StrEnum

GLOBAL_MAX_AMPLITUDE = 300
GLOBAL_MAX_DETUNING = 2 * pi * 2000
LOCAL_MAX_AMPLITUDE = 3
LOCAL_MAX_DETUNING = 2 * pi * 20


class WeightConstraint(StrEnum):
    """Supported types of constraints for addressing weights."""

    NORMALIZE = "normalize"
    """Normalize weights so that they sum up to 1."""

    RESTRICT = "restrict"
    """Restrict weight values to interval [0, 1]."""


def sigmoid(x: torch.Tensor, a: float, b: float) -> sympy.Expr:
    return 1.0 / (1.0 + sympy.exp(-a * (x + b)))


@dataclass
class AddressingPattern:
    # number of qubits
    n_qubits: int

    # list of weights for fixed amplitude pattern that cannot be changed during the execution
    weights_amp: dict[int, float | torch.Tensor | Parameter]

    # list of weights for fixed detuning pattern that cannot be changed during the execution
    weights_det: dict[int, float | torch.Tensor | Parameter]

    # amplitude can also be chosen as a variational parameter if needed
    amp: float | torch.Tensor | Parameter = LOCAL_MAX_AMPLITUDE

    # detuning can also be chosen as a variational parameter if needed
    det: float | torch.Tensor | Parameter = LOCAL_MAX_DETUNING

    def _validate_weights(
        self,
        weights: dict[int, float | torch.Tensor | Parameter],
    ) -> None:
        for v in weights.values():
            if not isinstance(v, Parameter):
                if not (v >= 0.0 and v <= 1.0):
                    raise ValueError("Addressing pattern weights must fall in the range [0.0, 1.0]")

    def _constrain_weights(
        self,
        weights: dict[int, float | torch.Tensor | Parameter],
    ) -> dict:
        # augment weight dict if needed
        weights = {
            i: Parameter(0.0)
            if i not in weights
            else (Parameter(weights[i]) if not isinstance(weights[i], Parameter) else weights[i])
            for i in range(self.n_qubits)
        }

        # restrict weights to [0, 1] range
        weights = {
            k: abs(v * (sigmoid(v, 20, 1.0) - sigmoid(v, 20.0, -1.0))) for k, v in weights.items()
        }

        return weights

    def _constrain_max_vals(self) -> None:
        # enforce constraints:
        # 0 <= amp <= GLOBAL_MAX_AMPLITUDE
        # 0 <= abs(det) <= GLOBAL_MAX_DETUNING
        self.amp = abs(
            self.amp
            * (
                sympy.Heaviside(self.amp + GLOBAL_MAX_AMPLITUDE)
                - sympy.Heaviside(self.amp - GLOBAL_MAX_AMPLITUDE)
            )
        )
        self.det = -abs(
            self.det
            * (
                sympy.Heaviside(self.det + GLOBAL_MAX_DETUNING)
                - sympy.Heaviside(self.det - GLOBAL_MAX_DETUNING)
            )
        )

    def _create_local_constraint(self, val: sympy.Expr, weights: dict, max_val: float) -> dict:
        # enforce local constraints:
        # amp * w_amp_i < LOCAL_MAX_AMPLITUDE or
        # abs(det) * w_det_i < LOCAL_MAX_DETUNING
        local_constr = {k: val * v for k, v in weights.items()}
        local_constr = {
            k: sympy.Heaviside(v) - sympy.Heaviside(v - max_val) for k, v in local_constr.items()
        }

        return local_constr

    def _create_global_constraint(
        self, val: sympy.Expr, weights: dict, max_val: float
    ) -> sympy.Expr:
        # enforce global constraints:
        # amp * sum(w_amp_0, w_amp_1, ...) < GLOBAL_MAX_AMPLITUDE or
        # abs(det) * sum(w_det_0, w_det_1, ...) < GLOBAL_MAX_DETUNING
        weighted_vals_global = val * sum([v for v in weights.values()])
        weighted_vals_global = sympy.Heaviside(weighted_vals_global) - sympy.Heaviside(
            weighted_vals_global - max_val
        )

        return weighted_vals_global

    def __post_init__(self) -> None:
        # validate amplitude/detuning weights
        self._validate_weights(self.weights_amp)
        self._validate_weights(self.weights_det)

        # validate maximum global amplitude/detuning values
        if not isinstance(self.amp, Parameter):
            if self.amp > GLOBAL_MAX_AMPLITUDE:
                warn("Maximum absolute value of amplitude is exceeded")
        if not isinstance(self.det, Parameter):
            if abs(self.det) > GLOBAL_MAX_DETUNING:
                warn("Maximum absolute value of detuning is exceeded")

        # constrain amplitude/detuning parameterized weights to [0.0, 1.0] interval
        self.weights_amp = self._constrain_weights(self.weights_amp)
        self.weights_det = self._constrain_weights(self.weights_det)

        # constrain max global amplitude and detuning to strict interval
        self._constrain_max_vals()

        # create additional local and global constraints for amplitude/detuning masks
        self.local_constr_amp = self._create_local_constraint(
            self.amp, self.weights_amp, LOCAL_MAX_AMPLITUDE
        )
        self.local_constr_det = self._create_local_constraint(
            -self.det, self.weights_det, LOCAL_MAX_DETUNING
        )
        self.global_constr_amp = self._create_global_constraint(
            self.amp, self.weights_amp, GLOBAL_MAX_AMPLITUDE
        )
        self.global_constr_det = self._create_global_constraint(
            -self.det, self.weights_det, GLOBAL_MAX_DETUNING
        )

        # validate number of qubits in mask
        if max(list(self.weights_amp.keys())) >= self.n_qubits:
            raise ValueError("Amplitude weight specified for non-existing qubit")
        if max(list(self.weights_det.keys())) >= self.n_qubits:
            raise ValueError("Detuning weight specified for non-existing qubit")

    def evaluate(self, weights: dict, values: dict) -> dict:
        # evaluate weight expressions with actual values
        return {k: evaluate(v, values, as_torch=True).flatten() for k, v in weights.items()}  # type: ignore [union-attr]