[Docs] Feature maps and QML reorganization (#92)
jpmoutinho authored Oct 18, 2023
1 parent 145fd73 commit 8acba9b
Showing 10 changed files with 438 additions and 349 deletions.
2 changes: 1 addition & 1 deletion docs/development/draw.md
@@ -58,7 +58,7 @@ print(html_string(block)) # markdown-exec: hide
```python exec="on" source="material-block" html="1"
from qadence import feature_map, hea, chain

block = chain(feature_map(4, fm_type="tower"), hea(4,2))
block = chain(feature_map(4, reupload_scaling="Tower"), hea(4,2))
from qadence.draw import html_string # markdown-exec: hide
print(html_string(block)) # markdown-exec: hide
```
46 changes: 27 additions & 19 deletions docs/qml/index.md
@@ -1,68 +1,76 @@
Variational algorithms on noisy devices and quantum machine learning (QML) [^1] in particular are
the target applications for Qadence. For this purpose, the
library offers both flexible symbolic expressions for the
quantum circuit parameters via `sympy` (see [here](../tutorials/parameters.md) for more
details) and native automatic differentiation via integration with
[PyTorch](https://pytorch.org/) deep learning framework.
Variational algorithms on noisy devices, and quantum machine learning (QML)[^1] in particular, are among the main
target applications for Qadence. For this purpose, the library offers both flexible symbolic expressions for the
quantum circuit parameters via `sympy` (see [here](../tutorials/parameters.md) for more details) and native automatic
differentiation via integration with the [PyTorch](https://pytorch.org/) deep learning framework.

Furthermore, Qadence offers a wide range of utilities to help build and research quantum machine learning algorithms, including:

* [a set of constructors](qml_constructors.md) for circuits commonly used in quantum machine learning such as feature maps and ansatze
* [a set of tools](ml_tools.md) for training and optimizing quantum neural networks and loading classical data into a QML algorithm

## Some simple examples

Qadence's symbolic parameter interface allows the creation of arbitrary feature maps
to encode classical data into quantum circuits
with an arbitrary non-linear function embedding for the input values:

```python exec="on" source="material-block" html="1" result="json" session="qml"
```python exec="on" source="material-block" result="json" session="qml"
import qadence as qd
from qadence.operations import *
import torch
from sympy import acos

n_qubits = 4

# Example feature map, also directly available with the `feature_map` function
fp = qd.FeatureParameter("phi")
feature_map = qd.kron(RX(i, 2 * acos(fp)) for i in range(n_qubits))
fm = qd.kron(RX(i, acos(fp)) for i in range(n_qubits))

# the key in the dictionary must correspond to
# the name assigned to the feature parameter
inputs = {"phi": torch.rand(3)}
samples = qd.sample(feature_map, values=inputs)
print(samples)
samples = qd.sample(fm, values=inputs)
print(f"samples = {samples[0]}") # markdown-exec: hide
```

The [`constructors.feature_map`][qadence.constructors.feature_map] function provides
a convenient way to build commonly used feature maps where the input parameter
is encoded in the single-qubit gates rotation angle.
is encoded in the rotation angle of single-qubit gates. This function is further
demonstrated in the [QML constructors tutorial](qml_constructors.md).
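
For instance, a minimal sketch of calling the constructor (the `reupload_scaling` option shown here mirrors its usage in the `docs/development/draw.md` change above):

```python
from qadence import feature_map

# a feature map on 4 qubits with "Tower" re-upload scaling
fm_tower = feature_map(4, reupload_scaling="Tower")
print(fm_tower)
```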

Furthermore, Qadence is natively integrated with the PyTorch automatic differentiation engine, so
Qadence quantum models can be used seamlessly in a PyTorch workflow.

Let's create a quantum neural network model using the feature map just defined, a
digital-analog variational ansatz and a simple observable $X(0) \otimes X(1)$. We
use the convenience `QNN` quantum model abstraction.
digital-analog variational ansatz ([also explained here](qml_constructors.md)) and a
simple observable $X(0) \otimes X(1)$. We use the convenience `QNN` quantum model abstraction.

```python exec="on" source="material-block" result="json" session="qml"
ansatz = qd.hea(n_qubits, strategy="sDAQC")
circuit = qd.QuantumCircuit(n_qubits, feature_map, ansatz)
circuit = qd.QuantumCircuit(n_qubits, fm, ansatz)
observable = qd.kron(X(0), X(1))

model = qd.QNN(circuit, observable)

# NOTE: the `QNN` is a torch.nn.Module
assert isinstance(model, torch.nn.Module)
print(isinstance(model, torch.nn.Module)) # markdown-exec: hide
```

Differentiation works the same way as with any other PyTorch module:

```python exec="on" source="material-block" html="1" result="json" session="qml"
```python exec="on" source="material-block" result="json" session="qml"
values = {"phi": torch.rand(10, requires_grad=True)}

# the forward pass of the quantum model returns the expectation
# value of the input observable
out = model(values)
print(f"Quantum model output: {out}")
print(f"Quantum model output: \n{out}\n") # markdown-exec: hide

# you can compute the gradient with respect to inputs using
# the PyTorch autograd differentiation engine
dout = torch.autograd.grad(out, values["phi"], torch.ones_like(out), create_graph=True)[0]
print(f"First-order derivative w.r.t. the feature parameter: {dout}")
print(f"First-order derivative w.r.t. the feature parameter: \n{dout}")

# you can also call directly a backward pass to compute derivatives with respect
# to the variational parameters and use it for implementing variational
@@ -74,12 +82,12 @@ To run QML on real devices, Qadence offers generalized parameter shift rules (GPSR)
for arbitrary quantum operations which can be selected when constructing the
`QNN` model:

```python exec="on" source="material-block" html="1" result="json" session="qml"
```python exec="on" source="material-block" result="json" session="qml"
model = qd.QNN(circuit, observable, diff_mode="gpsr")
out = model(values)

dout = torch.autograd.grad(out, values["phi"], torch.ones_like(out), create_graph=True)[0]
print(f"First-order derivative w.r.t. the feature parameter: {dout}")
print(f"First-order derivative w.r.t. the feature parameter: \n{dout}")
```

See [here](../advanced_tutorials/differentiability.md) for more details on how the parameter
213 changes: 213 additions & 0 deletions docs/qml/ml_tools.md
@@ -0,0 +1,213 @@
## Dataloaders

When using Qadence, you can supply classical data to a quantum machine learning
algorithm by using a standard PyTorch `DataLoader` instance. Qadence also provides
the `DictDataLoader` convenience class, which allows building
dictionaries of `DataLoader` instances and iterating over them easily.

```python exec="on" source="material-block"
import torch
from torch.utils.data import DataLoader, TensorDataset
from qadence.ml_tools import DictDataLoader

def dataloader() -> DataLoader:
    batch_size = 5
    x = torch.linspace(0, 1, batch_size).reshape(-1, 1)
    y = torch.sin(x)

    dataset = TensorDataset(x, y)
    return DataLoader(dataset, batch_size=batch_size)


def dictdataloader() -> DictDataLoader:
    batch_size = 5

    keys = ["y1", "y2"]
    dls = {}
    for k in keys:
        x = torch.rand(batch_size, 1)
        y = torch.sin(x)
        dataset = TensorDataset(x, y)
        dataloader = DataLoader(dataset, batch_size=batch_size)
        dls[k] = dataloader

    return DictDataLoader(dls)

n_epochs = 2

# iterate standard DataLoader
dl = dataloader()
for i in range(n_epochs):
    data = next(iter(dl))

# iterate DictDataLoader
ddl = dictdataloader()
for i in range(n_epochs):
    data = next(iter(ddl))

```

## Optimization routines

For training QML models, Qadence also offers a few out-of-the-box routines for optimizing differentiable
models, _e.g._ `QNN`s and `QuantumModel`s, containing *trainable* and/or *non-trainable* parameters
(see [the parameters tutorial](../tutorials/parameters.md) for detailed information about parameter types):

* [`train_with_grad`][qadence.ml_tools.train_with_grad] for gradient-based optimization using PyTorch native optimizers
* [`train_gradient_free`][qadence.ml_tools.train_gradient_free] for gradient-free optimization using
the [Nevergrad](https://facebookresearch.github.io/nevergrad/) library.

These routines perform training, log/print loss metrics and store intermediate model checkpoints. In the following, we
use `train_with_grad` as an example, but the code can be used directly with the gradient-free routine (a short sketch of the latter is given after the complete example below).

Like every other training routine commonly used in machine learning, it requires
a `model`, `data` and an `optimizer` as input arguments.
In addition, it requires a `loss_fn` and a `TrainConfig`.
The `loss_fn` must be a function which expects both a model and data and returns a tuple `(loss, metrics)`, where `metrics` is a customizable dictionary of scalars.

```python exec="on" source="material-block"
import torch
from itertools import count
cnt = count()
criterion = torch.nn.MSELoss()

def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
    next(cnt)
    x, y = data[0], data[1]
    out = model(x)
    loss = criterion(out, y)
    return loss, {}

```

The [`TrainConfig`][qadence.ml_tools.config.TrainConfig] tells `train_with_grad` which `batch_size` to use,
how many epochs to train for, at which intervals to print/log metrics and how often to store intermediate checkpoints.

```python exec="on" source="material-block"
from qadence.ml_tools import TrainConfig

batch_size = 5
n_epochs = 100

config = TrainConfig(
    folder="some_path/",
    max_iter=n_epochs,
    checkpoint_every=100,
    write_every=100,
    batch_size=batch_size,
)
```

Let's see it in action with a simple example.

### Fitting a function with a QNN using `ml_tools`

Let's look at a complete example of how to use `train_with_grad` now.

```python exec="on" source="material-block" html="1"
from pathlib import Path
import torch
from itertools import count
from qadence.constructors import hamiltonian_factory, hea, feature_map
from qadence import chain, Parameter, QuantumCircuit, Z
from qadence.models import QNN
from qadence.ml_tools import train_with_grad, TrainConfig
import matplotlib.pyplot as plt

n_qubits = 2
fm = feature_map(n_qubits)
ansatz = hea(n_qubits=n_qubits, depth=3)
observable = hamiltonian_factory(n_qubits, detuning=Z)
circuit = QuantumCircuit(n_qubits, fm, ansatz)

model = QNN(circuit, observable, backend="pyqtorch", diff_mode="ad")
batch_size = 1
input_values = {"phi": torch.rand(batch_size, requires_grad=True)}
pred = model(input_values)

cnt = count()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
    next(cnt)
    x, y = data[0], data[1]
    out = model(x)
    loss = criterion(out, y)
    return loss, {}

tmp_path = Path("/tmp")

n_epochs = 50

config = TrainConfig(
    folder=tmp_path,
    max_iter=n_epochs,
    checkpoint_every=100,
    write_every=100,
    batch_size=batch_size,
)

batch_size = 25

x = torch.linspace(0, 1, batch_size).reshape(-1, 1)
y = torch.sin(x)

train_with_grad(model, (x, y), optimizer, config, loss_fn=loss_fn)

plt.clf() # markdown-exec: hide
plt.plot(x.numpy(), y.numpy())
plt.plot(x.numpy(), model(x).detach().numpy())
from docs import docsutils # markdown-exec: hide
print(docsutils.fig_to_html(plt.gcf())) # markdown-exec: hide
```
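
As mentioned above, the same model, data and loss function can be reused with the gradient-free routine. Below is a minimal sketch, assuming `train_gradient_free` accepts a [Nevergrad](https://facebookresearch.github.io/nevergrad/) optimizer in place of the PyTorch one; the parametrization shown is an illustrative assumption, so check the `train_gradient_free` API reference for the exact expected setup.

```python
import nevergrad as ng
from qadence.ml_tools import train_gradient_free

# assumption: one optimization variable per variational parameter tensor of the model
n_vparams = len(list(model.parameters()))
ng_optimizer = ng.optimizers.NGOpt(budget=n_epochs, parametrization=n_vparams)

train_gradient_free(model, (x, y), ng_optimizer, config, loss_fn=loss_fn)
```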

For users who want to use the low-level API of `qadence`, here is the example from above
written without `train_with_grad`.

### Fitting a function - Low-level API

```python exec="on" source="material-block"
from pathlib import Path
import torch
from itertools import count
from qadence.constructors import hamiltonian_factory, hea, feature_map
from qadence import chain, Parameter, QuantumCircuit, Z
from qadence.models import QNN
from qadence.ml_tools import train_with_grad, TrainConfig

n_qubits = 2
fm = feature_map(n_qubits)
ansatz = hea(n_qubits=n_qubits, depth=3)
observable = hamiltonian_factory(n_qubits, detuning=Z)
circuit = QuantumCircuit(n_qubits, fm, ansatz)

model = QNN(circuit, observable, backend="pyqtorch", diff_mode="ad")
batch_size = 1
input_values = {"phi": torch.rand(batch_size, requires_grad=True)}
pred = model(input_values)

criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
n_epochs = 50
cnt = count()

tmp_path = Path("/tmp")

config = TrainConfig(
    folder=tmp_path,
    max_iter=n_epochs,
    checkpoint_every=100,
    write_every=100,
    batch_size=batch_size,
)

batch_size = 25  # as in the example above, train on a batch of 25 points

x = torch.linspace(0, 1, batch_size).reshape(-1, 1)
y = torch.sin(x)

for i in range(n_epochs):
    optimizer.zero_grad()  # reset gradients accumulated in the previous iteration
    out = model(x)
    loss = criterion(out, y)
    loss.backward()
    optimizer.step()
```
2 changes: 1 addition & 1 deletion docs/qml/qaoa.md
@@ -140,7 +140,7 @@ for i in range(n_epochs):
```

Qadence offers some convenience functions to implement this training loop with advanced
logging and metrics track features. You can refer to [this](../qml/qml_tools.md) for more details.
logging and metrics tracking features. You can refer to [this tutorial](../qml/ml_tools.md) for more details.

## Results

2 changes: 1 addition & 1 deletion docs/qml/qcl.md
@@ -114,7 +114,7 @@ assert loss.item() < 1e-3
```

Qadence offers some convenience functions to implement this training loop with advanced
logging and metrics track features. You can refer to [this](../qml/qml_tools.md) for more details.
logging and metrics tracking features. You can refer to [this tutorial](../qml/ml_tools.md) for more details.

The quantum model is now trained on the training data points. To determine the quality of the results,
one can check to see how well it fits the function on the test set.