[Refactor][Feature] Train functions refactoring (#593)
mlahariya authored Nov 26, 2024
1 parent 05fac4a commit faa3d5e
Showing 40 changed files with 4,465 additions and 1,597 deletions.
15 changes: 10 additions & 5 deletions docs/api/ml_tools.md
@@ -1,17 +1,22 @@
## ML Tools

This module implements gradient-free and gradient-based training loops for torch Modules and QuantumModel. It also implements the QNN class.
This module implements a `Trainer` class for torch `Modules` and `QuantumModel`. It also implements the `QNN` class and callbacks that can be used with the trainer module.


### ::: qadence.ml_tools.trainer

### ::: qadence.ml_tools.config

### ::: qadence.ml_tools.parameters

### ::: qadence.ml_tools.optimize_step

### ::: qadence.ml_tools.train_grad

### ::: qadence.ml_tools.train_no_grad

### ::: qadence.ml_tools.data

### ::: qadence.ml_tools.models

### ::: qadence.ml_tools.callbacks.callback

### ::: qadence.ml_tools.train_utils.base_trainer

### ::: qadence.ml_tools.callbacks.writer_registry
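For orientation, here is a minimal sketch of the workflow these modules expose after the refactor. It assumes a `model`, an `optimizer`, and a `loss_fn` returning a `(loss, metrics)` tuple are already defined; the tutorial diffs below show concrete versions, and callback registration is documented in the callbacks modules listed above.

```python
from qadence.ml_tools import Trainer, TrainConfig

# Select gradient-based training; pass False to use a gradient-free
# optimizer (e.g. nevergrad) instead.
Trainer.set_use_grad(True)

config = TrainConfig(max_iter=100)
trainer = Trainer(model, optimizer, config, loss_fn)
model, optimizer = trainer.fit()
```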
8 changes: 5 additions & 3 deletions docs/tutorials/advanced_tutorials/custom-models.md
@@ -118,7 +118,8 @@ This model can then be trained with the standard Qadence helper functions.

```python exec="on" source="material-block" result="json" session="custom-model"
from qadence import run
from qadence.ml_tools import train_with_grad, TrainConfig
from qadence.ml_tools import Trainer, TrainConfig
Trainer.set_use_grad(True)

criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-1)
@@ -128,9 +129,10 @@ def loss_fn(model: LearnHadamard, _unused) -> tuple[torch.Tensor, dict]:
return loss, {}

config = TrainConfig(max_iter=2500)
model, optimizer = train_with_grad(
model, None, optimizer, config, loss_fn=loss_fn
trainer = Trainer(
model, optimizer, config, loss_fn
)
model, optimizer = trainer.fit()

wf_target = run(target_circuit)
assert torch.allclose(wf_target, model.wavefunction(), atol=1e-2)
11 changes: 7 additions & 4 deletions docs/tutorials/digital_analog_qc/analog-qubo.md
@@ -56,7 +56,7 @@ ensure the reproducibility of this tutorial.
import torch
from qadence import QuantumModel, QuantumCircuit, Register
from qadence import RydbergDevice, AnalogRX, AnalogRZ, chain
from qadence.ml_tools import train_gradient_free, TrainConfig, num_parameters
from qadence.ml_tools import Trainer, TrainConfig, num_parameters
import nevergrad as ng
import matplotlib.pyplot as plt

@@ -80,12 +80,12 @@ Q = np.array(
]
)

def loss(model: QuantumModel, *args) -> tuple[float, dict]:
def loss(model: QuantumModel, *args) -> tuple[torch.Tensor, dict]:
to_arr_fn = lambda bitstring: np.array(list(bitstring), dtype=int)
cost_fn = lambda arr: arr.T @ Q @ arr
samples = model.sample({}, n_shots=1000)[0] # extract samples
cost_fn = sum(samples[key] * cost_fn(to_arr_fn(key)) for key in samples)
return cost_fn / sum(samples.values()), {} # We return an optional metrics dict
return torch.tensor(cost_fn / sum(samples.values())), {} # We return an optional metrics dict
```

The QAOA algorithm needs a variational quantum circuit with optimizable parameters.
@@ -132,11 +132,14 @@ ML facilities to run gradient-free optimizations using the
[`nevergrad`](https://facebookresearch.github.io/nevergrad/) library.

```python exec="on" source="material-block" session="qubo"
Trainer.set_use_grad(False)

config = TrainConfig(max_iter=100)
optimizer = ng.optimizers.NGOpt(
budget=config.max_iter, parametrization=num_parameters(model)
)
train_gradient_free(model, None, optimizer, config, loss)
trainer = Trainer(model, optimizer, config, loss)
trainer.fit()

optimal_counts = model.sample({}, n_shots=1000)[0]
print(f"optimal_count = {optimal_counts}") # markdown-exec: hide
2 changes: 1 addition & 1 deletion docs/tutorials/qml/dqc_1d.md
@@ -112,7 +112,7 @@ print(html_string(circuit)) # markdown-exec: hide

## Training the model

Now that the model is defined we can proceed with the training. The `QNN` class can be used like any other `torch.nn.Module`. Here we write a simple training loop, but you can also look at the [ml tools tutorial](ml_tools.md) to use the convenience training functions that Qadence provides.
Now that the model is defined we can proceed with the training. The `QNN` class can be used like any other `torch.nn.Module`. Here we write a simple training loop, but you can also look at the [ml tools tutorial](ml_tools/trainer.md) to use the convenience training functions that Qadence provides.

To train the model, we will select a random set of collocation points uniformly distributed within $-1.0< x <1.0$ and compute the loss function for those points.
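A rough sketch of the kind of loop meant here (not the tutorial's exact code): collocation points are resampled uniformly in $(-1, 1)$ at each step and passed to an assumed `loss_fn` that evaluates the differential-equation residual for the QNN `model`.

```python
import torch

# Sketch only: `model` is the QNN defined earlier in the tutorial, and
# `loss_fn(model, x)` is assumed to return the DE residual loss evaluated
# at the collocation points `x`.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(100):
    optimizer.zero_grad()
    x = 2.0 * torch.rand(20, 1) - 1.0  # collocation points, uniform in (-1, 1)
    loss = loss_fn(model, x)
    loss.backward()
    optimizer.step()
```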

2 changes: 1 addition & 1 deletion docs/tutorials/qml/index.md
@@ -6,7 +6,7 @@ differentiation via integration with [PyTorch](https://pytorch.org/) deep learni
Furthermore, Qadence offers a wide range of utilities to help build and research quantum machine learning algorithms, including:

* [a set of constructors](../../content/qml_constructors.md) for circuits commonly used in quantum machine learning such as feature maps and ansatze
* [a set of tools](ml_tools.md) for training and optimizing quantum neural networks and loading classical data into a QML algorithm
* [a set of tools](ml_tools/trainer.md) for training and optimizing quantum neural networks and loading classical data into a QML algorithm

## Some simple examples

Expand Down