Commit

more fix
jpmoutinho committed Oct 18, 2023
1 parent 9b4bd6d commit af4aeee
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion docs/qml/index.md
@@ -30,7 +30,7 @@ fm = qd.kron(RX(i, acos(fp)) for i in range(n_qubits))
# the name assigned to the feature parameter
inputs = {"phi": torch.rand(3)}
samples = qd.sample(fm, values=inputs)
- print(samples[0]) # markdown-exec: hide
+ print(f"samples = {samples[0]}") # markdown-exec: hide
```

The [`constructors.feature_map`][qadence.constructors.feature_map] module provides
2 changes: 1 addition & 1 deletion docs/qml/ml_tools.md
@@ -50,7 +50,7 @@ for i in range(n_epochs):
## Optimization routines

For training QML models, Qadence also offers a few out-of-the-box routines for optimizing differentiable
- models, _e.g._ `QNN`s and `QuantumModel`s containing either *trainable* and/or *non-trainable* parameters
+ models, _e.g._ `QNN`s and `QuantumModel`, containing either *trainable* and/or *non-trainable* parameters
(see [the parameters tutorial](../tutorials/parameters.md) for detailed information about parameter types):

* [`train_with_grad`][qadence.ml_tools.train_with_grad] for gradient-based optimization using PyTorch native optimizers
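
The gradient-based routine referenced above can be illustrated with a minimal, library-free sketch. This is not Qadence's actual `train_with_grad` implementation (whose signature is not shown here); it is a plain gradient-descent loop on a toy quadratic loss, showing the kind of optimization that a PyTorch-optimizer-backed routine performs over trainable parameters.

```python
# Conceptual sketch only: plain gradient descent on a single scalar
# parameter, illustrating what a gradient-based training routine does.
# The function names and loss are hypothetical, not part of Qadence.

def train(theta, grad_fn, lr=0.1, n_epochs=50):
    """Repeatedly step the parameter against its gradient."""
    for _ in range(n_epochs):
        theta = theta - lr * grad_fn(theta)
    return theta

# Toy loss: (theta - 2)^2, whose gradient is 2 * (theta - 2);
# the minimizer is theta = 2.
grad_fn = lambda t: 2.0 * (t - 2.0)

theta_opt = train(0.0, grad_fn)
```

With `lr=0.1` the error shrinks geometrically each step, so `theta_opt` converges close to the minimizer at 2. In the real routine, the hand-written update would be replaced by a PyTorch optimizer stepping over the model's trainable parameters.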
