
Commit

jpmoutinho committed Oct 18, 2023
1 parent a258f6e commit df22967
Showing 4 changed files with 12 additions and 13 deletions.
9 changes: 5 additions & 4 deletions docs/qml/index.md
@@ -22,14 +22,15 @@ from sympy import acos

n_qubits = 4

# Example feature map, also directly available with the `feature_map` function
fp = qd.FeatureParameter("phi")
-feature_map = qd.kron(RX(i, 2 * acos(fp)) for i in range(n_qubits))
+fm = qd.kron(RX(i, acos(fp)) for i in range(n_qubits))

# the key in the dictionary must correspond to
# the name assigned to the feature parameter
inputs = {"phi": torch.rand(3)}
-samples = qd.sample(feature_map, values=inputs)
-print(samples[0])
+samples = qd.sample(fm, values=inputs)
+print(samples[0]) # markdown-exec: hide
```

The [`constructors.feature_map`][qadence.constructors.feature_map] module provides
@@ -64,7 +65,7 @@ values = {"phi": torch.rand(10, requires_grad=True)}
# the forward pass of the quantum model returns the expectation
# value of the input observable
out = model(values)
print(f"Quantum model output: \n{out}\n")
print(f"Quantum model output: \n{out}\n") # markdown-exec: hide

# you can compute the gradient with respect to inputs using
# PyTorch autograd differentiation engine
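# (the rest of this hunk is collapsed; what follows is a hedged sketch of the
#  gradient computation described in the comments above, assuming the `model`,
#  `values` and `out` objects defined in the lines shown above)
dout_dphi = torch.autograd.grad(
    outputs=out,
    inputs=values["phi"],
    grad_outputs=torch.ones_like(out),  # needed because `out` is not a scalar
)[0]
print(dout_dphi.shape)  # one gradient entry per value of "phi"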
4 changes: 2 additions & 2 deletions docs/qml/ml_tools.md
@@ -50,8 +50,8 @@ for i in range(n_epochs):
## Optimization routines

For training QML models, Qadence also offers a few out-of-the-box routines for optimizing differentiable
-models like `QNN`s and `QuantumModel`s containing either *trainable* and/or *non-trainable* parameters
-(you can refer to [the parameters tutorial](../tutorials/parameters.md) for a refresh about different parameter types):
+models, _e.g._ `QNN`s and `QuantumModel`s containing either *trainable* and/or *non-trainable* parameters
+(see [the parameters tutorial](../tutorials/parameters.md) for a refresh about different parameter types):

* [`train_with_grad`][qadence.ml_tools.train_with_grad] for gradient-based optimization using PyTorch native optimizers
* [`train_gradient_free`][qadence.ml_tools.train_gradient_free] for gradient-free optimization using
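For reference, a minimal usage sketch of the gradient-based routine is shown below. The exact interface is not visible in this hunk, so the `train_with_grad(model, data, optimizer, config, loss_fn)` call signature, the `TrainConfig(max_iter=...)` option and the `loss_fn(model, data) -> (loss, metrics)` return convention are assumptions to be checked against the `qadence.ml_tools` API reference:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from qadence.ml_tools import TrainConfig, train_with_grad

# stand-in model and dataset, just to keep the sketch self-contained;
# in practice `model` would be a QNN or QuantumModel
model = torch.nn.Linear(1, 1)
x = torch.linspace(0, 1, 32).reshape(-1, 1)
data = DataLoader(TensorDataset(x, torch.sin(x)), batch_size=8)

criterion = torch.nn.MSELoss()

def loss_fn(model, data):
    x_batch, y_batch = data
    loss = criterion(model(x_batch), y_batch)
    return loss, {}  # (loss, metrics) return convention assumed here

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
config = TrainConfig(max_iter=100)
train_with_grad(model, data, optimizer, config, loss_fn=loss_fn)
```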
2 changes: 1 addition & 1 deletion docs/qml/qml_constructors.md
@@ -145,7 +145,7 @@ from qadence.draw import html_string # markdown-exec: hide
print(html_string(ansatz, size="8,4")) # markdown-exec: hide
```

-Having a truly *hardware-efficient* ansatz means that the entangling operation can be chosen according to each device's native interactions. Besides digital operations, in Qadence it is also possible to build digital-analog HEAs with the entanglement produced by the natural evolution of a set of interacting qubits, as natively implemented in neutral atom devices. As with other digital-analog functions, this can be controlled with the `strategy` argument which can be chosen from the [`Strategy`](../qadence/types.md) enum type. Currently, only `Strategy.DIGITAL` and `Strategy.SDAQC` are available. By default, calling `strategy = Strategy.SDAQC` will use a global entangling Hamiltonian with Ising-like NN interactions and constant interaction strength,
+Having a truly *hardware-efficient* ansatz means that the entangling operation can be chosen according to each device's native interactions. Besides digital operations, in Qadence it is also possible to build digital-analog HEAs with the entanglement produced by the natural evolution of a set of interacting qubits, as natively implemented in neutral atom devices. As with other digital-analog functions, this can be controlled with the `strategy` argument which can be chosen from the [`Strategy`](../qadence/types.md) enum type. Currently, only `Strategy.DIGITAL` and `Strategy.SDAQC` are available. By default, calling `strategy = Strategy.SDAQC` will use a global entangling Hamiltonian with Ising-like $NN$ interactions and constant interaction strength,

```python exec="on" source="material-block" html="1" session="ansatz"
from qadence import Strategy
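# (the rest of this block is collapsed in the hunk; below is a hedged sketch of
#  selecting the digital-analog strategy described above; passing
#  `strategy=Strategy.SDAQC` to the `hea` constructor is an assumption here)
from qadence import hea

ansatz_sdaqc = hea(n_qubits=4, depth=2, strategy=Strategy.SDAQC)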
10 changes: 4 additions & 6 deletions mkdocs.yml
@@ -28,12 +28,10 @@ nav:

- Variational quantum algorithms:
- qml/index.md
-  - Tools for quantum machine learning:
-    - Constructors: qml/qml_constructors.md
-    - Training tools: qml/ml_tools.md
-  - Example applications:
-    - Quantum circuit learning: qml/qcl.md
-    - Solving MaxCut with QAOA: qml/qaoa.md
+  - Constructors: qml/qml_constructors.md
+  - Training tools: qml/ml_tools.md
+  - Quantum circuit learning: qml/qcl.md
+  - Solving MaxCut with QAOA: qml/qaoa.md

- Advanced Tutorials:
- Quantum circuits differentiation: advanced_tutorials/differentiability.md
