
Update SPSA demo to count circuit executions manually. #706

Merged
merged 25 commits into from Feb 24, 2023
Changes from 22 commits
Commits (25 commits)
a066b3f
new version of SPSA demo
dwierichs Feb 2, 2023
cd4db50
Update demonstrations/spsa.py
dwierichs Feb 3, 2023
71dc81c
simplify code, update outputs and figures
dwierichs Feb 3, 2023
4ef4f0c
Apply suggestions from code review
dwierichs Feb 6, 2023
0ecaa7d
review and tweaking
dwierichs Feb 6, 2023
1d61b50
Merge branch 'master' into update-spsa
dwierichs Feb 6, 2023
78574e0
Merge branch 'update-spsa' of github.com:PennyLaneAI/qml into update-…
dwierichs Feb 6, 2023
7b33d6d
Merge branch 'master' into update-spsa
rmoyard Feb 6, 2023
7d8cd52
make spsa an executing tutorial
dwierichs Feb 7, 2023
46020a3
Merge branch 'update-spsa' of github.com:PennyLaneAI/qml into update-…
dwierichs Feb 7, 2023
9708709
move file
dwierichs Feb 7, 2023
f59cbd9
Merge branch 'master' into update-spsa
rmoyard Feb 8, 2023
396f3f8
fix links
dwierichs Feb 9, 2023
83fcabe
Merge branch 'update-spsa' of github.com:PennyLaneAI/qml into update-…
dwierichs Feb 9, 2023
37d1a38
fix some more links
dwierichs Feb 9, 2023
1a3dd4c
author bio
dwierichs Feb 9, 2023
9975b86
line breaks, undo link change, fixing them actually
dwierichs Feb 9, 2023
33287fb
links...
dwierichs Feb 9, 2023
14da5c7
links again... bio
dwierichs Feb 9, 2023
72e7155
merge
dwierichs Feb 23, 2023
8aedf8f
update to count executions manually
dwierichs Feb 23, 2023
2414324
reword device executions -> circuit executions. avoid recreating devi…
dwierichs Feb 23, 2023
5e82b99
Merge branch 'dev' into update-spsa
rmoyard Feb 23, 2023
ec06345
Merge branch 'dev' into update-spsa
rmoyard Feb 23, 2023
f9d47ca
print executions
dwierichs Feb 24, 2023
107 changes: 43 additions & 64 deletions demonstrations/tutorial_spsa.py
@@ -14,7 +14,7 @@
tutorial_vqe_qng Accelerating VQEs with quantum natural gradient
qnspsa Quantum natural SPSA optimizer

*Authors: Antal Szava & David Wierichs — Posted: 19 March 2021. Last updated: 10 February 2023.*
*Authors: Antal Szava & David Wierichs — Posted: 19 March 2021. Last updated: 23 February 2023.*

In this tutorial, we investigate using a stochastic optimizer called
the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm to optimize quantum
@@ -29,7 +29,7 @@
2. The variational quantum eigensolver on a simulated hardware device.

Throughout the demo, we show results obtained with SPSA and with gradient
descent and also compare the number of device executions required to complete
descent and also compare the number of executed circuits required to complete
each optimization.

Background
@@ -147,7 +147,7 @@

.. note::

Just as with other PennyLane device, the number of samples taken for a device
Just as with any other PennyLane device, the number of samples taken for a circuit
execution can be specified using the ``shots`` keyword argument of the
device.

@@ -191,52 +191,38 @@ def circuit(param):

##############################################################################
# We will execute a few optimizations in this demo, so let's prepare a convenience
# function that runs an optimizer instance and records the cost values and the
# number of device executions along the way. The latter will be an interesting
# quantity to evaluate the optimization cost on hardware!
# function that runs an optimizer instance and records the cost values
# along the way. Together with the number of executed circuits, these will be
# interesting quantities for evaluating the optimization cost on hardware!


def run_optimizer(optimizer, cost_function, init_param, num_steps, interval):
def run_optimizer(opt, cost_function, init_param, num_steps, interval):
# Copy the initial parameters to make sure they are never overwritten
param = init_param.copy()

# Obtain the device used in the cost function
dev = cost_function.device

# Initialize the memory for cost values and device executions during the optimization
# Initialize the memory for cost values during the optimization
cost_history = []
exec_history = [0]
# Monitor the initial cost value
cost_history.append(cost_function(param))
execs_per_cost_eval = dev.num_executions

print(
f"\nRunning the {optimizer.__class__.__name__} optimizer for {num_steps} iterations."
)
print(f"\nRunning the {opt.__class__.__name__} optimizer for {num_steps} iterations.")
for step in range(num_steps):
# Perform an update step
param = optimizer.step(cost_function, param)

# Monitor the device executions, deducting the executions for cost monitoring
exec_history.append(dev.num_executions - (step + 1) * execs_per_cost_eval)
param = opt.step(cost_function, param)

# Monitor the cost value
cost_history.append(cost_function(param))

# Print out the status of the optimization
if step % interval == 0:
print(
f"Iteration = {step:3d}, "
f"Device executions = {exec_history[step]:4d}, "
f"Cost = {cost_history[step]}"
)
print(f"Iteration = {step:3d}, Cost = {cost_history[step]}")

print(
f"Iteration = {num_steps:3d}, Device executions = {exec_history[-1]:4d}, "
f"Cost = {cost_history[-1]}"
)
print(f"Iteration = {num_steps:3d}, Cost = {cost_history[-1]}")

return cost_history, exec_history
return cost_history
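With the execution tracking removed from `run_optimizer`, the counts are reconstructed afterwards from the known per-step circuit cost of each optimizer. A minimal sketch of that bookkeeping, using hypothetical numbers rather than the demo's settings:

```python
import numpy as np

# If an optimizer performs a fixed number of circuit evaluations per step,
# the cumulative execution count is known in advance and need not be
# tracked on the device.
num_steps = 10           # hypothetical step count
evals_per_step = 2       # e.g. SPSA's two perturbed cost evaluations
exec_history = np.arange(num_steps + 1) * evals_per_step
print(exec_history[-1])  # → 20
```

The first entry is 0, matching the initial cost recording made before any optimization step.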


##############################################################################
@@ -269,40 +255,36 @@ def run_optimizer(optimizer, cost_function, init_param, num_steps, interval):

num_steps_spsa = 200
opt = qml.SPSAOptimizer(maxiter=num_steps_spsa, c=0.15, a=0.2)
cost_history_spsa, exec_history_spsa = run_optimizer(
opt, cost_function, init_param, num_steps_spsa, 20
)
cost_history_spsa = run_optimizer(opt, cost_function, init_param, num_steps_spsa, 20)
# We have 201 energy recordings, including the initial value, and spent 2 circuit
# evaluations in each step
exec_history_spsa = np.arange(num_steps_spsa + 1) * 2

##############################################################################
# Now let's perform the same optimization using gradient descent. We set the
# step size according to a favourable value found after grid search for fast
# convergence. Note that we also create a new device in order to reset the
# execution count to 0. With the new device, we recreate the cost function as well.

# Create a new device and a qnode as cost function
device = qml.device("qiskit.aer", wires=num_wires, shots=1000)
cost_function = qml.QNode(circuit, device)
# convergence.

num_steps_grad = 15
opt = qml.GradientDescentOptimizer(stepsize=0.3)
cost_history_grad, exec_history_grad = run_optimizer(
opt, cost_function, init_param, num_steps_grad, 3
)
cost_history_grad = run_optimizer(opt, cost_function, init_param, num_steps_grad, 3)
# We have 16 energy recordings, including the initial value, and spent 2 circuit
# evaluations *per parameter* in each step
exec_history_grad = np.arange(num_steps_grad + 1) * (2 * np.prod(param_shape))
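The contrast between the two counting rules above can be sketched directly; the parameter shape here is hypothetical, not the one defined in the demo:

```python
import numpy as np

param_shape = (2, 5, 3)  # hypothetical; the demo defines its own param_shape
num_params = int(np.prod(param_shape))

# SPSA: 2 evaluations per step, independent of the number of parameters.
spsa_evals_per_step = 2
# Parameter-shift gradient descent: 2 evaluations per parameter per step.
grad_evals_per_step = 2 * num_params

print(spsa_evals_per_step, grad_evals_per_step)  # → 2 60
```

The per-step gap grows linearly with the parameter count, which is what makes SPSA attractive for larger circuits.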

##############################################################################
# SPSA and gradient descent comparison
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# At this point, nothing else remains but to check which of these approaches did
# better!
# At this point, nothing else remains but to check which of these approaches did better!
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 6))

plt.plot(exec_history_grad, cost_history_grad, label="Gradient descent")
plt.plot(exec_history_spsa, cost_history_spsa, label="SPSA")

plt.xlabel("Device executions", fontsize=14)
plt.xlabel("Circuit executions", fontsize=14)
plt.ylabel("Cost function value", fontsize=14)
plt.grid()

@@ -316,15 +298,15 @@ def run_optimizer(optimizer, cost_function, init_param, num_steps, interval):
# compared to gradient descent!
#
# Let's take a deeper dive to see how much better it actually is by computing
# the ratio of required device executions to reach an absolute accuracy of 0.01.
# the ratio of required circuit executions to reach an absolute accuracy of 0.01.
#
grad_execs_to_prec = exec_history_grad[np.where(np.array(cost_history_grad) < -0.99)[0][0]]
spsa_execs_to_prec = exec_history_spsa[np.where(np.array(cost_history_spsa) < -0.99)[0][0]]
print(f"Device execution ratio: {np.round(grad_execs_to_prec/spsa_execs_to_prec, 3)}.")
print(f"Circuit execution ratio: {np.round(grad_execs_to_prec/spsa_execs_to_prec, 3)}.")
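The accuracy-threshold lookup above works as follows, sketched with made-up cost recordings:

```python
import numpy as np

cost_history = [0.0, -0.6, -0.95, -0.992, -0.999]  # hypothetical recordings
exec_history = np.arange(len(cost_history)) * 2    # 2 executions per step
# Index of the first recording below the target accuracy of 0.01
first_hit = np.where(np.array(cost_history) < -0.99)[0][0]
print(exec_history[first_hit])  # → 6
```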

##############################################################################
# This means that SPSA found the minimum up to an absolute accuracy of 0.01 while
# using ten times fewer device executions than gradient descent! That's a huge
# using several times fewer circuit executions than gradient descent! That's an important
# saving, especially when running the algorithm on actual quantum hardware.
#
# SPSA and the variational quantum eigensolver
@@ -398,15 +380,17 @@ def circuit(param):
# This random seed was used in the original VQE demo and is known to allow the
# gradient descent algorithm to converge to the global minimum.
np.random.seed(0)
init_param = np.random.normal(0, np.pi, (2, num_qubits, 3), requires_grad=True)
param_shape = (2, num_qubits, 3)
init_param = np.random.normal(0, np.pi, param_shape, requires_grad=True)

# Initialize the optimizer - optimal step size was found through a grid search
opt = qml.GradientDescentOptimizer(stepsize=2.2)

# Run the optimization
cost_history_grad, exec_history_grad = run_optimizer(
opt, cost_function, init_param, num_steps_grad, 3
)
cost_history_grad = run_optimizer(opt, cost_function, init_param, num_steps_grad, 3)
# We have 16 energy recordings, including the initial value, and spent 2 * 15 circuit
# evaluations *per parameter* in each step, as there are 15 Hamiltonian terms
exec_history_grad = np.arange(num_steps_grad + 1) * (2 * np.prod(param_shape) * 15)

final_energy = cost_history_grad[-1]
print(f"\nFinal estimated value of the ground state energy = {final_energy:.8f} Ha")
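The per-step count in the snippet above multiplies three factors; spelled out numerically, assuming num_qubits = 4 for the H2 example (the value is set earlier in the demo):

```python
import numpy as np

num_qubits = 4                    # assumed qubit count for the H2 example
param_shape = (2, num_qubits, 3)
num_terms = 15                    # Hamiltonian terms
num_steps_grad = 15

# Parameter shift: 2 evaluations per parameter, repeated for every term
evals_per_step = 2 * int(np.prod(param_shape)) * num_terms
exec_history_grad = np.arange(num_steps_grad + 1) * evals_per_step
print(evals_per_step, exec_history_grad[-1])  # → 720 10800
```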
@@ -426,7 +410,7 @@ def circuit(param):

plt.xticks(fontsize=13)
plt.yticks(fontsize=13)
plt.xlabel("Device executions", fontsize=14)
plt.xlabel("Circuit executions", fontsize=14)
plt.ylabel("Energy (Ha)", fontsize=14)
plt.grid()

@@ -448,24 +432,19 @@ def circuit(param):
# ^^^^^^^^^^^^^
#
# Now let's perform the same experiment using SPSA for the VQE optimization.
# SPSA should use only 2 device executions per term in the expectation value.
# SPSA should use only 2 circuit executions per term in the expectation value.
# Since there are 15 terms and we choose 160 iterations with two evaluations for
each gradient estimate, we expect 4800 total circuit
# executions. Again we create a new device and cost function in order to reset
# the number of executions.

noisy_device = qml.device(
"qiskit.aer", wires=num_qubits, shots=1000, noise_model=noise_model
)
cost_function = qml.QNode(circuit, noisy_device)
# executions.

num_steps_spsa = 160
opt = qml.SPSAOptimizer(maxiter=num_steps_spsa, c=0.3, a=1.5)

# Run the SPSA algorithm
cost_history_spsa, exec_history_spsa = run_optimizer(
opt, cost_function, init_param, num_steps_spsa, 20
)
cost_history_spsa = run_optimizer(opt, cost_function, init_param, num_steps_spsa, 20)
# We have 161 energy recordings, including the initial value, and spent 2 * 15 circuit
# evaluations in each step, as there are 15 terms in the Hamiltonian
exec_history_spsa = np.arange(num_steps_spsa + 1) * (2 * 15)
final_energy = cost_history_spsa[-1]

print(f"\nFinal estimated value of the ground state energy = {final_energy:.8f} Ha")
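The resulting count can be sanity-checked against the 4800 total executions estimated earlier in the text:

```python
import numpy as np

num_steps_spsa = 160
num_terms = 15  # Hamiltonian terms in the demo
exec_history_spsa = np.arange(num_steps_spsa + 1) * (2 * num_terms)
print(exec_history_spsa[-1])  # → 4800
```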
@@ -476,15 +455,15 @@ def circuit(param):
##############################################################################
# The SPSA optimization seems to have found a similar energy value.
# We again take a look at how the optimization curves compare, in particular
# with respect to the device executions spent on the task.
# with respect to the circuit executions spent on the task.

plt.figure(figsize=(10, 6))

plt.plot(exec_history_grad, cost_history_grad, label="Gradient descent")
plt.plot(exec_history_spsa, cost_history_spsa, label="SPSA")

plt.title("$H_2$ energy from VQE using gradient descent vs. SPSA", fontsize=16)
plt.xlabel("Device executions", fontsize=14)
plt.xlabel("Circuit executions", fontsize=14)
plt.ylabel("Energy (Ha)", fontsize=14)
plt.grid()

@@ -504,7 +483,7 @@ def circuit(param):
# ----------
#
# SPSA is a useful optimization technique that may be particularly beneficial on
# near-term quantum hardware. It uses significantly fewer device executions to achieve
# near-term quantum hardware. It uses significantly fewer circuit executions to achieve
# comparable results as gradient-based methods, giving it the potential
# to save time and resources. It can be a good alternative to
# gradient-based methods when the optimization problem involves executing