From ac7722ad7da4e529da40e0c2baccdaadf9c2c056 Mon Sep 17 00:00:00 2001 From: Pavan Jayasinha <70229100+Sinestro38@users.noreply.github.com> Date: Sun, 30 Apr 2023 23:45:51 -0700 Subject: [PATCH 1/9] Create release.md --- tensorflow_quantum/release.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) create mode 100644 tensorflow_quantum/release.md diff --git a/tensorflow_quantum/release.md b/tensorflow_quantum/release.md new file mode 100644 index 000000000..91c5a3331 --- /dev/null +++ b/tensorflow_quantum/release.md @@ -0,0 +1,18 @@ +# Release 0.7.0 +# Major Features and Improvements +- Significant performance improvements by introducing cuQuantum support for circuit execution on GPU: + - TensorFlow Quantum Keras layers can now be executed on GPU by setting `use_cuquantum=True` at layer instantiation. Examples: + - `tfq.layers.Expectation(use_cuquantum=True)` + - `tfq.layers.SampledExpectation(use_cuquantum=True)` (note that cuQuantum runtime is unsupported for any noisy circuit operations + - `tfq.layers.State(use_cuquantum=True)` + - `tfq.layers.Sample(use_cuquantum=True)` + - `tfq.layers.SimulateSample(use_cuquantum=True)` + +- Build, compilation, and packaging: + - The TensorFlow dependency has been upgraded from 2.7.0 to 2.11.0: + - TensorFlow Quantum is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See [the GNU C++ Library docs on Dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html). + - TensorFlow Quantum is now compiled with `-std=c++17`, see [install.md](/docs/install.md) for build instructions. + + +# Thanks to our Contributors +This release contains contributions from many people at Google, Nvidia, as well as: From c2ab0d5e08af1eed85f97e8ab5794bbbc7c20833 Mon Sep 17 00:00:00 2001 From: Pavan Jayasinha <70229100+Sinestro38@users.noreply.github.com> Date: Sun, 30 Apr 2023 23:49:17 -0700 Subject: [PATCH 2/9] Update release.md --- tensorflow_quantum/release.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/tensorflow_quantum/release.md b/tensorflow_quantum/release.md index 91c5a3331..01af81c36 100644 --- a/tensorflow_quantum/release.md +++ b/tensorflow_quantum/release.md @@ -1,4 +1,10 @@ # Release 0.7.0 +# Breaking Changes +- Build, compilation, and packaging: + - The TensorFlow dependency has been upgraded from 2.7.0 to 2.11.0: + - TensorFlow Quantum is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See [the GNU C++ Library docs on Dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html). + - TensorFlow Quantum is now compiled with `-std=c++17`, see [install.md](/docs/install.md) for build instructions. + # Major Features and Improvements - Significant performance improvements by introducing cuQuantum support for circuit execution on GPU: - TensorFlow Quantum Keras layers can now be executed on GPU by setting `use_cuquantum=True` at layer instantiation. Examples: @@ -8,11 +14,5 @@ - `tfq.layers.Sample(use_cuquantum=True)` - `tfq.layers.SimulateSample(use_cuquantum=True)` -- Build, compilation, and packaging: - - The TensorFlow dependency has been upgraded from 2.7.0 to 2.11.0: - - TensorFlow Quantum is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. 
Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See [the GNU C++ Library docs on Dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html). - - TensorFlow Quantum is now compiled with `-std=c++17`, see [install.md](/docs/install.md) for build instructions. - - # Thanks to our Contributors This release contains contributions from many people at Google, Nvidia, as well as: From f933b6925b7ef673c78dbbb750e0a75e3c70dcd3 Mon Sep 17 00:00:00 2001 From: Pavan Jayasinha <70229100+Sinestro38@users.noreply.github.com> Date: Sun, 30 Apr 2023 23:49:50 -0700 Subject: [PATCH 3/9] update version num --- tensorflow_quantum/release.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow_quantum/release.md b/tensorflow_quantum/release.md index 01af81c36..6d12339a3 100644 --- a/tensorflow_quantum/release.md +++ b/tensorflow_quantum/release.md @@ -1,4 +1,4 @@ -# Release 0.7.0 +# Release 0.8.0 # Breaking Changes - Build, compilation, and packaging: - The TensorFlow dependency has been upgraded from 2.7.0 to 2.11.0: From c7f729a01527bb065bdc5a0e7b1ea43e1330f4ba Mon Sep 17 00:00:00 2001 From: Pavan Jayasinha <70229100+Sinestro38@users.noreply.github.com> Date: Mon, 1 May 2023 00:35:25 -0700 Subject: [PATCH 4/9] add notes --- tensorflow_quantum/release.md | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/tensorflow_quantum/release.md b/tensorflow_quantum/release.md index 6d12339a3..264c00477 100644 --- a/tensorflow_quantum/release.md +++ b/tensorflow_quantum/release.md @@ -6,13 +6,22 @@ - TensorFlow Quantum is now compiled with `-std=c++17`, see [install.md](/docs/install.md) for build instructions. # Major Features and Improvements -- Significant performance improvements by introducing cuQuantum support for circuit execution on GPU: - - TensorFlow Quantum Keras layers can now be executed on GPU by setting `use_cuquantum=True` at layer instantiation. Examples: +- Significant performance improvements by introducing cuQuantum support for circuit execution on Nvidia GPUs: + - TensorFlow Quantum Keras layers can now be executed on GPU by setting the optional arguement `use_cuquantum=True` at layer instantiation. Examples: - `tfq.layers.Expectation(use_cuquantum=True)` - `tfq.layers.SampledExpectation(use_cuquantum=True)` (note that cuQuantum runtime is unsupported for any noisy circuit operations - `tfq.layers.State(use_cuquantum=True)` - `tfq.layers.Sample(use_cuquantum=True)` - - `tfq.layers.SimulateSample(use_cuquantum=True)` + - `tfq.layers.PQC(model_circuit, operators, use_cuquantum=True)` + - `tfq.layers.ControlledPQC(model_circuit, operators, use_cuquantum=True)` + - Important notes: + - CuQuantum execution is currently only supported for source distributions meaning that the user must build TensorFlow Quantum & `tensorFlow-cpu` from source following the instructions in [install.md](/docs/install.md#build-from-source). + - Ensure that the first entry is "N" in the `configure.sh` script at [this step](/docs/install.md#6-build-the-tensorflow-quantum-pip-package) of building. This ensures that you build upon `tensorflow-cpu` as `tensorflow-gpu` is unnecessary for CuQuantum support in TensorFlow Quantum. + - The cuQuantum SDK must be installed locally. See [installation instructions](https://docs.nvidia.com/cuda/cuquantum/custatevec/getting_started.html) for details. 
As part of the installation process, ensure that the `CUQUANTUM_ROOT` environment variable is set (referred to in the installation instructions). If not set, bazel will attempt to locate the folder containing the cuQuantum installation upon running `configure.sh` at [this step](/docs/install.md#6-build-the-tensorflow-quantum-pip-package). + - Tested on Titan, Ampere and Volta Nvidia GPU architectures. Note that Pascal GPU architectures are not supported, see documentation to [check whether your GPU is compatible with cuQuantum](https://docs.nvidia.com/cuda/cuquantum/getting_started.html#custatevec) + - Quantum concurrency (global context option) should be turned off when `use_cuquantum=True`. This can be done by running: `tfq.python.quantum_context.set_quantum_concurrent_op_mode(False)` + + # Thanks to our Contributors This release contains contributions from many people at Google, Nvidia, as well as: From a7a67c8afc5f977a8347be0a78001c4c9fbd772a Mon Sep 17 00:00:00 2001 From: Pavan Jayasinha <70229100+Sinestro38@users.noreply.github.com> Date: Mon, 1 May 2023 00:36:54 -0700 Subject: [PATCH 5/9] Update release.md --- tensorflow_quantum/release.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow_quantum/release.md b/tensorflow_quantum/release.md index 264c00477..8abd51297 100644 --- a/tensorflow_quantum/release.md +++ b/tensorflow_quantum/release.md @@ -17,7 +17,7 @@ - Important notes: - CuQuantum execution is currently only supported for source distributions meaning that the user must build TensorFlow Quantum & `tensorFlow-cpu` from source following the instructions in [install.md](/docs/install.md#build-from-source). - Ensure that the first entry is "N" in the `configure.sh` script at [this step](/docs/install.md#6-build-the-tensorflow-quantum-pip-package) of building. This ensures that you build upon `tensorflow-cpu` as `tensorflow-gpu` is unnecessary for CuQuantum support in TensorFlow Quantum. - - The cuQuantum SDK must be installed locally. See [installation instructions](https://docs.nvidia.com/cuda/cuquantum/custatevec/getting_started.html) for details. As part of the installation process, ensure that the `CUQUANTUM_ROOT` environment variable is set (referred to in the installation instructions). If not set, bazel will attempt to locate the folder containing the cuQuantum installation upon running `configure.sh` at [this step](/docs/install.md#6-build-the-tensorflow-quantum-pip-package). + - The cuQuantum SDK must be installed locally. See [installation instructions](https://docs.nvidia.com/cuda/cuquantum/custatevec/getting_started.html) for details. As part of the installation process, ensure that the `CUQUANTUM_ROOT` environment variable is set (referred to in the installation instructions). If not set, bazel will attempt to automatically locate the folder containing the cuQuantum installation upon running `configure.sh` at [this step](/docs/install.md#6-build-the-tensorflow-quantum-pip-package). - Tested on Titan, Ampere and Volta Nvidia GPU architectures. Note that Pascal GPU architectures are not supported, see documentation to [check whether your GPU is compatible with cuQuantum](https://docs.nvidia.com/cuda/cuquantum/getting_started.html#custatevec) - Quantum concurrency (global context option) should be turned off when `use_cuquantum=True`. 
This can be done by running: `tfq.python.quantum_context.set_quantum_concurrent_op_mode(False)` From a44ab2659fde41daf73f1abbcca56aa34d78661d Mon Sep 17 00:00:00 2001 From: Pavan Jayasinha <70229100+Sinestro38@users.noreply.github.com> Date: Wed, 3 May 2023 14:33:43 -0700 Subject: [PATCH 6/9] update cirq dependency news --- tensorflow_quantum/release.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/tensorflow_quantum/release.md b/tensorflow_quantum/release.md index 8abd51297..63fcd0a0f 100644 --- a/tensorflow_quantum/release.md +++ b/tensorflow_quantum/release.md @@ -4,6 +4,16 @@ - The TensorFlow dependency has been upgraded from 2.7.0 to 2.11.0: - TensorFlow Quantum is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See [the GNU C++ Library docs on Dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html). - TensorFlow Quantum is now compiled with `-std=c++17`, see [install.md](/docs/install.md) for build instructions. +- Cirq dependency has been upgraded from `0.13.1` to `>=1.0` + - `cirq_google.XMON` was deprecated : https://github.com/quantumlib/Cirq/issues/4856 + - `QuantumEngineSampler` was deprecated : https://github.com/quantumlib/Cirq/issues/5371 + - So, we need [ProcessorSampler() for testing](https://github.com/quantumlib/Cirq/blob/master/cirq-google/cirq_google/engine/processor_sampler_test.py) + - `cirq.CNOT` interface was changed. + - https://quantumai.google/reference/python/cirq/CNOT + - No more control, target argument. + - `cirq.SingleQubitGate` was deprecated. + - For testing, use `cirq.testing.SingleQubitGate` : https://github.com/quantumlib/Cirq/pull/5272/files + - For implementation, use `cirq.Gate`. # Major Features and Improvements - Significant performance improvements by introducing cuQuantum support for circuit execution on Nvidia GPUs: From fe3a96e24a3001b76547a36bbe3301f8e5a8a255 Mon Sep 17 00:00:00 2001 From: Pavan Jayasinha <70229100+Sinestro38@users.noreply.github.com> Date: Wed, 3 May 2023 14:44:56 -0700 Subject: [PATCH 7/9] update > to ~ --- tensorflow_quantum/release.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tensorflow_quantum/release.md b/tensorflow_quantum/release.md index 63fcd0a0f..46d844e7a 100644 --- a/tensorflow_quantum/release.md +++ b/tensorflow_quantum/release.md @@ -4,7 +4,7 @@ - The TensorFlow dependency has been upgraded from 2.7.0 to 2.11.0: - TensorFlow Quantum is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See [the GNU C++ Library docs on Dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html). - TensorFlow Quantum is now compiled with `-std=c++17`, see [install.md](/docs/install.md) for build instructions. 
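For context on the release notes above, a minimal end-to-end sketch of the cuQuantum workflow they describe, assuming a from-source build with cuQuantum enabled and the standard `tfq.layers.Expectation` call signature (the qubits, symbol, and values here are illustrative only):

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

# Turn off op-level quantum concurrency, as recommended when use_cuquantum=True.
tfq.python.quantum_context.set_quantum_concurrent_op_mode(False)

q0, q1 = cirq.GridQubit.rect(1, 2)
alpha = sympy.Symbol('alpha')
# Cirq ~=1.0 style: CNOT takes its qubits positionally (no control/target kwargs).
circuit = cirq.Circuit([cirq.rx(alpha).on(q0), cirq.CNOT(q0, q1)])

# Instantiate the layer with cuQuantum execution enabled, then evaluate <Z> on q1.
expectation = tfq.layers.Expectation(use_cuquantum=True)
result = expectation(
    tfq.convert_to_tensor([circuit]),
    symbol_names=[alpha],
    symbol_values=tf.constant([[0.5]]),
    operators=[cirq.Z(q1)])
print(result)  # shape [1, 1]: one circuit, one operator
```

Similarly, a sketch of the `cirq.SingleQubitGate` migration noted above: custom gates subclass `cirq.Gate` directly (or `cirq.testing.SingleQubitGate` inside tests) and implement the num-qubits and unitary protocols; the gate name `MyGate` is hypothetical:

```python
import numpy as np
import cirq

class MyGate(cirq.Gate):  # previously: class MyGate(cirq.SingleQubitGate)
    def _num_qubits_(self) -> int:
        return 1

    def _unitary_(self):
        # S-like single-qubit unitary, purely for illustration.
        return np.array([[1, 0], [0, 1j]])

op = MyGate().on(cirq.LineQubit(0))
```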
-- Cirq dependency has been upgraded from `0.13.1` to `>=1.0` +- Cirq dependency has been upgraded from `0.13.1` to `~=1.0` - `cirq_google.XMON` was deprecated : https://github.com/quantumlib/Cirq/issues/4856 - `QuantumEngineSampler` was deprecated : https://github.com/quantumlib/Cirq/issues/5371 - So, we need [ProcessorSampler() for testing](https://github.com/quantumlib/Cirq/blob/master/cirq-google/cirq_google/engine/processor_sampler_test.py) From 01471eea168752447a44695801237b8b7917dca6 Mon Sep 17 00:00:00 2001 From: QuantumJaeYoo Date: Mon, 22 May 2023 08:57:53 +0000 Subject: [PATCH 8/9] Fix warnings and errors due to TF/absl/C++ changes --- .../core/ops/math_ops/tfq_inner_product.cc | 12 +-- .../ops/math_ops/tfq_inner_product_grad.cc | 4 +- .../core/ops/noise/tfq_noisy_expectation.cc | 28 +++---- .../noise/tfq_noisy_sampled_expectation.cc | 28 +++---- .../core/ops/noise/tfq_noisy_samples.cc | 4 +- tensorflow_quantum/core/ops/parse_context.cc | 83 ++++++++++++------- .../core/ops/tfq_adj_grad_op.cc | 24 +++--- .../core/ops/tfq_calculate_unitary_op.cc | 4 +- .../core/ops/tfq_ps_decompose_op.cc | 4 +- .../core/ops/tfq_ps_symbol_replace_op.cc | 10 +-- .../ops/tfq_ps_weights_from_symbols_op.cc | 8 +- .../core/ops/tfq_simulate_expectation_op.cc | 8 +- .../core/ops/tfq_simulate_state_op.cc | 8 +- tensorflow_quantum/core/src/adj_util.cc | 10 +-- .../core/src/circuit_parser_qsim.cc | 58 ++++++++----- .../core/src/circuit_parser_qsim_test.cc | 7 +- .../core/src/program_resolution.cc | 23 ++--- .../core/src/program_resolution_test.cc | 27 +++--- tensorflow_quantum/core/src/util_qsim.h | 8 +- tensorflow_quantum/core/src/util_qsim_test.cc | 6 +- .../datasets/spin_system_test.py | 3 +- 21 files changed, 204 insertions(+), 163 deletions(-) diff --git a/tensorflow_quantum/core/ops/math_ops/tfq_inner_product.cc b/tensorflow_quantum/core/ops/math_ops/tfq_inner_product.cc index 74751f9cc..374aa5b55 100644 --- a/tensorflow_quantum/core/ops/math_ops/tfq_inner_product.cc +++ b/tensorflow_quantum/core/ops/math_ops/tfq_inner_product.cc @@ -174,7 +174,7 @@ class TfqInnerProductOp : public tensorflow::OpKernel { // Simulate programs one by one. Parallelizing over state vectors // we no longer parallelize over circuits. Each time we encounter a // a larger circuit we will grow the Statevector as necessary. - for (int i = 0; i < fused_circuits.size(); i++) { + for (size_t i = 0; i < fused_circuits.size(); i++) { int nq = num_qubits[i]; if (nq > largest_nq) { // need to switch to larger statespace. @@ -186,10 +186,10 @@ class TfqInnerProductOp : public tensorflow::OpKernel { // the state if there is a possibility that circuit[i] and // circuit[i + 1] produce the same state. 
ss.SetStateZero(sv); - for (int j = 0; j < fused_circuits[i].size(); j++) { + for (size_t j = 0; j < fused_circuits[i].size(); j++) { qsim::ApplyFusedGate(sim, fused_circuits[i][j], sv); } - for (int j = 0; j < other_fused_circuits[i].size(); j++) { + for (size_t j = 0; j < other_fused_circuits[i].size(); j++) { // (#679) Just ignore empty program if (fused_circuits[i].size() == 0) { (*output_tensor)(i, j) = std::complex(1, 0); @@ -197,7 +197,7 @@ class TfqInnerProductOp : public tensorflow::OpKernel { } ss.SetStateZero(scratch); - for (int k = 0; k < other_fused_circuits[i][j].size(); k++) { + for (size_t k = 0; k < other_fused_circuits[i][j].size(); k++) { qsim::ApplyFusedGate(sim, other_fused_circuits[i][j][k], scratch); } @@ -255,13 +255,13 @@ class TfqInnerProductOp : public tensorflow::OpKernel { // no need to update scratch_state since ComputeExpectation // will take care of things for us. ss.SetStateZero(sv); - for (int j = 0; j < fused_circuits[cur_batch_index].size(); j++) { + for (size_t j = 0; j < fused_circuits[cur_batch_index].size(); j++) { qsim::ApplyFusedGate(sim, fused_circuits[cur_batch_index][j], sv); } } ss.SetStateZero(scratch); - for (int k = 0; + for (size_t k = 0; k < other_fused_circuits[cur_batch_index][cur_internal_index].size(); k++) { diff --git a/tensorflow_quantum/core/ops/math_ops/tfq_inner_product_grad.cc b/tensorflow_quantum/core/ops/math_ops/tfq_inner_product_grad.cc index 3db493b11..534d7fef9 100644 --- a/tensorflow_quantum/core/ops/math_ops/tfq_inner_product_grad.cc +++ b/tensorflow_quantum/core/ops/math_ops/tfq_inner_product_grad.cc @@ -398,13 +398,13 @@ class TfqInnerProductGradOp : public tensorflow::OpKernel { // if applicable compute control qubit mask and control value bits. uint64_t mask = 0; uint64_t cbits = 0; - for (int k = 0; k < cur_gate.controlled_by.size(); k++) { + for (size_t k = 0; k < cur_gate.controlled_by.size(); k++) { uint64_t control_loc = cur_gate.controlled_by[k]; mask |= uint64_t{1} << control_loc; cbits |= ((cur_gate.cmask >> k) & 1) << control_loc; } - for (int k = 0; + for (size_t k = 0; k < gradient_gates[cur_batch_index][l - 1].grad_gates.size(); k++) { // Copy sv_adj onto scratch2 in anticipation of non-unitary diff --git a/tensorflow_quantum/core/ops/noise/tfq_noisy_expectation.cc b/tensorflow_quantum/core/ops/noise/tfq_noisy_expectation.cc index c67fa01f7..6f09da68f 100644 --- a/tensorflow_quantum/core/ops/noise/tfq_noisy_expectation.cc +++ b/tensorflow_quantum/core/ops/noise/tfq_noisy_expectation.cc @@ -175,8 +175,8 @@ class TfqNoisyExpectationOp : public tensorflow::OpKernel { tensorflow::GuardedPhiloxRandom random_gen; int max_n_shots = 1; - for (int i = 0; i < num_samples.size(); i++) { - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t i = 0; i < num_samples.size(); i++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { max_n_shots = std::max(max_n_shots, num_samples[i][j]); } } @@ -188,12 +188,12 @@ class TfqNoisyExpectationOp : public tensorflow::OpKernel { // Simulate programs one by one. Parallelizing over state vectors // we no longer parallelize over circuits. Each time we encounter a // a larger circuit we will grow the Statevector as necessary. 
- for (int i = 0; i < ncircuits.size(); i++) { + for (size_t i = 0; i < ncircuits.size(); i++) { int nq = num_qubits[i]; // (#679) Just ignore empty program if (ncircuits[i].channels.size() == 0) { - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { (*output_tensor)(i, j) = -2.0; } continue; @@ -220,7 +220,7 @@ class TfqNoisyExpectationOp : public tensorflow::OpKernel { sv, unused_stats); // Use this trajectory as a source for all expectation calculations. - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { if (run_samples[j] >= num_samples[i][j]) { continue; } @@ -232,14 +232,14 @@ class TfqNoisyExpectationOp : public tensorflow::OpKernel { run_samples[j]++; } bool break_loop = true; - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { if (run_samples[j] < num_samples[i][j]) { break_loop = false; break; } } if (break_loop) { - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { rolling_sums[j] /= num_samples[i][j]; (*output_tensor)(i, j) = static_cast(rolling_sums[j]); } @@ -280,8 +280,8 @@ class TfqNoisyExpectationOp : public tensorflow::OpKernel { tensorflow::GuardedPhiloxRandom random_gen; int max_n_shots = 1; - for (int i = 0; i < num_samples.size(); i++) { - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t i = 0; i < num_samples.size(); i++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { max_n_shots = std::max(max_n_shots, num_samples[i][j]); } } @@ -304,13 +304,13 @@ class TfqNoisyExpectationOp : public tensorflow::OpKernel { random_gen.ReserveSamples128(ncircuits.size() * max_n_shots + 1); tensorflow::random::SimplePhilox rand_source(&local_gen); - for (int i = 0; i < ncircuits.size(); i++) { + for (size_t i = 0; i < ncircuits.size(); i++) { int nq = num_qubits[i]; int rep_offset = rep_offsets[start][i]; // (#679) Just ignore empty program if (ncircuits[i].channels.size() == 0) { - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { (*output_tensor)(i, j) = -2.0; } continue; @@ -337,7 +337,7 @@ class TfqNoisyExpectationOp : public tensorflow::OpKernel { sim, sv, unused_stats); // Compute expectations across all ops using this trajectory. - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { int p_reps = (num_samples[i][j] + num_threads - 1) / num_threads; if (run_samples[j] >= p_reps + rep_offset) { continue; @@ -354,7 +354,7 @@ class TfqNoisyExpectationOp : public tensorflow::OpKernel { // Check if we have run enough trajectories for all ops. bool break_loop = true; - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { int p_reps = (num_samples[i][j] + num_threads - 1) / num_threads; if (run_samples[j] < p_reps + rep_offset) { break_loop = false; @@ -364,7 +364,7 @@ class TfqNoisyExpectationOp : public tensorflow::OpKernel { if (break_loop) { // Lock writing to this batch index in output_tensor. 
batch_locks[i].lock(); - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { rolling_sums[j] /= num_samples[i][j]; (*output_tensor)(i, j) += static_cast(rolling_sums[j]); } diff --git a/tensorflow_quantum/core/ops/noise/tfq_noisy_sampled_expectation.cc b/tensorflow_quantum/core/ops/noise/tfq_noisy_sampled_expectation.cc index aa0c85691..7e1993a7e 100644 --- a/tensorflow_quantum/core/ops/noise/tfq_noisy_sampled_expectation.cc +++ b/tensorflow_quantum/core/ops/noise/tfq_noisy_sampled_expectation.cc @@ -177,8 +177,8 @@ class TfqNoisySampledExpectationOp : public tensorflow::OpKernel { tensorflow::GuardedPhiloxRandom random_gen; int max_psum_length = 1; int max_n_shots = 1; - for (int i = 0; i < pauli_sums.size(); i++) { - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t i = 0; i < pauli_sums.size(); i++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { max_psum_length = std::max(max_psum_length, pauli_sums[i][j].terms().size()); max_n_shots = std::max(max_n_shots, num_samples[i][j]); @@ -192,12 +192,12 @@ class TfqNoisySampledExpectationOp : public tensorflow::OpKernel { // Simulate programs one by one. Parallelizing over state vectors // we no longer parallelize over circuits. Each time we encounter a // a larger circuit we will grow the Statevector as necessary. - for (int i = 0; i < ncircuits.size(); i++) { + for (size_t i = 0; i < ncircuits.size(); i++) { int nq = num_qubits[i]; // (#679) Just ignore empty program if (ncircuits[i].channels.empty()) { - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { (*output_tensor)(i, j) = -2.0; } continue; @@ -224,7 +224,7 @@ class TfqNoisySampledExpectationOp : public tensorflow::OpKernel { sv, unused_stats); // Use this trajectory as a source for all expectation calculations. 
- for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { if (run_samples[j] >= num_samples[i][j]) { continue; } @@ -236,14 +236,14 @@ class TfqNoisySampledExpectationOp : public tensorflow::OpKernel { run_samples[j]++; } bool break_loop = true; - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { if (run_samples[j] < num_samples[i][j]) { break_loop = false; break; } } if (break_loop) { - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { rolling_sums[j] /= num_samples[i][j]; (*output_tensor)(i, j) = static_cast(rolling_sums[j]); } @@ -285,8 +285,8 @@ class TfqNoisySampledExpectationOp : public tensorflow::OpKernel { tensorflow::GuardedPhiloxRandom random_gen; int max_psum_length = 1; int max_n_shots = 1; - for (int i = 0; i < pauli_sums.size(); i++) { - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t i = 0; i < pauli_sums.size(); i++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { max_psum_length = std::max(max_psum_length, pauli_sums[i][j].terms().size()); max_n_shots = std::max(max_n_shots, num_samples[i][j]); @@ -310,13 +310,13 @@ class TfqNoisySampledExpectationOp : public tensorflow::OpKernel { auto local_gen = random_gen.ReserveSamples128(num_rand); tensorflow::random::SimplePhilox rand_source(&local_gen); - for (int i = 0; i < ncircuits.size(); i++) { + for (size_t i = 0; i < ncircuits.size(); i++) { int nq = num_qubits[i]; int rep_offset = rep_offsets[start][i]; // (#679) Just ignore empty program if (ncircuits[i].channels.empty()) { - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { (*output_tensor)(i, j) = -2.0; } continue; @@ -343,7 +343,7 @@ class TfqNoisySampledExpectationOp : public tensorflow::OpKernel { sim, sv, unused_stats); // Compute expectations across all ops using this trajectory. - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { int p_reps = (num_samples[i][j] + num_threads - 1) / num_threads; if (run_samples[j] >= p_reps + rep_offset) { continue; @@ -360,7 +360,7 @@ class TfqNoisySampledExpectationOp : public tensorflow::OpKernel { // Check if we have run enough trajectories for all ops. bool break_loop = true; - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { int p_reps = (num_samples[i][j] + num_threads - 1) / num_threads; if (run_samples[j] < p_reps + rep_offset) { break_loop = false; @@ -370,7 +370,7 @@ class TfqNoisySampledExpectationOp : public tensorflow::OpKernel { if (break_loop) { // Lock writing to this batch index in output_tensor. batch_locks[i].lock(); - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { rolling_sums[j] /= num_samples[i][j]; (*output_tensor)(i, j) += static_cast(rolling_sums[j]); } diff --git a/tensorflow_quantum/core/ops/noise/tfq_noisy_samples.cc b/tensorflow_quantum/core/ops/noise/tfq_noisy_samples.cc index 341c87910..1af738323 100644 --- a/tensorflow_quantum/core/ops/noise/tfq_noisy_samples.cc +++ b/tensorflow_quantum/core/ops/noise/tfq_noisy_samples.cc @@ -159,7 +159,7 @@ class TfqNoisySamplesOp : public tensorflow::OpKernel { // Simulate programs one by one. Parallelizing over state vectors // we no longer parallelize over circuits. Each time we encounter a // a larger circuit we will grow the Statevector as nescessary. 
- for (int i = 0; i < ncircuits.size(); i++) { + for (size_t i = 0; i < ncircuits.size(); i++) { int nq = num_qubits[i]; if (nq > largest_nq) { @@ -252,7 +252,7 @@ class TfqNoisySamplesOp : public tensorflow::OpKernel { auto local_gen = random_gen.ReserveSamples32(needed_random); tensorflow::random::SimplePhilox rand_source(&local_gen); - for (int i = 0; i < ncircuits.size(); i++) { + for (size_t i = 0; i < ncircuits.size(); i++) { int nq = num_qubits[i]; int j = start > 0 ? offset_prefix_sum[start - 1][i] : 0; int needed_samples = offset_prefix_sum[start][i] - j; diff --git a/tensorflow_quantum/core/ops/parse_context.cc b/tensorflow_quantum/core/ops/parse_context.cc index f926d15bb..026c57321 100644 --- a/tensorflow_quantum/core/ops/parse_context.cc +++ b/tensorflow_quantum/core/ops/parse_context.cc @@ -20,6 +20,7 @@ limitations under the License. #include #include +#include "absl/status/status.h" #include "tensorflow/core/framework/op_kernel.h" #include "tensorflow/core/lib/core/error_codes.pb.h" #include "tensorflow/core/lib/core/status.h" @@ -51,7 +52,7 @@ Status ParseProto(const std::string& text, T* proto) { } return Status( - static_cast(absl::StatusCode::kInvalidArgument), + static_cast(absl::StatusCode::kInvalidArgument), "Unparseable proto: " + text); } @@ -68,7 +69,7 @@ Status ParsePrograms(OpKernelContext* context, const std::string& input_name, if (input->dims() != 1) { // Never parse anything other than a 1d list of circuits. return Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("programs must be rank 1. Got rank ", input->dims(), ".")); } @@ -77,9 +78,13 @@ Status ParsePrograms(OpKernelContext* context, const std::string& input_name, const int num_programs = program_strings.dimension(0); programs->assign(num_programs, Program()); + Status parse_status = ::tensorflow::Status(); + auto p_lock = tensorflow::mutex(); + auto DoWork = [&](int start, int end) { for (int i = start; i < end; i++) { - OP_REQUIRES_OK(context, ParseProto(program_strings(i), &programs->at(i))); + Status local = ParseProto(program_strings(i), &programs->at(i)); + NESTED_FN_STATUS_SYNC(parse_status, local, p_lock); } }; @@ -88,7 +93,7 @@ Status ParsePrograms(OpKernelContext* context, const std::string& input_name, context->device()->tensorflow_cpu_worker_threads()->workers->ParallelFor( num_programs, cycle_estimate, DoWork); - return ::tensorflow::Status(); + return parse_status; } Status ParsePrograms2D(OpKernelContext* context, const std::string& input_name, @@ -101,7 +106,7 @@ Status ParsePrograms2D(OpKernelContext* context, const std::string& input_name, if (input->dims() != 2) { // Never parse anything other than a 1d list of circuits. - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("other_programs must be rank 2. 
Got rank ", input->dims(), ".")); @@ -112,12 +117,14 @@ Status ParsePrograms2D(OpKernelContext* context, const std::string& input_name, const int num_entries = program_strings.dimension(1); programs->assign(num_programs, std::vector(num_entries, Program())); + Status parse_status = ::tensorflow::Status(); + auto p_lock = tensorflow::mutex(); auto DoWork = [&](int start, int end) { for (int i = start; i < end; i++) { - OP_REQUIRES_OK( - context, + Status local = ParseProto(program_strings(i / num_entries, i % num_entries), - &programs->at(i / num_entries).at(i % num_entries))); + &programs->at(i / num_entries).at(i % num_entries)); + NESTED_FN_STATUS_SYNC(parse_status, local, p_lock); } }; @@ -126,7 +133,7 @@ Status ParsePrograms2D(OpKernelContext* context, const std::string& input_name, context->device()->tensorflow_cpu_worker_threads()->workers->ParallelFor( num_programs * num_entries, cycle_estimate, DoWork); - return ::tensorflow::Status(); + return parse_status; } Status GetProgramsAndProgramsToAppend( @@ -143,7 +150,7 @@ Status GetProgramsAndProgramsToAppend( } if (programs->size() != programs_to_append->size()) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), "programs and programs_to_append must have matching sizes."); } @@ -171,7 +178,7 @@ Status GetProgramsAndNumQubits( } if (programs->size() != p_sums->size()) { return Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("Number of circuits and PauliSums do not match. Got ", programs->size(), " circuits and ", p_sums->size(), @@ -180,19 +187,22 @@ Status GetProgramsAndNumQubits( } // Resolve qubit ID's in parallel. + Status parse_status = ::tensorflow::Status(); + auto p_lock = tensorflow::mutex(); num_qubits->assign(programs->size(), -1); auto DoWork = [&](int start, int end) { for (int i = start; i < end; i++) { Program& program = (*programs)[i]; unsigned int this_num_qubits; + Status local; if (p_sums) { - OP_REQUIRES_OK(context, - ResolveQubitIds(&program, &this_num_qubits, - &(p_sums->at(i)), swap_endianness)); + local = ResolveQubitIds(&program, &this_num_qubits, &(p_sums->at(i)), + swap_endianness); } else { - OP_REQUIRES_OK(context, ResolveQubitIds(&program, &this_num_qubits, - nullptr, swap_endianness)); + local = ResolveQubitIds(&program, &this_num_qubits, nullptr, + swap_endianness); } + NESTED_FN_STATUS_SYNC(parse_status, local, p_lock); (*num_qubits)[i] = this_num_qubits; } }; @@ -202,7 +212,7 @@ Status GetProgramsAndNumQubits( context->device()->tensorflow_cpu_worker_threads()->workers->ParallelFor( num_qubits->size(), cycle_estimate, DoWork); - return ::tensorflow::Status(); + return parse_status; } tensorflow::Status GetProgramsAndNumQubits( @@ -223,7 +233,7 @@ tensorflow::Status GetProgramsAndNumQubits( } if (programs->size() != other_programs->size()) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("programs and other_programs batch dimension", " do not match. Foud: ", programs->size(), @@ -231,13 +241,16 @@ tensorflow::Status GetProgramsAndNumQubits( } // Resolve qubit ID's in parallel. 
+ Status parse_status = ::tensorflow::Status(); + auto p_lock = tensorflow::mutex(); num_qubits->assign(programs->size(), -1); auto DoWork = [&](int start, int end) { for (int i = start; i < end; i++) { Program& program = (*programs)[i]; unsigned int this_num_qubits; - OP_REQUIRES_OK(context, ResolveQubitIds(&program, &this_num_qubits, - &(*other_programs)[i])); + Status local = + ResolveQubitIds(&program, &this_num_qubits, &(*other_programs)[i]); + NESTED_FN_STATUS_SYNC(parse_status, local, p_lock); (*num_qubits)[i] = this_num_qubits; } }; @@ -247,7 +260,7 @@ tensorflow::Status GetProgramsAndNumQubits( context->device()->tensorflow_cpu_worker_threads()->workers->ParallelFor( num_qubits->size(), cycle_estimate, DoWork); - return ::tensorflow::Status(); + return parse_status; } Status GetPauliSums(OpKernelContext* context, @@ -260,7 +273,7 @@ Status GetPauliSums(OpKernelContext* context, } if (input->dims() != 2) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("pauli_sums must be rank 2. Got rank ", input->dims(), ".")); @@ -270,12 +283,18 @@ Status GetPauliSums(OpKernelContext* context, p_sums->assign(sum_specs.dimension(0), std::vector(sum_specs.dimension(1), PauliSum())); const int op_dim = sum_specs.dimension(1); + Status parse_status = ::tensorflow::Status(); + auto p_lock = tensorflow::mutex(); auto DoWork = [&](int start, int end) { for (int ii = start; ii < end; ii++) { const int i = ii / op_dim; const int j = ii % op_dim; PauliSum p; - OP_REQUIRES_OK(context, ParseProto(sum_specs(i, j), &p)); + // We should not stop the whole program, because TFQ cuQuantum ops + // requires running destructors to return cuQuantum handlers, + // and not to fall into segfault. + Status local = ParseProto(sum_specs(i, j), &p); + NESTED_FN_STATUS_SYNC(parse_status, local, p_lock); (*p_sums)[i][j] = p; } }; @@ -285,7 +304,7 @@ Status GetPauliSums(OpKernelContext* context, context->device()->tensorflow_cpu_worker_threads()->workers->ParallelFor( sum_specs.dimension(0) * sum_specs.dimension(1), cycle_estimate, DoWork); - return ::tensorflow::Status(); + return parse_status; } Status GetSymbolMaps(OpKernelContext* context, std::vector* maps) { @@ -297,7 +316,7 @@ Status GetSymbolMaps(OpKernelContext* context, std::vector* maps) { } if (input_names->dims() != 1) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("symbol_names must be rank 1. Got rank ", input_names->dims(), ".")); @@ -310,7 +329,7 @@ Status GetSymbolMaps(OpKernelContext* context, std::vector* maps) { } if (input_values->dims() != 2) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("symbol_values must be rank 2. Got rank ", input_values->dims(), ".")); @@ -320,7 +339,7 @@ Status GetSymbolMaps(OpKernelContext* context, std::vector* maps) { const auto symbol_values = input_values->matrix(); if (symbol_names.dimension(0) != symbol_values.dimension(1)) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), "Input symbol names and value sizes do not match."); } @@ -356,7 +375,7 @@ tensorflow::Status GetNumSamples( } if (input_num_samples->dims() != 2) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("num_samples must be rank 2. 
Got rank ", input_num_samples->dims(), ".")); @@ -370,7 +389,7 @@ tensorflow::Status GetNumSamples( for (unsigned int j = 0; j < matrix_num_samples.dimension(1); j++) { const int num_samples = matrix_num_samples(i, j); if (num_samples < 1) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), "Each element of num_samples must be greater than 0."); } @@ -392,7 +411,7 @@ Status GetIndividualSample(tensorflow::OpKernelContext* context, } if (input_num_samples->dims() != 1) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("num_samples must be rank 1. Got rank ", input_num_samples->dims(), ".")); @@ -401,7 +420,7 @@ Status GetIndividualSample(tensorflow::OpKernelContext* context, const auto vector_num_samples = input_num_samples->vec(); if (vector_num_samples.dimension(0) != 1) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("num_samples must contain 1 element. Got ", vector_num_samples.dimension(0), ".")); @@ -422,7 +441,7 @@ tensorflow::Status GetPrevGrads( } if (input_grads->dims() != 2) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("downstream_grads must be rank 2. Got rank ", input_grads->dims(), ".")); diff --git a/tensorflow_quantum/core/ops/tfq_adj_grad_op.cc b/tensorflow_quantum/core/ops/tfq_adj_grad_op.cc index e7252baee..fe88a5817 100644 --- a/tensorflow_quantum/core/ops/tfq_adj_grad_op.cc +++ b/tensorflow_quantum/core/ops/tfq_adj_grad_op.cc @@ -202,15 +202,15 @@ class TfqAdjointGradientOp : public tensorflow::OpKernel { } ss.SetStateZero(sv); - for (int j = 0; j < full_fuse[i].size(); j++) { + for (size_t j = 0; j < full_fuse[i].size(); j++) { qsim::ApplyFusedGate(sim, full_fuse[i][j], sv); } // sv now contains psi // scratch contains (sum_j paulis_sums[i][j] * downstream_grads[j])|psi> // scratch2 now contains psi as well. - Status unused = AccumulateOperators(pauli_sums[i], downstream_grads[i], - sim, ss, sv, scratch2, scratch); + [[maybe_unused]] Status unused = AccumulateOperators( + pauli_sums[i], downstream_grads[i], sim, ss, sv, scratch2, scratch); for (int j = partial_fused_circuits[i].size() - 1; j >= 0; j--) { for (int k = partial_fused_circuits[i][j].size() - 1; k >= 0; k--) { @@ -231,13 +231,14 @@ class TfqAdjointGradientOp : public tensorflow::OpKernel { // if applicable compute control qubit mask and control value bits. uint64_t mask = 0; uint64_t cbits = 0; - for (int k = 0; k < cur_gate.controlled_by.size(); k++) { + for (size_t k = 0; k < cur_gate.controlled_by.size(); k++) { uint64_t control_loc = cur_gate.controlled_by[k]; mask |= uint64_t{1} << control_loc; cbits |= ((cur_gate.cmask >> k) & 1) << control_loc; } - for (int k = 0; k < gradient_gates[i][j - 1].grad_gates.size(); k++) { + for (size_t k = 0; k < gradient_gates[i][j - 1].grad_gates.size(); + k++) { // Copy sv onto scratch2 in anticipation of non-unitary "gradient // gate". 
ss.Copy(sv, scratch2); @@ -297,7 +298,7 @@ class TfqAdjointGradientOp : public tensorflow::OpKernel { auto scratch = ss.Create(largest_nq); auto scratch2 = ss.Create(largest_nq); - for (int i = 0; i < partial_fused_circuits.size(); i++) { + for (size_t i = 0; i < partial_fused_circuits.size(); i++) { int nq = num_qubits[i]; if (nq > largest_nq) { @@ -314,15 +315,15 @@ class TfqAdjointGradientOp : public tensorflow::OpKernel { } ss.SetStateZero(sv); - for (int j = 0; j < full_fuse[i].size(); j++) { + for (size_t j = 0; j < full_fuse[i].size(); j++) { qsim::ApplyFusedGate(sim, full_fuse[i][j], sv); } // sv now contains psi // scratch contains (sum_j paulis_sums[i][j] * downstream_grads[j])|psi> // scratch2 now contains psi as well. - Status unused = AccumulateOperators(pauli_sums[i], downstream_grads[i], - sim, ss, sv, scratch2, scratch); + [[maybe_unused]] Status unused = AccumulateOperators( + pauli_sums[i], downstream_grads[i], sim, ss, sv, scratch2, scratch); for (int j = partial_fused_circuits[i].size() - 1; j >= 0; j--) { for (int k = partial_fused_circuits[i][j].size() - 1; k >= 0; k--) { @@ -342,13 +343,14 @@ class TfqAdjointGradientOp : public tensorflow::OpKernel { // if applicable compute control qubit mask and control value bits. uint64_t mask = 0; uint64_t cbits = 0; - for (int k = 0; k < cur_gate.controlled_by.size(); k++) { + for (size_t k = 0; k < cur_gate.controlled_by.size(); k++) { uint64_t control_loc = cur_gate.controlled_by[k]; mask |= uint64_t{1} << control_loc; cbits |= ((cur_gate.cmask >> k) & 1) << control_loc; } - for (int k = 0; k < gradient_gates[i][j - 1].grad_gates.size(); k++) { + for (size_t k = 0; k < gradient_gates[i][j - 1].grad_gates.size(); + k++) { // Copy sv onto scratch2 in anticipation of non-unitary "gradient // gate". ss.Copy(sv, scratch2); diff --git a/tensorflow_quantum/core/ops/tfq_calculate_unitary_op.cc b/tensorflow_quantum/core/ops/tfq_calculate_unitary_op.cc index ace5327e1..4f1f662ca 100644 --- a/tensorflow_quantum/core/ops/tfq_calculate_unitary_op.cc +++ b/tensorflow_quantum/core/ops/tfq_calculate_unitary_op.cc @@ -116,7 +116,7 @@ class TfqCalculateUnitaryOp : public tensorflow::OpKernel { // Simulate programs one by one. Parallelizing over state vectors // we no longer parallelize over circuits. Each time we encounter a // a larger circuit we will grow the unitary as nescessary. 
- for (int i = 0; i < fused_circuits.size(); i++) { + for (size_t i = 0; i < fused_circuits.size(); i++) { int nq = num_qubits[i]; UCalculator sim = UCalculator(tfq_for); UnitarySpace us = UnitarySpace(tfq_for); @@ -126,7 +126,7 @@ class TfqCalculateUnitaryOp : public tensorflow::OpKernel { u = us.CreateUnitary(nq); } us.SetIdentity(u); - for (int j = 0; j < fused_circuits[i].size(); j++) { + for (size_t j = 0; j < fused_circuits[i].size(); j++) { qsim::ApplyFusedGate(sim, fused_circuits[i][j], u); } diff --git a/tensorflow_quantum/core/ops/tfq_ps_decompose_op.cc b/tensorflow_quantum/core/ops/tfq_ps_decompose_op.cc index 669ea6368..5c20e546e 100644 --- a/tensorflow_quantum/core/ops/tfq_ps_decompose_op.cc +++ b/tensorflow_quantum/core/ops/tfq_ps_decompose_op.cc @@ -65,11 +65,11 @@ class TfqPsDecomposeOp : public tensorflow::OpKernel { new_program.mutable_language()->set_gate_set("tfq_gate_set"); new_program.mutable_circuit()->set_scheduling_strategy( Circuit::MOMENT_BY_MOMENT); - for (int j = 0; j < cur_program.circuit().moments().size(); j++) { + for (size_t j = 0; j < cur_program.circuit().moments().size(); j++) { Moment cur_moment(cur_program.circuit().moments().at(j)); std::vector temp_moment_list(max_buffer_moments, Moment()); int num_extra_moments = 0; - for (int k = 0; k < cur_moment.operations().size(); k++) { + for (size_t k = 0; k < cur_moment.operations().size(); k++) { Operation cur_op = cur_moment.operations().at(k); auto &cur_op_map = *cur_op.mutable_args(); if (cur_op.gate().id() == "PISP") { diff --git a/tensorflow_quantum/core/ops/tfq_ps_symbol_replace_op.cc b/tensorflow_quantum/core/ops/tfq_ps_symbol_replace_op.cc index 559fbecc9..6a38be061 100644 --- a/tensorflow_quantum/core/ops/tfq_ps_symbol_replace_op.cc +++ b/tensorflow_quantum/core/ops/tfq_ps_symbol_replace_op.cc @@ -89,9 +89,9 @@ class TfqPsSymbolReplaceOp : public tensorflow::OpKernel { std::string symbol_to_replace = symbols(sidx); std::string temp_symbol_holder; Program cur_program = programs.at(pidx); - for (int j = 0; j < cur_program.circuit().moments().size(); j++) { + for (size_t j = 0; j < cur_program.circuit().moments().size(); j++) { Moment cur_moment = cur_program.circuit().moments().at(j); - for (int k = 0; k < cur_moment.operations().size(); k++) { + for (size_t k = 0; k < cur_moment.operations().size(); k++) { Operation cur_op = cur_moment.operations().at(k); for (auto l = cur_op.args().begin(); l != cur_op.args().end(); l++) { @@ -163,12 +163,12 @@ class TfqPsSymbolReplaceOp : public tensorflow::OpKernel { for (int i = start; i < end; i++) { int sidx = i % n_symbols; int pidx = i / n_symbols; - for (int j = 0; j < output_programs.at(pidx).at(sidx).size(); j++) { + for (size_t j = 0; j < output_programs.at(pidx).at(sidx).size(); j++) { output_tensor(pidx, sidx, j) = output_programs.at(pidx).at(sidx).at(j); } - for (int j = output_programs.at(pidx).at(sidx).size(); j < biggest_pad; - j++) { + for (size_t j = output_programs.at(pidx).at(sidx).size(); + j < biggest_pad; j++) { output_tensor(pidx, sidx, j) = empty_program; } } diff --git a/tensorflow_quantum/core/ops/tfq_ps_weights_from_symbols_op.cc b/tensorflow_quantum/core/ops/tfq_ps_weights_from_symbols_op.cc index 4a027223e..65c03a77c 100644 --- a/tensorflow_quantum/core/ops/tfq_ps_weights_from_symbols_op.cc +++ b/tensorflow_quantum/core/ops/tfq_ps_weights_from_symbols_op.cc @@ -82,9 +82,9 @@ class TfqPsWeightsFromSymbolOp : public tensorflow::OpKernel { auto DoWork = [&](int start, int end) { for (int i = start; i < end; i++) { Program cur_program = 
programs.at(i); - for (int j = 0; j < cur_program.circuit().moments().size(); j++) { + for (size_t j = 0; j < cur_program.circuit().moments().size(); j++) { Moment cur_moment = cur_program.circuit().moments().at(j); - for (int k = 0; k < cur_moment.operations().size(); k++) { + for (size_t k = 0; k < cur_moment.operations().size(); k++) { Operation cur_op = cur_moment.operations().at(k); if (ignored_symbol_set.contains(cur_op.gate().id())) continue; @@ -146,10 +146,10 @@ class TfqPsWeightsFromSymbolOp : public tensorflow::OpKernel { auto DoWork2 = [&](int start, int end) { for (int i = start; i < end; i++) { for (int j = 0; j < n_symbols; j++) { - for (int k = 0; k < output_results.at(i).at(j).size(); k++) { + for (size_t k = 0; k < output_results.at(i).at(j).size(); k++) { output_tensor(i, j, k) = output_results.at(i).at(j).at(k); } - for (int k = output_results.at(i).at(j).size(); + for (size_t k = output_results.at(i).at(j).size(); k < largest_single_symbol; k++) { output_tensor(i, j, k) = 0.0f; } diff --git a/tensorflow_quantum/core/ops/tfq_simulate_expectation_op.cc b/tensorflow_quantum/core/ops/tfq_simulate_expectation_op.cc index bca6d2f63..210e9e93f 100644 --- a/tensorflow_quantum/core/ops/tfq_simulate_expectation_op.cc +++ b/tensorflow_quantum/core/ops/tfq_simulate_expectation_op.cc @@ -143,7 +143,7 @@ class TfqSimulateExpectationOp : public tensorflow::OpKernel { // Simulate programs one by one. Parallelizing over state vectors // we no longer parallelize over circuits. Each time we encounter a // a larger circuit we will grow the Statevector as necessary. - for (int i = 0; i < fused_circuits.size(); i++) { + for (size_t i = 0; i < fused_circuits.size(); i++) { int nq = num_qubits[i]; if (nq > largest_nq) { @@ -156,10 +156,10 @@ class TfqSimulateExpectationOp : public tensorflow::OpKernel { // the state if there is a possibility that circuit[i] and // circuit[i + 1] produce the same state. ss.SetStateZero(sv); - for (int j = 0; j < fused_circuits[i].size(); j++) { + for (size_t j = 0; j < fused_circuits[i].size(); j++) { qsim::ApplyFusedGate(sim, fused_circuits[i][j], sv); } - for (int j = 0; j < pauli_sums[i].size(); j++) { + for (size_t j = 0; j < pauli_sums[i].size(); j++) { // (#679) Just ignore empty program if (fused_circuits[i].size() == 0) { (*output_tensor)(i, j) = -2.0; @@ -221,7 +221,7 @@ class TfqSimulateExpectationOp : public tensorflow::OpKernel { // no need to update scratch_state since ComputeExpectation // will take care of things for us. ss.SetStateZero(sv); - for (int j = 0; j < fused_circuits[cur_batch_index].size(); j++) { + for (size_t j = 0; j < fused_circuits[cur_batch_index].size(); j++) { qsim::ApplyFusedGate(sim, fused_circuits[cur_batch_index][j], sv); } } diff --git a/tensorflow_quantum/core/ops/tfq_simulate_state_op.cc b/tensorflow_quantum/core/ops/tfq_simulate_state_op.cc index e659800ce..833deb965 100644 --- a/tensorflow_quantum/core/ops/tfq_simulate_state_op.cc +++ b/tensorflow_quantum/core/ops/tfq_simulate_state_op.cc @@ -135,8 +135,8 @@ class TfqSimulateStateOp : public tensorflow::OpKernel { // Simulate programs one by one. Parallelizing over state vectors // we no longer parallelize over circuits. Each time we encounter a - // a larger circuit we will grow the Statevector as nescessary. - for (int i = 0; i < fused_circuits.size(); i++) { + // a larger circuit we will grow the Statevector as necessary. 
+ for (size_t i = 0; i < fused_circuits.size(); i++) { int nq = num_qubits[i]; if (nq > largest_nq) { @@ -145,7 +145,7 @@ class TfqSimulateStateOp : public tensorflow::OpKernel { sv = ss.Create(largest_nq); } ss.SetStateZero(sv); - for (int j = 0; j < fused_circuits[i].size(); j++) { + for (size_t j = 0; j < fused_circuits[i].size(); j++) { qsim::ApplyFusedGate(sim, fused_circuits[i][j], sv); } @@ -194,7 +194,7 @@ class TfqSimulateStateOp : public tensorflow::OpKernel { sv = ss.Create(largest_nq); } ss.SetStateZero(sv); - for (int j = 0; j < fused_circuits[i].size(); j++) { + for (size_t j = 0; j < fused_circuits[i].size(); j++) { qsim::ApplyFusedGate(sim, fused_circuits[i][j], sv); } diff --git a/tensorflow_quantum/core/src/adj_util.cc b/tensorflow_quantum/core/src/adj_util.cc index ceb76b2c1..e15ff8a8c 100644 --- a/tensorflow_quantum/core/src/adj_util.cc +++ b/tensorflow_quantum/core/src/adj_util.cc @@ -38,7 +38,7 @@ void CreateGradientCircuit( const QsimCircuit& circuit, const std::vector& metadata, std::vector>>* partial_fuses, std::vector* grad_gates) { - for (int i = 0; i < metadata.size(); i++) { + for (size_t i = 0; i < metadata.size(); i++) { if (metadata[i].symbol_values.empty()) { continue; } @@ -78,7 +78,7 @@ void CreateGradientCircuit( // PhasedX else if (circuit.gates[i].kind == qsim::Cirq::GateKind::kPhasedXPowGate) { // Process potentially several symbols. - for (int j = 0; j < metadata[i].symbol_values.size(); j++) { + for (size_t j = 0; j < metadata[i].symbol_values.size(); j++) { if (metadata[i].placeholder_names[j] == GateParamNames::kPhaseExponent) { PopulateGradientPhasedXPhasedExponent( @@ -103,7 +103,7 @@ void CreateGradientCircuit( // Process potentially several symbols. bool swapq = circuit.gates[i].swapped; - for (int j = 0; j < metadata[i].symbol_values.size(); j++) { + for (size_t j = 0; j < metadata[i].symbol_values.size(); j++) { if (metadata[i].placeholder_names[j] == GateParamNames::kTheta) { PopulateGradientFsimTheta( metadata[i].symbol_values[j], i, @@ -128,7 +128,7 @@ void CreateGradientCircuit( qsim::Cirq::GateKind::kPhasedISwapPowGate) { // Process potentially several symbols. bool swapq = circuit.gates[i].swapped; - for (int j = 0; j < metadata[i].symbol_values.size(); j++) { + for (size_t j = 0; j < metadata[i].symbol_values.size(); j++) { if (metadata[i].placeholder_names[j] == GateParamNames::kPhaseExponent) { PopulateGradientPhasedISwapPhasedExponent( @@ -159,7 +159,7 @@ void CreateGradientCircuit( partial_fuses->assign(grad_gates->size() + 1, std::vector>({})); - for (int i = 0; i < grad_gates->size(); i++) { + for (size_t i = 0; i < grad_gates->size(); i++) { right = circuit.gates.begin() + (*grad_gates)[i].index; (*partial_fuses)[i] = fuser.FuseGates(qsim::BasicGateFuser::Parameter(), diff --git a/tensorflow_quantum/core/src/circuit_parser_qsim.cc b/tensorflow_quantum/core/src/circuit_parser_qsim.cc index 1024d28c7..8b70ab041 100644 --- a/tensorflow_quantum/core/src/circuit_parser_qsim.cc +++ b/tensorflow_quantum/core/src/circuit_parser_qsim.cc @@ -27,6 +27,7 @@ limitations under the License. 
#include "../qsim/lib/gates_cirq.h" #include "../qsim/lib/io.h" #include "absl/container/flat_hash_map.h" +#include "absl/status/status.h" #include "absl/strings/numbers.h" #include "absl/strings/str_split.h" #include "absl/strings/string_view.h" @@ -58,7 +59,7 @@ inline Status ParseProtoArg( // iterator> const auto arg_v = op.args().find(arg_name); if (arg_v == op.args().end()) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), "Could not find arg: " + arg_name + " in op."); } @@ -71,7 +72,7 @@ inline Status ParseProtoArg( const auto iter = param_map.find(proto_arg.symbol()); if (iter == param_map.end()) { return Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), "Could not find symbol in parameter map: " + proto_arg.symbol()); } @@ -103,7 +104,7 @@ inline Status ParseProtoControls(const Operation& op, absl::StrSplit(control_v_str, ','); if (control_toks.size() != control_v_toks.size()) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), "Mistmatched number of control qubits and control values."); } @@ -123,7 +124,7 @@ inline Status ParseProtoControls(const Operation& op, for (auto tok : control_v_toks) { valid = absl::SimpleAtoi(tok, &tmp); if (!valid) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), "Unparseable control value: " + std::string(tok)); } @@ -186,7 +187,8 @@ inline Status TwoConstantGate( const unsigned int num_qubits, const unsigned int time, QsimCircuit* circuit, std::vector* metadata) { unsigned int q0, q1; - bool unused = absl::SimpleAtoi(op.qubits(0).id(), &q0); + [[maybe_unused]] bool unused; + unused = absl::SimpleAtoi(op.qubits(0).id(), &q0); unused = absl::SimpleAtoi(op.qubits(1).id(), &q1); auto gate = create_f(time, num_qubits - q0 - 1, num_qubits - q1 - 1); Status s = OptionalInsertControls(op, num_qubits, &gate); @@ -212,9 +214,10 @@ inline Status SingleEigenGate( const unsigned int num_qubits, const unsigned int time, QsimCircuit* circuit, std::vector* metadata) { unsigned int q0; - bool unused; + float exp, exp_s, gs; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q0); absl::optional exponent_symbol; @@ -262,8 +265,9 @@ inline Status TwoEigenGate( QsimCircuit* circuit, std::vector* metadata) { unsigned int q0, q1; float exp, exp_s, gs; - bool unused; + Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q0); unused = absl::SimpleAtoi(op.qubits(1).id(), &q1); @@ -401,9 +405,10 @@ inline Status PhasedXGate(const Operation& op, const SymbolMap& param_map, const unsigned int time, QsimCircuit* circuit, std::vector* metadata) { int q0; - bool unused; + float pexp, pexp_s, exp, exp_s, gs; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q0); absl::optional exponent_symbol; @@ -461,9 +466,10 @@ inline Status FsimGate(const Operation& op, const SymbolMap& param_map, QsimCircuit* circuit, std::vector* metadata) { int q0, q1; - bool unused; + float theta, theta_s, phi, phi_s; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q0); unused = absl::SimpleAtoi(op.qubits(1).id(), &q1); @@ -518,9 +524,10 @@ inline Status PhasedISwapGate(const Operation& op, const SymbolMap& param_map, const unsigned int time, QsimCircuit* circuit, std::vector* metadata) { int q0, q1; - bool unused; + float pexp, pexp_s, exp, exp_s; Status u; + [[maybe_unused]] bool unused; unused = 
absl::SimpleAtoi(op.qubits(0).id(), &q0); unused = absl::SimpleAtoi(op.qubits(1).id(), &q1); @@ -595,7 +602,7 @@ tensorflow::Status ParseAppendGate(const Operation& op, auto build_f = func_map.find(op.gate().id()); if (build_f == func_map.end()) { *lookup_succeeded = false; - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("Could not parse gate id: ", op.gate().id(), ". This is likely because a cirq.Channel was " @@ -610,9 +617,10 @@ inline Status AsymmetricDepolarizingChannel(const Operation& op, const unsigned int time, NoisyQsimCircuit* ncircuit) { int q; - bool unused; + float p_x, p_y, p_z; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q); u = ParseProtoArg(op, "p_x", {}, &p_x); @@ -632,9 +640,10 @@ inline Status DepolarizingChannel(const Operation& op, const unsigned int time, NoisyQsimCircuit* ncircuit) { int q; - bool unused; + float p; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q); u = ParseProtoArg(op, "p", {}, &p); @@ -650,9 +659,10 @@ inline Status DepolarizingChannel(const Operation& op, inline Status GADChannel(const Operation& op, const unsigned int num_qubits, const unsigned int time, NoisyQsimCircuit* ncircuit) { int q; - bool unused; + float p, gamma; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q); u = ParseProtoArg(op, "p", {}, &p); @@ -674,7 +684,8 @@ inline Status ResetChannel(const Operation& op, const unsigned int num_qubits, const unsigned int time, NoisyQsimCircuit* ncircuit) { int q; - bool unused; + + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q); auto chan = qsim::Cirq::ResetChannel::Create(time, num_qubits - q - 1); @@ -687,9 +698,10 @@ inline Status AmplitudeDampingChannel(const Operation& op, const unsigned int time, NoisyQsimCircuit* ncircuit) { int q; - bool unused; + float gamma; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q); u = ParseProtoArg(op, "gamma", {}, &gamma); @@ -707,9 +719,10 @@ inline Status PhaseDampingChannel(const Operation& op, const unsigned int time, NoisyQsimCircuit* ncircuit) { int q; - bool unused; + float gamma; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q); u = ParseProtoArg(op, "gamma", {}, &gamma); @@ -728,9 +741,10 @@ inline Status PhaseFlipChannel(const Operation& op, const unsigned int time, NoisyQsimCircuit* ncircuit) { int q; - bool unused; + float p; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q); u = ParseProtoArg(op, "p", {}, &p); @@ -748,9 +762,10 @@ inline Status BitFlipChannel(const Operation& op, const unsigned int num_qubits, const unsigned int time, NoisyQsimCircuit* ncircuit) { int q; - bool unused; + float p; Status u; + [[maybe_unused]] bool unused; unused = absl::SimpleAtoi(op.qubits(0).id(), &q); u = ParseProtoArg(op, "p", {}, &p); @@ -780,7 +795,7 @@ tensorflow::Status ParseAppendChannel(const Operation& op, auto build_f = chan_func_map.find(op.gate().id()); if (build_f == chan_func_map.end()) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("Could not parse channel id: ", op.gate().id())); } @@ -851,7 +866,8 @@ tensorflow::Status QsimCircuitFromProgram( // Convert proto to qsim internal representation. 
circuit->num_qubits = num_qubits; int time = 0; - bool unused; + [[maybe_unused]] bool unused; + // Special case empty. if (num_qubits <= 0) { return ::tensorflow::Status(); diff --git a/tensorflow_quantum/core/src/circuit_parser_qsim_test.cc b/tensorflow_quantum/core/src/circuit_parser_qsim_test.cc index e6ea68e80..811ecd430 100644 --- a/tensorflow_quantum/core/src/circuit_parser_qsim_test.cc +++ b/tensorflow_quantum/core/src/circuit_parser_qsim_test.cc @@ -23,6 +23,7 @@ limitations under the License. #include "../qsim/lib/circuit_noisy.h" #include "../qsim/lib/gates_cirq.h" #include "absl/container/flat_hash_map.h" +#include "absl/status/status.h" #include "absl/strings/numbers.h" #include "gtest/gtest.h" #include "tensorflow/core/lib/core/status.h" @@ -64,7 +65,7 @@ Arg MakeControlArg(const std::string& val) { } inline void AssertControlEqual(const QsimGate& a, const QsimGate& b) { - for (int i = 0; i < a.controlled_by.size(); i++) { + for (size_t i = 0; i < a.controlled_by.size(); i++) { ASSERT_EQ(a.controlled_by[i], b.controlled_by[i]); } ASSERT_EQ(a.cmask, b.cmask); @@ -89,14 +90,14 @@ inline void AssertOneQubitEqual(const QsimGate& a, const QsimGate& b) { inline void AssertChannelEqual(const QsimChannel& a, const QsimChannel& b) { ASSERT_EQ(a.size(), b.size()); - for (int i = 0; i < a.size(); i++) { + for (size_t i = 0; i < a.size(); i++) { ASSERT_EQ(a[i].kind, b[i].kind); ASSERT_EQ(a[i].unitary, b[i].unitary); ASSERT_NEAR(a[i].prob, b[i].prob, 1e-5); auto a_k_ops = a[i].ops; auto b_k_ops = b[i].ops; EXPECT_EQ(a_k_ops.size(), b_k_ops.size()); - for (int j = 0; j < a_k_ops.size(); j++) { + for (size_t j = 0; j < a_k_ops.size(); j++) { AssertOneQubitEqual(a_k_ops[j], b_k_ops[j]); } } diff --git a/tensorflow_quantum/core/src/program_resolution.cc b/tensorflow_quantum/core/src/program_resolution.cc index 0fbda9368..86e3ab897 100644 --- a/tensorflow_quantum/core/src/program_resolution.cc +++ b/tensorflow_quantum/core/src/program_resolution.cc @@ -20,6 +20,7 @@ limitations under the License.
#include "absl/container/flat_hash_map.h" #include "absl/container/flat_hash_set.h" +#include "absl/status/status.h" #include "absl/strings/str_cat.h" #include "absl/strings/str_join.h" #include "absl/strings/str_split.h" @@ -66,17 +67,17 @@ Status RegisterQubits( } if (splits.size() != 2) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("Unable to parse qubit: ", qb)); } if (!absl::SimpleAtoi(splits[0], &r)) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("Unable to parse qubit: ", qb)); } if (!absl::SimpleAtoi(splits[1], &c)) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("Unable to parse qubit: ", qb)); } @@ -172,7 +173,7 @@ Status ResolveQubitIds(Program* program, unsigned int* num_qubits, const auto result = id_to_index.find(pair.qubit_id()); if (result == id_to_index.end()) { return Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), "Found a Pauli sum operating on qubits not found in circuit."); } @@ -264,7 +265,7 @@ Status ResolveQubitIds(Program* program, unsigned int* num_qubits, visited_qubits.erase(qubit.id()); const auto result = id_to_index.find(qubit.id()); if (result == id_to_index.end()) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), "A paired circuit contains qubits not found in " "reference circuit."); @@ -287,7 +288,7 @@ Status ResolveQubitIds(Program* program, unsigned int* num_qubits, visited_qubits.erase(id); const auto result = id_to_index.find(id); if (result == id_to_index.end()) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), "A paired circuit contains qubits not found in " "reference circuit."); @@ -302,7 +303,7 @@ Status ResolveQubitIds(Program* program, unsigned int* num_qubits, } if (!visited_qubits.empty()) { return Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), "A reference circuit contains qubits not found in paired circuit."); } @@ -323,7 +324,7 @@ Status ResolveSymbols( if (iter == param_map.end()) { if (resolve_all) { return Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), "Could not find symbol in parameter map: " + arg.symbol()); } @@ -364,7 +365,7 @@ Status CheckMPSSupported(const Program& program) { const int total_num_qubits = qubits.size() + control_ids.size(); if (total_num_qubits > 2) { return Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), absl::StrCat("1D operations only support 1 and 2 qubit gates. " "Found: ", @@ -372,7 +373,7 @@ Status CheckMPSSupported(const Program& program) { } if (total_num_qubits == 2) { - int j = 0; + size_t j = 0; std::vector qids(2, -1234); for (; j < qubits.size(); j++) { (void)absl::SimpleAtoi(qubits[j].id(), &qids[j]); @@ -383,7 +384,7 @@ Status CheckMPSSupported(const Program& program) { // Are the two qubits not neighbors? if (std::abs((int)qids[0] - (int)qids[1]) > 1) { - return Status(static_cast( + return Status(static_cast( absl::StatusCode::kInvalidArgument), "A program is not in 1D topology. 
It contains an" " operation with qubits not neighbors each other."); diff --git a/tensorflow_quantum/core/src/program_resolution_test.cc b/tensorflow_quantum/core/src/program_resolution_test.cc index 2a4e61151..450d5d1cf 100644 --- a/tensorflow_quantum/core/src/program_resolution_test.cc +++ b/tensorflow_quantum/core/src/program_resolution_test.cc @@ -20,6 +20,7 @@ limitations under the License. #include #include "absl/container/flat_hash_map.h" +#include "absl/status/status.h" #include "gtest/gtest.h" #include "tensorflow/core/lib/core/status.h" #include "tensorflow_quantum/core/proto/program.pb.h" @@ -235,7 +236,7 @@ TEST(ProgramResolutionTest, ResolveQubitIdsInvalidControlQubit) { .mutable_arg_value() ->set_string_value("junk"); EXPECT_EQ(ResolveQubitIds(&program, &qubit_count), - tensorflow::Status(static_cast( + tensorflow::Status(static_cast( absl::StatusCode::kInvalidArgument), "Unable to parse qubit: junk")); } @@ -252,7 +253,7 @@ TEST(ProgramResolutionTest, ResolveQubitIdsInvalidQubit) { ->mutable_qubits(0) ->set_id("junk"); EXPECT_EQ(ResolveQubitIds(&program, &qubit_count), - tensorflow::Status(static_cast( + tensorflow::Status(static_cast( absl::StatusCode::kInvalidArgument), "Unable to parse qubit: junk")); } @@ -302,7 +303,7 @@ TEST(ProgramResolutionTest, ResolveQubitIdsWithInvalidPauliSum) { EXPECT_EQ(ResolveQubitIds(&program, &qubit_count, &p_sums), tensorflow::Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), "Found a Pauli sum operating on qubits not found in circuit.")); } @@ -376,7 +377,7 @@ TEST(ProgramResolutionTest, ResolveQubitIdsMultiProgramInvalid) { ->set_id("junk"); std::vector others = {other, other}; EXPECT_EQ(ResolveQubitIds(&program, &qubit_count, &others), - tensorflow::Status(static_cast( + tensorflow::Status(static_cast( absl::StatusCode::kInvalidArgument), "Unable to parse qubit: junk")); } @@ -397,7 +398,7 @@ TEST(ProgramResolutionTest, ResolveQubitIdsMultiProgramInvalidControl) { ->set_string_value("junk"); std::vector others = {other, other}; EXPECT_EQ(ResolveQubitIds(&program, &qubit_count, &others), - tensorflow::Status(static_cast( + tensorflow::Status(static_cast( absl::StatusCode::kInvalidArgument), "Unable to parse qubit: junk")); } @@ -418,7 +419,7 @@ TEST(ProgramResolutionTest, ResolveQubitIdsMultiProgramMismatch) { EXPECT_EQ( ResolveQubitIds(&program, &qubit_count, &others), tensorflow::Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), "A paired circuit contains qubits not found in reference circuit.")); } @@ -441,7 +442,7 @@ TEST(ProgramResolutionTest, ResolveQubitIdsMultiProgramMismatchControl) { EXPECT_EQ( ResolveQubitIds(&program, &qubit_count, &others), tensorflow::Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), "A paired circuit contains qubits not found in reference circuit.")); } @@ -462,7 +463,7 @@ TEST(ProgramResolutionTest, ResolveQubitIdsMultiProgramSmaller) { EXPECT_EQ( ResolveQubitIds(&program, &qubit_count, &others), tensorflow::Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), "A reference circuit contains qubits not found in paired circuit.")); } @@ -485,7 +486,7 @@ TEST(ProgramResolutionTest, ResolveQubitIdsMultiProgramSmallerControl) { EXPECT_EQ( ResolveQubitIds(&program, &qubit_count, &others), tensorflow::Status( - static_cast( + static_cast( absl::StatusCode::kInvalidArgument), "A reference circuit contains qubits not found in paired circuit.")); } @@ -546,7 +547,7 @@ TEST(ProgramResolutionTest, 
ResolveSymbolsStrictPartial) { const absl::flat_hash_map<std::string, std::pair<int, float>> param_map = { {"v1", {0, 1.0}}}; EXPECT_EQ(ResolveSymbols(param_map, &symbol_program, true), - Status(static_cast( + Status(static_cast( absl::StatusCode::kInvalidArgument), "Could not find symbol in parameter map: v2")); } @@ -586,7 +587,7 @@ TEST(ProgramResolutionTest, CheckQubitsIn1DFailedByOpWithMoreThan2Qubits) { ASSERT_TRUE(google::protobuf::TextFormat::ParseFromString( three_qubit_op_program, &program_with_3qubit_op)); EXPECT_EQ(CheckMPSSupported(program_with_3qubit_op), - Status(static_cast( + Status(static_cast( absl::StatusCode::kInvalidArgument), "1D operations only support 1 and 2 qubit gates. " "Found: 3 qubit gate.")); @@ -598,7 +599,7 @@ TEST(ProgramResolutionTest, ASSERT_TRUE(google::protobuf::TextFormat::ParseFromString( valid_program, &program_with_3qubit_op)); EXPECT_EQ(CheckMPSSupported(program_with_3qubit_op), - Status(static_cast( + Status(static_cast( absl::StatusCode::kInvalidArgument), "1D operations only support 1 and 2 qubit gates. " "Found: 3 qubit gate.")); @@ -609,7 +610,7 @@ TEST(ProgramResolutionTest, CheckQubitsIn1DFailedByNot1DTopology) { ASSERT_TRUE(google::protobuf::TextFormat::ParseFromString( resolved_qubit_program_not_1d, &program_not_1d)); EXPECT_EQ(CheckMPSSupported(program_not_1d), - Status(static_cast( + Status(static_cast( absl::StatusCode::kInvalidArgument), "A program is not in 1D topology. It contains an" " operation with qubits not neighbors each other.")); diff --git a/tensorflow_quantum/core/src/util_qsim.h b/tensorflow_quantum/core/src/util_qsim.h index f08715343..adf38705e 100644 --- a/tensorflow_quantum/core/src/util_qsim.h +++ b/tensorflow_quantum/core/src/util_qsim.h @@ -453,13 +453,13 @@ static void BalanceTrajectory(const std::vector<std::vector<int>>& num_samples, std::vector<int> rep_limits(num_samples.size(), -1); std::vector<int> height(num_threads, 0); - for (int i = 0; i < num_samples.size(); i++) { - for (int j = 0; j < num_samples[i].size(); j++) { + for (size_t i = 0; i < num_samples.size(); i++) { + for (size_t j = 0; j < num_samples[i].size(); j++) { rep_limits[i] = std::max(rep_limits[i], num_samples[i][j]); } } int prev_max_height = -1; - for (int j = 0; j < num_samples.size(); j++) { + for (size_t j = 0; j < num_samples.size(); j++) { int run_ceiling = ((rep_limits[j] + num_threads - 1) / num_threads); int num_lo = num_threads * run_ceiling - rep_limits[j]; int num_hi = num_threads - num_lo; @@ -498,7 +498,7 @@ static void BalanceTrajectory(const int& num_samples, const int& num_threads, std::vector<int> height(num_threads, 0); int prev_max_height = -1; - for (int j = 0; j < (*thread_offsets)[0].size(); j++) { + for (size_t j = 0; j < (*thread_offsets)[0].size(); j++) { int run_ceiling = ((num_samples + num_threads - 1) / num_threads); int num_lo = num_threads * run_ceiling - num_samples; int num_hi = num_threads - num_lo; diff --git a/tensorflow_quantum/core/src/util_qsim_test.cc b/tensorflow_quantum/core/src/util_qsim_test.cc index b4f630f3c..400c16d76 100644 --- a/tensorflow_quantum/core/src/util_qsim_test.cc +++ b/tensorflow_quantum/core/src/util_qsim_test.cc @@ -646,13 +646,13 @@ static void AssertWellBalanced(const std::vector<std::vector<int>>& n_reps, const int& num_threads, const std::vector<std::vector<int>>& offsets) { auto max_work = std::vector<int>(n_reps.size(), -1); - for (int i = 0; i < n_reps.size(); i++) { - for (int j = 0; j < n_reps[0].size(); j++) { + for (size_t i = 0; i < n_reps.size(); i++) { + for (size_t j = 0; j < n_reps[0].size(); j++) { max_work[i] = std::max(max_work[i], n_reps[i][j]); } } - for (int i
= 0; i < n_reps.size(); i++) { + for (size_t i = 0; i < n_reps.size(); i++) { int sum = 0; int prev_local_work = 0; for (int k = 0; k < num_threads; k++) { diff --git a/tensorflow_quantum/datasets/spin_system_test.py b/tensorflow_quantum/datasets/spin_system_test.py index 917fd0f73..200120a65 100644 --- a/tensorflow_quantum/datasets/spin_system_test.py +++ b/tensorflow_quantum/datasets/spin_system_test.py @@ -28,7 +28,8 @@ from tensorflow_quantum.datasets.spin_system import SpinSystemInfo -class TFIChainTest(tf.test.TestCase): +# TODO(#748): Inherit this class from tf.test.TestCase after fixing the issue. +class TFIChainTest: """Testing tfi_chain.""" # pylint: disable=C0103 From 25313f41e8350ef90153bef0f03c9d576d34b1dd Mon Sep 17 00:00:00 2001 From: Sinestro38 Date: Wed, 24 May 2023 07:55:43 +0000 Subject: [PATCH 9/9] remove release notes md --- tensorflow_quantum/release.md | 37 ----------------------------------- 1 file changed, 37 deletions(-) delete mode 100644 tensorflow_quantum/release.md diff --git a/tensorflow_quantum/release.md b/tensorflow_quantum/release.md deleted file mode 100644 index 46d844e7a..000000000 --- a/tensorflow_quantum/release.md +++ /dev/null @@ -1,37 +0,0 @@ -# Release 0.8.0 -# Breaking Changes -- Build, compilation, and packaging: - - The TensorFlow dependency has been upgraded from 2.7.0 to 2.11.0: - - TensorFlow Quantum is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See [the GNU C++ Library docs on Dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html). - - TensorFlow Quantum is now compiled with `-std=c++17`, see [install.md](/docs/install.md) for build instructions. -- Cirq dependency has been upgraded from `0.13.1` to `~=1.0` - - `cirq_google.XMON` was deprecated : https://github.com/quantumlib/Cirq/issues/4856 - - `QuantumEngineSampler` was deprecated : https://github.com/quantumlib/Cirq/issues/5371 - - So, we need [ProcessorSampler() for testing](https://github.com/quantumlib/Cirq/blob/master/cirq-google/cirq_google/engine/processor_sampler_test.py) - - `cirq.CNOT` interface was changed. - - https://quantumai.google/reference/python/cirq/CNOT - - No more control, target argument. - - `cirq.SingleQubitGate` was deprecated. - - For testing, use `cirq.testing.SingleQubitGate` : https://github.com/quantumlib/Cirq/pull/5272/files - - For implementation, use `cirq.Gate`. - -# Major Features and Improvements -- Significant performance improvements by introducing cuQuantum support for circuit execution on Nvidia GPUs: - - TensorFlow Quantum Keras layers can now be executed on GPU by setting the optional arguement `use_cuquantum=True` at layer instantiation. Examples: - - `tfq.layers.Expectation(use_cuquantum=True)` - - `tfq.layers.SampledExpectation(use_cuquantum=True)` (note that cuQuantum runtime is unsupported for any noisy circuit operations - - `tfq.layers.State(use_cuquantum=True)` - - `tfq.layers.Sample(use_cuquantum=True)` - - `tfq.layers.PQC(model_circuit, operators, use_cuquantum=True)` - - `tfq.layers.ControlledPQC(model_circuit, operators, use_cuquantum=True)` - - Important notes: - - CuQuantum execution is currently only supported for source distributions meaning that the user must build TensorFlow Quantum & `tensorFlow-cpu` from source following the instructions in [install.md](/docs/install.md#build-from-source).
- - Ensure that the first entry is "N" in the `configure.sh` script at [this step](/docs/install.md#6-build-the-tensorflow-quantum-pip-package) of building. This ensures that you build upon `tensorflow-cpu` as `tensorflow-gpu` is unnecessary for CuQuantum support in TensorFlow Quantum. - - The cuQuantum SDK must be installed locally. See [installation instructions](https://docs.nvidia.com/cuda/cuquantum/custatevec/getting_started.html) for details. As part of the installation process, ensure that the `CUQUANTUM_ROOT` environment variable is set (referred to in the installation instructions). If not set, bazel will attempt to automatically locate the folder containing the cuQuantum installation upon running `configure.sh` at [this step](/docs/install.md#6-build-the-tensorflow-quantum-pip-package). - - Tested on Titan, Ampere and Volta Nvidia GPU architectures. Note that Pascal GPU architectures are not supported, see documentation to [check whether your GPU is compatible with cuQuantum](https://docs.nvidia.com/cuda/cuquantum/getting_started.html#custatevec) - - Quantum concurrency (global context option) should be turned off when `use_cuquantum=True`. This can be done by running: `tfq.python.quantum_context.set_quantum_concurrent_op_mode(False)` - - - -# Thanks to our Contributors -This release contains contributions from many people at Google, Nvidia, as well as:
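
For reference, a minimal usage sketch of the cuQuantum execution path described in the release notes above (assuming a from-source TensorFlow Quantum build with cuQuantum enabled, a local cuQuantum SDK, and a supported Nvidia GPU; the qubit, symbol, and circuit below are illustrative only, and the same `use_cuquantum=True` flag applies to the other layers listed in the notes):

```python
# Illustrative sketch only: assumes TFQ was built from source with cuQuantum
# support and that a compatible Nvidia GPU and cuQuantum SDK are available.
import cirq
import sympy
import tensorflow_quantum as tfq

# Per the notes above, disable global op concurrency for cuQuantum runs.
tfq.python.quantum_context.set_quantum_concurrent_op_mode(False)

qubit = cirq.GridQubit(0, 0)           # illustrative qubit
theta = sympy.Symbol("theta")          # illustrative symbol
circuit = cirq.Circuit(cirq.rx(theta)(qubit))

# Expectation layer executed via the cuQuantum-backed GPU path.
expectation_layer = tfq.layers.Expectation(use_cuquantum=True)
output = expectation_layer(
    circuit,
    symbol_names=[theta],
    symbol_values=[[0.5]],
    operators=[cirq.Z(qubit)])
print(output)  # shape (1, 1) tensor of <Z> expectation values
```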