Add Unitary and TensorProduct block encodings #1094

Merged
merged 12 commits into from
Jul 2, 2024
2 changes: 2 additions & 0 deletions dev_tools/autogenerate-bloqs-notebooks-v2.py
@@ -570,6 +570,8 @@
qualtran.bloqs.block_encoding.lcu_block_encoding._LCU_BLOCK_ENCODING_DOC,
qualtran.bloqs.block_encoding.lcu_block_encoding._LCU_ZERO_STATE_BLOCK_ENCODING_DOC,
qualtran.bloqs.block_encoding.chebyshev_polynomial._CHEBYSHEV_BLOQ_DOC,
qualtran.bloqs.block_encoding.unitary._UNITARY_DOC,
qualtran.bloqs.block_encoding.tensor_product._TENSOR_PRODUCT_DOC,
],
directory=f'{SOURCE_DIR}/bloqs/block_encoding/',
),
2 changes: 2 additions & 0 deletions qualtran/bloqs/block_encoding/__init__.py
@@ -21,3 +21,5 @@
LCUBlockEncoding,
LCUBlockEncodingZeroState,
)
from qualtran.bloqs.block_encoding.tensor_product import TensorProduct
from qualtran.bloqs.block_encoding.unitary import Unitary
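
For orientation, a minimal usage sketch of the two new exports — assuming this branch is installed; the constructions mirror the notebook examples below:

```python
from qualtran.bloqs.basic_gates import Hadamard, TGate
from qualtran.bloqs.block_encoding import TensorProduct, Unitary

# Trivial (alpha=1, ancilla_bitsize=0, epsilon=0) block encoding of a single unitary.
u = Unitary(TGate())

# Tensor product of two such encodings: B[T ⊗ H] = B[T] ⊗ B[H].
tp = TensorProduct((Unitary(TGate()), Unitary(Hadamard())))
```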
298 changes: 297 additions & 1 deletion qualtran/bloqs/block_encoding/block_encoding.ipynb
@@ -594,6 +594,302 @@
"show_call_graph(chebyshev_poly_g)\n",
"show_counts_sigma(chebyshev_poly_sigma)"
]
},
{
"cell_type": "markdown",
"id": "700368a9",
"metadata": {
"cq.autogen": "Unitary.bloq_doc.md"
},
"source": [
"## `Unitary`\n",
"Trivial block encoding of a unitary operator.\n",
"\n",
"Builds the block encoding as\n",
"$\n",
" B[U] = U\n",
"$\n",
"where $U$ is a unitary operator. Here, $B[U]$ is a $(1, 0, 0)$-block encoding of $U$.\n",
"\n",
"#### Parameters\n",
" - `U`: The unitary operator to block-encode.\n",
" - `alpha`: The normalization factor (default 1).\n",
" - `ancilla_bitsize`: The number of ancilla bits (default 0).\n",
" - `resource_bitsize`: The number of resource bits (default 0).\n",
" - `epsilon`: The precision parameter (default 0). \n",
"\n",
"#### Registers\n",
" - `system`: The system register.\n",
" - `ancilla`: The ancilla register (present only if bitsize > 0).\n",
" - `resource`: The resource register (present only if bitsize > 0).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "69e80c37",
"metadata": {
"cq.autogen": "Unitary.bloq_doc.py"
},
"outputs": [],
"source": [
"from qualtran.bloqs.block_encoding import Unitary"
]
},
{
"cell_type": "markdown",
"id": "853ccd95",
"metadata": {
"cq.autogen": "Unitary.example_instances.md"
},
"source": [
"### Example Instances"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af1f5871",
"metadata": {
"cq.autogen": "Unitary.unitary_block_encoding"
},
"outputs": [],
"source": [
"from qualtran.bloqs.basic_gates import TGate\n",
"\n",
"unitary_block_encoding = Unitary(TGate())"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "421a0137",
"metadata": {
"cq.autogen": "Unitary.unitary_block_encoding_override"
},
"outputs": [],
"source": [
"from attrs import evolve\n",
"\n",
"from qualtran.bloqs.basic_gates import TGate\n",
"\n",
"unitary_block_encoding_override = evolve(\n",
" Unitary(TGate()), alpha=0.5, ancilla_bitsize=2, resource_bitsize=1, epsilon=0.01\n",
")"
]
},
{
"cell_type": "markdown",
"id": "85b9c065",
"metadata": {
"cq.autogen": "Unitary.graphical_signature.md"
},
"source": [
"#### Graphical Signature"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "47675855",
"metadata": {
"cq.autogen": "Unitary.graphical_signature.py"
},
"outputs": [],
"source": [
"from qualtran.drawing import show_bloqs\n",
"show_bloqs([unitary_block_encoding, unitary_block_encoding_override],\n",
" ['`unitary_block_encoding`', '`unitary_block_encoding_override`'])"
]
},
{
"cell_type": "markdown",
"id": "36167a23",
"metadata": {
"cq.autogen": "Unitary.call_graph.md"
},
"source": [
"### Call Graph"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "97a218a8",
"metadata": {
"cq.autogen": "Unitary.call_graph.py"
},
"outputs": [],
"source": [
"from qualtran.resource_counting.generalizers import ignore_split_join\n",
"unitary_block_encoding_g, unitary_block_encoding_sigma = unitary_block_encoding.call_graph(max_depth=1, generalizer=ignore_split_join)\n",
"show_call_graph(unitary_block_encoding_g)\n",
"show_counts_sigma(unitary_block_encoding_sigma)"
]
},
{
"cell_type": "markdown",
"id": "2c274284",
"metadata": {
"cq.autogen": "TensorProduct.bloq_doc.md"
},
"source": [
"## `TensorProduct`\n",
"Tensor product of a sequence of block encodings.\n",
"\n",
"Builds the block encoding as\n",
"$$\n",
" B[U_1 ⊗ U_2 ⊗ \\cdots ⊗ U_n] = B[U_1] ⊗ B[U_2] ⊗ \\cdots ⊗ B[U_n]\n",
"$$\n",
"\n",
"When each $B[U_i]$ is a $(\\alpha_i, a_i, \\epsilon_i)$-block encoding of $U_i$, we have that\n",
"$B[U_1 ⊗ \\cdots ⊗ U_n]$ is a $(\\prod_i \\alpha_i, \\sum_i a_i, \\sum_i \\alpha_i \\epsilon_i)$-block\n",
"encoding of $U_1 ⊗ \\cdots ⊗ U_n$.\n",
"\n",
"#### Parameters\n",
" - `block_encodings`: A sequence of block encodings. \n",
"\n",
"#### Registers\n",
" - `system`: The system register.\n",
" - `ancilla`: The ancilla register (present only if bitsize > 0).\n",
" - `resource`: The resource register (present only if bitsize > 0). \n",
"\n",
"#### References\n",
" - [Quantum algorithms: A survey of applications and end-to-end complexities](https://arxiv.org/abs/2310.03011). Dalzell et al. (2023). Ch. 10.2.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0940792a",
"metadata": {
"cq.autogen": "TensorProduct.bloq_doc.py"
},
"outputs": [],
"source": [
"from qualtran.bloqs.block_encoding import TensorProduct"
]
},
{
"cell_type": "markdown",
"id": "9199daa6",
"metadata": {
"cq.autogen": "TensorProduct.example_instances.md"
},
"source": [
"### Example Instances"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "488f9bb9",
"metadata": {
"cq.autogen": "TensorProduct.tensor_product_block_encoding"
},
"outputs": [],
"source": [
"from qualtran.bloqs.basic_gates import Hadamard, TGate\n",
"from qualtran.bloqs.block_encoding.unitary import Unitary\n",
"\n",
"tensor_product_block_encoding = TensorProduct((Unitary(TGate()), Unitary(Hadamard())))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b1b4b8c",
"metadata": {
"cq.autogen": "TensorProduct.tensor_product_block_encoding_override"
},
"outputs": [],
"source": [
"from attrs import evolve\n",
"\n",
"from qualtran.bloqs.basic_gates import CNOT, TGate\n",
"from qualtran.bloqs.block_encoding.unitary import Unitary\n",
"\n",
"u1 = evolve(Unitary(TGate()), alpha=0.5, ancilla_bitsize=2, resource_bitsize=1, epsilon=0.01)\n",
"u2 = evolve(Unitary(CNOT()), alpha=0.5, ancilla_bitsize=1, resource_bitsize=1, epsilon=0.1)\n",
"tensor_product_block_encoding_override = TensorProduct((u1, u2))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e41f94fa",
"metadata": {
"cq.autogen": "TensorProduct.tensor_product_block_encoding_symb"
},
"outputs": [],
"source": [
"import sympy\n",
"\n",
"from qualtran.bloqs.basic_gates import Hadamard, TGate\n",
"from qualtran.bloqs.block_encoding.unitary import Unitary\n",
"\n",
"alpha1 = sympy.Symbol('alpha1')\n",
"a1 = sympy.Symbol('a1')\n",
"eps1 = sympy.Symbol('eps1')\n",
"alpha2 = sympy.Symbol('alpha2')\n",
"a2 = sympy.Symbol('a2')\n",
"eps2 = sympy.Symbol('eps2')\n",
"tensor_product_block_encoding_symb = TensorProduct(\n",
" (\n",
" Unitary(TGate(), alpha=alpha1, ancilla_bitsize=a1, epsilon=eps1),\n",
" Unitary(Hadamard(), alpha=alpha2, ancilla_bitsize=a2, epsilon=eps2),\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"id": "605e1129",
"metadata": {
"cq.autogen": "TensorProduct.graphical_signature.md"
},
"source": [
"#### Graphical Signature"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23f6bacf",
"metadata": {
"cq.autogen": "TensorProduct.graphical_signature.py"
},
"outputs": [],
"source": [
"from qualtran.drawing import show_bloqs\n",
"show_bloqs([tensor_product_block_encoding, tensor_product_block_encoding_override, tensor_product_block_encoding_symb],\n",
" ['`tensor_product_block_encoding`', '`tensor_product_block_encoding_override`', '`tensor_product_block_encoding_symb`'])"
]
},
{
"cell_type": "markdown",
"id": "c6b0e68e",
"metadata": {
"cq.autogen": "TensorProduct.call_graph.md"
},
"source": [
"### Call Graph"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bfab456d",
"metadata": {
"cq.autogen": "TensorProduct.call_graph.py"
},
"outputs": [],
"source": [
"from qualtran.resource_counting.generalizers import ignore_split_join\n",
"tensor_product_block_encoding_g, tensor_product_block_encoding_sigma = tensor_product_block_encoding.call_graph(max_depth=1, generalizer=ignore_split_join)\n",
"show_call_graph(tensor_product_block_encoding_g)\n",
"show_counts_sigma(tensor_product_block_encoding_sigma)"
]
}
],
"metadata": {
@@ -612,7 +612,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.2"
}
},
"nbformat": 4,
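As a quick sanity check of the parameter-composition rule stated in the `TensorProduct` docstring above, here is a short plain-Python sketch using the numbers from the `tensor_product_block_encoding_override` example. The only assumption is that the bloq's own `alpha`, `ancilla_bitsize`, and `epsilon` properties (added to the base class in this PR) report the same composed values.

```python
import math

# (alpha, ancilla_bitsize, epsilon) of the two constituent encodings from the
# `tensor_product_block_encoding_override` cell: u1 = T gate, u2 = CNOT.
alphas = [0.5, 0.5]
ancillas = [2, 1]
epsilons = [0.01, 0.1]

# Docstring rule: B[U1 ⊗ U2] is a (prod_i alpha_i, sum_i a_i, sum_i alpha_i * eps_i)-block encoding.
alpha = math.prod(alphas)                               # 0.25
ancilla_bitsize = sum(ancillas)                         # 3
epsilon = sum(a * e for a, e in zip(alphas, epsilons))  # 0.5*0.01 + 0.5*0.1 = 0.055

# TensorProduct((u1, u2)) is expected (not verified here) to expose these same values
# through its alpha / ancilla_bitsize / epsilon properties.
print(alpha, ancilla_bitsize, epsilon)
```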
26 changes: 26 additions & 0 deletions qualtran/bloqs/block_encoding/block_encoding_base.py
@@ -16,6 +16,7 @@

from qualtran import Bloq, BloqDocSpec, Register
from qualtran.bloqs.block_encoding.lcu_select_and_prepare import PrepareOracle
from qualtran.symbolics import SymbolicFloat, SymbolicInt


class BlockEncoding(Bloq):
@@ -67,6 +68,31 @@ class `BlockEncoding` bloq, which expects values for $\alpha$, $\epsilon$,
def pretty_name(self) -> str:
return 'B[H]'

@property
def alpha(self) -> SymbolicFloat:
"""The normalization constant."""
raise NotImplementedError

@property
def system_bitsize(self) -> SymbolicInt:
"""The number of qubits that represent the system being block encoded."""
raise NotImplementedError

@property
def ancilla_bitsize(self) -> SymbolicInt:
"""The number of ancilla qubits."""
raise NotImplementedError

@property
def resource_bitsize(self) -> SymbolicInt:
"""The number of resource qubits not counted in ancillas."""
raise NotImplementedError

@property
def epsilon(self) -> SymbolicFloat:
"""The precision to which the block encoding is to be prepared."""
raise NotImplementedError
Comment on lines +91 to +94
Collaborator: Make these abc.abstractmethod and make the base class derive from metaclass=abc.ABCMeta instead of raising NotImplementedError.

Contributor Author: My concern is that existing instances of BlockEncoding then could not be instantiated unless we changed them now to implement these methods. Do you agree?

Collaborator: Okay, that makes sense. We can open an issue so we can do this change after the migration is completed.

Contributor Author: I will open this issue at merge time.


@property
@abc.abstractmethod
def selection_registers(self) -> Tuple[Register, ...]:
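
For context on the review thread above, a sketch of what the reviewer's abc.abstractmethod suggestion might look like. This is not part of this PR; the class name, property names, docstrings, and imports mirror the diff above, and, as the author notes, every existing BlockEncoding subclass would then have to implement these properties before it could be instantiated.

```python
import abc

from qualtran import Bloq
from qualtran.symbolics import SymbolicFloat, SymbolicInt


# Sketch of the reviewer's suggestion only -- NOT what this PR merges.
class BlockEncoding(Bloq, metaclass=abc.ABCMeta):
    """Abstract-base-class variant: subclasses must override these properties."""

    @property
    @abc.abstractmethod
    def alpha(self) -> SymbolicFloat:
        """The normalization constant."""

    @property
    @abc.abstractmethod
    def system_bitsize(self) -> SymbolicInt:
        """The number of qubits that represent the system being block encoded."""

    @property
    @abc.abstractmethod
    def ancilla_bitsize(self) -> SymbolicInt:
        """The number of ancilla qubits."""

    @property
    @abc.abstractmethod
    def resource_bitsize(self) -> SymbolicInt:
        """The number of resource qubits not counted in ancillas."""

    @property
    @abc.abstractmethod
    def epsilon(self) -> SymbolicFloat:
        """The precision to which the block encoding is to be prepared."""
```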