Constrained minimizer #258

Status: Open · wants to merge 4 commits into main

48 changes: 48 additions & 0 deletions bqskit/ir/opt/constrained.py
@@ -0,0 +1,48 @@
"""This module implements the Minimizer base class."""
from __future__ import annotations

import abc
from typing import TYPE_CHECKING

import numpy as np
import numpy.typing as npt

from bqskit.qis.unitary.unitary import RealVector

if TYPE_CHECKING:
from bqskit.ir.opt.cost.function import CostFunction


class ConstrainedMinimizer(abc.ABC):
    """
    The ConstrainedMinimizer class.

    A constrained minimizer finds the parameters for a circuit template
    that minimize some CostFunction while also satisfying some constraint
    CostFunction.
    """

    @abc.abstractmethod
    def constrained_minimize(
        self,
        cost: CostFunction,
        cstr: CostFunction,

alonkukl (Contributor) commented on Jul 15, 2024:
What is the difference between the constraint and the cost? Should we minimize both of them? Maybe the constraint should be a Boolean function, meaning that it returns true or false?
You should also add more comments about what types of constraints can be implemented.

Author replied:
The `cstr` cost function will almost always be a unitary distance/fidelity based cost, while the `cost` cost function quantifies some desirable quality of the output circuit to optimize (i.e. it will not be distance based).

The choice to minimize them both comes from the general form of constrained optimization problems. It still makes sense to use a scalar-valued cost for the constraint, because it's likely we would want to know how far a solution is from satisfying the constraint (or we may want to shift some success threshold).

I've added some details to the docstring about what the constraint should look like. The cost function is left more open-ended.
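
To make the cost/constraint split concrete, here is a toy sketch. The plain callables below are illustrative stand-ins, not actual BQSKit CostFunction instances; `target` and `U` are assumptions made only for the example.

```python
import numpy as np

# Toy target unitary and a one-parameter "circuit" (an RZ rotation).
target = np.eye(2, dtype=complex)

def U(x: np.ndarray) -> np.ndarray:
    return np.diag([np.exp(-0.5j * x[0]), np.exp(0.5j * x[0])])

def cstr(x: np.ndarray) -> float:
    # Constraint role: a Hilbert-Schmidt-style unitary distance.
    # Near zero means the circuit still implements the target.
    d = target.shape[0]
    return 1.0 - abs(np.trace(target.conj().T @ U(x))) / d

def cost(x: np.ndarray) -> float:
    # Objective role: a circuit-quality measure, e.g. driving rotation
    # angles toward zero so gates can later be simplified away.
    return float(np.sum(np.abs(x)))
```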

alonkukl (Contributor) commented on Jul 16, 2024:
I'm sorry, I'm still confused.

Does this abstract function try to minimize the cost, the constraint, or both? I assume both, but then why would one want two different cost functions to minimize? Do they have different weights? Why not combine them into a single cost function?

When you say "satisfying some `constraint` CostFunction", that implies it is a Boolean function (you can have a distance with a threshold, which will produce a Boolean value...).

Author replied:
I'd like to preface by saying that these design choices are based on how off-the-shelf constrained optimizers work. The Wikipedia page on the KKT conditions and the SciPy documentation are good reads for this.

The cost is minimized subject to some constraint inequality being satisfied (i.e. `cstr(x) - epsilon <= 0`). The cost and constraint functions are likely to check for very different things, so from an interpretability standpoint it does not make sense to combine them at this level. They are combined into one cost function under the hood, but that is the responsibility of the actual optimizer used, not the ConstrainedMinimizer class. Combining them here would mean that off-the-shelf constrained optimizers could not be used.

While satisfying constraints does mean passing some Boolean condition, treating them as discrete tests means we can't apply optimization tools that expect continuous and differentiable functions (which most constrained optimizers do).
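
As an illustration of that inequality form, here is a minimal sketch using SciPy's off-the-shelf `trust-constr` method; the value of `epsilon` is an assumption, and `cost`/`cstr` are the toy callables from the sketch earlier in this thread.

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

# Encode cstr(x) - epsilon <= 0 as a scalar inequality constraint.
epsilon = 1e-8
constraint = NonlinearConstraint(cstr, lb=-np.inf, ub=epsilon)

x0 = np.array([0.1])  # initial point
result = minimize(cost, x0, method='trust-constr', constraints=[constraint])
best_params = result.x  # minimizes cost subject to cstr(x) <= epsilon
```

Note how the scalar-valued `cstr` slots directly into the optimizer; a discrete pass/fail test could not be used this way.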

Contributor replied:
I see. So how does the user specify the constraint inequality? In the SciPy documentation there is an upper and a lower bound. Do you think there should be one here as well?

Author replied:
That's a good point. It's not passed here because this signature fits better with how the `Minimizer.minimize` call looks. In the current implementation, concrete instances of `ConstrainedMinimizer` take the success threshold as a parameter in `__init__`. There are different ways to pass the success threshold depending on the optimizer, so I think handling this internally still makes sense.
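
For concreteness, here is a hypothetical sketch of that design; the class name, the optimizer choice, and the use of `CostFunction.get_cost` for evaluation are assumptions, not part of this PR. The threshold lives in `__init__` and is folded into the optimizer-specific constraint internally.

```python
from __future__ import annotations

import numpy as np
import numpy.typing as npt
from scipy.optimize import NonlinearConstraint, minimize

from bqskit.ir.opt.constrained import ConstrainedMinimizer
from bqskit.ir.opt.cost.function import CostFunction
from bqskit.qis.unitary.unitary import RealVector


class ScipyConstrainedMinimizer(ConstrainedMinimizer):
    """Hypothetical concrete minimizer backed by SciPy's trust-constr."""

    def __init__(self, success_threshold: float = 1e-8) -> None:
        # The constraint bound is an instance parameter rather than an
        # argument to constrained_minimize.
        self.success_threshold = success_threshold

    def constrained_minimize(
        self,
        cost: CostFunction,
        cstr: CostFunction,
        x0: RealVector,
    ) -> npt.NDArray[np.float64]:
        # Build cstr(x) <= success_threshold internally, in whatever
        # form the backing optimizer expects.
        constraint = NonlinearConstraint(
            cstr.get_cost, lb=-np.inf, ub=self.success_threshold,
        )
        result = minimize(
            cost.get_cost,
            np.asarray(x0, dtype=np.float64),
            method='trust-constr',
            constraints=[constraint],
        )
        return np.asarray(result.x, dtype=np.float64)
```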

Member commented:
If it makes a difference, I think keeping the instantiation API open for now is valuable. So if you think there is a modification to the `Minimizer` API that makes better sense, we should discuss it.

        x0: RealVector,
    ) -> npt.NDArray[np.float64]:
        """
        Minimize `cost` starting from initial point `x0` while obeying `cstr`.

        Args:
            cost (CostFunction): The CostFunction to minimize.

            cstr (CostFunction): The CostFunction used to constrain solutions.

            x0 (np.ndarray): The initial point.

        Returns:
            (np.ndarray): The parameters that best minimize the cost
            while obeying the constraints.

        Notes:
            This function should be side-effect free, because many calls
            may be running in parallel.
        """