Constrained minimizer #258
base: main
Conversation
def constrained_minimize(
    self,
    cost: CostFunction,
    cstr: CostFunction,
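For context, a minimal sketch of what the abstract interface under review might look like. Only the signature fragment above appears in the diff; the `CostFunction` alias, the `x0` parameter, and the class body here are assumptions for illustration:

```python
from abc import ABC, abstractmethod
from typing import Callable

import numpy as np

# Stand-in for the project's CostFunction: a scalar-valued function of
# real parameters (assumed shape; the real type is richer).
CostFunction = Callable[[np.ndarray], float]


class ConstrainedMinimizer(ABC):
    """Sketch of a minimizer for a cost subject to a constraint cost."""

    @abstractmethod
    def constrained_minimize(
        self,
        cost: CostFunction,  # quality of the output circuit to optimize
        cstr: CostFunction,  # distance/fidelity-based constraint cost
        x0: np.ndarray,      # hypothetical starting parameters
    ) -> np.ndarray:
        """Return x minimizing cost(x) subject to cstr(x) <= threshold."""
        ...
```

Because `constrained_minimize` is abstract, concrete subclasses must implement it before they can be instantiated.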
What is the difference between the constraint and the cost? Should we minimize both of them? Maybe the constraint should be a Boolean function, meaning that it returns true or false?
You should add more comments about what types of constraints can be implemented.
The cstr cost function will almost always be a unitary distance/fidelity-based cost, while the cost function quantifies some desirable quality of the output circuit to optimize (i.e. it will not be distance based).
The choice to minimize them both comes from the general form of constrained optimization problems. It still makes sense to use a scalar valued cost for the constraint because it's likely we would want to know how far off from being constrained a solution is (or if we want to shift some success threshold).
I've added some details in the docstring about what the constraint should look like. The cost function is left more open ended.
I'm sorry, I'm still confused.
Does this abstract function try to minimize the cost, the constraint, or both?
I assume both, but then why would one want two different cost functions to minimize? Do they have different weights? Why not combine them into a single cost function?
When you say "satisfying some 'constraint' CostFunction", it sounds like a boolean function (you can have a distance with a threshold, which produces a boolean value...)
I'd like to preface by saying that these design choices are based on how off the shelf constrained optimizers work. The wikipedia page on the KKT conditions and the documentation for scipy are good reads for this.
The cost is minimized given that some constraint inequality is satisfied (i.e. cstr(x) - epsilon <= 0). The cost and constraint functions are likely to check for very different things, so from an interpretability standpoint it does not make sense to combine them at this level. Also, they are combined into one cost function under the hood, but this is the responsibility of the actual optimizer used, not the ConstrainedMinimizer class. Combining them here would mean that off-the-shelf constrained optimizers cannot be used.
While satisfying constraints does mean passing some boolean condition, treating them as discrete tests means we can't apply optimization tools that expect continuous and differentiable functions (which is an expectation of most constrained optimizers).
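To make the inequality encoding concrete, here is a small scipy sketch with toy stand-in functions (not the PR's actual code) showing how a scalar-valued constraint cost plugs into an off-the-shelf constrained optimizer. Note that scipy's 'ineq' convention expects fun(x) >= 0, so cstr(x) - epsilon <= 0 is passed as epsilon - cstr(x) >= 0:

```python
import numpy as np
from scipy.optimize import minimize


def toy_cost(x):
    # stand-in for a circuit-quality cost: prefer small parameters
    return float(np.sum(x ** 2))


def toy_cstr(x):
    # stand-in for a distance/fidelity cost: zero when x[0] hits a target
    return float((x[0] - 1.0) ** 2)


epsilon = 1e-8  # success threshold on the constraint cost

result = minimize(
    toy_cost,
    x0=np.array([0.0, 0.5]),
    method='SLSQP',
    # encode cstr(x) - epsilon <= 0 as epsilon - cstr(x) >= 0
    constraints=[{'type': 'ineq', 'fun': lambda x: epsilon - toy_cstr(x)}],
)
# the constraint drives result.x[0] toward ~1.0; the cost drives result.x[1] to ~0.0
```

A scalar-valued constraint also gives the optimizer a gradient to follow toward feasibility, which a boolean pass/fail test would not.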
I see, so how does the user specify the constraint inequality? In the scipy documentation there is an upper and a lower bound. Do you think there should be one here as well?
That's a good point. It's not passed here because this fits better with how the Minimizer.minimize function call looks. In the current implementation, actual instances of the ConstrainedMinimizer have the success threshold as a parameter in __init__. There are different ways to pass the success threshold depending on the optimizer, so I think handling this internally still makes sense.
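One hypothetical shape for that pattern (class and parameter names invented for illustration; the PR's actual classes may differ): the success threshold is fixed at construction and translated internally into the optimizer's own constraint format, so the constrained_minimize call itself stays threshold-free:

```python
import numpy as np
from scipy.optimize import minimize


class ScipySLSQPConstrainedMinimizer:
    """Hypothetical minimizer: the threshold lives in the instance, not the call."""

    def __init__(self, success_threshold: float = 1e-8) -> None:
        self.success_threshold = success_threshold

    def constrained_minimize(self, cost, cstr, x0):
        res = minimize(
            cost,
            x0,
            method='SLSQP',
            # 'ineq' means fun(x) >= 0, so this enforces cstr(x) <= threshold
            constraints=[{
                'type': 'ineq',
                'fun': lambda x: self.success_threshold - cstr(x),
            }],
        )
        return res.x


# usage with toy cost/constraint functions
m = ScipySLSQPConstrainedMinimizer(success_threshold=1e-6)
x = m.constrained_minimize(
    cost=lambda x: float(np.sum(x ** 2)),
    cstr=lambda x: float((x[0] - 1.0) ** 2),
    x0=np.array([0.0, 0.0]),
)
```

A different backend (e.g. an augmented-Lagrangian optimizer) could consume the same threshold in its own way, which is why keeping it internal to the instance is flexible.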
If it makes a difference, I think keeping the instantiation API open for now is valuable. So if you think there is a modification to the Minimizer API that makes more sense, we should discuss it.
This pull request adds support for constrained numerical optimization in numerical instantiation.