Enable Optimizer state and Multi-Fidelity passthrough in SMAC #751
Conversation
@jsfreischuetz I just merged #738 from @motus.

```python
(all_configs, all_scores, all_contexts) = optimizer.get_observations()
assert isinstance(all_configs, pd.DataFrame)
assert isinstance(all_scores, pd.DataFrame)
assert all_contexts is None
```
Can we assert that it is something expected now instead of just removing the check?
No, because only SMAC uses context, whereas the other optimizers do not.
Then check for that too.
I made the associated changes
```diff
@@ -90,7 +89,6 @@ def objective(point: pd.DataFrame) -> pd.DataFrame:
     (all_configs, all_scores, all_contexts) = optimizer.get_observations()
     assert isinstance(all_configs, pd.DataFrame)
     assert isinstance(all_scores, pd.DataFrame)
-    assert all_contexts is None
```
Same here. Can we assert that the contexts are something useful?
Same as above, unless we split out the SMAC tests
Don't need to split them out, just add a branch in the test for them.
I want to make sure we're testing expected functionality in both directions.
It also makes it more obvious what needs to be done when we extend it elsewhere.
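A minimal sketch of the branched check being suggested (the class names here are hypothetical stand-ins for the mlos_core optimizers; the real tests use fixtures):

```python
import pandas as pd

class BaseOptimizer:
    """Illustrative stand-in for the mlos_core optimizer base class."""
    def get_observations(self):
        return pd.DataFrame(), pd.DataFrame(), None

class SmacOptimizer(BaseOptimizer):
    """Illustrative stand-in: SMAC is the only backend recording context."""
    def get_observations(self):
        return pd.DataFrame(), pd.DataFrame(), pd.DataFrame()

def check_observations(optimizer):
    configs, scores, contexts = optimizer.get_observations()
    assert isinstance(configs, pd.DataFrame)
    assert isinstance(scores, pd.DataFrame)
    if isinstance(optimizer, SmacOptimizer):
        # SMAC populates per-trial context, so expect a DataFrame here.
        assert contexts is not None
    else:
        # Other backends do not use context yet.
        assert contexts is None
```

Branching like this tests expected functionality in both directions and makes it obvious what to update when context support is extended to other backends.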
I made the associated changes
```python
scores = pd.DataFrame({'score': [1]})
context = pd.DataFrame([["something"]])

with pytest.raises(UserWarning):
    optimizer.register(suggestion, scores, context=context)

with pytest.raises(UserWarning):
    optimizer.suggest(context=context)
```
Hmm, so we can register context, but not suggest with it?
It's more about the warning not being thrown anymore.
I understand what the code in this test is doing. I'm asking why we aren't supporting suggestions with context yet when we're adding support for registering with context. It makes it seem like a black hole of information, or only partial support somehow.
What does context for a suggestion mean?
We discussed offline, and it is now documented in the suggestion in the README.md above.
```diff
     """
     if context is not None:
         # not sure how that works here?
         warn(f"Not Implemented: Ignoring context {list(context.columns)}", UserWarning)
-    return pd.DataFrame(dict(self.optimizer_parameter_space.sample_configuration()), index=[0])
+    return pd.DataFrame(dict(self.optimizer_parameter_space.sample_configuration()), index=[0]), None
```
Why should this return `None` for context?
Wouldn't returning the `context` that was passed in make more sense?
I have interpreted the context that is passed into suggest to be different from the context that is returned. I am not exactly sure what the incoming context is supposed to mean.
Here's what I expect:
- context is composed of some set of metrics describing the execution environment (e.g., vm ram, vm vcore count, workload descriptor, etc.)
- upon registering the optimizer is able to build a model for how different config+context pairs maps to different output metrics
- when requesting a new suggestion, possible for a different execution environment (e.g., larger vm size), described in the context, the optimizer should be able to use all of that information to provide a new config suggestion for that new context
Now, it's possible that the optimizer stinks at that part initially because it doesn't know how to compute suggestions for unseen contexts (future research work probably).
One option could be that we return a variable-length (user-specified) list of items, each stemming from a different "known" context (ranked by confidence) instead of one for the new context, and then the caller needs to sift through them and decide what to do. Or else we return a random config (explore) plus the provided context, and the caller can start trying them to help fill in that knowledge.
I think the exact specifics of that part is to be determined.
But high level, if someone asks for a suggestion for a given context, I think we should return something within context, not something for anywhere in the space.
Make sense?
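The flow described above could be sketched roughly as follows (all names here are illustrative and assume lower scores are better; this is not the mlos_core API):

```python
class ContextAwareOptimizer:
    """Illustrative sketch of config+context observations only."""

    def __init__(self):
        self._observations = []  # list of (config, context, score) triplets

    def register(self, config, context, score):
        # Build up a model of how config+context pairs map to output metrics.
        self._observations.append((config, context, score))

    def suggest(self, context):
        # Naive strategy: return the best (lowest-score) known config for a
        # matching context; a real optimizer would need to model unseen
        # contexts too (likely future research work).
        matching = [(cfg, s) for cfg, ctx, s in self._observations if ctx == context]
        if matching:
            return min(matching, key=lambda pair: pair[1])[0]
        # Unseen context: fall back to exploration (here, a fixed default).
        return {"x": 0}
```

The fallback branch is where the "stinks at that part initially" caveat lives: for an unseen context the sketch can only explore, not exploit.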
> context is composed of some set of metrics describing the execution environment (e.g., vm ram, vm vcore count, workload descriptor, etc.)
As far as I am aware this is not supported by any of the optimizers when asking for a suggestion. SMAC supports the idea of these values being defined on optimizer initialization, but which context to evaluate is determined by the optimizer, and there is still not support for requiring a specific context.
> But high level, if someone asks for a suggestion for a given context, I think we should return something within context, not something for anywhere in the space.
I think there is a problem with this. There are two ways to solve for this:
- If we simply request configurations until we find a context that matches, we now have a long list of pending configurations that should be evaluated according to the optimizer. If you ignore them, the optimizer will not be evaluating in this region as it believes that they are still pending. If you process them late, they are stale.
- Alternatively, we can force the context to match, regardless of what the optimizer returns, but this is not actually sampling efficiently according to the optimizer.
I think realistically, we should remove the argument from suggest, but I tried to keep the changes as minimal as possible. Someone else before me also left a comment suggesting this.
We discussed offline about the discrepancy in terminology.
SMAC's notion of "context" is different from that in MLOS, which @jsfreischuetz rightly pointed out may or may not be supported directly yet.
SMAC's is much more about passing internal optimizer "state".
For now, we decided to
- document the difference in the README.md above.
- rename the SMAC required "state" to either `metadata` or `state`
Please restore `context` and add `metadata` as new optional parameters.
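For illustration, the requested shape of the signature might look like this (a sketch, not the merged code; the class name is hypothetical):

```python
from typing import Optional

import pandas as pd

class OptimizerSketch:
    """Illustrative only: `context` restored, `metadata` added alongside it."""

    def _register(self, configurations: pd.DataFrame, scores: pd.DataFrame,
                  context: Optional[pd.DataFrame] = None,
                  metadata: Optional[pd.DataFrame] = None) -> None:
        # A real optimizer would feed context to the surrogate model and
        # pass metadata through to the backend (e.g., SMAC's internal state).
        self._last = (configurations, scores, context, metadata)
```

This keeps the MLOS notion of context and the SMAC-style state as two independent optional parameters.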
```python
self.trial_info_df: pd.DataFrame = pd.DataFrame(
    columns=["Configuration", "Context", "TrialInfo", "TrialValue"]
)
```
TODO: look into abstracting this out so multiple backend optimizers can reuse it (e.g., FLAML, LlamaTune)
mlos_core/mlos_core/optimizers/bayesian_optimizers/smac_optimizer.py
```diff
@@ -224,7 +264,41 @@ def n_random_init(self) -> int:
     return self.base_optimizer._initial_design._n_configs

     @staticmethod
-    def _dummy_target_func(config: ConfigSpace.Configuration, seed: int = 0) -> None:
+    def _filter_kwargs(function: Callable, **kwargs: Any) -> Dict[str, Any]:
```
Move to `util.py`?
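A guess at what a `_filter_kwargs`-style helper could look like using `inspect.signature` (the actual implementation in the PR may differ, e.g., in how it treats functions that themselves accept `**kwargs`):

```python
import inspect
from typing import Any, Callable, Dict

def filter_kwargs(function: Callable, **kwargs: Any) -> Dict[str, Any]:
    """Keep only the kwargs that `function` actually accepts by name.

    Simplification: ignores VAR_KEYWORD (**kwargs) parameters on the target.
    """
    params = inspect.signature(function).parameters
    return {name: value for name, value in kwargs.items() if name in params}

def takes_ab(a, b=1):
    """Toy target function for demonstration."""
    return a + b
```

Usage: `takes_ab(**filter_kwargs(takes_ab, a=1, c=9))` silently drops the unsupported `c` keyword, which is handy when forwarding a grab-bag of options to different backend optimizers.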
```diff
@@ -86,7 +86,7 @@ def __init__(self, *,  # pylint: disable=too-many-arguments
     self._suggested_config: Optional[dict]

     def _register(self, configurations: pd.DataFrame, scores: pd.DataFrame,
-                  context: Optional[pd.DataFrame] = None) -> None:
+                  metadata: Optional[pd.DataFrame] = None) -> None:
```
Please restore `context`. The idea was to add `metadata` in addition to it, not replace it.
(requires associated changes throughout)
…zer.py Co-authored-by: Brian Kroth <[email protected]>
This also involves modifying the test cases that interact with these interfaces.
```python
    """
    # Do some input validation.
    assert set(scores.columns) == set(self._optimization_targets), \
        "Mismatched optimization targets."
    if type(self._optimization_targets) is str:
```
```diff
+    assert self._optimization_targets, "Missing or invalid optimization targets"
     if type(self._optimization_targets) is str:
```
Also assert not empty (see also comment above about accepting `None`).
I don't think this makes sense given my comment above
OK, but separate PR for that one please
```diff
 self._has_context = context is not None

 if self._space_adapter:
     configurations = self._space_adapter.inverse_transform(configurations)
     assert configurations.shape[1] == len(self.optimizer_parameter_space.values()), \
         "Mismatched configuration shape after inverse transform."
-return self._register(configurations, scores, context)
+return self._register(configurations, scores, metadata, context)
```
same here
```diff
 self._has_context = context is not None

 if self._space_adapter:
     configurations = self._space_adapter.inverse_transform(configurations)
     assert configurations.shape[1] == len(self.optimizer_parameter_space.values()), \
         "Mismatched configuration shape after inverse transform."
-return self._register(configurations, scores, context)
+return self._register(configurations, scores, metadata, context)

 @abstractmethod
 def _register(self, configurations: pd.DataFrame, scores: pd.DataFrame,
```
```diff
-def _register(self, configurations: pd.DataFrame, scores: pd.DataFrame,
+def _register(self, *, configurations: pd.DataFrame, scores: pd.DataFrame,
```
Can force the args to be named to help avoid param ordering mistakes.
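A quick demonstration of why keyword-only parameters help (the function here is an illustrative stand-in for `_register`, not the actual method):

```python
import pandas as pd

# The bare `*` makes every parameter keyword-only, so a caller who swaps
# argument order gets an immediate TypeError instead of a silent mix-up
# (e.g., scores registered as configurations).
def register(*, configurations: pd.DataFrame, scores: pd.DataFrame,
             context=None, metadata=None) -> int:
    return len(configurations)

register(configurations=pd.DataFrame({"x": [1]}),
         scores=pd.DataFrame({"score": [0.5]}))  # OK: named args
try:
    register(pd.DataFrame(), pd.DataFrame())  # positional: rejected
except TypeError:
    pass  # exactly the mistake the `*` is designed to catch
```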
Same elsewhere (e.g., public methods and suggest), though this might be a larger API change that needs its own PR first in preparation for this one, since callers will also be affected.
Agree. Let's fix `_register` now and update the public `register` in the next PR.
```python
configuration: pd.DataFrame = config_to_dataframe(
    self.parameter_space.get_default_configuration()
)
metadata = self.delayed_metadata
```
nit: when creating PRs - try to keep your changes smaller. It's easier to review and debug.
If the order of this one didn't really matter, you could have left the first line alone and only added the two new ones.
(force-pushed from c29405c to 8d2a894)
Left a few comments. Let's have a meeting and discuss the changes. I am a bit confused about the purpose of the metadata and how it is different from the context.
```diff
@@ -56,9 +56,11 @@ def __init__(self, *,
     raise ValueError("Number of weights must match the number of optimization targets")

     self._space_adapter: Optional[BaseSpaceAdapter] = space_adapter
-    self._observations: List[Tuple[pd.DataFrame, pd.DataFrame, Optional[pd.DataFrame]]] = []
+    self._observations: List[Tuple[pd.DataFrame, pd.DataFrame, Optional[pd.DataFrame], Optional[pd.DataFrame]]] = []
```
this gets a bit unwieldy. Should we have a named tuple for `_pending_observations`, e.g.,

```python
class PendingObservation(NamedTuple):
    """A named tuple representing a pending observation."""
    configurations: pd.DataFrame
    context: Optional[pd.DataFrame]
    meta: Optional[pd.DataFrame]
```

and do the same for `_observations`? (not sure we can inherit `NamedTuple`s - most likely, we can't)
or, maybe, have `_observed_configs`, `_observed_scores`, `_observed_contexts`, etc. and concatenate the dataframes instead of having a list of dataframes. I am pretty sure the schemas are the same from one `_register` call to the next
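A sketch of that concatenation alternative (illustrative class, assuming matching schemas across calls as noted above):

```python
from typing import Optional

import pandas as pd

class Observations:
    """One growing DataFrame per field instead of a List of tuples."""

    def __init__(self) -> None:
        self.configs: Optional[pd.DataFrame] = None
        self.scores: Optional[pd.DataFrame] = None

    def register(self, configs: pd.DataFrame, scores: pd.DataFrame) -> None:
        # Schemas match from one register call to the next, so concat is safe;
        # ignore_index keeps a clean running row index.
        self.configs = configs if self.configs is None else \
            pd.concat([self.configs, configs], ignore_index=True)
        self.scores = scores if self.scores is None else \
            pd.concat([self.scores, scores], ignore_index=True)
```

`get_observations()` could then return the accumulated frames directly, with no per-call list comprehension over tuples.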
```diff
-self.delayed_config: Optional[pd.DataFrame] = None
-self.delayed_metadata: Optional[pd.DataFrame] = None
+self._delayed_config: Optional[pd.DataFrame] = None
+self._delayed_metadata: Optional[pd.DataFrame] = None
```
probably should be private, right?
```diff
 Returns
 -------
 observations : Tuple[pd.DataFrame, pd.DataFrame, Optional[pd.DataFrame]]
-    A triplet of (config, score, context) DataFrames of observations.
+    A triplet of (config, score, metadata) DataFrames of observations.
```
```diff
-    A triplet of (config, score, metadata) DataFrames of observations.
+    A 4-tuple of (config, score, context, metadata) DataFrames of observations.
```

(or, better yet, a `NamedTuple`)
```diff
-configs = pd.concat([config for config, _, _ in self._observations]).reset_index(drop=True)
-scores = pd.concat([score for _, score, _ in self._observations]).reset_index(drop=True)
+configs = pd.concat([config for config, _, _, _ in self._observations]).reset_index(drop=True)
+scores = pd.concat([score for _, score, _, _ in self._observations]).reset_index(drop=True)
```
Yeah, let's have a `List[Observation]` of NamedTuples - or, better yet, concatenate the dataframes right there in `_register` and forget about `List` and `NamedTuple`.
It looks like we don't use this module anywhere - maybe remove it from this PR?
```diff
 assert best_context is None
 assert set(best_config.columns) == {'x', 'y'}
 assert set(best_score.columns) == {'main_score', 'other_score'}
 assert best_config.shape == (1, 2)
 assert best_score.shape == (1, 2)

-(all_configs, all_scores, all_contexts) = optimizer.get_observations()
+(all_configs, all_scores, all_metadata, best_context) = optimizer.get_observations()
```
why `best_context`?? shouldn't it be `all_contexts` instead?
```python
    assert isinstance(all_metadata, pd.DataFrame) or all_metadata is None
else:
    assert all_metadata is None
assert best_context is None
```
```diff
-assert best_context is None
+assert all_contexts is None
```

?
```diff
@@ -48,7 +48,7 @@ def test_create_optimizer_and_suggest(configuration_space: CS.ConfigurationSpace

 assert optimizer.parameter_space is not None

-suggestion = optimizer.suggest()
+suggestion, _ = optimizer.suggest()
```
here and in other places:
```diff
-suggestion, _ = optimizer.suggest()
+suggestion, _metadata = optimizer.suggest()
```
it's better to spell out things
```diff
-(best_config, best_score, _context) = best_observation
-(llamatune_best_config, llamatune_best_score, _context) = llamatune_best_observation
+(best_config, best_score, _, _) = best_observation
+(llamatune_best_config, llamatune_best_score, _metadata) = llamatune_best_observation
```
I am confused. shouldn't it be
```diff
-(llamatune_best_config, llamatune_best_score, _metadata) = llamatune_best_observation
+(llamatune_best_config, llamatune_best_score, _context, _metadata) = llamatune_best_observation
```
?
this should fail in tests
It should be in the README I asked for and also some internal chats we had, but essentially,
This is a simple PR that makes all arguments explicit for optimizer-related function calls in preparation to add additional arguments in #751 and make it easier to review. --------- Co-authored-by: Brian Kroth <[email protected]> Co-authored-by: Brian Kroth <[email protected]>
Adds metadata to the response from suggest, and makes it passable into register. This is in support of adding multi-fidelity support (#751) --------- Co-authored-by: Brian Kroth <[email protected]> Co-authored-by: Brian Kroth <[email protected]>
Enables changing the defaults in the SMAC optimizer to enable state passing (what SMAC internally calls context, but that doesn't match MLOS's notion of context) and multi-fidelity tuning.
This requires changes across the project to support the concept of state, as well as updating a few test cases.
This PR replaces the previous one (#741)