
batch of suggestions #2437

Open
Fa20 opened this issue May 8, 2024 · 14 comments

@Fa20

Fa20 commented May 8, 2024

Is it possible to get multiple trials at once and then evaluate them together?

@mgrange1998
Contributor

Hi,

Is a batch trial what you are looking for? It allows you to attach multiple arms to one trial and deploy and evaluate them together. (See https://ax.dev/api/core.html#batchtrial for more details)
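
A rough sketch of the pattern with the core API (not from this issue; it assumes an existing `ax.core.Experiment` named `experiment`, and the parameter names are illustrative):

```python
from ax.core.arm import Arm

# One trial, several arms: the arms are deployed and evaluated together.
batch = experiment.new_batch_trial()
batch.add_arm(Arm(parameters={"x1": 0.5, "x2": -1.0}))
batch.add_arm(Arm(parameters={"x1": 2.0, "x2": 1.5}))

# Mark the batch as running (or call batch.run() if the experiment has a Runner),
# evaluate every arm, attach the resulting data to the experiment, then:
batch.mark_running(no_runner_required=True)
batch.mark_completed()
```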

In terms of trials running together, the GenerationStrategy object allows you to specify `max_parallelism`, which controls the number of trials that can be running at once. See the generation strategy tutorial here for more details on that: https://ax.dev/tutorials/generation_strategy.html#1.-Quick-start-examples
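
For illustration (a sketch, not from this issue), a hand-built strategy can cap parallelism per generation step:

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

gs = GenerationStrategy(
    steps=[
        # Quasi-random initialization; all 5 trials may run in parallel.
        GenerationStep(model=Models.SOBOL, num_trials=5, max_parallelism=5),
        # Bayesian optimization afterwards; at most 3 trials in flight at once.
        GenerationStep(model=Models.GPEI, num_trials=-1, max_parallelism=3),
    ]
)
```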

Let me know if this helps or if you have any other questions.

@Fa20
Author

Fa20 commented May 8, 2024

Thanks for your answer. Yes, attaching multiple arms to one trial is what I'm looking for. I have tried to use it with the following example:

```python
def create_experiment():
    ax_client = AxClient()
    ax_client.create_experiment(
        name="multi_objective_optimization",
        parameters=[
            {"name": "x1", "type": "range", "bounds": [-1.0, 3.0]},
            {"name": "x2", "type": "range", "bounds": [-2.0, 2.0]},
            {"name": "x3", "type": "range", "bounds": [-2.0, 2.0]},
            {"name": "x4", "type": "range", "bounds": [-20.0, 20.0]}
        ],
        objectives={
            "objective_1": ObjectiveProperties(minimize=True),
            "objective_2": ObjectiveProperties(minimize=False)
        },
        overwrite_existing_experiment=True
    )
    return ax_client

def evaluate_batch_trial(ax_client, batch_trial):
    for arm_name, arm in batch_trial.arms_by_name.items():
        objective_1_result = arm.parameters['x1'] * 2
        objective_2_result = 100 - arm.parameters['x2']

        ax_client.attach_trial_data(
            trial_index=batch_trial.index,
            arm_name=arm_name,
            results={
                "objective_1": (objective_1_result, 0.0),
                "objective_2": (objective_2_result, 0.0)
            }
        )
    batch_trial.mark_completed()

ax_client = create_experiment()

batch_trial = ax_client.create_batch_trial()
for _ in range(10):
    parameters = ax_client.get_next_trial().parameters
    batch_trial.add_arm(parameters)

evaluate_batch_trial(ax_client, batch_trial)
```

but it does not work correctly. Are there any tutorials which explain how multiple arms can be used, with an example?

@mgrange1998
Contributor

For a tutorial, you can follow the generation strategy tutorial https://ax.dev/tutorials/generation_strategy.html
Just set `use_batch_trials=True` when calling `choose_generation_strategy`.
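
For example (a sketch following the tutorial; `get_branin_search_space` is the test search space used there):

```python
from ax.modelbridge.dispatch_utils import choose_generation_strategy
from ax.utils.testing.core_stubs import get_branin_search_space

# Ask Ax to pick a strategy that produces multi-arm BatchTrials.
gs = choose_generation_strategy(
    search_space=get_branin_search_space(),
    use_batch_trials=True,
)
```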

For your example, could you please provide error logs and more details about why it is failing?

@Fa20
Author

Fa20 commented May 9, 2024


```
AttributeError                            Traceback (most recent call last)
Cell In[57], line 43
     40 ax_client = create_experiment()
     42 # Request a batch of 10 trials at once
---> 43 batch_trial = ax_client.create_batch_trial()  # Create a new batch trial
     44 for _ in range(10):  # Add 10 arms to the batch
     45     parameters = ax_client.get_next_trial().parameters

AttributeError: 'AxClient' object has no attribute 'create_batch_trial'
```

I have tried in the above code to use a batch trial to get a batch of trials and then evaluate them, but it seems that I did it wrong.

@mgrange1998
Contributor

See https://ax.dev/api/service.html#module-ax.service.ax_client
"Note: AxClient expects to only propose 1 arm (suggestion) per trial; support for use cases that require use of batches is coming soon."

AxClient does not yet support batch trials; you'll need to follow the generation strategy tutorial to test out batch trials.
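
A hedged sketch of what the generation-strategy route looks like with the developer API (assuming an existing `ax.core.Experiment` named `experiment` and a strategy `gs` built with `use_batch_trials=True`; how you evaluate the arms and attach data depends on your metrics):

```python
# Ask the generation strategy for 10 candidate arms in one call,
# then attach them all to a single BatchTrial on the experiment.
generator_run = gs.gen(experiment=experiment, n=10)
batch = experiment.new_batch_trial(generator_run=generator_run)
batch.mark_running(no_runner_required=True)
# ...evaluate all 10 arms and attach their data to the experiment, then:
batch.mark_completed()
```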

@Fa20
Author

Fa20 commented May 9, 2024

https://ax.dev/tutorials/generation_strategy.html — but I think this tutorial is only for one objective function? Can I use it for batch trials in the case of multi-objective functions?

@mgrange1998
Contributor

Yes, try the `use_batch_trials` flag in the `choose_generation_strategy` method call:

`use_batch_trials: bool = False,`

@Fa20
Author

Fa20 commented May 9, 2024

```python
gs = choose_generation_strategy(
    search_space=get_branin_search_space(),
    use_batch_trials=True,
)

# Main optimization loop
for _ in range(2):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate_parameters(parameters))
```

But how can I run the main optimization so that I get multiple suggestions of parameter values rather than sequential ones? With the code above I'm still getting one suggestion, which is evaluated, then the next, etc.

@Balandat
Contributor

> But how can I run the main optimization so that I get multiple suggestions of parameter values rather than sequential ones? With the code above I'm still getting one suggestion, which is evaluated, then the next, etc.

You can just call .get_next_trial() multiple times without calling complete_trial() in between. AxClient will be aware that the previous trials are "pending" and account for that in subsequent candidate suggestion(s).
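
For example (a sketch; `evaluate_parameters` stands in for whatever computes your objectives):

```python
# Request a few suggestions up front; Ax keeps track of them as pending trials
# (the strategy's max_parallelism caps how many can be pending at once).
pending = [ax_client.get_next_trial() for _ in range(3)]

# Evaluate them together (or in parallel), then report the results back.
for parameters, trial_index in pending:
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data=evaluate_parameters(parameters),
    )
```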

@Fa20
Author

Fa20 commented May 10, 2024

@Balandat So in this case I do not need to use `choose_generation_strategy` and no need to set `use_batch_trials=True`, if I understand you correctly?

My second question: is there any difference between using `get_next_trial()` multiple times and then evaluating the objectives on them, versus using `choose_generation_strategy(search_space=get_branin_search_space(), use_batch_trials=True)`? Do both give the same accuracy of suggestions for the parameter values?

@Balandat
Contributor

> So in this case I do not need to use `choose_generation_strategy` and no need to set `use_batch_trials=True`, if I understand you correctly?

Yes.

> Do both give the same accuracy of suggestions for the parameter values?

In a nutshell, yes. There are some subtleties about how exactly they are generated, but for all intents and purposes you can see them as equivalent in terms of generation.

The main discerning factor of a BatchTrial is how the arms are being evaluated. You can read more about this here: https://github.com/facebook/Ax/blob/main/ax/core/batch_trial.py#L103-L115. If that's not the setting you're in, you are fine with just calling `get_next_trial()` repeatedly.

@Fa20
Author

Fa20 commented May 16, 2024

For the case of MOO (mine differs from the tutorial, since I do not have the reference point): should I follow the same steps, given that there are more details and Sobol sampling is included etc., or should I follow

def create_experiment():
    ax_client = AxClient()
    ax_client.create_experiment(
        name="multi_objective_optimization",
        parameters=[
            {"name": "x1", "type": "range", "bounds": [-1.0, 3.0]},
            {"name": "x2", "type": "range", "bounds": [-2.0, 2.0]},
            {"name": "x3", "type": "range", "bounds": [-2.0, 2.0]},
            {"name": "x4", "type": "range", "bounds": [-20.0, 20.0]}
        ],
        objectives={
            "objective_1": ObjectiveProperties(minimize=True),
            "objective_2": ObjectiveProperties(minimize=False)
        },
        overwrite_existing_experiment=True
    )
    return ax_client

def evaluate_batch_trial(ax_client, batch_trial):
    for arm_name, arm in batch_trial.arms_by_name.items():
        objective_1_result = arm.parameters['x1'] * 2
        objective_2_result = 100 - arm.parameters['x2']
    
        ax_client.attach_trial_data(
            trial_index=batch_trial.index,
            arm_name=arm_name,
            results={
                "objective_1": (objective_1_result, 0.0),  
                "objective_2": (objective_2_result, 0.0)
            }
        )
    batch_trial.mark_completed()  

ax_client = create_experiment()

batch_trial = ax_client.create_batch_trial()
for _ in range(10):
    parameters = ax_client.get_next_trial().parameters
    batch_trial.add_arm(parameters)

which is easy to understand, but in this case I will not be able to evaluate the results using one of the two algorithms explained in the tutorial (https://ax.dev/versions/0.1.18/tutorials/multiobjective_optimization.html), and if I follow this tutorial, how can I define the primary and secondary objective functions?

@Balandat
Contributor

You can follow the same steps for MOO, no need to use BatchTrials for MOO.

> which is easy to understand, but in this case I will not be able to evaluate the results using one of the two algorithms explained in the tutorial

You mean b/c the tutorial uses the developer API to evaluate the different algorithms? It is possible to also use different algorithms than the defaults in AxClient by passing a GenerationStrategy to the AxClient() instantiation: https://ax.dev/tutorials/generation_strategy.html
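
For example (a sketch; the particular steps and trial counts are illustrative, not a recommendation):

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient

gs = GenerationStrategy(
    steps=[
        # Quasi-random initialization.
        GenerationStep(model=Models.SOBOL, num_trials=8),
        # Multi-objective Bayesian optimization for the remaining trials.
        GenerationStep(model=Models.MOO, num_trials=-1),
    ]
)
ax_client = AxClient(generation_strategy=gs)
```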

You also want to look at the current version of the tutorial if you're on a newer version of Ax (0.1.18 as in the tutorial is ancient): https://ax.dev/tutorials/multiobjective_optimization.html

@Fa20
Author

Fa20 commented May 16, 2024

@Balandat Thanks for your answer. My problem is the following: I have two objective functions for which I want to find a solution with nonlinear constraints, and I need to get the values of the suggested parameters and then use them to calculate the objective functions. Since getting one suggestion at a time would take a long time, I need to use a batch trial to get a batch of suggestions. I found the following tutorial, which does not look like the new version:
https://ax.dev/versions/0.1.18/tutorials/multiobjective_optimization.html

and the one which you sent me:
https://ax.dev/tutorials/multiobjective_optimization.html (I think I should follow this one?)

My second question: if I follow the implementation in the tutorial you shared (the second link), how can I get a batch of trials?

My third question:

```python
objectives = ax_client.experiment.optimization_config.objective.objectives
frontier = compute_posterior_pareto_frontier(
    experiment=ax_client.experiment,
    data=ax_client.experiment.fetch_data(),
    primary_objective=objectives[1].metric,
    secondary_objective=objectives[0].metric,
    absolute_metrics=["a", "b"],
    num_points=20,
)
render(plot_pareto_frontier(frontier, CI_level=0.90))
```

In this part from the tutorial, which algorithm is used to evaluate the results: qEHVI or a different one? Do the 20 plotted points represent the solution of the problem, so that someone could print these values and consider them the solution to the problem in the tutorial? The part starting from "Deep Dive" is really not clear to me; do I need it for my problem, since there are more details there? Another question: how can we choose the number of iterations (in the tutorial it was 25, I think, but I'm not sure whether in my case I will need more than 25 iterations, or whether it is just trial and error)?
