mlos_bench_service #732

Open · 7 tasks

bpkroth opened this issue May 10, 2024 · 10 comments

bpkroth (Contributor) commented May 10, 2024

  • (Storage) APIs to
  • new script (mlos_benchd) to manage those actions
    • it would run in a tight loop on the "runner VM(s)"
    • as Experiments become runnable in the queue, it would create an mlos_bench process for them and monitor the child process, changing that Experiment's state in the database as needed to match the child process's exit code (see the sketch after this list)
  • notifications on errors and/or monitoring dashboard on Experiment status, interacting mostly with the Storage APIs
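
A minimal sketch of what that mlos_benchd loop could look like. The storage method names (get_runnable_experiments, set_experiment_state) and the exact mlos_bench invocation are illustrative assumptions, not the actual Storage API:

```python
# Hypothetical sketch of the mlos_benchd runner loop.
import subprocess
import time

POLL_INTERVAL_SECS = 30


def run_loop(storage) -> None:
    """Poll for runnable Experiments and run each as a child mlos_bench process."""
    while True:
        for exp in storage.get_runnable_experiments():  # assumed API
            storage.set_experiment_state(exp.exp_id, "Running")
            # Launch and monitor the child mlos_bench process.
            proc = subprocess.Popen(["mlos_bench", "--experiment_id", exp.exp_id])
            exit_code = proc.wait()
            # Record the final state corresponding to the child's exit code.
            new_state = "Succeeded" if exit_code == 0 else "Failed"
            storage.set_experiment_state(exp.exp_id, new_state)
        time.sleep(POLL_INTERVAL_SECS)
```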
bpkroth (Contributor, Author) commented May 10, 2024

@eujing

bpkroth (Contributor, Author) commented May 10, 2024

May want to split some of these tasks out to separate issues later on

yshady commented Aug 2, 2024

@bpkroth this should be easy. I have examples of interactive notebooks in my internal repo, and the Streamlit app is quite a transferable process to a notebook experience!

yshady commented Aug 2, 2024

It can follow a similar workflow to the sidemenu.

yshady commented Aug 2, 2024

The requirement would be that a user can run at least one experiment manually first.

eujing (Contributor) commented Oct 3, 2024

From the MySQL side, we currently have something similar that I can work on generalizing. It is basically a FastAPI app (we have been calling it a "runner") with the following endpoints (a rough sketch follows the lists below).

Experiment-related:

  • GET /experiments -> Lists experiments: a combination of docker ps --filter "name=mlos-experiment-" and listing the generated experiment config files
  • POST /experiments/start -> Takes a JSON body (with an associated pydantic model for validation) to create the experiment config files, and essentially does docker run {mlos_bench image} with the relevant arguments
  • POST /experiments/stop/{experiment_id} -> No body; essentially does docker stop {container name}

Front-end related, mainly for populating the JSON body to POST /experiments/start:

  • GET /options/mlos_configs -> Lists CLI config files for use as the value of mlos_bench --config
  • GET /options/benchbase_configs -> Lists benchbase XML config files
  • GET /options/tunable_groups -> Lists tunable group names, for selection to include in an experiment
  • GET /options/client_skus -> Lists the available client VM SKUs for the subscription (makes an az API call)
  • GET /options/server_skus -> Lists the available server SKUs (read off a curated CSV file)
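
For illustration, a minimal sketch of such a runner app. The image/container names and the request model fields are assumptions rather than our actual code, and the /options/* endpoints are elided:

```python
# Hypothetical sketch of the "runner" FastAPI app described above.
import subprocess

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

MLOS_BENCH_IMAGE = "mlos_bench"  # assumed image name


class StartExperimentRequest(BaseModel):
    """Assumed request body for POST /experiments/start."""
    experiment_id: str
    mlos_config: str
    tunable_groups: list[str] = []


@app.get("/experiments")
def list_experiments() -> dict:
    # Combine `docker ps` output with the generated experiment config files.
    result = subprocess.run(
        ["docker", "ps", "--filter", "name=mlos-experiment-", "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    )
    return {"running": result.stdout.splitlines()}


@app.post("/experiments/start")
def start_experiment(req: StartExperimentRequest) -> dict:
    # Generate the experiment config files from the request body (omitted),
    # then launch a child mlos_bench container.
    subprocess.run(
        ["docker", "run", "-d", "--name", f"mlos-experiment-{req.experiment_id}",
         MLOS_BENCH_IMAGE, "--config", req.mlos_config],
        check=True,
    )
    return {"experiment_id": req.experiment_id, "state": "started"}


@app.post("/experiments/stop/{experiment_id}")
def stop_experiment(experiment_id: str) -> dict:
    subprocess.run(["docker", "stop", f"mlos-experiment-{experiment_id}"], check=True)
    return {"experiment_id": experiment_id, "state": "stopped"}
```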

We have three Docker images: one for this runner, one for the dashboard that uses it, and one for mlos_bench itself.
The first two are started with docker compose, along with nginx to help with some MSAL auth and HTTPS access to the dashboard.
Mounts for the runner container:

  • The base directory of the repo for our MLOS configs. This is also mounted into each "child" mlos_bench container that is started, including the relevant generated experiment config file.
  • The host's docker socket, to allow management of multiple mlos_bench containers (see the sketch below).
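
For example, with the socket mounted, the runner can drive the host's docker daemon via the docker SDK for Python. A hypothetical snippet (image name and paths are made up):

```python
import docker

# Because the host's /var/run/docker.sock is mounted into the runner
# container, from_env() talks to the *host's* docker daemon.
client = docker.from_env()

container = client.containers.run(
    "mlos_bench",  # assumed image name
    command=["--config", "/mlos-configs/experiments/exp-1/cli.jsonc"],
    name="mlos-experiment-exp-1",
    detach=True,
    volumes={
        # The repo/config base dir, also mounted into each child container.
        "/srv/mlos-configs": {"bind": "/mlos-configs", "mode": "ro"},
    },
)
print(container.status)
```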

yshady commented Oct 3, 2024

I'm really glad you guys are still using the FastAPI app from the summer, and even building it out. Happy to see it!

bpkroth (Contributor, Author) commented Oct 3, 2024

Portions of this make sense, but I'd rather have it do more interaction with the storage backend, particularly on the runner side of things.

Right now there's basically an assumption that POST /experiments/start/{experiment_id} directly invokes docker run {mlos_bench image}, but that will cause scaling issues, especially if training the model remains local to the mlos_bench process, as it does now.

If instead the POST /experiments/start/{experiment_id} simply changes the state of the Experiment to "Runnable", then any number of Runners polling the storage backend can attempt to run a transaction to grab the Experiment, assign itself to it, and change its state to "Running", and then invoke it. If the transaction fails, the Runner can retry the polling operation and see either that another Runner "won" and started the Experiment, or else that something failed.
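
As a rough sketch of that "grab" transaction, assuming a hypothetical experiment table with exp_id / state / runner_id columns (not the actual mlos_bench storage schema):

```python
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pass@storage-host/mlos")  # assumed backend


def try_claim(exp_id: str, runner_id: str) -> bool:
    """Atomically move a Runnable Experiment to Running; False if another Runner won."""
    with engine.begin() as conn:
        result = conn.execute(
            text(
                "UPDATE experiment SET state = 'Running', runner_id = :runner "
                "WHERE exp_id = :exp AND state = 'Runnable'"
            ),
            {"runner": runner_id, "exp": exp_id},
        )
        # rowcount == 0 means the Experiment was no longer 'Runnable'
        # (another Runner won, or something failed): retry the poll.
        return result.rowcount == 1
```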

With that change, all of the REST operations can happen on the frontend(s) (which can also be more than one), and all of the execution operations can happen elsewhere.

The Storage layer becomes the only source of truth and everything else can scale by communicating with it.

Also note that the frontends could continue to be notebooks in this case as well.

It basically frees us to implement different things in the web UI (#838).

Does that make sense?

yshady commented Oct 3, 2024

"Also note that the frontends could continue to be notebooks in this case as well."

This is very true; most of the code can be used directly in a notebook in the same way. I prefer a clean frontend, but I know some people who are super hacky and sciency will stick to notebooks. We discussed this a lot over the summer.

Anyway, this is great stuff; it made my day that the team(s) are building on my work from the summer. Democratizing autotuning will probably mean turning MLOS into a simple frontend web application, but that is again just my opinion.

eujing (Contributor) commented Oct 3, 2024

> Portions of this make sense, but I'd rather have it do more interaction with the storage backend, particularly on the runner side of things.
>
> Right now there's basically an assumption that POST /experiments/start/{experiment_id} directly invokes docker run {mlos_bench image}, but that will cause scaling issues, especially if training the model remains local to the mlos_bench process, as it does now.
>
> If instead the POST /experiments/start/{experiment_id} simply changes the state of the Experiment to "Runnable", then any number of Runners polling the storage backend can attempt to run a transaction to grab the Experiment, assign itself to it, and change its state to "Running", and then invoke it. If the transaction fails, the Runner can retry the polling operation and see either that another Runner "won" and started the Experiment, or else that something failed.
>
> With that change, all of the REST operations can happen on the frontend(s) (which can also be more than one), and all of the execution operations can happen elsewhere.
>
> The Storage layer becomes the only source of truth, and everything else can scale by communicating with it.

I see, I think I understand. In this case, would we queue up experiments into storage via its API in the "Runnable" state, and have a pool of runners treat this as a queue, invoking experiments in some order so that we can scale across multiple runners?

This would require us to store all the information needed to execute an experiment in storage for runners to access. The current schema for ExperimentData has git_repo and git_commit fields, so the data flow of using git to access the configs seems the most direct.

My issue with this is having to push potentially sensitive experiment parameters (usually in the global config files) to a git repo in order to run an experiment (e.g., connection string info). Should we consider expanding the ExperimentData schema to serialize this data?

I could imagine a first pass might be adding JSON fields representing the values for mlos_bench --config <config_json> --globals <globals_json> <maybe key values for other simpler cli args>, roughly as sketched below.
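
Roughly, something like this hypothetical record shape; every field beyond the existing git_repo / git_commit is an illustrative assumption:

```python
# Hypothetical expanded ExperimentData record, serializing everything a
# runner would need to reconstruct the mlos_bench invocation.
experiment_record = {
    "exp_id": "mysql-tuning-001",
    "git_repo": "https://github.com/microsoft/MLOS",
    "git_commit": "0123abc",
    "state": "Runnable",
    # New JSON fields, so runners need not pull sensitive globals from git:
    "cli_config": {"config_path": ["./config"]},          # value for --config
    "globals": {"connString": "Server=...;Password=..."}, # value for --globals
    "extra_cli_args": {"trial-count": 100},               # other simple CLI args
}
```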
