documentation for run #169
Comments
Here we go: https://stephenslab.github.io/dsc-wiki/reference/DSC_Execution.html#example-4-named-pipelines
No, it is optional. See the other examples above on that page that involve multiple pipelines yet do not have this requirement.
DSC always builds one DAG by consolidating all pipelines, so it does figure out the possible "sharing" and avoids rerunning or re-queuing steps. Is this your question?
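For illustration, here is a minimal sketch (module names are made up) of two named pipelines that start from the same simulate module; because DSC consolidates them into one DAG, simulate is executed only once and its results are shared by both benchmarks:

run:
  # both benchmarks reuse the output of the shared "simulate" module
  bench_a: simulate * method_a * score
  bench_b: simulate * method_b * score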
I'm not sure ... things in DSC can be "my practice", not necessarily the best practice. I'm always open to suggestions, but this current setup seems to work for my purpose.
All the intro examples I looked at seem to run just one benchmark, e.g.
run: simulate * analyze * score
But in practice I see definitions like this:
run:
default: data * sim_gaussian * get_sumstats * ((susie_z, susie_oracle) * (score_susie, plot_susie), dap_z * (score_dap, plot_dap), finemap * (score_finemap, plot_finemap))
null: data * sim_gaussian_null * get_sumstats * ((susie_z, susie_oracle) * (score_susie, plot_susie), finemap * (score_finemap, plot_finemap))
This is presumably to make it easier to run more than one "benchmark" (a collection of pipelines).
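If I understand the command line correctly (the file name and flag here are my assumptions, not from this thread), a named pipeline can then be selected at run time with something like:

# hypothetical invocation: run only the "default" benchmark defined above
dsc finemapping.dsc --target default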
Some questions:
Does DSC automatically share results across different benchmarks like this when it can?