Merge from upstream/develop and Make It Go #229
Closed

greenc-FNAL wants to merge 435 commits into FNALssi:fnal-develop from greenc-FNAL:maintenance/fnal-develop-update
Conversation
Some providers must provide virtuals "together", i.e. if they provide one virtual of a set, they must also be the providers of the others. There was a bug, though: we were not checking whether the other virtuals in the set were needed at all in the DAG. This commit fixes the bug.
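A minimal sketch of a "together" provider, assuming Spack's package API (openblas is used only as an illustration; the real package is much more detailed):

```python
# Illustrative only: a provider that supplies `blas` and `lapack` together.
# Listing both virtuals lets the solver treat them as a unit: a spec that
# provides one of them via this package must provide the other via it too.
from spack.package import *

class Openblas(MakefilePackage):
    """Sketch of a together-provider."""
    homepage = "https://www.openblas.net"

    provides("blas", "lapack")
```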
0.5.0 tarball now has the 'v' removed from the name
Don't encourage use of the default value when propagating a multi-valued variant.
Looking at the memory profiles of concurrent solves for an environment with unify:false, memory only ramps up. This exchange on the potassco mailing list: https://sourceforge.net/p/potassco/mailman/potassco-users/thread/b55b5b8c2e8945409abb3fa3c935c27e%40lohn.at/#msg36517698 suggests that clingo doesn't release memory until the application exits. Since with unify:false we distribute work across processes, this commit sets maxtasksperchild=1 so that memory is reclaimed after each solve.
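The pattern, in a minimal self-contained sketch (not Spack's actual code): `multiprocessing.Pool` accepts a `maxtasksperchild` argument, and setting it to 1 replaces each worker process after a single task, so memory the solver never frees is still returned to the OS between solves.

```python
# Minimal sketch: each worker process is recycled after one task
# (maxtasksperchild=1), so memory held by a leaky library is reclaimed
# when the process exits, not at the end of the whole run.
import multiprocessing

def solve_one(spec):
    # Placeholder for a clingo solve of a single spec.
    return f"concretized {spec}"

if __name__ == "__main__":
    with multiprocessing.Pool(processes=2, maxtasksperchild=1) as pool:
        print(pool.map(solve_one, ["hdf5", "zlib", "cmake"]))
```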
The download URL for linux-headers was hardcoded to 4.x; we need to derive the correct URL from the version number.
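One way to do this, sketched against Spack's `url_for_version` hook (the exact URL layout is an assumption modeled on kernel.org's directory scheme, not the package's actual code):

```python
# Hedged sketch: derive the "vN.x" directory from the major component of
# the requested version instead of hardcoding "v4.x".
def url_for_version(self, version):
    major = version.up_to(1)  # e.g. "6" for 6.5.3
    return f"https://cdn.kernel.org/pub/linux/kernel/v{major}.x/linux-{version}.tar.xz"
```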
Currently module globals aren't set before running `setup_[dependent_]run_environment` to compute environment modifications for module files. This commit fixes that.
This PR adds support for including separate definitions from `spack.yaml`. Supporting the inclusion of files with definitions enables users to make curated/standardized collections of packages that can be re-used by others.
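A hedged sketch of what this enables (file names and spec lists are illustrative, not taken from the PR):

```yaml
# definitions.yaml -- a curated, shareable list of specs
definitions:
- my_packages: [hdf5, zlib, cmake]
```

```yaml
# spack.yaml -- pulls in the shared definitions via include
spack:
  include:
  - ./definitions.yaml
  specs:
  - $my_packages
```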
This completes to `spack concretize`:

```
spack conc<tab>
```

but this still gets hung up on the difference between `concretize` and `concretise`:

```
spack -e . conc<tab>
```

We were checking `"$COMP_CWORD" = 1`, which counts every word on the command line, including flags and their arguments; instead we should check `"$COMP_CWORD_NO_FLAGS" = 1` to figure out whether the argument being completed is the first real command.
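A hedged bash sketch of the corrected check (`_spack_complete_sketch` and the command list are illustrative; `COMP_CWORD_NO_FLAGS` is the flag-skipping counter described above, maintained elsewhere in the completion script):

```bash
# Complete the first *real* command word, even when global flags
# like `-e .` appear before it on the command line.
_spack_complete_sketch() {
    if [ "$COMP_CWORD_NO_FLAGS" = 1 ]; then
        COMPREPLY=($(compgen -W "concretize concretise spec install env" \
                     -- "${COMP_WORDS[COMP_CWORD]}"))
    fi
}
```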
* Add command suggestions. This adds suggestions of similar commands in case users mistype a command (a sketch of one approach follows this list).

  Before:

  ```
  $ spack spack
  ==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.
  ```

  After:

  ```
  $ spack spack
  ==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.

  Did you mean one of the following commands?
    spec
    patch
  ```

* Add package name suggestions
* Remove suggestion to run `spack clean -m`
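A minimal sketch of how such suggestions can be computed; this uses the standard library's difflib, and whether Spack uses exactly this is an assumption:

```python
# Suggest close matches for a mistyped command name.
import difflib

def suggest(cmd, known_commands):
    return difflib.get_close_matches(cmd, known_commands, n=3, cutoff=0.6)

# Reproduces the example above: "spack" is closest to "spec" and "patch".
print(suggest("spack", ["spec", "patch", "install", "concretize"]))
```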
(spack#40756) This PR implements the concept of a "default environment", which doesn't have to be created explicitly. The aim is to lower the barrier for adopting environments.

To (create and) activate the default environment, run

```
$ spack env activate
```

This mimics the behavior of

```
$ cd
```

which brings you to your home directory.

This is not a breaking change, since `spack env activate` without arguments currently errors. It is similar to the already existing `spack env activate --temp` command, which always creates an env in a temporary directory; the difference is that the default environment is a managed / named environment named `default`. The name `default` is not a reserved name; it's just that `spack env activate` creates it for you if you don't have it already.

With this change, you can get started with environments faster:

```
$ spack env activate [--prompt]
$ spack install --add x y z
```

instead of

```
$ spack env create default
==> Created environment 'default' in /Users/harmenstoppels/spack/var/spack/environments/default
==> You can activate this environment with:
==>   spack env activate default
$ spack env activate [--prompt] default
$ spack install --add x y z
```

Notice that Spack supports switching (but not stacking) environments, so the parallel with `cd` is pretty clear:

```
$ spack env activate named_env
$ spack env status
==> In environment named_env
$ spack env activate
$ spack env status
==> In environment default
```
* qt: new version 5.15.11
* qt: open-ended patch for qtlocation when gcc-10:
* mpich: remove unnecessary tuples
* remove redundant `:3.3.99` upper bound
Co-authored-by: vkallesh <[email protected]>
Co-authored-by: jmcarcell <[email protected]>
Create chains of causation for error messages.

The current implementation is only completed for some of the many errors presented by the concretizer. The rest will need to be filled out over time, but this demonstrates the capability.

The basic idea is to associate conditions in the solver with one another in causal relationships, and to associate errors with the proximate causes of their facts in the condition graph. Then we can construct causal trees to explain errors, which will hopefully present users with useful information to avoid the error or report issues.

Technically, this is implemented as a secondary solve. The concretizer computes the optimal model, and if the optimal model contains an error, then a secondary solve computes causation information about the error(s) in the concretizer output.

Examples:

```
$ spack solve hdf5 ^[email protected]
==> Error: concretization failed for the following reasons:

   1. Cannot satisfy '[email protected]'
   2. Cannot satisfy '[email protected]'
        required because hdf5 ^[email protected] requested from CLI
   3. Cannot satisfy '[email protected]:' and '[email protected]'
        required because hdf5 ^[email protected] requested from CLI
        required because hdf5 depends on [email protected]: when @1.13:
          required because hdf5 ^[email protected] requested from CLI
   4. Cannot satisfy '[email protected]:' and '[email protected]'
        required because hdf5 depends on [email protected]:
          required because hdf5 ^[email protected] requested from CLI
        required because hdf5 ^[email protected] requested from CLI
```

```
$ spack spec cmake ^curl~ldap   # <-- with curl configured non-buildable and an external with `+ldap`
==> Error: concretization failed for the following reasons:

   1. Attempted to use external for 'curl' which does not satisfy any configured external spec
   2. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'ldap', 'True') is an external constraint for curl which was not satisfied
   3. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'gssapi', 'True') is an external constraint for curl which was not satisfied
   4. Attempted to build package curl which is not buildable and does not have a satisfying external
        'curl+ldap' is an external constraint for curl which was not satisfied
        'curl~ldap' required
        required because cmake ^curl~ldap requested from CLI
```

```
$ spack solve yambo+mpi ^hdf5~mpi
==> Error: concretization failed for the following reasons:

   1. 'hdf5' required multiple values for single-valued variant 'mpi'
   2. 'hdf5' required multiple values for single-valued variant 'mpi'
      Requested '~mpi' and '+mpi'
        required because yambo depends on hdf5+mpi when +mpi
          required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo+mpi ^hdf5~mpi requested from CLI
   3. 'hdf5' required multiple values for single-valued variant 'mpi'
      Requested '~mpi' and '+mpi'
        required because netcdf-c depends on hdf5+mpi when +mpi
          required because netcdf-fortran depends on netcdf-c
            required because yambo depends on netcdf-fortran
              required because yambo+mpi ^hdf5~mpi requested from CLI
        required because netcdf-fortran depends on [email protected]: when @4.5.3:
          required because yambo depends on netcdf-fortran
            required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo depends on netcdf-c
          required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo depends on netcdf-c+mpi when +mpi
          required because yambo+mpi ^hdf5~mpi requested from CLI
        required because yambo+mpi ^hdf5~mpi requested from CLI
```

Future work: In addition to fleshing out the causes of other errors, I would like to find a way to associate different components of the error messages with different causes. In this example it's pretty easy to infer which part is which, but I'm not confident that will always be the case. See the previous PR spack#34500 for discussion of how the condition chains are incomplete. In the future, we may need custom logic for individual attributes to associate some important choice rules with conditions such that clingo choices or other derivations can be part of the explanation.

---------

Co-authored-by: Massimiliano Culpo <[email protected]>
This adds a rather trivial context manager that lets you deduplicate repeated arguments in directives, e.g.

```python
depends_on("py-x@1", when="@1", type=("build", "run"))
depends_on("py-x@2", when="@2", type=("build", "run"))
depends_on("py-x@3", when="@3", type=("build", "run"))
depends_on("py-x@4", when="@4", type=("build", "run"))
```

can be condensed to

```python
with default_args(type=("build", "run")):
    depends_on("py-x@1", when="@1")
    depends_on("py-x@2", when="@2")
    depends_on("py-x@3", when="@3")
    depends_on("py-x@4", when="@4")
```

The advantage is that it's clearer for humans; the downside is that it's less clear for type checkers due to type erasure.
This reverts commit d75a9e7.
Add tests
* New variants:
  * `cuda`
  * `cudnn`
  * `tmva-cpu`
  * `tmva-gpu`
  * `tmva-pymva`
  * `tmva-sofie`
* Patch to support newer `protobuf` with its own CMake config file.
Too many changes. Will separate and reconcile.