ONBOARDING LABS
Important intersections with the code occur at devconfig.py, interfaces, and Google Drive. This page aims to provide a step-by-step guide on how to 'escalate' a new lab using V2.
This site is a clone of 'ONBOARDING-LABS.md' from ESCALATE_Capture. Last updated on May 26th, 2020. Report V1.11
It is vital that old experiments can still be handled regardless of changes to the code.
Changes to inventories, folders, devconfig, code, etc. should be validated using pytest!
1. Create Google API access keys using the Google Developer Console.
2. Create a Google Drive directory architecture (a place to store things).
3. Set permissions on the folders appropriate to the lab's needs - this includes giving access to the API keys from step 1 above (a minimal access check is sketched after this list).
4. Create the following:
   - template chemical inventory (headers are vital here! - place in 0-Inventory)
   - capture interfaces - these are recommended to go in a 'Templates' folder inside of the 1-Dev or 0-Inventory folder; the UID will be used in devconfig default examples
5. Update devconfig.py with the new lab folder locations; UIDs can be found at the end of the folder web addresses (a hedged sketch of such an entry also follows this list).
6. Test the configuration. Once working, submit a merge request to have devconfig.py updated!
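A quick way to sanity-check steps 1-3 is to confirm that the key can actually see the new folders. The sketch below is only an illustration (ESCALATE's own authentication path is defined in the code; the key file name and folder UID here are placeholders), using the standard google-api-python-client:

```python
# Minimal sketch: confirm the API key from step 1 can list a shared folder.
# "service_account.json" and FOLDER_UID are placeholders, not ESCALATE names.
from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
creds = Credentials.from_service_account_file("service_account.json", scopes=SCOPES)
drive = build("drive", "v3", credentials=creds)

FOLDER_UID = "1AbCdEfG"  # the UID at the end of the folder's web address
resp = drive.files().list(q=f"'{FOLDER_UID}' in parents",
                          fields="files(id, name)").execute()
for f in resp.get("files", []):
    print(f["name"], f["id"])  # should list the lab's subfolders, e.g. 0-Inventory
```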
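The authoritative structure for step 5 lives in devconfig.py itself; the block below is purely a hypothetical illustration of the kind of per-lab entry involved (every key name and UID is invented):

```python
# Hypothetical sketch only -- consult devconfig.py for the real key names.
# Each UID is the trailing segment of the folder's web address, e.g.
# https://drive.google.com/drive/folders/<UID>
lab_vars = {
    "NEWLAB": {
        "template_folder": "1TmPlAtE",   # 'Templates' folder from step 4
        "targetfolder": "1DaTaFoLd",     # 4-Data folder for completed runs
        "chemsheetid": "1ChEmInVt",      # chemical inventory in 0-Inventory
    },
}
```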
- Create an example run submission template (language matching what the user needs)
  - New labs should ALWAYS be based on the HC/LBL template; MIT has special conditions in the code
  - See the expworkup.createjson.parse_exp_volumes function to generalize
- Test the run generation in the 1-Dev folder
- Move to the 4-Data folder
- Options to update naming schemes are in dataset_rename.json; toggle on --raw 1 to see what columns are generated and adjust the naming there (a hypothetical sketch of the idea follows this list).
- Options for feature generation are in type_command.csv. This is tricky; change one thing at a time until you figure out how it works. A 'failure' is when the column headers don't auto-rename. Be sure to enable --raw 1 when developing to see columns that aren't fitting the _feat_ header namespaces.
- Ensure validation. This will avoid the message: 'Files failed to validate'.
  - This error is OK. It means that some data did not meet a more rigorous quality check. The exact validation infractions can be found in the logfile of the run (/logging/REPORT_LOG.txt) by searching for "tests.validation.validation". The error message comes from a python cerberus schema validation.
  - This cannot be disabled (except by commenting out validation code), but adding the correct validation to ./tests/validation/schemas.py can resolve the issue. Please see the included examples for more information; a minimal cerberus sketch also follows this list.
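The exact layout of dataset_rename.json is defined by the repository; purely as a hypothetical illustration of the renaming idea (the flat old-name-to-new-name mapping and the file handling below are assumptions), applying such a map looks like:

```python
# Sketch: apply a dataset_rename.json-style column mapping to a report.
# The flat {"old": "new"} layout is an assumption -- check the real file.
import json
import pandas as pd

with open("dataset_rename.json") as f:
    rename_map = json.load(f)

df = pd.read_csv("REPORT_DEV.csv")
df = df.rename(columns=rename_map)  # columns absent from the map keep their names
```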
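For the validation itself, the message comes from python cerberus. As a minimal, self-contained sketch of the kind of rule held in ./tests/validation/schemas.py (the column name and bound are invented for the example):

```python
# Sketch only: real schemas live in ./tests/validation/schemas.py.
from cerberus import Validator

# Hypothetical rule: an example volume column must be a non-negative float.
schema = {"_raw_example_volume": {"type": "float", "min": 0.0}}
v = Validator(schema, allow_unknown=True)

if not v.validate({"_raw_example_volume": -2.0}):
    # In a real run these infractions land in /logging/REPORT_LOG.txt
    # under "tests.validation.validation".
    print(v.errors)  # {'_raw_example_volume': ['min value is 0.0']}
```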
Documentation of the example folders considered during development should be maintained to ensure that the report code can handle older experimental variants as well as new ones. Run pytest in the report branch to see how this should work before making changes to the 4-DebugData folder.
There are rudimentary validation processes in place using pytest. Running pytest will use the experiments in the 4-Data dev folders; this also includes using the referenced chemical inventories in devconfig.py. Adding a new experiment type should include updating the documentation of the example folders as well as adding the example experiment to the 4-DebugData folder. A brief overview of what is needed:
- Add a description of what the example experiment is testing to the documentation
- Update devconfig.py to ensure that the correct inventory and details for the associated lab are being described
- Run pytest and ensure that historical runs are unaffected by changes to devconfig.py
- Add the COMPLETED experiment to the 4-DebugData folder
- Make any changes to the code needed to parse the new experiments. Run pytest and ensure that old runs are unaffected.
  - Additional debugging will likely be required if new files are being parsed. Ensure that no errors are detected before progressing.
- Run report to generate a new REPORT_DEV.csv, then update the devreport_<date>.csv file in ./tests/ to include the new experiments (a hedged sketch of such a regression check follows this list): python runme.py dev -d dev --raw 1 --debug 1
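The repository's tests define the real comparison; the sketch below only illustrates the idea (the test name and file handling are assumptions, and devreport_<date>.csv stands in for the actual dated file):

```python
# Sketch: check that historical rows of a fresh REPORT_DEV.csv still match
# the reference devreport_<date>.csv checked into ./tests/.
import pandas as pd
import pandas.testing as pdt

def test_historical_rows_unchanged():
    reference = pd.read_csv("tests/devreport_<date>.csv")
    current = pd.read_csv("REPORT_DEV.csv")  # from: python runme.py dev -d dev --raw 1 --debug 1
    # New experiments may append rows or add columns, but the historical
    # portion of the report must remain stable.
    shared = [c for c in reference.columns if c in current.columns]
    pdt.assert_frame_equal(current.loc[:len(reference) - 1, shared],
                           reference[shared], check_like=True)
```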
Schemas in ESCALATE_report/tests/validation/ can be updated to adjust some of the inline validation; the cerberus sketch above shows the general style. A document describing general ESCALATE data structures can be found here.
Be sure to add any new functionality which should be tested to the dev dataset. Existing notes are available on Using-Development-Environment (i.e., making sure that both capture and report continue to support older data and new data).
Adding support for a new lab is still a cumbersome process. There are a few places where the notation remains unclear and where the devconfig appears to be redundant. Please feel free to create an issue to resolve this problem.