This is a suite of scalability tests for the Data Storage System (DSS), the storage platform used by the Human Cell Atlas.
The tests use Locust, the load testing framework. Here are some resources for working with Locust:
- locust-cloudwatch, for publishing Locust metrics to CloudWatch.
- A locust-docker image.
- A more advanced Docker Locust image.
- Tips on "How do I Locust".
- Different ways to scale out.
To run using Docker:

- Configure `DSS-scalability/locust-docker/locust.config.json` with the host and the users to run.
- Copy `DSS-scalability/locustfiles` to the `DSS-scalability/locust-docker/scripts` directory.
- Run `docker build -t loctest .`
- Run `docker run -it --rm -v ./DSS-scalability/scripts:/scripts loctest`
- Build the Docker image.
- Make it runnable from the command line, taking the host as a parameter.
- Deploy dashboards if you have permission. Do this in the DSS repo.
- Deploy lambdas using `make deploy`.
- Set up environment variables.
Using Python 3.6:

```
$ pip install -r requirements.txt
$ locust -f ./scale_tests/upload_cloud.py --host=$HOST --no-web --client=100 --hatch-rate=50 --run-time=10s --csv=./scale_tests/upload
```

where `$HOST` is the base URL of the DSS endpoint you want to hit, e.g. `https://dss.dev.data.humancellatlas.org/v1/`. (The tests will look for `${HOST}swagger.yaml`.)
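The `--csv` flag writes summary CSV files under the given prefix. If you want to inspect those results programmatically, a small stdlib sketch like the following works, assuming only that the file is comma-separated with a header row (the exact column names vary across Locust versions, and the sample data below is made up):

```python
import csv
import io


def read_locust_csv(text):
    """Parse a Locust --csv summary into a list of dicts, one per row.

    Relies only on the header row, not on specific column names,
    since those differ between Locust versions.
    """
    return list(csv.DictReader(io.StringIO(text)))


# A made-up sample in the same shape as a Locust requests CSV:
sample = '"Method","Name","# requests"\n"GET","/v1/swagger.yaml","100"\n'
rows = read_locust_csv(sample)
```

This keeps post-processing decoupled from the Locust version in use, at the cost of having to look up the column names for your version.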
- `TARGET_URL`: specifies the endpoint for unit tests to target (e.g. `https://dss.dev.data.humancellatlas.org/v1/`).
- `GOOGLE_APPLICATION_CREDENTIALS`: a file path to Google application credentials, used to access endpoints that require auth.
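For example, both variables can be exported before running the tests (the credentials path below is a placeholder):

```shell
export TARGET_URL="https://dss.dev.data.humancellatlas.org/v1/"
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/gcp-credentials.json"  # placeholder path
```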
Preconfigured tests are located in `./scale_tests`. Additional scale tests should be added to this directory.
If a new Python module is required, add the requirement to `DSS-scalability/requirements.txt`.
If you are using Elastic Beanstalk, also add the new requirements to `DSS-scalability/eb-locustio-sample/.ebextensions/setup.config` under `locust36`.
To download the packages for the AWS Lambda environment:

```
docker run -it --volume=$PWD:/pp python:3.6 bash -c "pip download invokust locustio==0.8.1 gevent==1.2.2 git+git://github.com/HumanCellAtlas/dcp-cli#egg=hca --dest=/pp"
```
How to build wheels for the AWS Lambda environment:

- Run:

```
docker run -it --volume=$PWD:/chalice python:3.6 bash -c "pip wheel --wheel-dir=chalice/py_pkgs -r /requirements.txt"
```

- Follow the instructions after "The cryptography wheel file has been built" (see the chalice requirements example).
Some macOS users might see an error like:

```
Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known
```

This is because newer versions of macOS impose a much stricter limit on the maximum number of open file descriptors. This issue can be addressed by running

```
$ ulimit -S -n 10240
```

before running the tests, which increases the open file descriptor limit to 10240 until the next reboot.