KnowWhereGraph's deployment system
KnowWhereGraph's reference architecture is a monolith. It consists of several networked services and static sites, all intertwined and dependent on each other. The stack is generally brought up all at once. Individual services can be updated in isolation, but there will be downtime for end users. For an overview of the architecture and services involved, visit the architecture page.
At a high level, each service has its own folder. Within each folder there's at least one docker-compose file and, when container customization is needed, a Dockerfile. For each environment (stage, prod, local) there's a corresponding docker-compose file with configuration specific to that environment.
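As a rough sketch of how those per-environment files layer together (the folder and file names here are illustrative; the makefile targets described below wrap these invocations):

```sh
# Bring up one service for staging by layering the base compose file
# with its stage-specific override (paths are illustrative).
docker compose \
  -f graphdb/docker-compose.yaml \
  -f graphdb/docker-compose.stage.yaml \
  up -d
```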
Some services are coupled with Prometheus scrapers. In these cases, the associated scraper is included in the service's docker-compose file.
For monitoring the deployment, refer to the Grafana Readme.
There are a number of convenience commands in the makefile to manage the deployment. The deployment has three modes:
- Local: When the stack is brought up on your own machine
- Stage: When the stack is brought up on staging.knowwheregraph.org
- Prod: When the stack is brought up on stko-kwg.geog.ucsb.edu
For a complete list of commands, run `make help`.
The following steps aren't automated and will need to be done before bringing the stack online.
- Run `make repository-setup` to retrieve the web applications and API
- Build the faceted search files
- Build the node browser files
- Put the SSL certificates in `nginx/local-certs`
- Put the GraphDB license in `graphdb/license`
- Modify `variables.env` to specify the name of the GraphDB repository the `sparql/` endpoint should query
- Modify `variables.env` with the Elasticsearch password
- Modify `variables.env` with the server name, without `http` or `www` (localhost/staging.knowwheregraph.org/stko-kwg.geog.ucsb.edu)
- On the bare metal server, install the Loki docker plugin with `docker plugin install grafana/loki-docker-driver:main --alias loki --grant-all-permissions`. This scrapes the docker system for logs.
- Set the Prometheus credentials (see the readme file in `./prometheus/`)
- Set the Grafana credentials (see the readme file in `./grafana/`)
- Set the Elasticsearch credentials (see the readme file in `./elasticsearch/`)
- Set the Prometheus credentials through Grafana > Datasources > Prometheus
- Run the validation tool with `sh validate.sh`
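The command-line portions of those steps roughly amount to the following; the certificate and license paths are placeholders, and the loki plugin install only applies to the bare metal server:

```sh
# Rough consolidation of the manual pre-flight steps (source paths are illustrative).
make repository-setup                                    # fetch the web applications and API
cp /path/to/cert.cert /path/to/key.key nginx/local-certs/   # SSL certificates
cp /path/to/graphdb.license graphdb/license/             # GraphDB license
docker plugin install grafana/loki-docker-driver:main \
  --alias loki --grant-all-permissions                   # bare metal server only
sh validate.sh                                           # validate the configuration
```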
Running KnowWhereGraph requires a large vertically scaled system. The suggested specifications are shown below.
| Component | Quantity |
|---|---|
| Cores | 15 |
| Memory | 512 GB |
| Disk | 14 TB |
The production docker-compose files are designed specifically to run on https://stko-kwg.geog.ucsb.edu. If this address ever changes, the production docker-compose files should be modified accordingly.
To bring up the KnowWhereGraph stack, run `make start-prod`. To bring down the stack, run `make stop-prod`.
The staging environment is meant to be run on https://staging.knowwheregraph.org.
To run the staging stack, run `make start-stage`. To bring down the stack, run `make stop-stage`.
Because of the graph's resource requirements, it's difficult to create an environment that mimics the production setting. For testing, it's suggested that the system be scaled down to match the table below. The lower system requirements come at the expense of not being able to load much data into the graph; adjust the settings as needed based on your testing needs.
| Component | Quantity |
|---|---|
| Cores | 1 |
| Memory | 8 GB |
| Disk | 20 GB |
LetsEncrypt can't be used for local HTTPS; more information can be found on LetsEncrypt's website. This deployment architecture uses self-signed certificates for localhost.
- Generate the local certs (one possible command is sketched after this list)
- Name the `*.cert` file `cert.cert`
- Name the `*.key` file `key.key`
- Place them in `./nginx/local-certs`
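One way to generate a self-signed certificate for localhost is sketched below; the tooling is an assumption, and any equivalent certificate generator works:

```sh
# Generate a self-signed localhost certificate and key, then move them
# to the names and location the deployment expects.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -subj "/CN=localhost" \
  -keyout key.key -out cert.cert
mv cert.cert key.key ./nginx/local-certs/
```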
GraphDB also needs its own set of certificates. These can be generated with `keytool -genkey -alias graphdb -keyalg RSA` and should be placed in `graphdb/nginx/local-certs/`.
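For example, a minimal sketch might look like the following; the keystore file name is an assumption, and keytool will prompt interactively for the certificate details and passwords:

```sh
# Generate an RSA key pair for GraphDB in a keystore and place it where the
# GraphDB nginx configuration expects it (keystore name is illustrative).
keytool -genkey -alias graphdb -keyalg RSA -keystore graphdb.jks
mv graphdb.jks graphdb/nginx/local-certs/
```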
Some environment variables are kept in `variables.env`. These variables are used across deployments and within NGINX; they can be injected into any container.
- `GRAPH_DB_HOSTNAME`: The name for the GraphDB service
- `ES_HOSTNAME`: The name for the Elasticsearch service
- `API_HOSTNAME`: The name for the KWG API service
- `SERVER_NAME`: The hostname where things are deployed (localhost | staging.knowwheregraph.org | stko-kwg.geog.ucsb.edu), without http or https
- `CURRENT_REPOSITORY_NAME`: Used as the repository that `/sparql` endpoint requests are sent to
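As an illustration, a `variables.env` for a local deployment might look like the following; the values are placeholders, not required names:

```sh
# Illustrative variables.env for a local deployment (placeholder values).
GRAPH_DB_HOSTNAME=graphdb
ES_HOSTNAME=elasticsearch
API_HOSTNAME=api
SERVER_NAME=localhost
CURRENT_REPOSITORY_NAME=KWG
```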
Right now, when a single service is updated, the stack needs to be brought down, and then back up. This is inconvenient, and will be addressed in the future.
To update any of the webapps, use `git pull` to update them from source and follow the repository readme for building. Restarting nginx isn't required for the changes to become live.
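As a rough illustration (the folder name and build command are placeholders; the actual steps are in each application's readme):

```sh
# Illustrative webapp update; defer to the application's own readme for the real build steps.
cd faceted-search              # placeholder folder retrieved by `make repository-setup`
git pull                       # update from source
npm install && npm run build   # placeholder build command
```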
To make changes, issue a pull request to the `main` branch. The deployment system mostly consists of Dockerfiles and configurations; both are linted on new commits.
New services should come with a README, a docker-compose.yaml, and an optional Dockerfile. The makefile should be refactored to include the new service. If the service is resource intensive or requires different behavior in different deployment locations, provide additional docker-compose files (docker-compose.local.yaml, docker-compose.stage.yaml).
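As an illustration, a new service folder might be laid out like this (names are placeholders):

```
new-service/
├── README.md
├── Dockerfile                  # optional, only when the image needs customization
├── docker-compose.yaml
├── docker-compose.local.yaml   # optional environment-specific overrides
└── docker-compose.stage.yaml
```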
New web applications should be added as built HTML/JS artifacts. These files should be placed in `nginx/sites` and a corresponding nginx rule should be added to the default config file. Traffic should be routed to that application's folder.
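If it helps, a hypothetical rule for an application whose built files live in `nginx/sites/my-app` might look like the following; the location name and the container-side path are assumptions that depend on how `nginx/sites` is mounted into the nginx container:

```nginx
# Hypothetical location block for a new web application (paths are assumptions).
location /my-app/ {
    # Serve the built artifacts copied into nginx/sites/my-app
    alias /usr/share/nginx/html/my-app/;
    try_files $uri $uri/ /my-app/index.html;
}
```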
When reviewing architecture changes:
- Test the changes locally before approving
- Check:
    - Is the architecture diagram updated?
    - Are there any changes to existing documentation that should be made?
    - Is there new documentation that needs to be added?
    - Will this change possibly affect other services?
    - Was the makefile updated?
    - Will this work locally, on staging, and on production?
    - Are there any stakeholders that might need to be notified of the change?