Instructions for the usage of the admin console can be found on the eReq Admin Console page.
Server-side Ring app:
- Static asset serving
- JSON REST API with echo endpoint for prototyping
- Hot code reloading
Client-side re-frame app:
- Routing using bidi and pushy
- Event interceptor that validates the app DB against a spec in development (see the sketch after this list)
- Components structured into separate namespaces each with their own db spec, event handlers, subscriptions, and views. Namespaced using the re-frame synthetic namespace pattern
- Pages separated out with potential to use parallel structure to components as their complexity grows
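As a rough sketch of the development-time spec check mentioned above (the namespace and spec names here are illustrative, not the project's actual ones), a re-frame interceptor built with re-frame.core/after can validate the app DB after each event handler runs:

(ns example.events
  (:require [cljs.spec.alpha :as s]
            [re-frame.core :as rf]))

;; Illustrative spec for the app DB.
(s/def :example.db/name string?)
(s/def :example.db/db (s/keys :req-un [:example.db/name]))

;; Throw if the app DB no longer satisfies its spec (dev-time guard).
(defn check-and-throw [spec db]
  (when-not (s/valid? spec db)
    (throw (ex-info (str "app DB spec check failed: " (s/explain-str spec db)) {}))))

;; Runs after each event handler it is attached to.
(def check-spec-interceptor
  (rf/after (partial check-and-throw :example.db/db)))

(rf/reg-event-db
 ::set-name
 [check-spec-interceptor]
 (fn [db [_ name]]
   (assoc db :name name)))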
To demonstrate these features, a homepage with a sign-up form is provided; it POSTs to an API endpoint that simply echoes the request back.
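For reference, here is a minimal sketch of such an echo endpoint as a plain Ring handler using the ring-json middleware (the actual route and namespace in this project may differ):

(ns example.echo
  (:require [ring.middleware.json :refer [wrap-json-body wrap-json-response]]))

;; Echo the parsed JSON request body straight back as the JSON response.
(defn echo-handler [request]
  {:status 200
   :body   (:body request)})

(def app
  (-> echo-handler
      (wrap-json-body {:keywords? true})
      wrap-json-response))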
Start figwheel-main:
$ lein fig:build
This should start a Ring server and automatically build your application. Once it's ready, it should open a new browser window with the application for you.
This was set up using this guide: Figwheel-main and NPM Modules
To add new modules, install them with npm:
$ npm install --save <package>
Then import the package and add it to the window in src/js/index.js. Before starting the webserver, run the following commands to update the bundle of external modules:
$ npm install
$ npx webpack --mode=development
Download Datomic Pro and run the following commands from within the unzipped folder.
In one terminal, start the transactor with a properties file containing your license key.
./bin/transactor ../dev-transactor.properties
In a separate terminal, start the REPL, then delete any existing databases and recreate them. You can skip the delete-database steps if this is your first time creating them.
./bin/repl
Clojure 1.10.1
user=> (require 'datomic.api)
nil
user=> (datomic.api/delete-database "datomic:dev://localhost:4334/ereq-dev")
true
user=> (datomic.api/create-database "datomic:dev://localhost:4334/ereq-dev")
true
user=> (datomic.api/delete-database "datomic:dev://localhost:4334/ereq-test")
true
user=> (datomic.api/create-database "datomic:dev://localhost:4334/ereq-test")
true
Start serving the databases in separate terminals.
./bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d ereq-dev,datomic:dev://localhost:4334/ereq-dev
./bin/run -m datomic.peer-server -h localhost -p 9119 -a myaccesskey,mysecret -d ereq-test,datomic:dev://localhost:4334/ereq-test
You can run the following command to transact the schema and add initial form data:
lein run test-setup
This project uses Datomic as its database. Datomic configuration defaults are stored in resources/config/datomic.edn and can be overridden by environment variables defined in the same file. The database schema is stored in src/clj/org/parkerici/sample_tracking/db/schema.clj.
If you make changes to the schema, run lein run transact-schema to generate a new Datomic schema file at resources/schema.edn and to transact the changes to the configured database.
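For illustration only, the defaults in resources/config/datomic.edn might look something like the following; the key names here are assumptions, so check the file itself for the real ones:

;; Hypothetical key names -- see resources/config/datomic.edn for the real ones.
{:datomic-endpoint   "localhost:8998"
 :datomic-db-name    "ereq-dev"
 :datomic-access-key "myaccesskey"
 :datomic-secret     "mysecret"}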
To add new roles from the configuration files to the database, run the following command. You only need to do this if you are adding new roles.
lein run create-roles
To add an admin user from the CLI, run the following command.
lein run add-admin [email protected]
The test database ereq-test must be running for tests to run successfully.
Before running tests for the first time, you must populate the test database.
lein with-profile test run test-setup
Sometimes with-profile doesn't work. If that's the case, you can set the environment variables manually.
export DATOMIC_ENDPOINT=localhost:9119
export DATOMIC_DB_NAME=ereq-test
export SEND_MANIFEST_EMAILS=false
Once you've done this you can run the tests with the following command.
lein test
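Equivalently, you can set the variables inline for a single run:
DATOMIC_ENDPOINT=localhost:9119 DATOMIC_DB_NAME=ereq-test SEND_MANIFEST_EMAILS=false lein test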
The process for creating new forms can be found here.
Create a Cloud SQL Postgres instance at https://console.cloud.google.com/sql/instances.
- Make sure it's VPC native.
- Give it a Private IP address: use the Connections tab, turn on the required API, etc.
- Record the instance name and password: sample-tracking /
- Creation can take over half an hour to complete, but you can get the IP address before it finishes.
See https://docs.datomic.com/on-prem/storage.html
Connect to the database:
$ gcloud sql connect <cloudsql-db-name> --user=postgres
Copy and paste the Datomic postgres-db.sql script into the prompt. You have to delete the TABLESPACE argument from the database creation script.
After you create the database, connect to it by running \c datomic in the psql command line.
Next, run the postgres-table.sql and postgres-user.sql scripts in the datomic db.
[Don't do this if you are restoring from a backup!]
Set up and create the transactor pod:
$ kubectl --namespace=default create secret generic datomic-transactor-properties --from-file=transactor.properties=./secrets/transactor.properties
$ kubectl apply -f ./deploy/k8s/datomic/transactor.yaml
Attach to the pod and create the DB in Datomic. Make sure to substitute the IP of the Postgres instance; it should be the same as the one in transactor.properties.
$ kubectl get pods
$ kubectl exec -it $(kubectl get pods --selector=app=datomic-transactor -o jsonpath={.items..metadata.name}) -- /bin/bash
$ bin/repl
> (require '[datomic.api :as d])
> (def db-uri "datomic:sql://sample-tracking?jdbc:postgresql://<DB-IP>:5432/datomic?user=datomic&password=datomic")
> (d/create-database db-uri)
The .circleci folder contains the config.yaml file that describes the deployment to the previously configured cluster. It requires a public IP for each environment, ereq-dev and ereq-prod. Non-master branches will be deployed to dev for every commit, and master deploys to prod.
Each environment requires the environment variables in CircleCI to be configured appropriately. These are in the CircleCI Contexts and Project Environment Variables.
The CI deploy uses Google-managed certificates and a Google Ingress (as opposed to the Nginx Ingress).
The HTTP-to-HTTPS redirect feature of the Ingress is still in beta and only available in GKE 1.18+, which is still on the Rapid release channel and can have some instability. To avoid that, we are using a manual partial LB. The summarized steps to set up this partial LB are:
- Ensure HTTP is not served on the Ingress by setting the annotation kubernetes.io/ingress.allow-http: "false" on the Ingress.
- Manually create a load balancer on the same IP as the Ingress with the HTTP-to-HTTPS redirect, as described in the linked doc above.
To build and package into Docker for dev:
$ npx webpack && lein package && docker build -t gcr.io/dev-project/sample-tracking:0.1.0 .
And for prod:
$ npx webpack && lein package && docker build -t gcr.io/production-project/sample-tracking:0.1.0 .
To push to GCR:
$ docker push <image-tag>
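For example, to push the dev image built above:
$ docker push gcr.io/dev-project/sample-tracking:0.1.0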
$ kubectl apply -f ./deploy/k8s/datomic/transactor-service.yaml
$ kubectl apply -f ./deploy/k8s/datomic/peer.yaml
$ kubectl apply -f ./deploy/k8s/datomic/peer-service.yaml
As of now this job transacts the schema to the database.
$ kubectl apply -f deploy/k8s/sample-tracking/deploy-job.yaml
To get the results of the job:
$ kubectl get jobs
NAME COMPLETIONS DURATION AGE
deploy-tasks 1/1 21s 55s
To get the pod name or check on the logs:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
datomic-peer-cb5cfc5b6-5shhm 1/1 Running 0 51m
datomic-transactor-c69857949-6cj6m 1/1 Running 0 71m
deploy-tasks-gjqg4 1/1 Running 0 16s
$ kubectl logs deploy-tasks-gjqg4
[main] INFO org.eclipse.jetty.util.log - Logging initialized @5528ms to org.eclipse.jetty.util.log.Slf4jLog
20-03-04 00:53:00 deploy-tasks-gjqg4 INFO [org.parkerici.sample-tracking.cli:55] - Running with environment :default
20-03-04 00:53:00 deploy-tasks-gjqg4 INFO [org.parkerici.sample-tracking.db.schema:182] - Writing schema out to file.
20-03-04 00:53:00 deploy-tasks-gjqg4 INFO [org.parkerici.sample-tracking.db.schema:184] - Transacting schema.
Once it's successful, delete the job.
$ kubectl delete job deploy-tasks
Deploy the app and the basic service to the cluster.
$ kubectl apply -f ./deploy/k8s/sample-tracking/app.yaml
$ kubectl apply -f ./deploy/k8s/sample-tracking/app-basic-service.yaml
Get the IP address for the service.
$ kubectl get service/sample-tracking
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
sample-tracking LoadBalancer 10.110.5.220 34.82.204.132 80:31412/TCP 3m42s
Set up Helm locally.
$ brew install kubernetes-helm
Or make sure it's up to date if already installed.
$ brew upgrade kubernetes-helm
Reserve an **unused/unbound** regional external IP address from GCP for the nginx load balancer.
gcloud compute addresses create sample-tracking --region <CLUSTER-REGION>
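To look up the reserved address (needed for the helm install below), you can run:
gcloud compute addresses describe sample-tracking --region <CLUSTER-REGION>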
Install the nginx-ingress chart with the custom static IP. If you are installing multiple ingresses in the same cluster, you must give them different names.
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update
$ helm install nginx-ingress stable/nginx-ingress --set controller.service.loadBalancerIP=<RESERVED-IP>
We can use the following command to check when our static IP has been assigned to the load balancer.
$ kubectl get services -o wide nginx-ingress-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-ingress-nginx-ingress LoadBalancer 10.110.4.204 <RESERVED-IP> 80:31312/TCP,443:30326/TCP 85s app=controller
Once this is done, create the application, service, and ingress to be exposed by the load balancer.
kubectl apply -f ./deploy/k8s/sample-tracking/app.yaml
kubectl apply -f ./deploy/k8s/sample-tracking/app-service.yaml
kubectl apply -f ./deploy/k8s/sample-tracking/app-ingress.yaml
- Add test coverage
- Move all CircleCI environment variables into the Project Environment Variables.
Mantis Viewer is distributed under the Apache 2 license. See the LICENSE file for details.