- `main`: The branch that can be released as a product.
- `develop`: The branch being developed for the next release.
- `feat/{feature-name}`: The branch where a feature is developed.
- `release-{version}`: The branch preparing for a release.
- `hotfix-{version}`: The branch for fixing bugs found in a released version.
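For example, a new feature branch would typically be cut from develop; the branch name below is only a hypothetical illustration of the convention:

```bash
# Start from an up-to-date develop branch
git checkout develop
git pull
# Create a feature branch following the feat/{feature-name} convention (name is hypothetical)
git checkout -b feat/image-upload
```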
| type | description |
|---|---|
| feat | New features |
| fix | Bug fixes |
| build | Changes to build-related files, installing or removing modules |
| chore | Other minor changes |
| docs | Documentation changes |
| style | Code style and formatting |
| refactor | Code refactoring |
| test | Changes to test code |
| perf | Performance improvements |
[feat] {new feature}
[chore] {minor changes}
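As a concrete illustration, commits following this convention would look like the following (the messages themselves are hypothetical):

```bash
# Example commit messages using the [type] prefix convention (messages are hypothetical)
git commit -m "[feat] add image upload endpoint"
git commit -m "[chore] bump package versions"
```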
Rebasing does not leave merge commits behind, so the commit history stays cleaner.
To resolve conflicts that occur when merging your branch into the main branch, follow the steps below.
### Now you are in {your branch}
### Move to the main branch and pull
git checkout main
git pull
### Move to {your branch} and merge main
git checkout {your branch}
git merge main
### Resolve conflicts in your code editor
### Push your code
git push
Merge the pull request.
Setting up the FastAPI development environment.
curl -sSL https://install.python-poetry.org | python3 -
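A quick way to confirm the installation succeeded:

```bash
# Print the installed Poetry version
poetry --version
```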
From the project root, move to the backend directory, create a virtual environment, and install the packages.
cd backend
poetry install
Activate the virtual environment.
poetry shell
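If you prefer not to spawn a new shell, Poetry can also run a single command inside the project's virtual environment with poetry run, e.g.:

```bash
# Run a one-off command inside the Poetry-managed environment
poetry run pytest
```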
For other commands, refer to the official Poetry documentation.
e.g.) Add a package to pyproject.toml and install it:
poetry add pytest --group local
e.g.) Manage environments:
poetry env use 3.11
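To inspect the environments Poetry manages for this project:

```bash
# List the virtual environments Poetry knows about for this project
poetry env list
# Show the Python version and path of the currently activated environment
poetry env info
```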
Create a .vscode/settings.json file and paste the contents below.
{
    "python.linting.mypyEnabled": true,
    "python.linting.pylintEnabled": false,
    "python.linting.enabled": true,
    "python.linting.flake8Enabled": false,
    "[python]": {
        "editor.defaultFormatter": "ms-python.black-formatter"
    }
}
Both the development and deployment environments are Dockerized, so you must install Docker and run the docker compose command. For Docker Compose v1, use the docker-compose command instead.
### In your local environment, build and run using the docker-compose.local.yaml file (only the DB runs in Docker).
docker compose -f docker-compose.local.yaml up -d --build
uvicorn app.main:spire_app --reload --log-level debug --host 0.0.0.0 --port 8000
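If the app cannot connect to the database, it may be worth checking that the DB container defined in docker-compose.local.yaml is actually up:

```bash
# List the containers started from the local compose file and their status
docker compose -f docker-compose.local.yaml ps
```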
### In the EC2 environment, build and run using the docker-compose.prod.yaml file.
docker compose -f docker-compose.prod.yaml up -d --build
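To follow the application logs after bringing the stack up (a quick check, not a required step):

```bash
# Tail the logs of the services defined in the prod compose file
docker compose -f docker-compose.prod.yaml logs -f
```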
- `docker compose up --build`: If there are code changes, rebuilds and runs the containers.
- `docker compose up`: Runs the Docker containers; if they have not been built yet, builds them once first.
- `docker compose down`: Terminates the Docker containers.
- `docker compose down -v`: Terminates the Docker containers and deletes the volumes as well.
For detailed options, see link.
With the Docker DB container running, run the commands below to proceed with development (see the local Dockerfile).
cd backend
poetry install
poetry shell
uvicorn app.main:spire_app --reload --host 0.0.0.0
pytest
flake8 app
mypy app
black app
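A sketch for running all of the checks above in one go, assuming you are in the backend directory with the Poetry environment active:

```bash
# Run tests, lint, type checks, and formatting in sequence; stop at the first failure
pytest && flake8 app && mypy app && black app
```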
alembic revision --autogenerate -m "type your commit message"
Afterwards, the migration history is saved in ./backend/migrations/versions.
The command below rarely needs to be run manually; it is set to run automatically when docker compose up --build is executed.
alembic upgrade head
alembic downgrade -1
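To see where the database currently sits in the migration history, Alembic's standard inspection commands can help (not project-specific):

```bash
# Show the revision the database is currently at
alembic current
# Show the migration history recorded under migrations/versions
alembic history --verbose
```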
Download the Dockerfile in /inference_server. The following command will build a Docker image for our Triton inference server.
docker build -t spire_ai - < Dockerfile
If you don't have sufficient memory, you may need to run only one model per container. The commands below create one image per model:
docker run -it --name spire_stable_diffusion spire_ai /bin/sh
cd models
rm -rf open_seed
exit
docker commit spire_stable_diffusion spire_ai_stable_diffusion
docker run -it --name spire_open_seed spire_ai /bin/sh
cd models
rm -rf stable_diffusion
exit
docker commit spire_open_seed spire_ai_open_seed
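A quick check that both single-model images were created (the image names come from the docker commit commands above):

```bash
# spire_ai, spire_ai_stable_diffusion, and spire_ai_open_seed should all be listed
docker images | grep spire_ai
```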
The Triton server can be deployed in various environments, but below is how we deployed it on our on-premise server. Please refer to https://github.com/triton-inference-server/server/tree/main for more details. Our method is essentially ad hoc, since our permissions on the SNU GPU server are limited and we have no prior experience with k8s.
Download the .yaml files in /inference_server/k8s.
First, the following command will download all pretrained weights for our Triton inference server and initialize the server.
kubectl apply -f pod.yaml
If everything is okay and ready, the pod should come up and reach the Running state.
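One way to watch this with standard kubectl (the pod is the one created from pod.yaml):

```bash
# Watch the pod until it reaches the Running state
kubectl get pods -w
```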
Delete the pod before moving on.
kubectl delete pods --all
Second, these two commands will create a deployment and a service. Once they are ready, the server can receive requests and send responses. You must fill in the NodePort numbers in service.yaml in order to create the service.
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
You can check the status of deployments and services via the following commands.
kubectl get deployments
kubectl get services
The address of the inference server will be as below. <public-node-ip> is the node IP of the SNU GPU server and <node-port> is the number you selected in service.yaml for the NodePort that corresponds to http.
http://<public-node-ip>:<node-port>
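Once the service is up, a quick way to confirm the server is reachable, assuming Triton's default HTTP health endpoint:

```bash
# Returns HTTP 200 when the Triton server is ready to serve inference requests
curl -v http://<public-node-ip>:<node-port>/v2/health/ready
```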
Don't forget to delete all services, deployments and pods once you are done.
kubectl delete services --all
kubectl delete deployments --all