
ghacupha/fastapi-ml-quickstart


FastAPI ML quickstart

This is a quickstart app for serving a machine learning model over an API with FastAPI.

Environment

For building Docker images in CI, configure the following variables in the GitHub repository:

  • DOCKERHUB_USERNAME
  • DOCKERHUB_TOKEN
  • DOCKERHUB_ORG
  • DOCKERHUB_REPO
  • DOCKERHUB_VER
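
These variables are typically consumed by a CI workflow that logs in to Docker Hub and pushes the built image. A minimal sketch of such a GitHub Actions workflow — the file name, action versions, and trigger branch here are illustrative assumptions, not the repository's actual workflow:

```yaml
# .github/workflows/docker.yml (illustrative; the repo's actual workflow may differ)
name: docker-image
on:
  push:
    branches: [master]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ${{ secrets.DOCKERHUB_ORG }}/${{ secrets.DOCKERHUB_REPO }}:${{ secrets.DOCKERHUB_VER }}
```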

Project setup

  1. Create the virtual environment.
virtualenv /path/to/venv --python=/path/to/python3

You can find the path to your python3 interpreter with the command which python3.

  2. Activate the environment and install the dependencies.
source /path/to/venv/bin/activate
pip install -r requirements.txt

  3. Launch the service.
uvicorn api.main:app

Posting requests locally

When the service is running, open the interactive API documentation at

http://127.0.0.1:8000/docs

(uvicorn serves on port 8000 by default), or send a request with curl.
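
As an alternative to curl, a request can be posted from Python with only the standard library. The /predict path and the {"data": [[...]]} payload shape below are assumptions for illustration, not the repository's confirmed schema:

```python
# Hypothetical client for the running service; the /predict path and the
# {"data": [[...]]} payload shape are assumptions, not the repo's actual schema.
import json
import urllib.request


def build_predict_request(rows, url="http://127.0.0.1:8000/predict"):
    """Build (but do not send) a JSON POST request for the service."""
    body = json.dumps({"data": rows}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_predict_request([[0.1, 0.2, 0.3, 0.4]])
    # Sending the request requires the service to be running locally.
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))
```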

Deployment with Docker

  1. Build the Docker image
docker build --file Dockerfile --tag fastapi-ml-quickstart .
  2. Run the Docker image
docker run -p 8000:8000 fastapi-ml-quickstart
  3. Enter the image with an interactive shell
docker run -it --entrypoint /bin/bash fastapi-ml-quickstart
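
The repository ships its own Dockerfile; as a rough sketch of what such a file might look like for this kind of service (base image, paths, and port here are illustrative assumptions):

```dockerfile
# Illustrative Dockerfile; the repository's actual Dockerfile may differ.
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["uvicorn", "api.main:app", "--host", "0.0.0.0", "--port", "8000"]
```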

Deployment with Heroku

  1. Log in to Heroku
$ heroku login
  2. Create the Heroku app in your Heroku account, then add it as a git remote
$ heroku git:remote -a fastapi-ml-quickstart
  3. Push the latest commit to the heroku master branch
$ git push heroku master
  4. Check the logs with
$ heroku logs --tail
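
When deploying this way (as a plain Python app rather than a container), Heroku needs a Procfile that binds the server to the port Heroku assigns. A one-line sketch, assuming the uvicorn entry point used above:

```
web: uvicorn api.main:app --host 0.0.0.0 --port $PORT
```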

docker-compose

  1. Launch the service
docker-compose up

This command looks for the docker-compose.yaml configuration file. If you want to use another configuration file, specify it with the -f switch, as the testing command below does.

  2. Run the tests
docker-compose -f docker-compose.test.yaml up --abort-on-container-exit --exit-code-from fastapi-ml-quickstart
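
The repository provides its own compose files; as an illustrative sketch of a minimal docker-compose.yaml for this service (the service name matters, since the testing command's --exit-code-from argument must match it):

```yaml
# Illustrative docker-compose.yaml; the repository's actual file may differ.
version: "3.8"
services:
  fastapi-ml-quickstart:
    build: .
    ports:
      - "8000:8000"
```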

Acknowledgments

The bulk of the source code in this project was created by Tivadar Danka.

About

A sample project for deploying ML models.
