

API: creation and deployment

API setup

Install all project requirements with pip:

pip install -r requirements.txt

How to run the API?

We'll use Uvicorn, a fast ASGI server (it can run asynchronous code in a single process), to launch our application. Use the following command to start the server:

uvicorn app.api:app \
    --host 0.0.0.0 \
    --port 5000 \
    --reload \
    --reload-dir app \
    --reload-dir models

Or, as a single line (note that this variant uses port 8000 and watches the deploy-GAISSA directory instead of models):

uvicorn app.api:app --host 0.0.0.0 --port 8000 --reload --reload-dir deploy-GAISSA --reload-dir app

In detail:

  • uvicorn app.api:app is the location of the app object (app directory > api.py script > app object);
  • --reload makes the server restart every time the code is updated;
  • --reload-dir app restricts reloads to updates in the app/ directory;
  • --reload-dir models also triggers reloads on updates to the models/ directory.

API running.

We can now test that the application is working. These are some of the possibilities:

  • Visit localhost:5000

  • Use curl

    curl -X GET http://localhost:5000/
  • Access the API programmatically, e.g.:

    import json
    import requests
    
    response = requests.get("http://localhost:5000/")
    print(json.loads(response.text))
  • Use an external tool like Postman, which lets you execute and manage tests that can be saved and shared with others.
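Beyond the root endpoint, you can sketch a prediction request in the same style. The endpoint path and payload fields below are assumptions for illustration; match them to the routes in app/api.py and the schema in app/schemas.py:

```python
import json
import urllib.error
import urllib.request

# Hypothetical inference endpoint and payload: adjust the path and
# fields to match your own app/api.py and app/schemas.py.
url = "http://localhost:5000/models/my_model/predict"
payload = {"input_text": "An example sentence."}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

try:
    # Send the POST request and print the JSON response from the API.
    with urllib.request.urlopen(request, timeout=5) as response:
        print(json.loads(response.read()))
except (urllib.error.URLError, OSError):
    # Server not running: show the request we would have sent instead.
    print("API not reachable; would POST:", json.dumps(payload))
```

Using the standard library's urllib keeps the sketch dependency-free; with the requests library installed, requests.post(url, json=payload) is equivalent.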

Visit the Swagger UI at http://localhost:5000/docs for the interactive documentation and select one of the models. The documentation generated via Redoc is accessible at the /redoc endpoint.

API User Interface in localhost:5000/docs endpoint.

To make an inference, click the "Try it out" button, fill in the request body, and click "Execute".

You should obtain a 200 response code after executing the POST method of the model:

API response on Swagger UI.

API response on terminal.

API creation

The API in this project is loosely based on the Made With ML tutorial "APIs for Machine Learning" and FastAPI Lab.

Follow the guide at https://madewithml.com/courses/mlops/api/

How to add a new model in this API?

  1. Add a new model class. File: ../app/models.py
    • Add a new class for your model; its parent class is Model.
    • Make sure the NewModel.predict() method is implemented according to the model.
    • Add its ML_task.
  2. Create a model schema. File: ../app/schemas.py
    • Create a schema with one example.
  3. Add an endpoint. File: ../app/api.py
    • Using the new model's class and schema, add an endpoint that accepts POST requests and makes predictions with the model.
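The three steps above can be sketched in plain Python. The real project uses FastAPI and pydantic, so the class and field names here are assumptions meant only to show how the pieces fit together:

```python
# Step 1 (app/models.py): the parent class and a new model subclass.
class Model:
    """Parent class that model wrappers inherit from."""
    ml_task = "undefined"  # stand-in for the project's ML_task attribute

    def predict(self, data):
        raise NotImplementedError

class NewModel(Model):
    ml_task = "text-classification"  # the ML_task of the new model

    def predict(self, data):
        # Replace with the real inference logic for your model.
        return {"label": "positive", "input": data["input_text"]}

# Step 2 (app/schemas.py): a schema with one example
# (the project uses a pydantic model here).
new_model_example = {"input_text": "An example sentence."}

# Step 3 (app/api.py): the POST endpoint calls the model's predict().
def predict_endpoint(payload):
    return NewModel().predict(payload)

print(predict_endpoint(new_model_example))
```

In the actual FastAPI app, step 3 would be a function decorated with @app.post(...) whose request body is validated against the schema from step 2.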

Configure a proxy server (Optional)

  1. Stop the API process. You can use tools like htop to find the process id (PID) and kill it.

  2. Install nginx on the virtual machine. For example, on Ubuntu:

sudo apt update
sudo apt install nginx
  3. Configure nginx:
sudo vim /etc/nginx/sites-available/fastapi-app
  4. Copy the following configuration into the file, replacing X.X.X.X with the IP address of your server:
server {
    listen 80;

    # add here the ip address of your server
    # or a domain pointing to that ip (like example.com or www.example.com)
    server_name X.X.X.X;

    location / {
        proxy_pass http://localhost:5000;
    }
}

Then run the following command to add a soft link into the sites-enabled folder of nginx:

sudo ln -s /etc/nginx/sites-available/fastapi-app /etc/nginx/sites-enabled/fastapi-app
  5. Restart nginx:
sudo systemctl restart nginx
  6. Relaunch the API:
uvicorn app.api:app  --host 0.0.0.0 --port 5000
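To confirm the proxy is forwarding requests, you can query the root endpoint through nginx on port 80 instead of 5000 (a sketch using only the standard library; run it on the server itself, or replace localhost with your server's IP):

```python
import json
import urllib.error
import urllib.request

# nginx listens on port 80 and proxies to the API on port 5000, so the
# same root endpoint should now answer without the :5000 suffix.
proxied_url = "http://localhost/"

try:
    with urllib.request.urlopen(proxied_url, timeout=5) as response:
        # If the proxy works, this prints the API's root JSON response.
        print(json.loads(response.read()))
except (urllib.error.URLError, OSError):
    print("Proxy not reachable; check `sudo systemctl status nginx`.")
```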

certbot