Premier League Data Pipeline

Overview

This repository contains a personal project designed to enhance my skills in Data Engineering. It focuses on developing data pipelines that extract, transform, and load data from various sources into diverse databases. Additionally, it involves creating a dashboard with visualizations using Streamlit.

Important

Many architectural choices and decisions in this project are intentionally not the most efficient; they exist for the sake of practicing and learning.

Important Links

Infrastructure

Tools & Services

Google Cloud, Streamlit, Terraform, Docker, Prefect, dbt

Databases

Firestore, PostgreSQL, BigQuery

Code Quality

pre-commit

| Security Linter | Code Formatting | Type Checking | Code Linting |
| --------------- | --------------- | ------------- | ------------ |
| bandit          | ruff-format     | mypy          | ruff         |

Data and CI/CD Pipelines

Data Pipelines

Data Pipeline 1

Orchestrated with Prefect, this pipeline runs a Python file that extracts stock data for Manchester United (a sketch of the flow follows the steps below).

  1. Data from the Financial Modeling Prep API is extracted with Python using the /quote endpoint.
  2. The data is loaded directly into a PostgreSQL database hosted on Cloud SQL with no transformations.
  3. Once the data is loaded into PostgreSQL, Datastream replicates the data into BigQuery. Datastream checks for staleness every 15 minutes.
  4. dbt is used to transform the data in BigQuery and create a view with transformed data.
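
A minimal sketch of what this flow could look like, assuming Prefect 2-style tasks, a hypothetical `stock_quotes` table, and environment variables for credentials (none of these names are taken from the project's actual code):

```python
import os

import pandas as pd
import requests
from prefect import flow, task
from sqlalchemy import create_engine

# /quote endpoint for Manchester United's ticker (MANU) on the Financial Modeling Prep API.
FMP_QUOTE_URL = "https://financialmodelingprep.com/api/v3/quote/MANU"


@task
def extract_quote() -> pd.DataFrame:
    """Extract the latest stock quote from the /quote endpoint."""
    response = requests.get(FMP_QUOTE_URL, params={"apikey": os.environ["FMP_API_KEY"]}, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())


@task
def load_quote(df: pd.DataFrame) -> None:
    """Append the raw rows to Cloud SQL (PostgreSQL) with no transformations."""
    engine = create_engine(os.environ["POSTGRES_CONNECTION_STRING"])
    df.to_sql("stock_quotes", engine, if_exists="append", index=False)


@flow
def stock_pipeline() -> None:
    load_quote(extract_quote())


if __name__ == "__main__":
    stock_pipeline()
```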

Data Pipeline 2

Orchestrated with Prefect, this pipeline runs Python files that perform a full ETL process (a sketch of part of the flow follows the steps below).

  1. Data is extracted from multiple API sources:
    • Data from the Football Data API is extracted to retrieve information on the current standings, team statistics, top scorers, squads, fixtures, and the current round. The following endpoints are used:
      • /standings
      • /teams
      • /top_scorers
      • /squads
      • /fixtures/current_round
      • /fixtures
    • Data from the NewsAPI is extracted to retrieve news article links, filtered to Premier League coverage from Sky Sports, The Guardian, and 90min. The following endpoint is used:
      • /everything
    • Data from a self-built API written in Golang is extracted to retrieve information on teams' stadiums. The following endpoint is used:
      • /stadiums
    • Data from the YouTube API is extracted to retrieve the latest highlights from the NBC Sports YouTube channel.
  2. Python performs any necessary transformations, such as converting data types or checking for NULL values.
  3. Most of the data is then loaded into its respective tables in BigQuery. Fixture data is loaded into Firestore as documents categorized by round number.
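
A rough sketch of one slice of this ETL (the standings table plus the Firestore fixture documents), with a placeholder base URL, auth header, dataset, and collection names invented for illustration:

```python
import os

import pandas as pd
import requests
from google.cloud import bigquery, firestore
from prefect import flow, task

# Placeholder values: the real base URL, auth header, and table/collection names may differ.
BASE_URL = "https://api.football-data.example.com"
HEADERS = {"x-api-key": os.environ["FOOTBALL_DATA_API_KEY"]}


@task
def extract_standings() -> pd.DataFrame:
    """Extract the current standings from the /standings endpoint."""
    response = requests.get(f"{BASE_URL}/standings", headers=HEADERS, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())


@task
def transform_standings(df: pd.DataFrame) -> pd.DataFrame:
    """Example transformations: convert data types and drop rows with NULL values."""
    df["points"] = df["points"].astype(int)
    return df.dropna()


@task
def load_standings(df: pd.DataFrame) -> None:
    """Overwrite the standings table in BigQuery with the latest data."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE")
    client.load_table_from_dataframe(df, "premier_league.standings", job_config=job_config).result()


@task
def load_fixtures(fixtures: list[dict], round_number: int) -> None:
    """Store each fixture as a Firestore document grouped by round number."""
    db = firestore.Client()
    matches = db.collection("fixtures").document(f"round_{round_number}").collection("matches")
    for fixture in fixtures:
        matches.document(str(fixture["id"])).set(fixture)


@flow
def etl_pipeline() -> None:
    load_standings(transform_standings(extract_standings()))
    # Fixture extraction and load_fixtures(...) would follow the same pattern.
```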

Data Pipeline 3

  1. The standings and top scorers data in BigQuery are exported daily to a Cloud Storage bucket using Cloud Scheduler, to be used in another project (a sketch of the export follows below).
    • The other project is a [CLI](https://github.com/digitalghost-dev/pl-cli/) tool written in Golang.
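
The export itself could be expressed with the BigQuery Python client roughly as below (the dataset and bucket names are placeholders); in this project the job is triggered on a schedule by Cloud Scheduler rather than run by hand:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder dataset and bucket names.
for table in ("standings", "top_scorers"):
    extract_job = client.extract_table(
        f"premier_league.{table}",                   # source BigQuery table
        f"gs://premier-league-exports/{table}.csv",  # destination object in Cloud Storage
        job_config=bigquery.ExtractJobConfig(destination_format="CSV"),
    )
    extract_job.result()  # block until the export finishes
```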

Pipeline Diagram

data-pipeline-flowchart

CI/CD Pipeline

The CI/CD pipeline is focused on building the Streamlit app into a Docker container that is then pushed to Artifact Registry and deployed to Cloud Run as a Service. Different architectures are built for different machine types and pushed to Docker Hub.

  1. The repository code is checked out and a Docker image containing the updated streamlit_app.py file is built.
  2. The newly built Docker image is pushed to Artifact Registry.
  3. The Docker image is then deployed to Cloud Run as a Service.

Pipeline Diagram

cicd_pipeline


Security

  • Syft and Grype work together to scan the Streamlit Docker image. Syft creates an SBOM and Grype scans the SBOM for vulnerabilities. The results are sent to the repository's Security tab.
  • Snyk is also used to scan the repository for vulnerabilities in the Python packages.