Application of ETL process on raw used cars dataset scraped from PakWheels along with its analysis using Jupyter Notebook.


Overview · Tools · Architecture · Demo · Support · License

Overview

PakWheels is the largest online marketplace for car shoppers and sellers in Pakistan. It aggregates thousands of new, used, and certified second-hand cars from thousands of dealers and private sellers.

This project applies an Extract, Transform, Load (ETL) process to a used and new cars dataset scraped from PakWheels. Exploratory Data Analysis (EDA) is then performed on the result in a Jupyter Notebook to extract key insights about Pakistan's used cars marketplace.

The repository directory structure is as follows:

├── LICENSE
├── README.md          <- The top-level README for developers using this project.
│
├── run.py             <- Python script to start the ETL process.
│
├── data
│   ├── processed      <- The final, canonical data set for analysis.
│   └── raw            <- The original, immutable data dump.
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-mwg-initial-data-exploration`.
│
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module.
│   │
│   └── data           <- Script to perform ETL.
│       └── make_dataset.py
│
└── resources          <- Resources for this README file.
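
Since `src/data/make_dataset.py` is not reproduced here, the following is only a minimal sketch of what an ETL script for this layout could look like. The file names (`pakwheels_raw.csv`, `pakwheels_clean.csv`) and the `price` column are assumptions for illustration, not the project's actual schema.

```python
# Hypothetical sketch of src/data/make_dataset.py -- file names and column
# names are assumptions for illustration, not the actual implementation.
from pathlib import Path

import pandas as pd


def extract(raw_dir: Path) -> pd.DataFrame:
    """Read the raw scraped dump (assumed to be a CSV) into a DataFrame."""
    return pd.read_csv(raw_dir / "pakwheels_raw.csv")  # assumed file name


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleaning: drop duplicates, drop rows without a price, fix dtypes."""
    df = df.drop_duplicates()
    if "price" in df.columns:  # column name is an assumption
        df = df.dropna(subset=["price"])
        df["price"] = pd.to_numeric(df["price"], errors="coerce")
    return df


def load(df: pd.DataFrame, processed_dir: Path) -> None:
    """Write the cleaned data to data/processed for the notebooks to use."""
    processed_dir.mkdir(parents=True, exist_ok=True)
    df.to_csv(processed_dir / "pakwheels_clean.csv", index=False)


def main(raw_dir: str = "data/raw", processed_dir: str = "data/processed") -> None:
    load(transform(extract(Path(raw_dir))), Path(processed_dir))


if __name__ == "__main__":
    main()
```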

Tools

To build this project, the following tools were used:

  • Python
  • PyCharm
  • GitHub
  • Jupyter Notebook

Architecture

The architecture of this project is straightforward and can be understood from the following diagram.

According to the diagram, a Python script first performs ETL on the raw dataset. The output of this process is clean data, which is then used for exploratory analysis in a Jupyter Notebook (see the sketch below).
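
As an illustration of the EDA step, here is a minimal sketch of a notebook cell that loads the processed output. The file name and the `make`/`price` columns are assumptions, not the project's actual schema.

```python
# Hypothetical notebook cell -- file and column names are assumptions.
import pandas as pd

cars = pd.read_csv("data/processed/pakwheels_clean.csv")  # assumed output of the ETL step

print(cars.shape)       # how many listings survived cleaning
print(cars.describe())  # summary statistics for numeric columns

# Example insight: median asking price per make (column names are assumed)
if {"make", "price"}.issubset(cars.columns):
    print(cars.groupby("make")["price"].median().sort_values(ascending=False).head(10))
```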

Demo

The figure below shows a snapshot of the ETL process being run from the terminal by typing `run.py <raw data directory>`. (The figure may take a few seconds to load.)
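
For reference, here is a minimal sketch of how `run.py` could wire the command line to the ETL step. The argument names and the call into `src.data.make_dataset` are assumptions based on the directory layout above, not the actual entry point.

```python
# Hypothetical sketch of run.py -- argument names and the import are
# assumptions based on the repository layout, not the actual script.
import argparse

from src.data import make_dataset  # assumed module from the layout above


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run ETL on the raw PakWheels dump.")
    parser.add_argument("raw_dir", help="Directory containing the raw scraped data.")
    parser.add_argument("--processed-dir", default="data/processed",
                        help="Where to write the cleaned dataset.")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    make_dataset.main(raw_dir=args.raw_dir, processed_dir=args.processed_dir)
```

Under these assumptions, it would be invoked as, for example, `python run.py data/raw`.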

Support

If you have any doubts, queries, or suggestions, please connect with me on any of the following platforms:

LinkedIn · Gmail

License

CC BY-NC-SA

This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.