
Indian ALPR System

Detects license plates in car images using deep learning

🚀 Check out the spotlight on Best of Streamlit! 🔥 (Computer Vision Section)

Motivation

  • The project was primarily built to tackle a myth: "Deep Learning is only useful for Big Data".

Instructions

  • Run in Google Colab
  • View source on GitHub
  • Download notebook



Demo

Link: Deploy on Colab in 2 mins

Home page

Demo screenshots:

  • Object detection using YOLOv3
  • Object detection using RetinaNet
  • Enhancement operations on cropped number plates
  • OCR (Optical Character Recognition) on license plates
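
The screenshots above trace the app's flow: detect the plate in the full image, crop it, enhance the crop, and run OCR on the result. Below is a minimal sketch of the last three stages, assuming OpenCV and pytesseract; read_plate and the (x1, y1, x2, y2) box format are illustrative names, with the box itself coming from a detector such as YOLOv3 or RetinaNet:

import cv2
import pytesseract  # requires the Tesseract OCR engine to be installed

def read_plate(image_bgr, box):
    """Crop, enhance, and OCR a single detected plate region."""
    x1, y1, x2, y2 = box                 # box from the YOLOv3/RetinaNet detector
    plate = image_bgr[y1:y2, x1:x2]      # crop the detected plate
    gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding is one common enhancement before OCR.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # --psm 7 tells Tesseract to treat the crop as a single line of text.
    return pytesseract.image_to_string(binary, config="--psm 7").strip()

Otsu thresholding is just one enhancement choice; the repo's actual enhancement step may differ.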

Directory Tree

├── banners                           <- Images for skill banner and project banner
│
├── cfg                               <- Configuration files
│
├── data
│   ├── sample_images                 <- Sample images for inference
│   ├── 0_raw                         <- The original, immutable data dump.
│   ├── 1_external                    <- Data from third party sources.
│   ├── 2_interim                     <- Intermediate data that has been transformed.
│   └── 3_processed                   <- The final, canonical data sets for modeling.
│
├── docs                              <- Streamlit / GitHub Pages website
│
├── notebooks                         <- Jupyter notebooks. Naming convention is a number (for ordering),
│                                        the creator's initials, and a short `-` delimited description, e.g.
│                                        `1.0-jqp-initial-data-exploration`.
│
├── output
│   ├── features                      <- Fitted and serialized features
│   ├── models                        <- Trained and serialized models, model predictions, or model summaries
│   │   ├── snapshots                 <- Training snapshots.
│   │   ├── inference                 <- Trained model converted to an inference model.
│   │   └── TrainingOutput            <- Output logs
│   └── reports                       <- Generated analyses as HTML, PDF, LaTeX, etc.
│       └── figures                   <- Generated graphics and figures to be used in reporting
│
├── src                               <- Source code for use in this project.
│   ├── __init__.py                   <- Makes src a Python module
│   │
│   ├── data                          <- Scripts to download or generate data
│   │   ├── make_dataset.py
│   │   ├── generate_pascalvoc.py
│   │   ├── generate_annotations.py
│   │   └── preprocess.py
│   │
│   ├── features                      <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models                        <- Scripts to train models and then use trained models to make
│   │   │                                predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization                 <- Scripts to create exploratory and results-oriented visualizations
│       └── visualize.py
├── utils                             <- Utility scripts for Streamlit, YOLO, RetinaNet, etc.
├── serve                             <- HTTP API for serving predictions using Streamlit
│   ├── Dockerfile                    <- Dockerfile for the HTTP API
│   ├── Pipfile                       <- Pipfile for reproducing the serving environment
│   └── app.py                        <- Entry point of the HTTP API (Streamlit app)
│
├── .dockerignore                     <- Docker ignore file
├── .gitignore                        <- GitHub's excellent Python .gitignore customized for this project
├── app.yaml                          <- Contains configuration applied to each container started
│                                        for that service
├── config.py                         <- Global configuration variables
├── LICENSE                           <- The project's license.
├── Makefile                          <- Makefile with commands like `make data` or `make train`
├── README.md                         <- The top-level README for developers using this project.
├── tox.ini                           <- tox file with settings for running tox; see tox.readthedocs.io
├── requirements.txt                  <- The requirements file for reproducing the analysis environment, e.g.
│                                        generated with `pip freeze > requirements.txt`
└── setup.py                          <- Makes the project pip-installable (`pip install -e .`) so src can be imported
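
Since setup.py makes the project pip-installable, src can be imported as a package after an editable install (`pip install -e .` from the repository root). A minimal sketch, assuming the module layout above; what each module actually exports is not documented here:

# Illustrative imports only: the module paths follow the tree above, but
# the functions each module exports are not documented in this README.
from src.data import make_dataset      # download or generate data
from src.models import train_model     # train a detector
from src.models import predict_model   # run inference with a trained model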

To Do

  1. Convert the app to run without any internet connection.
  2. Work with video detection (see the sketch after this list).
  3. Try AWS Textract OCR, SSD, and R-CNN.
  4. Try a larger dataset, e.g. Google's Open Images Dataset V6.
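
For item 2, a natural starting point is to apply the existing per-image detector frame by frame. A minimal OpenCV sketch, where detect_plates is a hypothetical stand-in for the repo's YOLOv3/RetinaNet detector returning (x1, y1, x2, y2) boxes:

import cv2

def run_on_video(video_path, detect_plates):
    """Apply a per-image plate detector to each frame of a video."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:                                       # end of stream or read error
            break
        for (x1, y1, x2, y2) in detect_plates(frame):    # hypothetical detector
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow("ALPR", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):            # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()

For real-time use, frame skipping or tracking between detections would likely be needed on top of this.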

Bug / Feature Request

If you find a bug (the website couldn't handle the query and/or gave undesired results), kindly open an issue here, including your search query and the expected result.

If you'd like to request a new feature, feel free to do so by opening an issue here. Please include sample queries and their corresponding results.

Technologies Used

Team

Uday Lunawat

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
  3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
  4. Push to the Branch (`git push origin feature/AmazingFeature`)
  5. Open a Pull Request

License

Apache license

Copyright 2020 Uday Lunawat

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Credits

Show some ❤️ by starring some of the repositories!

Made with 💙 for India