This repository has been archived by the owner on Aug 16, 2023. It is now read-only.

Summer of code anay293 ML #49

Open
anay293 wants to merge 16 commits into best-machine-learning

Conversation

anay293

@anay293 anay293 commented Jul 31, 2023

Hi!

Here is my final submission for DevClub Summer of Code 2023!

I would also like to apply for the following categories:

  • Most Starred Fork
  • Best All-Rounder
  • Best App submission
  • Best Backend submission
  • Best Frontend submission
  • Best Machine Learning submission

Tasks Done

  • App Development
    • Week 1: Learn Flutter and build a UI using Stateful and Stateless Widgets
    • Week 2: Dive deeper into Flutter's Widgets by building ProductTile, Promoted product banners and implementing forms. Make the app dynamic and interactive!
    • Week 3: Communicate with server resources and manage product data using HTTP methods. Enhancing our app's capabilities for seamless product management and smooth data handling
    • Week 4: Implement Firebase user authentication, integrate cloud storage for image handling and explore Stripe for seamless payments
  • Backend Development
    • Week 1: Learn how a backend works by analysing a real-world website, and then make your own backend, using raw Python & SQL and use it to create a simple URL shortener
    • Week 2: Setup a Django backend server, and learn database models, rendering templates, user authentication and forms. Bonus: make it production-ready!
    • Week 3: Convert your Django backend into a REST API, and learn about function based views, JWT authentication and documentation with Postman. Bonus: make it enterprise-grade!
    • Week 4: Deploy your Django project on Microsoft Azure cloud platform, and learn about virtual machines, domain names and HTTPS.
  • Frontend Development
    • Week 1: Learn how websites work using DevTools, and then learn how to build a simple static website using HTML and CSS, taking designs from Figma and host it on GitHub Pages
    • Week 2: Use Javascript to create a repository network analyzer consuming the GitHub API, and use Bootstrap to make our previous webpage responsive!
    • Week 3: Learn ReactJS, and make your website better using JSX components and State management. Bonus: improve the code quality too!
    • Week 4: Design pages to render, create, and update products. Fetch this data from a public API and understand the interface using API specs. Learn about status codes, state management, error handling, and much more!
  • Machine Learning
    • Week 1: Set up an ML environment to run your code on GPUs, then select and build a price prediction model, and also scrape your own dataset for it from the web.
    • Week 2: Use YOLO to identify and describe items to be sold from the pictures, and fine-tune your Object Detection model using your own dataset
    • Week 3: Develop a model that automatically detects, aligns, and enhances images uploaded by sellers in an online marketplace, using techniques such as image recognition, rotation, and image enhancement
    • Week 4: Develop a chatbot designed for a marketplace website, capable of effectively addressing and resolving buyer queries and complaints, by utilizing a well-organized dataset, NLP frameworks and integration for a seamless user experience

Technologies/Tools/Frameworks used/learnt

In Week 1 I used the Pandas, scikit-learn and Matplotlib libraries in Python, built my model with Linear Regression, and explored these libraries to learn more about them.
In Week 2 I used YOLOv4 to build my model, and along the way got more comfortable working from the command prompt.
In Week 3 I used the OpenCV (cv2) and NumPy libraries to build my model.
In Week 4 I used TensorFlow, the Natural Language Toolkit (NLTK), TFLearn and NumPy in Python, and used a .json file as the dataset to train the model.

Features Implemented

Week 1: A model that predicts the price of electronic devices based on the training dataset.
Week 2: A model that detects objects when a picture of them is supplied.
Week 3: Building on the object detection work, a model that aligns images that are not properly oriented and enhances them.
Week 4: A chatbot that can interact with any individual.

Deployment Link

Week 1: https://drive.google.com/file/d/1b5CS2BMvf25Q3rG7M8v1jl7_6FenodNE/view?usp=drive_link
Week 2: https://drive.google.com/file/d/1voFKTaRb6ckHvEuVbD05UzrD2ckE_572/view?usp=drive_link
Week 3: https://drive.google.com/file/d/1oaXi5e9B2P6U8Ou_rVsNg5KtyOiquBVJ/view?usp=drive_link
Week 4: https://drive.google.com/file/d/1MW_S2W378NWNhvb7FD_tCrhmkW15hSO_/view?usp=drive_link

Demo Video

Due to time constraints I was not able to submit a video, but I was told to submit links to the notebooks and explain them briefly instead.

Week 1: https://drive.google.com/file/d/1b5CS2BMvf25Q3rG7M8v1jl7_6FenodNE/view?usp=drive_link
(I used Pandas, scikit-learn and Matplotlib to prepare the dataset to my convenience, then used get_dummies so that the product names could also be used as weighted parameters. I then imported Linear Regression, trained it on my data, and obtained the model I wanted.)
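The pipeline described above can be sketched roughly as follows. The column names and toy values here are invented for illustration; the actual notebook worked on a scraped electronics dataset:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Toy stand-in for the scraped dataset (columns are assumptions).
df = pd.DataFrame({
    "product": ["phone", "laptop", "phone", "tablet"],
    "ram_gb":  [4, 16, 8, 6],
    "price":   [12000, 65000, 18000, 22000],
})

# get_dummies one-hot encodes the product name, turning it into
# numeric columns that the regression can assign weights to.
X = pd.get_dummies(df.drop(columns="price"), columns=["product"])
y = df["price"]

model = LinearRegression().fit(X, y)
```

After fitting, `model.coef_` holds one weight per feature column, including one per dummy-encoded product name.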

Week 2: https://drive.google.com/file/d/1voFKTaRb6ckHvEuVbD05UzrD2ckE_572/view?usp=drive_link

(First, I cloned the Darknet repository from GitHub. Darknet is a framework that implements YOLO, an object detection system. CUDA is crucial for GPU acceleration, which speeds up deep learning computations.

After that, I compiled the Darknet source code using the 'make' command to build the necessary executable files.
To use YOLO, I needed pre-trained weights for YOLOv4. So, I downloaded the weights file using the 'wget' command.

I applied YOLOv4 to perform object detection on an image of an eagle ('eagle.jpg'). YOLO detected objects in the image and generated a new image called 'predictions.jpg,' with bounding boxes around the identified objects.

Then at the end I imported the OpenCV library and Matplotlib for image processing and visualization. I read the 'predictions.jpg' image, created a figure for plotting, and displayed the output image with the bounding boxes drawn around the detected objects.)

Week 3: https://drive.google.com/file/d/1oaXi5e9B2P6U8Ou_rVsNg5KtyOiquBVJ/view?usp=drive_link

(The function takes an input image and aligns it using a technique called Scale-Invariant Feature Transform (SIFT). First, it converts the image to grayscale, which helps in detecting keypoints and descriptors.

Then, it uses the SIFT algorithm to find keypoints and their corresponding descriptors in the grayscale image. These descriptors capture the unique features of the keypoints.

After that, it uses a Brute-Force Matcher to look for similar features between the two sets of descriptors.
Then I applied a distance ratio test: only matches where the distance between two keypoints is within a certain threshold are considered good matches.

Once I had these good matches, I found a perspective transformation matrix using the RANSAC algorithm. This matrix is what aligns the image properly.
Finally, using the perspective transformation matrix, it warps the original image to get the aligned image.
That's how the function works! It's ready to align images, especially ones that may have been taken from different angles.)

Week 4: https://drive.google.com/file/d/1MW_S2W378NWNhvb7FD_tCrhmkW15hSO_/view?usp=drive_link

(So, I started by using the NLTK library to process natural language. First, I downloaded the 'punkt' tokenizer, which helps in splitting sentences into individual words. Then, I used the LancasterStemmer from NLTK to reduce words to their base form.

Next, I prepared the data for training the chatbot. I had a JSON file with labeled intents, like greetings or farewells. I extracted the patterns and their corresponding intents from the JSON and tokenized the patterns into word vectors. The intents were represented as one-hot encoded output vectors.

Then, I built a neural network using TensorFlow/Keras. The model had three dense layers, with the input layer having the same number of neurons as the length of the word vectors. The output layer had neurons equal to the number of intents.

After compiling the model, I trained it on the processed data for 1000 epochs with the chosen optimizer.
Finally, I implemented a chat function where the user can interact with the chatbot. It takes user input, processes it into a word vector, and then feeds it to the trained model for intent prediction. The chatbot responds with one of the predefined responses related to the predicted intent. This goes on until the user types "quit.")
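A minimal sketch of the bag-of-words and one-hot encoding steps described above, using plain whitespace tokenization and lower-casing in place of NLTK's 'punkt' tokenizer and LancasterStemmer; the vocabulary and intent labels are toy examples:

```python
def bag_of_words(sentence, vocabulary):
    """Encode a sentence as a 0/1 vector over a fixed vocabulary."""
    tokens = {t.lower() for t in sentence.split()}
    return [1 if word in tokens else 0 for word in vocabulary]

def one_hot(intent, intents):
    """One-hot encode the target intent for training."""
    return [1 if intent == label else 0 for label in intents]

vocab = ["hello", "hi", "bye", "thanks"]
intents = ["greeting", "farewell"]

bag_of_words("Hello there", vocab)  # -> [1, 0, 0, 0]
one_hot("farewell", intents)        # -> [0, 1]
```

The network's input layer then has `len(vocab)` neurons and the output layer `len(intents)` neurons, matching the vector lengths produced here.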

Personal Details

Name Anay Singh
College IIT DELHI
Entry No 2022ME12026
Email ID [email protected]
Phone Number 9084941192


@as1605 as1605 changed the base branch from main to best-machine-learning August 1, 2023 08:14