Hi!
Here is my final submission for DevClub Summer of Code 2023!
I also wish to apply for the following categories -
Tasks Done
Technologies/Tools/Frameworks used/learnt
In Week one, I used the Pandas, scikit-learn, and matplotlib libraries in Python, explored them to learn more about them, and used a Linear Regression model to build my model.
In Week two, I used YOLOv4 to build my model; apart from that, I also explored using the command prompt a bit.
In Week three, I used the OpenCV (cv2) and NumPy libraries to build my model.
In Week four, I used the TensorFlow, Natural Language Toolkit (NLTK), tflearn, and NumPy libraries in Python, and used a .json file as the dataset to train the model.
Features Implemented
Week 1: A model that can predict the price of electronic devices based on the training dataset.
Week 2: A model that can detect objects when someone inputs a picture of them.
Week 3: Continuing with object detection, I built a model that can now align an image if it is not properly aligned, and enhance it.
Week 4: A chatbot that can interact with any individual.
Deployment Links
Week 1: https://drive.google.com/file/d/1b5CS2BMvf25Q3rG7M8v1jl7_6FenodNE/view?usp=drive_link
Week 2: https://drive.google.com/file/d/1voFKTaRb6ckHvEuVbD05UzrD2ckE_572/view?usp=drive_link
Week 3: https://drive.google.com/file/d/1oaXi5e9B2P6U8Ou_rVsNg5KtyOiquBVJ/view?usp=drive_link
Week 4: https://drive.google.com/file/d/1MW_S2W378NWNhvb7FD_tCrhmkW15hSO_/view?usp=drive_link
Demo Video
Due to time constraints I was not able to submit a video, but I was told I could submit the notebook links and explain them briefly.
Week 1: https://drive.google.com/file/d/1b5CS2BMvf25Q3rG7M8v1jl7_6FenodNE/view?usp=drive_link
(I used pandas, scikit-learn, and matplotlib to prepare the dataset to my convenience, then used get_dummies so that I could also use the product names as weighted parameters, then imported LinearRegression, trained it on my data, and obtained the model I wanted.)
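A minimal sketch of that pipeline, assuming a hypothetical `devices.csv` with a `product_name` column and a `price` target; the file and column names are placeholders, not the actual dataset.

```python
# Sketch of the Week 1 approach: one-hot encode product names, then fit a linear model.
# "devices.csv", "product_name", and "price" are assumed names, not the real dataset schema.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("devices.csv")                      # hypothetical dataset
df = pd.get_dummies(df, columns=["product_name"])    # product names become weighted features

X = df.drop(columns=["price"])
y = df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```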
Week 2: https://drive.google.com/file/d/1voFKTaRb6ckHvEuVbD05UzrD2ckE_572/view?usp=drive_link
(First, I cloned the Darknet repository from GitHub. Darknet is a framework that implements YOLO, an object detection system. CUDA is crucial for GPU acceleration, which speeds up deep learning computations.
After that, I compiled the Darknet source code using the 'make' command to build the necessary executable files.
To use YOLO, I needed pre-trained weights for YOLOv4. So, I downloaded the weights file using the 'wget' command.
I applied YOLOv4 to perform object detection on an image of an eagle ('eagle.jpg'). YOLO detected objects in the image and generated a new image called 'predictions.jpg,' with bounding boxes around the identified objects.
At the end, I imported the OpenCV and Matplotlib libraries for image processing and visualization. I read the 'predictions.jpg' image, created a figure for plotting, and displayed the output image with the bounding boxes drawn around the detected objects.)
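A rough sketch of that final visualization step only; it assumes the Darknet shell steps (clone, make, download weights, run detection) have already produced 'predictions.jpg' in the working directory.

```python
# Display the Darknet output image with its bounding boxes.
# Assumes Darknet has already been built and run from the shell and wrote predictions.jpg.
import cv2
import matplotlib.pyplot as plt

img = cv2.imread("predictions.jpg")         # image written by Darknet
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; Matplotlib expects RGB

plt.figure(figsize=(10, 8))
plt.imshow(img)
plt.axis("off")
plt.show()
```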
Week 3: https://drive.google.com/file/d/1oaXi5e9B2P6U8Ou_rVsNg5KtyOiquBVJ/view?usp=drive_link
(So basically, it takes an input image and aligns it using a technique called Scale-Invariant Feature Transform (SIFT). First, it converts the image to grayscale, which helps in detecting keypoints and descriptors.
Then, it uses the SIFT algorithm to find keypoints and their corresponding descriptors in the grayscale image. These descriptors capture the unique features of the keypoints.
After that, it uses a Brute-Force Matcher to match the descriptors, looking for similar features.
Then I applied a distance ratio test: only the matches whose distance to the best match is within a certain fraction of the distance to the second-best match are kept as good matches.
Once I had these good matches, I found a perspective transformation matrix using the RANSAC algorithm. This matrix helps to align the image properly.
Finally, using the perspective transformation matrix, it warps the original image to get the aligned image.
That's how the function works! It's good to go for aligning images, especially when they might have been taken from different angles.)
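A condensed sketch of that alignment function, assuming the input is aligned against a reference image; the function and file names are placeholders.

```python
# Sketch of SIFT-based alignment: keypoints -> brute-force matching -> ratio test ->
# RANSAC homography -> perspective warp. File names are placeholders.
import cv2
import numpy as np

def align_image(img, ref):
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray_ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray_img, None)    # keypoints + descriptors
    kp2, des2 = sift.detectAndCompute(gray_ref, None)

    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)  # two nearest matches each
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # perspective matrix via RANSAC

    h, w = ref.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))            # warped (aligned) image

aligned = align_image(cv2.imread("skewed.jpg"), cv2.imread("reference.jpg"))
cv2.imwrite("aligned.jpg", aligned)
```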
Week 4: https://drive.google.com/file/d/1MW_S2W378NWNhvb7FD_tCrhmkW15hSO_/view?usp=drive_link
(So, I started by using the NLTK library to process natural language. First, I downloaded the 'punkt' tokenizer, which helps in splitting sentences into individual words. Then, I used the LancasterStemmer from NLTK to reduce words to their base form.
Next, I prepared the data for training the chatbot. I had a JSON file with labeled intents, like greetings or farewells. I extracted the patterns and their corresponding intents from the JSON and tokenized the patterns into word vectors. The intents were represented as one-hot encoded output vectors.
Then, I built a neural network using TensorFlow/Keras. The model had three dense layers, with the input layer having the same number of neurons as the length of the word vectors. The output layer had neurons equal to the number of intents.
After compiling the model, I trained it on the processed data for 1000 epochs using the chosen optimizer.
Finally, I implemented a chat function where the user can interact with the chatbot. It takes user input, processes it into a word vector, and then feeds it to the trained model for intent prediction. The chatbot responds with one of the predefined responses related to the predicted intent. This goes on until the user types "quit.")
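A compact sketch of that flow, assuming an `intents.json` with the usual tag/patterns/responses layout; the file name, layer sizes, and optimizer are assumptions rather than the notebook's exact values.

```python
# Sketch: bag-of-words intent classifier with a small dense network, plus a chat loop.
# The intents.json layout, layer sizes, and optimizer are assumptions.
import json, random
import numpy as np
import nltk
from nltk.stem import LancasterStemmer
from tensorflow import keras

nltk.download("punkt")
stemmer = LancasterStemmer()

with open("intents.json") as f:
    data = json.load(f)

words, labels, docs = [], [], []
for intent in data["intents"]:
    for pattern in intent["patterns"]:
        tokens = [stemmer.stem(w.lower()) for w in nltk.word_tokenize(pattern)]
        words.extend(tokens)
        docs.append((tokens, intent["tag"]))
    if intent["tag"] not in labels:
        labels.append(intent["tag"])
words = sorted(set(words))

def bag_of_words(tokens):
    # 1 if a vocabulary word appears in the tokenized sentence, else 0
    return np.array([1 if w in tokens else 0 for w in words], dtype=np.float32)

X = np.array([bag_of_words(tokens) for tokens, _ in docs])
y = keras.utils.to_categorical([labels.index(tag) for _, tag in docs],
                               num_classes=len(labels))   # one-hot intent vectors

# Three dense layers: input sized to the vocabulary, output sized to the intents.
model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(len(words),)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(len(labels), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=1000, verbose=0)

def chat():
    while True:
        inp = input("You: ")
        if inp.lower() == "quit":
            break
        tokens = [stemmer.stem(w.lower()) for w in nltk.word_tokenize(inp)]
        probs = model.predict(bag_of_words(tokens)[None, :], verbose=0)[0]
        tag = labels[int(np.argmax(probs))]
        responses = next(i["responses"] for i in data["intents"] if i["tag"] == tag)
        print("Bot:", random.choice(responses))
```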
Personal Details
2022ME12026