This repository lists several deep learning projects I have completed, using different types of neural networks and approaches.
Predicting the number of bikeshare users on a given day with a neural network built from scratch (my own deep-learning library). https://github.com/sbatururimi/bikeshare_neural_network
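A minimal sketch of the kind of network built there: a single hidden layer trained with plain NumPy. The layer sizes, activations and learning rate below are illustrative, not the values used in the repository.

```python
import numpy as np

class SimpleNetwork:
    """One-hidden-layer regression network, trained with plain NumPy."""

    def __init__(self, n_input, n_hidden, learning_rate=0.01):
        self.lr = learning_rate
        # Small random weights; shapes: input->hidden and hidden->output.
        self.w_ih = np.random.normal(0.0, n_input ** -0.5, (n_input, n_hidden))
        self.w_ho = np.random.normal(0.0, n_hidden ** -0.5, (n_hidden, 1))

    def _sigmoid(self, x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, X):
        hidden = self._sigmoid(X @ self.w_ih)   # hidden activations
        return hidden @ self.w_ho, hidden       # linear output for regression

    def train_step(self, X, y):
        output, hidden = self.forward(X)
        error = y.reshape(-1, 1) - output                           # output error
        hidden_error = error @ self.w_ho.T * hidden * (1 - hidden)  # backprop
        # Gradient-descent updates, averaged over the batch.
        self.w_ho += self.lr * hidden.T @ error / len(X)
        self.w_ih += self.lr * X.T @ hidden_error / len(X)

# Toy usage with random data standing in for the bikeshare features.
net = SimpleNetwork(n_input=3, n_hidden=8, learning_rate=0.5)
X, y = np.random.rand(128, 3), np.random.rand(128)
for _ in range(100):
    net.train_step(X, y)
```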
Sentiment analysis with Andrew Trask, a project that classifies movie reviews as positive or negative. https://github.com/sbatururimi/sentiment_analysis
Sentiment analysis with TFLearn. The same task as the previous project, but now using TFLearn. https://github.com/sbatururimi/sentiment_analysis_TFLearn
Handwritten digit recognition with TFLearn. Using the MNIST dataset, which contains images of handwritten single digits and their respective labels (numbers from 0 to 9), we train a neural network that recognizes handwritten digits. https://github.com/sbatururimi/Handwritten-Digit-Recognition-TFLearn.git
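Both TFLearn projects above follow the same pattern: a small fully connected network defined layer by layer. A minimal sketch on MNIST (TFLearn runs on TensorFlow 1.x; the layer sizes are illustrative, not necessarily the project's):

```python
import tflearn
import tflearn.datasets.mnist as mnist

# Load MNIST as flat 784-pixel vectors with one-hot labels.
X, Y, testX, testY = mnist.load_data(one_hot=True)

# Input layer -> two hidden layers -> 10-way softmax output.
net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 128, activation='relu')
net = tflearn.fully_connected(net, 32, activation='relu')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
                         loss='categorical_crossentropy')

model = tflearn.DNN(net)
model.fit(X, Y, validation_set=(testX, testY), n_epoch=10,
          batch_size=100, show_metric=True)
```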
Classify images from the CIFAR-10 dataset using a convolutional neural network with TensorFlow. Trained on a GPU machine on FloydHub.
https://github.com/sbatururimi/image_classification_deep_learning
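The notebook builds the network with TensorFlow directly; the sketch below uses tf.keras only to illustrate the same conv/pool/dense structure, with illustrative filter counts rather than the project's exact architecture.

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax'),  # 10 CIFAR-10 classes
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=128,
          validation_data=(x_test, y_test))
```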
Building a recurrent neural network with LSTM cells to generate a new Simpsons TV script. The Simpsons dataset of scripts from 27 seasons was used, and the network generates a new script for a scene at Moe's Tavern. The RNN was trained on a GPU using FloydHub.
https://github.com/sbatururimi/tv-script-generation
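A rough sketch of the script-generation architecture: an embedding layer feeding stacked LSTM layers, with a softmax over the vocabulary at every time step. The real project builds this with lower-level TensorFlow; the vocabulary size, embedding dimension and sequence length here are placeholders.

```python
import tensorflow as tf

vocab_size, embed_dim, seq_length = 7000, 256, 20  # illustrative values

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim, input_length=seq_length),
    tf.keras.layers.LSTM(512, return_sequences=True),   # first LSTM layer
    tf.keras.layers.LSTM(512, return_sequences=True),   # second LSTM layer
    # Predict the next word id at each position in the sequence.
    tf.keras.layers.Dense(vocab_size, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Training pairs are word-id sequences shifted by one step:
# inputs  = script_ids[i : i + seq_length]
# targets = script_ids[i + 1 : i + seq_length + 1]
```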
A case study of Transfer Learning.
In practice, you won't typically be training your own huge networks. There are multiple models out there that have been trained for weeks on huge datasets like ImageNet. In this project, I'll be using one of these pretrained networks, VGGNet, to classify images of flowers.
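A minimal transfer-learning sketch. The project itself uses a pretrained TensorFlow VGGNet checkpoint as a fixed feature extractor; here the same idea is shown with tf.keras' VGG16 weights and a small classifier on top, and the head sizes and number of flower classes are illustrative assumptions.

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(5, activation='softmax'),  # flower classes (assumed)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(flower_images, flower_labels, epochs=10)  # dataset loading not shown
```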
A language translation project using a sequence-to-sequence model, trained on a GPU using FloydHub. The model is trained on a dataset of English and French sentences and can translate new sentences from English to French.
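A rough sketch of the sequence-to-sequence idea: an encoder LSTM compresses the English sentence into a state vector, and a decoder LSTM generates the French sentence from that state. The vocabulary sizes and dimensions below are placeholders, not the project's values, and the project itself is written against lower-level TensorFlow.

```python
import tensorflow as tf

src_vocab, tgt_vocab, embed_dim, units = 200, 350, 128, 256  # illustrative

# Encoder: embed English word ids and keep only the final LSTM state.
enc_inputs = tf.keras.Input(shape=(None,))
enc_embed = tf.keras.layers.Embedding(src_vocab, embed_dim)(enc_inputs)
_, state_h, state_c = tf.keras.layers.LSTM(units, return_state=True)(enc_embed)

# Decoder: start from the encoder state and predict French words step by step.
dec_inputs = tf.keras.Input(shape=(None,))
dec_embed = tf.keras.layers.Embedding(tgt_vocab, embed_dim)(dec_inputs)
dec_outputs = tf.keras.layers.LSTM(units, return_sequences=True)(
    dec_embed, initial_state=[state_h, state_c])
probs = tf.keras.layers.Dense(tgt_vocab, activation='softmax')(dec_outputs)

model = tf.keras.Model([enc_inputs, dec_inputs], probs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# Trained with teacher forcing: the decoder inputs are the French sentences
# shifted right by one token.
```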
Building an autoencoder neural network to compress and denoise images.
Compression and denoising are done with convolutional layers.
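A sketch of a convolutional denoising autoencoder: the encoder downsamples the image to a small representation and the decoder upsamples it back, trained by feeding noisy images as inputs and the clean images as targets. Filter counts and the 28x28 input size are illustrative.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Encoder: 28x28x1 -> 14x14x32 -> 7x7x16
    tf.keras.layers.Conv2D(32, 3, activation='relu', padding='same',
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation='relu', padding='same'),
    tf.keras.layers.MaxPooling2D(),
    # Decoder: upsample back to 28x28 and reconstruct one channel.
    tf.keras.layers.Conv2D(16, 3, activation='relu', padding='same'),
    tf.keras.layers.UpSampling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu', padding='same'),
    tf.keras.layers.UpSampling2D(),
    tf.keras.layers.Conv2D(1, 3, activation='sigmoid', padding='same'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# model.fit(noisy_images, clean_images, epochs=20)
```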
Training a GAN on MNIST to generate new handwritten digits.
A generative adversarial network (GAN) trained on the MNIST dataset.
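A compact sketch of the GAN setup: a generator that maps random noise to 28x28 images and a discriminator that scores images as real or fake, with the generator trained through a combined model while the discriminator's weights are frozen. Layer sizes are illustrative, not the project's exact architecture.

```python
import tensorflow as tf

noise_dim = 100

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, input_dim=noise_dim),
    tf.keras.layers.LeakyReLU(0.2),
    tf.keras.layers.Dense(28 * 28, activation='tanh'),  # pixels in [-1, 1]
    tf.keras.layers.Reshape((28, 28, 1)),
])

discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128),
    tf.keras.layers.LeakyReLU(0.2),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # real vs. fake
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Combined model: noise -> generator -> (frozen) discriminator.
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')

# One training step (real images scaled to [-1, 1]):
# 1) train the discriminator on real images (label 1) and generated fakes (label 0);
# 2) train the combined model on noise with label 1 so the generator learns
#    to fool the discriminator.
```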
A simple case study of batch normalization, which normalizes the inputs to layers within a network.
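One common placement, shown here with tf.keras purely as an illustration: a linear layer, then batch normalization, then the non-linearity, so the next layer sees normalized inputs.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, use_bias=False, input_shape=(784,)),
    tf.keras.layers.BatchNormalization(),   # normalize the pre-activations
    tf.keras.layers.Activation('relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
```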
Building a deep convolutional GAN (DCGAN), using convolutional layers in the generator and discriminator, and training it on the Street View House Numbers (SVHN) dataset: color images of house numbers collected from Google Street View, much more variable than MNIST.
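A sketch of a DCGAN-style generator for 32x32x3 SVHN images: a dense projection of the noise vector followed by strided transposed convolutions with batch normalization, ending in a tanh output. Filter counts and kernel sizes are illustrative.

```python
import tensorflow as tf

def make_generator(noise_dim=100):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(4 * 4 * 256, input_dim=noise_dim),
        tf.keras.layers.Reshape((4, 4, 256)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.LeakyReLU(0.2),
        # 4x4 -> 8x8 -> 16x16 -> 32x32 via strided transposed convolutions.
        tf.keras.layers.Conv2DTranspose(128, 5, strides=2, padding='same'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.LeakyReLU(0.2),
        tf.keras.layers.Conv2DTranspose(64, 5, strides=2, padding='same'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.LeakyReLU(0.2),
        tf.keras.layers.Conv2DTranspose(3, 5, strides=2, padding='same',
                                        activation='tanh'),  # RGB in [-1, 1]
    ])

generator = make_generator()
fake = generator(tf.random.normal([16, 100]))  # a batch of 16 generated images
```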
Training a DCGAN on CelebA to generate new images of human faces.
A generative adversarial network (GAN) trained on the CelebA dataset.
Training a semi-supervised GAN on the SVHN (Street View House Numbers) dataset, both to generate new images and to classify the images with a large proportion of the labels dropped.
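A sketch of the semi-supervised discriminator idea: one shared convolutional body with two outputs, a 10-way class prediction for the labeled SVHN digits and a real/fake score for the GAN game. The project itself derives the real/fake probability from the class logits rather than using a separate head; this two-head variant is just the simplest way to show the structure, and the layer sizes are illustrative.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(64, 3, strides=2, padding='same')(inputs)
x = tf.keras.layers.LeakyReLU(0.2)(x)
x = tf.keras.layers.Conv2D(128, 3, strides=2, padding='same')(x)
x = tf.keras.layers.LeakyReLU(0.2)(x)
features = tf.keras.layers.Flatten()(x)

class_logits = tf.keras.layers.Dense(10, name='digit_class')(features)
real_or_fake = tf.keras.layers.Dense(1, activation='sigmoid',
                                     name='real_fake')(features)

discriminator = tf.keras.Model(inputs, [class_logits, real_or_fake])
# Only the small labeled subset contributes to the classification loss;
# every image (labeled, unlabeled, generated) contributes to the GAN loss.
```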