# Turning design mockups into code with deep learning


This is the code for the article 'Turning design mockups into code with deep learning' on FloydHub's blog.

Within three years deep learning will change front-end development. It will increase prototyping speed and lower the barrier for building software.

The field took off last year when Tony Beltramelli introduced the pix2code paper and Airbnb launched sketching interfaces.

Currently, the largest barrier to automating front-end development is computing power. However, we can use current deep learning algorithms, along with synthesized training data, to start exploring artificial front-end automation right now.

In the provided models, we'll teach a neural network how to code a basic HTML and CSS website based on a picture of a design mockup.

We'll build the neural network in three iterations: starting with a Hello World version, then adding the main neural network layers, and finally training it to generalize.
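To make this concrete, here is a minimal sketch of the kind of encoder-decoder these notebooks build: a CNN encodes the screenshot into a feature vector, and an LSTM predicts the next markup token given the image features and the tokens generated so far. Layer sizes, vocabulary size, and sequence length below are illustrative placeholders, not the exact values from the notebooks.

```python
# Minimal encoder-decoder sketch (illustrative sizes, not the exact
# models from the notebooks).
from keras.layers import (Input, Conv2D, MaxPooling2D, Flatten, Dense,
                          RepeatVector, LSTM, Embedding, concatenate)
from keras.models import Model

vocab_size, max_length = 18, 48  # hypothetical vocabulary / sequence length

# Encoder: a small CNN turns the screenshot into a feature vector
image_input = Input(shape=(256, 256, 3))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(image_input)
x = MaxPooling2D()(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
encoded_image = RepeatVector(max_length)(x)  # one copy per decoder step

# Decoder: an LSTM reads the tokens generated so far plus the image features
token_input = Input(shape=(max_length,))
y = Embedding(vocab_size, 50)(token_input)
y = LSTM(128, return_sequences=True)(y)

merged = concatenate([encoded_image, y])
merged = LSTM(256)(merged)
next_token = Dense(vocab_size, activation='softmax')(merged)

model = Model(inputs=[image_input, token_input], outputs=next_token)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```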

A quick overview of the process:

1) Give a design image to the trained neural network
2) The neural network converts the image into HTML markup
3) Rendered output
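At inference time, step 2 is a loop: the network repeatedly predicts the most likely next token until it emits an end marker. A sketch of that greedy decoding, assuming a trained `model` like the one above and a Keras `tokenizer`; the `START`/`END` marker tokens are hypothetical names:

```python
import numpy as np
from keras.preprocessing.sequence import pad_sequences

def word_for_id(integer, tokenizer):
    # Reverse lookup from a predicted token id back to its word
    for word, index in tokenizer.word_index.items():
        if index == integer:
            return word
    return None

def generate_markup(model, tokenizer, image, max_length):
    # Start from the start-of-sequence marker and grow the markup
    # one predicted token at a time
    generated = 'START'
    for _ in range(max_length):
        sequence = tokenizer.texts_to_sequences([generated])[0]
        sequence = pad_sequences([sequence], maxlen=max_length)
        probs = model.predict([np.array([image]), sequence], verbose=0)
        word = word_for_id(np.argmax(probs), tokenizer)
        if word is None or word == 'END':
            break
        generated += ' ' + word
    return generated
```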

## Installation

### FloydHub

FloydHub is hands down the best option to run models on cloud GPUs: floydhub.com

```bash
pip install floyd-cli
floyd login
git clone https://github.com/emilwallner/Screenshot-to-code-in-Keras
cd Screenshot-to-code-in-Keras
floyd init projectname
floyd run --gpu --env tensorflow-1.4 --data emilwallner/datasets/imagetocode/1:data --mode jupyter
```

### Local

```bash
pip install keras
pip install tensorflow
pip install pillow
pip install h5py
pip install jupyter
git clone https://github.com/emilwallner/Screenshot-to-code-in-Keras
cd Screenshot-to-code-in-Keras/local
jupyter notebook
```
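A quick way to confirm the installs worked is to import the packages in a notebook cell; if this runs without errors, you're set:

```python
# Sanity check: these imports should all succeed after the installs above
import keras, tensorflow, PIL, h5py
print('Keras', keras.__version__, '| TensorFlow', tensorflow.__version__)
```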

Open the desired notebook (files ending in '.ipynb'). To run the model, go to the menu and click Cell > Run All.

The final version, the Bootstrap version, ships with a small sample set so you can test-run the model. If you want to try it with all the data, download the dataset here: https://www.floydhub.com/emilwallner/datasets/imagetocode, and specify the correct dir_name.
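For example, in the notebook (the exact variable usage may differ between versions, so treat these paths as placeholders):

```python
# Point dir_name at the data you want to train on (paths are examples)
dir_name = 'resources/eval_light/'   # small bundled sample set
# dir_name = '/data/train/'          # full dataset mounted on FloydHub
```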

## Folder structure

```
  |-floydhub                               #Folder to run the project on FloydHub
  |  |-Bootstrap                           #The Bootstrap version
  |  |  |-compiler                         #A compiler to turn the tokens into HTML/CSS (by pix2code)
  |  |-Hello_world                         #The Hello World version
  |  |-HTML                                #The HTML version
  |  |  |-resources
  |  |  |  |-Resources_for_index_file      #CSS and images to test the index.html file
  |  |  |  |-html                          #HTML files to train it on
  |  |  |  |-images                        #Screenshots for training
  |-local                                  #Local setup
  |  |-Bootstrap                           #The Bootstrap version
  |  |  |-compiler                         #A compiler to turn the tokens into HTML/CSS (by pix2code)
  |  |  |-resources
  |  |  |  |-eval_light                    #10 test images and markup
  |  |-Hello_world                         #The Hello World version
  |  |-HTML                                #The HTML version
  |  |  |-Resources_for_index_file         #CSS, images and scripts to test the index.html file
  |  |  |-html                             #HTML files to train it on
  |  |  |-images                           #Screenshots for training
  |-readme_images                          #Images for the readme page
```
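The `compiler` folders hold pix2code's compiler, which expands the predicted tokens (a small domain-specific language) into full HTML/CSS. As a toy illustration of the idea, with tokens and snippets invented for this sketch rather than taken from pix2code's actual mapping:

```python
# Toy token-to-HTML expansion (tokens and snippets are made up for
# illustration; the real mapping lives in the pix2code compiler).
TOKEN_TO_HTML = {
    'header':    '<div class="header"></div>',
    'row':       '<div class="row"></div>',
    'btn-green': '<button class="btn btn-success">Go</button>',
}

def compile_tokens(tokens):
    return '\n'.join(TOKEN_TO_HTML.get(t, '<!-- unknown token -->')
                     for t in tokens)

print(compile_tokens(['header', 'row', 'btn-green']))
```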

## Hello World

## HTML

## Bootstrap

## Model weights

## Acknowledgments

- The code is largely influenced by Tony Beltramelli's pix2code paper (Code, Paper)
- The structure and some of the functions are from Jason Brownlee's excellent tutorial