<!DOCTYPE html>
<html lang=en>

<link rel=stylesheet href="theme/css/base.css" type="text/css" />
<meta name=viewport content="width=device-width,initial-scale=1">
<title>HugoTini - DeepBump</title>
<meta charset=utf-8 />
<meta name=generator content=Pelican />
<link href="https://hugotini.github.io/feeds/all.atom.xml" type="application/atom+xml" rel=alternate title="HugoTini Blog Atom Feed" />
<link rel=stylesheet href="theme/css/article.css" type="text/css" />

<body id=index class=home>
  <div id=main_container>

    <!--/ header menu -->
    <header id=header>
      <div class=menu-item-active>
        <a href="https://hugotini.github.io/">Blog</a>
        <div class=menu-underline-active></div>
      </div>
      <div class=menu-item>
        <a href="https://hugotini.github.io/about">About</a>
        <div class=menu-underline></div>
      </div>
    </header>

    <section id=content>
      <div style="background-image: url('assets/deepbump/banner.jpg');" id=banner></div>
      <div id='title_container'>
        <h1>DeepBump</h1>
        <p>Normal Map generation using Machine Learning</p>
      </div>
      <time id=item_date datetime="2020-05-07T00:00:00+02:00">May 2020</time>

      <div id=article_content>
        <p>Normal maps encode surface orientation; they're used a lot in computer graphics. They allow "faking" details without having to use extra geometry. Artists typically obtain normal maps in various ways: they might sculpt detailed geometry and then project it onto a lower-resolution mesh (retopology), generate materials procedurally (i.e. using math functions), use photogrammetry, etc.</p>

        <p>Those techniques require some manual effort. Sometimes you just want to quickly <a href="https://youtu.be/v_ikG-u_6r0">grab that photo and use it in your scene</a>. In that case, it makes sense to try generating normal maps from single pictures.</p>

        <p>A simple approach is to use the grayscale image as a height map and derive the normal map from it (by taking the normal of the slope). It can give decent results on some textures, but it remains a rough approximation, as darker pixels don't always mean there's a hole in the surface.</p>
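        <p>As an illustration of that baseline (a generic sketch, not DeepBump's code), the snippet below assumes NumPy and Pillow and uses placeholder file names; the strength factor is arbitrary and the Y-axis sign convention varies between engines:</p>

        <pre><code>import numpy as np
from PIL import Image

# "Grayscale = height" baseline: treat brightness as a height field,
# take its slope (gradients) and build a normal vector from it.
height = np.asarray(Image.open("texture.jpg").convert("L"), dtype=np.float32) / 255.0

strength = 2.0                        # arbitrary relief strength (placeholder)
dy, dx = np.gradient(height)          # slope of the height field
normal = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
normal /= np.linalg.norm(normal, axis=2, keepdims=True)

# Remap from [-1, 1] to [0, 255] to store the result as a regular image.
Image.fromarray(((normal * 0.5 + 0.5) * 255).astype(np.uint8)).save("normal_baseline.png")
</code></pre>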
        <h1>DeepBump</h1>

        <p><a href="https://github.com/HugoTini/DeepBump">DeepBump</a> is an experiment in using machine learning to reconstruct normals from single pictures. It mainly makes use of an encoder-decoder <a href="https://arxiv.org/abs/1505.04597">U-Net</a> with a <a href="https://arxiv.org/abs/1801.04381">MobileNetV2</a> architecture. Training data is taken from real-world photogrammetry and procedural materials.</p>
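        <p>As an illustration (a sketch under assumptions, not DeepBump's actual training or inference code), such an encoder-decoder can be set up in a few lines with the <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models.pytorch</a> library credited in the Thanks below; the three output channels for the normal's XYZ components and the tanh squashing are assumptions:</p>

        <pre><code>import torch
import segmentation_models_pytorch as smp

# U-Net decoder on top of a MobileNetV2 encoder, as described above.
# 3 input channels (RGB photo) and 3 output channels (XYZ normal) are assumptions.
model = smp.Unet(
    encoder_name="mobilenet_v2",
    encoder_weights="imagenet",   # pretrained encoder weights as a starting point
    in_channels=3,
    classes=3,
)
model.eval()

photo = torch.rand(1, 3, 256, 256)          # stand-in for a color texture tile
with torch.no_grad():
    prediction = torch.tanh(model(photo))   # squash into [-1, 1], the range of normal components
print(prediction.shape)                     # torch.Size([1, 3, 256, 256])
</code></pre>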
        <p>This tool is available both as a Blender add-on and as a command-line program. Check <a href="https://github.com/HugoTini/DeepBump">here</a> for install instructions. Once installed, you can generate a normal map from a picture (<em>image texture node</em> in Blender) in one click:</p>

        <video controls>
          <source src='https://hugotini.github.io/assets/deepbump/addon_vid.webm' type="video/webm">
        </video>

        <h1>Results</h1>

        <p>Here are a few shots of normal maps obtained using DeepBump (middle), compared with the simple "grayscale = height" method (right):</p>

        <p class='img_container'>
          <a href='https://hugotini.github.io/assets/deepbump/compare1.jpg'>
            <img class='article_img' src='https://hugotini.github.io/assets/deepbump/compare1.jpg'>
          </a>
        </p>
        <p class='img_caption'><a href='https://unsplash.com/photos/t4DhcQddCnA'>original photo</a></p>

        <p class='img_container'>
          <a href='https://hugotini.github.io/assets/deepbump/compare3.jpg'>
            <img class='article_img' src='https://hugotini.github.io/assets/deepbump/compare3.jpg'>
          </a>
        </p>
        <p class='img_caption'><a href='https://unsplash.com/photos/uDdoiaWiKFA'>original photo</a></p>

        <p class='img_container'>
          <a href='https://hugotini.github.io/assets/deepbump/compare4.jpg'>
            <img class='article_img' src='https://hugotini.github.io/assets/deepbump/compare4.jpg'>
          </a>
        </p>
        <p class='img_caption'><a href='https://unsplash.com/photos/4UjcOpLLQSE'>original photo</a></p>

        <p class='img_container'>
          <a href='https://hugotini.github.io/assets/deepbump/compare2.jpg'>
            <img class='article_img' src='https://hugotini.github.io/assets/deepbump/compare2.jpg'>
          </a>
        </p>
        <p class='img_caption'><a href='https://unsplash.com/photos/xnQaH8qF0Rc'>original photo</a></p>

        <p>Still comparing the two methods, this time using <a href="https://unsplash.com/photos/dcasj22jmCk">this photo</a> as input: we apply the normal maps and add some lights (rendered with Eevee, Blender's realtime engine). The second row is without color, to better visualize the normal maps:</p>

        <p class='img_container'>
          <a href='https://hugotini.github.io/assets/deepbump/clay_compare.jpg'>
            <img class='article_img' src='https://hugotini.github.io/assets/deepbump/clay_compare.jpg'>
          </a>
        </p>
        <p class='img_caption'><a href='https://unsplash.com/photos/dcasj22jmCk'>original photo</a></p>

        <p>Another example with moving lights, without (left) / with (right) the generated normal map:</p>

        <video controls>
          <source src='https://hugotini.github.io/assets/deepbump/brickdoor.webm' type="video/webm">
        </video>
        <p class='img_caption'><a href='https://unsplash.com/photos/ng-jV5Etz3c'>original photo</a></p>

        <p>In some cases, it might even work decently on hand-painted textures:</p>

        <video controls>
          <source src='https://hugotini.github.io/assets/deepbump/handpainted.webm' type="video/webm">
        </video>
        <p class='img_caption'><a href='https://opengameart.org/content/handpainted-stone-wall-textures'>original texture</a></p>

        <h1>Limitations</h1>

        <p>Despite generalization techniques, machine learning models still depend on their training data. If the input picture is very different from what the neural net was trained on, chances are the output won't be ideal. In any case, it's just one click, so it costs nothing to give it a try.</p>

        <h1>Thanks</h1>

        <p>Many thanks to the authors of <a href="https://texturehaven.com/">texturehaven.com</a> and <a href="https://cc0textures.com/">cc0textures.com</a>, and to their Patreon supporters, for making such high-quality assets available to all without restrictions. DeepBump's training dataset is based on those. Thanks to <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models</a> for making it easy to experiment with different network architectures in PyTorch.</p>
      </div>
    </section>
  </div>

  <div class='footer'>
    <a href="https://hugotini.github.io/feeds/all.atom.xml"> <img class='svg_icon' src="theme/icons/rss-alt.svg"> </a>
    <a href="https://github.com/HugoTini"> <img class='svg_icon' src="theme/icons/github.svg"> </a>
    <a href="https://twitter.com/Hugo_Tini"> <img class='svg_icon' src="theme/icons/twitter.svg"> </a>
    <a href="https://hugotini.github.io/about"> <img class='svg_icon' src="theme/icons/email.svg"> </a>
  </div>
</body>
</html>