feat(unit3): Add example of img2img and inpainting implementation #91
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@johnowhitaker To fix both issues, I converted the images to data URIs and committed the changes to the same branch. Could you please review the updates and approve the workflow again so that the latest changes are reflected in the documentation preview? Thank you for your time and support!
@johnowhitaker To address this issue, I initially tried using Base64 encoding, but it seems to have caused problems with the PR build. To resolve the issue for now, I updated the link for the existing image (inpaint_w_border.jpg) to reference the Raw URL from the main repository. I understand that using external image URLs (e.g., images from my forked repository) may not be ideal for long-term maintenance. If this PR is merged, I can update the link referencing my forked repository as needed. Thank you for your guidance!
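For anyone curious about the data-URI approach, here is a minimal sketch of converting a local image into a Base64 data URI for embedding in a notebook markdown cell. The file name below is only an example, not necessarily the exact file or helper used in this PR:

```python
import base64
from pathlib import Path

def image_to_data_uri(path: str) -> str:
    """Encode a local image file as a Base64 data URI string."""
    p = Path(path)
    suffix = p.suffix.lstrip(".").lower()
    mime = "jpeg" if suffix == "jpg" else suffix          # e.g. "jpeg", "png"
    encoded = base64.b64encode(p.read_bytes()).decode("ascii")
    return f"data:image/{mime};base64,{encoded}"

# Hypothetical usage: embed the result in a markdown image tag
# print(f"![inpainting example]({image_to_data_uri('inpaint_w_border.jpg')})")
```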
Title: feat(unit3): Add example of img2img and inpainting implementation
Body:
This pull request adds example code for a manually implemented denoising loop for img2img and inpainting to the stable_diffusion_introduction.ipynb file in unit3.
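To give a feel for what the example covers, here is a rough, self-contained sketch of a manual img2img denoising loop. This is not the notebook's exact code; the model id, image path, prompt, and hyperparameters below are illustrative placeholders:

```python
import numpy as np
import torch
from PIL import Image
from diffusers import DDIMScheduler, StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(device)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)  # simple, well-behaved scheduler

prompt = "An oil painting of a forest"          # illustrative prompt
strength = 0.6          # higher = more noise added, so more deviation from the input image
guidance_scale = 7.5
num_inference_steps = 50

# 1. Encode the input image into VAE latents
init_image = Image.open("input.jpg").convert("RGB").resize((512, 512))   # hypothetical file
image_t = torch.from_numpy(np.array(init_image)).float() / 127.5 - 1.0    # scale to [-1, 1]
image_t = image_t.permute(2, 0, 1).unsqueeze(0).to(device)
with torch.no_grad():
    init_latents = pipe.vae.encode(image_t).latent_dist.sample()
init_latents = init_latents * pipe.vae.config.scaling_factor

# 2. Skip the first part of the schedule and noise the latents up to the start step
pipe.scheduler.set_timesteps(num_inference_steps, device=device)
t_start = int(num_inference_steps * (1 - strength))
timesteps = pipe.scheduler.timesteps[t_start:]
latents = pipe.scheduler.add_noise(init_latents, torch.randn_like(init_latents), timesteps[:1])

# 3. Text embeddings for classifier-free guidance (unconditional + conditional)
def embed(text):
    tokens = pipe.tokenizer(text, padding="max_length",
                            max_length=pipe.tokenizer.model_max_length,
                            truncation=True, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]

text_embeddings = torch.cat([embed(""), embed(prompt)])

# 4. The denoising loop itself
for t in timesteps:
    latent_input = pipe.scheduler.scale_model_input(torch.cat([latents] * 2), t)
    with torch.no_grad():
        noise_pred = pipe.unet(latent_input, t, encoder_hidden_states=text_embeddings).sample
    noise_uncond, noise_cond = noise_pred.chunk(2)
    noise_pred = noise_uncond + guidance_scale * (noise_cond - noise_uncond)
    latents = pipe.scheduler.step(noise_pred, t, latents).prev_sample

# 5. Decode the final latents back to image space
with torch.no_grad():
    image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
image = ((image / 2 + 0.5).clamp(0, 1) * 255).permute(0, 2, 3, 1).cpu().numpy().astype("uint8")
Image.fromarray(image[0]).save("img2img_result.jpg")
```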
Motivation
I was able to understand diffusion models through the tutorial provided in this repository, and I am extremely grateful for that. It was fascinating to see that diffusion models can be applied to tasks like img2img and inpainting. However, it was difficult to grasp the inner workings just by using the pipeline. Therefore, I added a direct implementation to help users understand it more easily.
Detailed Explanation of Implementation
The code references the `StableDiffusionPipeline` of the `diffusers` library, but it has been intentionally simplified to focus on the core aspects of the denoising loop for easier understanding by beginners. Unlike the original code, which implements the entire loop with every feature, non-essential parts have been omitted to concentrate on the noise removal process and keep the code short.

Since `runwayml/stable-diffusion-inpainting` is no longer available, the model for inpainting was replaced with `stabilityai/stable-diffusion-2-inpainting`.
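For context, here is a minimal sketch (again, not the notebook's exact code) of how the mask and the masked-image latents are typically prepared for an inpainting UNet such as the one in `stabilityai/stable-diffusion-2-inpainting`; the image and mask paths and the step count are illustrative:

```python
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting"
).to(device)

# Illustrative inputs, resized to the model resolution
image = Image.open("input.jpg").convert("RGB").resize((512, 512))      # hypothetical files
mask = Image.open("mask.png").convert("L").resize((512, 512))          # white = region to repaint

image_t = torch.from_numpy(np.array(image)).float() / 127.5 - 1.0       # scale to [-1, 1]
image_t = image_t.permute(2, 0, 1).unsqueeze(0).to(device)
mask_t = (torch.from_numpy(np.array(mask)).float() / 255.0)[None, None].to(device)

# Zero out the masked region before encoding, so the UNet only "sees" the kept pixels
masked_image = image_t * (mask_t < 0.5)
with torch.no_grad():
    masked_latents = pipe.vae.encode(masked_image).latent_dist.sample()
masked_latents = masked_latents * pipe.vae.config.scaling_factor

# Downsample the mask to the latent resolution (512x512 -> 64x64)
mask_latent = F.interpolate(mask_t, size=masked_latents.shape[-2:])

# Inside each step of the denoising loop, the inpainting UNet receives 9 input channels:
# 4 noisy latents + 1 downsampled mask + 4 masked-image latents
latents = torch.randn_like(masked_latents)
pipe.scheduler.set_timesteps(50, device=device)
t = pipe.scheduler.timesteps[0]
unet_input = torch.cat(
    [pipe.scheduler.scale_model_input(latents, t), mask_latent, masked_latents], dim=1
)
print(unet_input.shape)  # torch.Size([1, 9, 64, 64])
```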
Request
Please review the code for accuracy, efficiency, and readability. In particular, I would appreciate feedback on the implementation of the denoising loop and the masking process for inpainting.