InvokeAI Version 3.0.2
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface and also serves as the foundation for multiple commercial products.
- What's New
- Installation and Upgrading
- Getting Started with SDXL
- Known Issues
- Getting Help
- Contributing
- Detailed Change Log
To learn more about InvokeAI, please see our Documentation Pages.
What's New in v3.0.2
- LoRA support for SDXL is now available
- Multi-select actions are now supported in the Gallery
- Images are automatically sent to the board that is selected at invocation
- Images from previous versions of InvokeAI can be imported with the invokeai-import-images command
- Inpainting models imported from A1111 will now work with InvokeAI (see upgrading note)
- Model merging functionality has been fixed
- Improved Model Manager UI/UX
- InvokeAI 3.0 can be served via HTTPS
- Execution statistics are visible in the terminal after each invocation
- ONNX models are now supported for use with Text2Image
- Pydantic errors when upgrading in place have been resolved
- Code formatting is now part of the CI/CD pipeline
- ...and lots more! You can view the full change log here
Installation / Upgrading
Installing using the InvokeAI zip file installer
To install 3.0.2, please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly. If you have an earlier version of InvokeAI installed, we recommend that you install into a new directory, such as invokeai-3, instead of the previously-used invokeai directory. We provide a script that will let you migrate your old models and settings into the new directory, described below.
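For example, on Linux or macOS you could run the installer script from a terminal roughly as follows (a minimal sketch; the name of the unpacked folder may differ on your system):
# change into the folder unpacked from the zip file, then run the installer
cd InvokeAI-Installer
./install.sh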
In the event of an aborted install that has left the invokeai directory unusable, you may be able to recover it by asking the installer to install on top of the existing directory. This is a non-destructive operation that will not affect existing models or images.
Upgrading in place
All users can upgrade from 3.0.1 using the launcher's "upgrade" facility. If you are on Linux or Macintosh, you may also upgrade a 2.3.2 or higher version of InvokeAI to 3.0.2 using this recipe, but upgrading from 2.3 will not work on Windows due to a 2.3.5 bug (see workaround below):
- Enter the root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the upgrade menu option [9]
- Select "Manually enter the tag name for the version you wish to update to" option [3]
- Select option [1] to upgrade to the latest version.
- When the upgrade is complete, the main menu will reappear. Choose "rerun the configure script to fix a broken install" option [7]
Windows users can instead follow this recipe:
- Enter the 2.3 root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the "Developer's console" option [8]
- Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.2.zip" --use-pep517 --upgrade
invokeai-configure --root .
This will produce a working 3.0 directory. You may now launch the WebUI in the usual way by selecting option [1] from the launcher script.
After you have confirmed everything is working, you may remove the following backup directories and files:
- invokeai.init.orig
- models.orig
- configs/models.yaml.orig
- embeddings
- loras
To get back to a working 2.3 directory, rename all the *.orig files and directories to their original names (without the .orig), run the update script again, and select [1] "Update to the latest official release".
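On Linux or macOS, the renaming step might look like the following minimal sketch, run from the 2.3 root directory (only the backed-up items listed above that carry a .orig suffix need to be renamed):
# restore the backed-up 2.3 settings and model configuration
mv invokeai.init.orig invokeai.init
mv models.orig models
mv configs/models.yaml.orig configs/models.yaml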
Note:
- If you had issues with inpainting on a previous InvokeAI 3.0 version, delete your models/.cache folder before proceeding.
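On Linux or macOS, that can be done from your InvokeAI root directory with something like:
# delete the cache folder inside the models directory of the InvokeAI root
rm -rf models/.cache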
What to do if problems occur during the install
Due to the large number of Python libraries that InvokeAI requires, as well as the large size of the newer SDXL models, you may experience glitches during the install process. This particularly affects Windows users. Please see the Installation Troubleshooting Guide for solutions.
In the event that an update makes your environment unusable, you may use the zip installer to reinstall on top of your existing root directory. Models and generated images already in the directory will not be affected.
Migrating models and settings from a 2.3 InvokeAI root directory to a 3.0 directory
We provide a script, invokeai-migrate3, which will copy your models and settings from a 2.3-format root directory to a new 3.0 directory. To run it, execute the launcher and select option [8] "Developer's console". This will take you to a new command line interface. On the command line, type:
invokeai-migrate3 --from <path to 2.3 directory> --to <path to 3.0 directory>
Provide the old and new directory names with the --from and --to arguments respectively. This will migrate your models as well as the settings inside invokeai.init. You may provide the same --from and --to directories in order to upgrade a 2.3 root directory in place. (The original models and configuration files will be backed up.)
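For example, with hypothetical directory names (substitute your own 2.3 and 3.0 root paths):
# copy models and settings from the old 2.3 root into the new 3.0 root
invokeai-migrate3 --from ~/invokeai --to ~/invokeai-3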
Upgrading using pip
Developers and power users can upgrade to the current version by activating the InvokeAI environment and then using:
pip install --use-pep517 --upgrade InvokeAI
invokeai-configure --yes --skip-sd-weights
You may specify a particular version by adding the version number to the command, as in:
pip install --use-pep517 --upgrade InvokeAI==3.0.2
invokeai-configure --yes --skip-sd-weights
Important: After doing the pip install, it is necessary to run invokeai-configure in order to download new core models needed to load and convert Stable Diffusion XL .safetensors files. The web server will refuse to start if you do not do so.
Getting Started with SDXL
Stable Diffusion XL (SDXL) is the latest generation of StabilityAI's image generation models, capable of producing high quality 1024x1024 photorealistic images as well as many other visual styles. SDXL comes with two models, a "base" model that generates the initial image, and a "refiner" model that takes the initial image and improves on it in an img2img manner. In many cases, just the base model will give satisfactory results.
To download the base and refiner SDXL models, you have several options:
- Select option [5] from the invoke.bat launcher script, and select the base model, and optionally the refiner, from the checkbox list of "starter" models.
- Use the Web's Model Manager to select "Import Models" and when prompted provide the HuggingFace repo_ids for the two models:
  stabilityai/stable-diffusion-xl-base-1.0
  stabilityai/stable-diffusion-xl-refiner-1.0
- Download the models manually and cut and paste their paths into the Location field in "Import Models"
Also be aware that SDXL requires at least 6-8 GB of VRAM in order to render 1024x1024 images and a minimum of 16 GB of RAM. For best performance, we recommend the following settings in invokeai.yaml:
precision: float16
max_cache_size: 12.0
max_vram_cache_size: 0.5
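For reference, these keys live in the invokeai.yaml file at the top of your root directory. A hedged sketch of the relevant portion, assuming the section layout written by invokeai-configure (the surrounding section names may differ in your install):
InvokeAI:
  Memory/Performance:
    # half-precision weights and a larger RAM cache help with SDXL
    precision: float16
    max_cache_size: 12.0
    max_vram_cache_size: 0.5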
Known Issues in 3.0
This is a list of known bugs in 3.0.2 as well as features that are planned for inclusion in later releases:
- Variant generation was not fully functional and did not make it into the release. It will be added in the next point release.
- Perlin noise and symmetrical tiling were not widely used and have been removed from the feature set.
- High res optimization has been removed from the basic user interface as we experiment with better ways to achieve good results with nodes. However, you will find several community-contributed high-res optimization pipelines in the Community Nodes Discord channel at https://discord.com/channels/1020123559063990373/1130291608097661000 for use with the experimental Node Editor.
Getting Help
For support, please use this repository's GitHub Issues tracking service, or join our Discord.
Contributing
As a community-supported project, we rely on volunteers and enthusiasts for continued innovation and polish. Everything from minor documentation fixes to major feature additions are welcome. To get started as a contributor, please see How to Contribute.
New Contributors
- @zopieux made their first contribution in #3904
- @joshistoast made their first contribution in #3972
- @camenduru made their first contribution in #3944
- @ZachNagengast made their first contribution in #4040
- @sohelzerdoumi made their first contribution in #4116
- @KevinBrack made their first contribution in #4086
- @SauravMaheshkar made their first contribution in #4060
Thank you to all of the new contributors to InvokeAI. We appreciate your efforts and contributions!
Detailed Change Log
- Add LoRAs to the model manager by @zopieux in #3902
- feat: Unify Promp Area Styling by @blessedcoolant in #4033
- Update troubleshooting guide with pydantic and SDXL unet issue advice by @lstein in #4054
- fix: Concat Link Styling by @blessedcoolant in #4048
- bugfix: Float64 error for mps devices on set_timesteps by @ZachNagengast in #4040
- Release 3.0.1 release candidate 3 by @lstein in #4025
- Feat/Nodes: Change Input to Textbox by @mickr777 in #3853
- fix: Prompt Node using incorrect output type by @blessedcoolant in #4058
- fix: SDXL Metadata not being retrieved by @blessedcoolant in #4057
- Fix recovery recipe by @lstein in #4066
- Unpin pydantic and numpy in pyproject.toml by @lstein in #4062
- Fix various bugs in ckpt to diffusers conversion script by @lstein in #4065
- Installer tweaks by @lstein in #4070
- fix relative model paths to be against config.models_path, not root by @lstein in #4061
- Update communityNodes.md - FaceTools by @ymgenesis in #4044
- 3.0.1post3 by @lstein in #4082
- Restore model merge script by @lstein in #4085
- Add Nix Flake for development by @zopieux in #4077
- Add python black check to pre-commit by @brandonrising in #4094
- Added a getting started guide & updated the user landing page flow by @Millu in #4028
- Add missing Optional on a few nullable fields by @zopieux in #4076
- ONNX Support by @StAlKeR7779 in #3562
- Chakra optimizations by @maryhipp in #4096
- Add onnxruntime to the main dependencies by @brandonrising in #4103
- fix(ui): post-onnx fixes by @psychedelicious in #4105
- Update lint-frontend.yml by @psychedelicious in #4113
- fix: Model Manager Tab Issues by @blessedcoolant in #4087
- fix: flake: add opencv with CUDA, new patchmatch dependency by @zopieux in #4115
- Fix manual installation documentation by @lstein in #4107
- fix https/wss behind reverse proxy by @sohelzerdoumi in #4116
- Refactor/cleanup root detection by @lstein in #4102
- chore: delete nonfunctional nix flake by @psychedelicious in #4117
- Feat/auto assign board on click by @KevinBrack in #4086
- (ci) only install black when running static checks by @ebr in #4036
- fix .swap() by reverting improperly merged @classmethod change by @damian0815 in #4080
- chore: move PR template to .github/ dir by @SauravMaheshkar in #4060
- Path checks in a workflow step for python tests by @brandonrising in #4122
- fix(db): retrieve metadata even when no session_id by @psychedelicious in #4110
- project header by @maryhipp in #4134
- Restore ability to convert merged inpaint .safetensors files by @lstein in #4084
- ui: multi-select and batched gallery image operations by @psychedelicious in #4032
- Add execution stat reporting after each invocation by @lstein in #4125
- Stop checking for unet/model.onnx when a model_index.json is detected by @brandonrising in #4132
- autoAddBoardId should always be defined as "none" or board_id by @maryhipp in #4149
- Add support for diff/full lora layers by @StAlKeR7779 in #4118
- [WIP] Add sdxl lora support by @StAlKeR7779 in #4097
- Provide ti name from model manager, not from ti itself by @StAlKeR7779 in #4120
- Installer should download fp16 models if user has specified 'auto' in config by @lstein in #4129
- add --ignore_missing_core_models CLI flag to bypass checking for missing core models by @damian0815 in #4081
- fix broken civitai example link by @lstein in #4153
- devices.py - Update MPS FP16 check to account for upcoming MacOS Sonoma by @gogurtenjoyer in #3886
- Bump version number on main to distinguish from release by @lstein in #4158
- Fix random number generator by @JPPhoto in #4159
- Added HSL Nodes by @hipsterusername in #3459
- backend: fix up types by @psychedelicious in #4109
- Fix hue adjustment by @JPPhoto in #4182
- fix(ModelManager): fix overridden VAE with relative path by @keturn in #4059
- Maryhipp/multiselect updates by @maryhipp in #4188
- feat(ui): add LoRA support to SDXL linear UI by @psychedelicious in #4194
- api(images): allow HEAD request on image/full by @keturn in #4193
- Fix crash when attempting to update a model by @lstein in #4192
- Refrain from writing deprecated legacy options to invokeai.yaml by @lstein in #4190
- Pick correct config file for sdxl models by @lstein in #4191
- Add slider for VRAM cache in configure script by @lstein in #4133
- 3.0.2 Release Branch by @Millu in #4203
- Add techjedi's image import script by @lstein in #4171
- refactor(diffusers_pipeline): remove unused pipeline methods 🚮 by @keturn in #4175
- ImageLerpInvocation math bug: Add self.min, not self.max by @lillekemiker in #4176
- feat: add app_version to image metadata by @psychedelicious in #4198
- fix(ui): fix canvas model switching by @psychedelicious in #4221
- fix(ui): fix lora sort by @psychedelicious in #4222
- Update dependencies and docs to cu118 by @lstein in #4212
- Prevent vae: '' from crashing model by @lstein in #4209
- Probe LoRAs that do not have the text encoder by @lstein in #4181
- Bugfix: Limit RAM and VRAM cache settings to permissible values by @lstein in #4214
- Temporary force set vae to same precision as unet by @StAlKeR7779 in #4233
- Add support for LyCORIS IA3 format by @StAlKeR7779 in #4234
- Two changes to command-line scripts by @lstein in #4235
- 3.0.2 Release by @Millu in #4236
Full Changelog: v3.0.1rc3...v3.0.2