InvokeAI docker images for use in GPU cloud and local environments. Includes AI-Dock base for authentication and improved user experience.

AI-Dock + Invoke AI Docker Image

Run Invoke AI in a docker container locally or in the cloud.

Note

These images do not bundle models or third-party configurations. You should use a provisioning script to automatically configure your container. You can find examples in config/provisioning.
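A provisioning script might look like the sketch below, modeled loosely on the examples in config/provisioning. The paths, the variable defaults, and the commented model URL are all assumptions; check the base wiki for how the script is passed to the container.

```shell
#!/bin/bash
# Sketch of a minimal provisioning script (assumption: $WORKSPACE is
# set by the AI-Dock base image; default shown is illustrative).
WORKSPACE="${WORKSPACE:-/workspace}"
MODEL_DIR="${WORKSPACE}/storage"

mkdir -p "$MODEL_DIR" 2>/dev/null || true

# Hypothetical checkpoint URL -- replace with the model you actually need.
# wget -qO "${MODEL_DIR}/model.safetensors" "https://example.com/model.safetensors"

printf 'Provisioning models into %s\n' "$MODEL_DIR"
```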

Documentation

All AI-Dock containers share a common base which is designed to make running on cloud services such as vast.ai and runpod.io as straightforward and user friendly as possible.

Common features and options are documented in the base wiki but any additional features unique to this image will be detailed below.

Note

The default provisioning script downloads models to $WORKSPACE/storage; you will need to scan this directory manually, as symlinks are not yet set up for this image.

Version Tags

The :latest tag points to :latest-cuda.

Tags follow these patterns:

CUDA
  • :pytorch-[pytorch-version]-py[python-version]-cuda-[x.x.x]-base-[ubuntu-version]

  • :latest-cuda → :pytorch-2.2.1-py3.10-cuda-11.8.0-base-22.04

ROCm
  • :pytorch-[pytorch-version]-py[python-version]-rocm-[x.x.x]-runtime-[ubuntu-version]

  • :latest-rocm → :pytorch-2.2.1-py3.10-rocm-5.7-runtime-22.04

CPU
  • :pytorch-[pytorch-version]-py[python-version]-cpu-[ubuntu-version]

  • :latest-cpu → :pytorch-2.2.1-py3.10-cpu-22.04
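As a quick sanity check, the components above assemble into a full tag as follows. The registry path `ghcr.io/ai-dock/invokeai` is an assumption; confirm it against the published packages before pulling.

```shell
# Build a CUDA image tag from the pattern above, using the supported
# versions listed in this README.
PYTORCH_VERSION="2.2.1"
PYTHON_VERSION="3.10"
CUDA_VERSION="11.8.0"
UBUNTU_VERSION="22.04"
TAG="pytorch-${PYTORCH_VERSION}-py${PYTHON_VERSION}-cuda-${CUDA_VERSION}-base-${UBUNTU_VERSION}"

# Assumed registry path -- verify before pulling.
echo "ghcr.io/ai-dock/invokeai:${TAG}"
```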

Browse here for an image suitable for your target environment.

Supported Python versions: 3.10

Supported Pytorch versions: 2.2.1

Supported Platforms: NVIDIA CUDA, AMD ROCm, CPU

Additional Environment Variables

| Variable           | Description                                                                   |
|--------------------|-------------------------------------------------------------------------------|
| AUTO_UPDATE        | Update Invoke AI on startup (default `true`)                                  |
| INVOKEAI_VERSION   | InvokeAI version tag (default `None`)                                         |
| INVOKEAI_PORT_HOST | InvokeAI port (default `9090`)                                                |
| INVOKEAI_URL       | Override `$DIRECT_ADDRESS:port` with URL for Invoke AI service                |
| INVOKEAI_*         | Invoke AI environment configuration as described in the project documentation |

See the base environment variables here for more configuration options.
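A hedged example of wiring these variables into a local run follows; the registry path and the pinned version value are illustrative assumptions, not verified names.

```shell
# Assemble a docker run command using the variables documented above.
# The image path and the INVOKEAI_VERSION value are assumptions.
cmd=(docker run -d --gpus all
  -p 9090:9090
  -e AUTO_UPDATE=false          # skip the startup update
  -e INVOKEAI_VERSION=v4.2.0    # hypothetical tag: pin to a release or commit
  ghcr.io/ai-dock/invokeai:latest-cuda)

# Print the command here rather than executing it.
printf '%s ' "${cmd[@]}"; echo
```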

Additional Micromamba Environments

| Environment | Packages                  |
|-------------|---------------------------|
| invokeai    | Invoke AI and dependencies |

This micromamba environment will be activated on shell login.

See the base micromamba environments here.

Additional Services

The following services will be launched alongside the default services provided by the base image.

Invoke AI

The service will launch on port 9090 unless you have specified an override with INVOKEAI_PORT_HOST.

Invoke AI will be updated to the latest version on container start. You can pin the version to a branch or commit hash by setting the INVOKEAI_VERSION variable.

To manage this service, use supervisorctl [start|stop|restart] invokeai, or use the process manager tab in the Service Portal.

Note

All services are password protected by default. See the security and environment variables documentation for more information.

Pre-Configured Templates

Vast.ai

Runpod.io


The author (@robballantyne) may be compensated if you sign up to services linked in this document. Testing multiple variants of GPU images in many different environments is both costly and time-consuming; this helps to offset the costs.
