**Note:** Currently not working on WSL2.
- By default, the script installs only ComfyUI, ComfyUI-3D-Pack, and the custom nodes required by the example workflows.
- ComfyUI-Manager is installed but disabled by default.
- After the download completes, the script attempts to rebuild the dependencies for the 3D-Pack. This process takes about 10 minutes.
- If the rebuild is unnecessary (some workflows, such as TripoSR, can run directly), add an empty file named `.build-complete` to the storage folder (similar to the `.download-complete` file).
- The build process automatically targets the local GPU, so usually no configuration is needed. If you run into issues, try setting the environment variable `TORCH_CUDA_ARCH_LIST` (see the table below).
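For example, assuming `storage` is the folder you mount into the container (as in the run commands below), the marker file can be created like this:

```shell
# Create the storage folder and an empty marker file
# so the 3D-Pack dependency rebuild is skipped
mkdir -p storage
touch storage/.build-complete
```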
-
```shell
mkdir -p storage

docker run -it --rm \
  --name comfy3d-pt25 \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:comfy3d-pt25
```
Or, using Podman:

```shell
mkdir -p storage

podman run -it --rm \
  --name comfy3d-pt25 \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="" \
  docker.io/yanwk/comfyui-boot:comfy3d-pt25
```
| Variable | Example Value | Memo |
|---|---|---|
| `HTTP_PROXY` | | Set HTTP proxy. |
| `PIP_INDEX_URL` | | Set mirror site for the Python Package Index. |
| `HF_ENDPOINT` | | Set mirror site for HuggingFace Hub. |
| `HF_TOKEN` | `'hf_your_token'` | Set HuggingFace Access Token. |
| `HF_HUB_ENABLE_HF_TRANSFER` | `1` | Enable HuggingFace Hub's experimental high-speed file transfer. Only makes sense if you have a >1000 Mbps and very stable connection (e.g. a cloud server). |
| `TORCH_CUDA_ARCH_LIST` | `7.5` | Build target for PyTorch and its extensions. Most users only need to set one build target for their GPU. |
| `CMAKE_ARGS` | (default) | Build options for CMake, for projects using CUDA. |
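For illustration, these variables are passed to the container with `-e` flags. A sketch, using the example values from the table above (not defaults):

```shell
# Same run command as above, with extra environment variables.
# TORCH_CUDA_ARCH_LIST and HF_TOKEN values are placeholders --
# substitute your GPU's compute capability and your own token.
docker run -it --rm \
  --name comfy3d-pt25 \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e TORCH_CUDA_ARCH_LIST="7.5" \
  -e HF_TOKEN="hf_your_token" \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:comfy3d-pt25
```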
TRELLIS officially provides a Gradio demo that can generate orbit videos and `.glb` models from images. This Docker image has almost all the necessary dependencies, so you can easily run the demo. The execution script is provided below.
- Note: requires more than 16 GB of VRAM.
- `ATTN_BACKEND` options:
  - `flash-attn` is suitable for Ampere-architecture GPUs (30 series / A100) and later.
  - `xformers` has better compatibility.
- `SPCONV_ALGO` options:
  - `native` starts faster and is suitable for single runs.
  - `auto` has better performance, but spends some time benchmarking at startup.
-
```shell
mkdir -p storage

podman run -it \
  --name trellis-demo \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -p 7860:7860 \
  -v "$(pwd)"/storage:/root \
  -e ATTN_BACKEND="flash-attn" \
  -e SPCONV_ALGO="native" \
  -e GRADIO_SERVER_NAME="0.0.0.0" \
  -e PIP_USER=true \
  -e PIP_ROOT_USER_ACTION=ignore \
  -e PYTHONPYCACHEPREFIX="/root/.cache/pycache" \
  docker.io/yanwk/comfyui-boot:comfy3d-pt25 \
  /bin/fish
```
Then, inside the container:

```shell
export PATH="$PATH:/root/.local/bin"

# Run the compilation script (takes about 10 minutes)
bash /runner-scripts/build-deps-trellis-demo.sh

# Download the model
huggingface-cli download JeffreyXiang/TRELLIS-image-large

# Download and run the TRELLIS demo
git clone --depth=1 --recurse-submodules \
    https://github.com/microsoft/TRELLIS.git \
    /root/TRELLIS

cd /root/TRELLIS
python3 app.py
```
**Note:** You may safely ignore the message `matrix-client 0.4.0 requires urllib3~=1.21, but you have urllib3 2.2.3 which is incompatible.` `matrix-client` is only used by ComfyUI-Manager and is not relevant in this context.