add comfyui flux speedup doc #1144
base: main
Conversation
Walkthrough

The changes involve the addition of a new workflow configuration in the JSON file for the onediff_comfy_nodes project, which introduces 17 new nodes with unique identifiers and properties. These nodes facilitate data processing and transformation tasks. Additionally, the README.md file for the FLUX project has been updated with setup instructions, usage examples, and performance comparisons between PyTorch and OneDiff, enhancing user guidance and understanding of the tool's capabilities. Modifications were also made to the `onediff_comfy_nodes/__init__.py` file.
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (5)
onediff_comfy_nodes/docs/FLUX/README.md (2)
Line 2: Format the URL as a proper Markdown link.

Replace the bare URL with a properly formatted Markdown link:

```diff
-huggingface: https://huggingface.co/black-forest-labs/FLUX.1-dev
+huggingface: [FLUX.1-dev on Hugging Face](https://huggingface.co/black-forest-labs/FLUX.1-dev)
```

🧰 Tools: 🪛 Markdownlint — 2-2: Bare URL used (MD034, no-bare-urls)
Line 21: Format the URL as a proper Markdown link.

Replace the bare URL with a properly formatted Markdown link:

```diff
-- step1: Get User Access Tokens here https://huggingface.co/settings/tokens
+- step1: Get User Access Tokens from the [Hugging Face Token Settings](https://huggingface.co/settings/tokens)
```

🧰 Tools: 🪛 Markdownlint — 21-21: Bare URL used (MD034, no-bare-urls)
onediff_comfy_nodes/benchmarks/resources/workflows/nexfort/flux_speedup.json (3)
Lines 682-686: Consider using environment variables or configuration files for model paths.

The model paths are hardcoded in the configuration:

- `flux1-dev.sft`
- `ae.sft`
- `t5xxl_fp16.safetensors`
- `clip_l.safetensors`
This could make the workflow less portable across different environments.
Consider:
- Using environment variables for model paths
- Adding a configuration file for model paths
- Supporting relative paths from a base models directory
Also applies to: 717-719, 34-38
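A minimal sketch of the env-var approach, assuming a hypothetical `ONEDIFF_MODELS_DIR` variable (not part of the current workflow):

```python
import os

# Resolve a model file against a configurable base directory.
# ONEDIFF_MODELS_DIR is an assumed variable name for illustration.
def resolve_model_path(filename: str) -> str:
    base_dir = os.environ.get("ONEDIFF_MODELS_DIR", "models")
    return os.path.join(base_dir, filename)

unet_path = resolve_model_path("flux1-dev.sft")
vae_path = resolve_model_path("ae.sft")
```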
Lines 377-379: Consider breaking down the long prompt into components.

The prompt is very long and combines multiple aspects:
- Character description
- Outfit details
- Scene setting
- Lighting conditions
- Props and actions
This could be harder to maintain and modify.
Consider breaking it down into structured components using a template system or separate configuration, making it easier to mix and match different elements.
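For instance, a hedged sketch of a template approach (the component names and contents here are placeholders, not the actual prompt):

```python
# Hypothetical prompt components; the real text would come from the workflow.
PROMPT_PARTS = {
    "character": "a young explorer",
    "outfit": "wearing a weathered leather jacket",
    "scene": "standing in a misty forest clearing",
    "lighting": "lit by soft golden-hour light",
    "props": "holding an old brass compass",
}

# Compose the final prompt from whichever components are needed.
prompt = ", ".join(
    PROMPT_PARTS[key]
    for key in ("character", "outfit", "scene", "lighting", "props")
)
```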
Lines 438-443: Consider making image resolution more configurable.

Both ModelSamplingFlux and EmptySD3LatentImage use hardcoded resolution values (1024x1024). This might not be optimal for all use cases.
Consider:
- Making resolution configurable through environment variables or config files
- Adding validation for supported resolution ranges
- Supporting aspect ratio preservation when dimensions change
Also applies to: 491-495
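A small sketch of what configurable, validated resolution could look like (the env-var names and bounds are illustrative assumptions):

```python
import os

# FLUX_WIDTH / FLUX_HEIGHT are assumed variable names for illustration.
width = int(os.environ.get("FLUX_WIDTH", 1024))
height = int(os.environ.get("FLUX_HEIGHT", 1024))

# Validate against an assumed supported range before building the workflow.
if not (256 <= width <= 2048 and 256 <= height <= 2048):
    raise ValueError(f"Unsupported resolution {width}x{height}")
```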
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (2)

- `onediff_comfy_nodes/benchmarks/resources/workflows/nexfort/flux_speedup.png` is excluded by `!**/*.png`
- `onediff_comfy_nodes/docs/FLUX/flux_quality_compare.png` is excluded by `!**/*.png`

📒 Files selected for processing (2)

- `onediff_comfy_nodes/benchmarks/resources/workflows/nexfort/flux_speedup.json` (1 hunks)
- `onediff_comfy_nodes/docs/FLUX/README.md` (1 hunks)
🧰 Additional context used
🪛 LanguageTool
onediff_comfy_nodes/docs/FLUX/README.md
[uncategorized] ~47-~47: Possible missing preposition found.
Context: ...# WorkFlow Here is a very basic example how to use it: [workflow_flux_speedup.json]...
(AI_EN_LECTOR_MISSING_PREPOSITION)
[uncategorized] ~68-~68: The grammatical number of this noun doesn’t look right. Consider replacing it.
Context: ...rence, and it varies a lot on different CPU. ## Quality The following table sh...
(AI_EN_LECTOR_REPLACEMENT_NOUN_NUMBER)
🪛 Markdownlint
onediff_comfy_nodes/docs/FLUX/README.md
2-2: null
Bare URL used
(MD034, no-bare-urls)
21-21: null
Bare URL used
(MD034, no-bare-urls)
🔇 Additional comments (2)
onediff_comfy_nodes/benchmarks/resources/workflows/nexfort/flux_speedup.json (2)
Lines 1-4: LGTM: File structure and versioning are well-defined.
The configuration follows the expected JSON structure with proper versioning (0.4) and tracking of node/link IDs.
Also applies to: 868-870
Lines 722-867: Verify node connections and memory management.
The workflow contains complex node connections that might impact memory usage. The note mentions potential memory issues that can be addressed using fp8 types.
✅ Verification successful
Node connections and memory management are correctly configured
Based on the workflow file and documentation:
- The workflow uses the recommended FLUX configuration with proper node connections
- Memory management is addressed through the available fp8 option:
- The note in the workflow explicitly mentions "You can set the weight_dtype to one of the fp8 types if you have memory issues"
- Documentation confirms fp8 support through t5xxl_fp8_e4m3fn.safetensors as an alternative to fp16
The connections between nodes follow the expected pattern for FLUX workflows, and the memory optimization suggestion is properly documented in the workflow notes.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check for potential memory issues in the workflow
# Look for memory-related configurations and comments
rg -i "memory|fp8|dtype" --type json

# Check for similar workflows to compare memory handling
fd -e json . | xargs grep -l "OneDiffFluxLoader"
```

Length of output: 236
Script:

```shell
#!/bin/bash
# Let's try a different approach to analyze the workflow configuration
# First, check the content of the workflow file
cat onediff_comfy_nodes/benchmarks/resources/workflows/nexfort/flux_speedup.json

# Look for any documentation or configuration files that might explain the node types
fd -e md -e txt . | xargs grep -l -i "node.*connection\|memory\|workflow"

# Search for similar workflow files
fd -e json . | grep -v "package.json\|tsconfig.json"
```

Length of output: 21358
Script:

```shell
#!/bin/bash
# Let's check the documentation for memory-related information and node connections
# Check FLUX documentation which seems relevant to this workflow
cat onediff_comfy_nodes/docs/FLUX/README.md

# Check the main README for any relevant information
cat onediff_comfy_nodes/README.md

# Look specifically for fp8 related configurations
rg -i "fp8|weight_dtype" onediff_comfy_nodes/
```

Length of output: 9498
> The following table shows the comparison of the plot, seed=0, Baseline (non optimized) on the left, and OneDiff (optimized) on the right
> ![quality compare](./flux_quality_compare.png)
💡 Codebase verification

Missing quality comparison image referenced in documentation

The referenced image `flux_quality_compare.png` is missing from the expected path. This broken image reference affects the documentation's completeness and readability.

`onediff_comfy_nodes/docs/FLUX/README.md`: Add the missing image file `flux_quality_compare.png` or update the documentation to reference the correct image path.
🔗 Analysis chain
Verify the quality comparison image exists.
Ensure the referenced image file exists and is accessible.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Verify the quality comparison image exists
if [ ! -f "docs/FLUX/flux_quality_compare.png" ]; then
    echo "Warning: Quality comparison image does not exist"
fi
```

Length of output: 158
> Here is a very basic example how to use it:
> [workflow_flux_speedup.json](../../benchmarks/resources/workflows/nexfort/flux_speedup.json)
> ![flux_speedup_workflow](../../benchmarks/resources/workflows/nexfort/flux_speedup.png)
💡 Codebase verification

Fix grammar and broken file references

The grammar issue needs to be fixed, and the referenced files are missing in the specified paths. Please update both the grammar and file references:

- Fix the grammar by adding "of" after "example"
- The following referenced files are missing and need to be updated with correct paths:
  - `benchmarks/resources/workflows/nexfort/flux_speedup.json`
  - `benchmarks/resources/workflows/nexfort/flux_speedup.png`

🔗 Analysis chain

Fix grammar and verify referenced files.

- The sentence needs a preposition "of" after "example".
- Ensure the referenced workflow file and image exist in the specified paths.

```diff
-Here is a very basic example how to use it:
+Here is a very basic example of how to use it:
```
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Verify referenced files exist
files_to_check=(
    "benchmarks/resources/workflows/nexfort/flux_speedup.json"
    "benchmarks/resources/workflows/nexfort/flux_speedup.png"
)
for file in "${files_to_check[@]}"; do
    if [ ! -f "$file" ]; then
        echo "Warning: Referenced file $file does not exist"
    fi
done
```
Length of output: 760
🧰 Tools
🪛 LanguageTool
[uncategorized] ~47-~47: Possible missing preposition found.
Context: ...# WorkFlow Here is a very basic example how to use it: [workflow_flux_speedup.json]...
(AI_EN_LECTOR_MISSING_PREPOSITION)
```bash
COMFYUI_DIR=$pwd/ComfyUI
# install ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git

# install onediff & onediff_comfy_nodes
git clone https://github.com/siliconflow/onediff.git
cd onediff && pip install -r onediff_comfy_nodes/FLUX/requirements.txt && pip install -e .
ln -s $pwd/onediff/onediff_comfy_nodes $COMFYUI_DIR/custom_nodes
```
🛠️ Refactor suggestion

Fix shell command syntax and enhance setup instructions.

- The `$pwd` variable usage is incorrect. In shell scripts, use `$(pwd)` or `${PWD}`.
- Consider adding prerequisite checks for Python version and git.

```diff
-COMFYUI_DIR=$pwd/ComfyUI
+# Check Python version
+python3 --version  # Should be 3.10
+
+# Set up directory
+COMFYUI_DIR=$(pwd)/ComfyUI
 # install ComfyUI
 git clone https://github.com/comfyanonymous/ComfyUI.git

 # install onediff & onediff_comfy_nodes
 git clone https://github.com/siliconflow/onediff.git
 cd onediff && pip install -r onediff_comfy_nodes/FLUX/requirements.txt && pip install -e .
-ln -s $pwd/onediff/onediff_comfy_nodes $COMFYUI_DIR/custom_nodes
+ln -s $(pwd)/onediff/onediff_comfy_nodes $COMFYUI_DIR/custom_nodes
```
```bash
export ACCESS_TOKEN="User Access Tokens"
wget --header="Authorization: Bearer $ACCESS_TOKEN" \
    https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors -O models/checkpoints/flux1-dev.safetensors

wget --header="Authorization: Bearer $ACCESS_TOKEN" \
    https://huggingface.co/stabilityai/stable-diffusion-3-medium/resolve/main/text_encoders/clip_l.safetensors -O models/clip/clip_l.safetensors

# wget --header="Authorization: Bearer $ACCESS_TOKEN" \
#     https://huggingface.co/stabilityai/stable-diffusion-3-medium/resolve/main/text_encoders/t5xxl_fp16.safetensors -O models/clip/t5xxl_fp16.safetensors

wget --header="Authorization: Bearer $ACCESS_TOKEN" \
    https://huggingface.co/stabilityai/stable-diffusion-3-medium/resolve/main/text_encoders/t5xxl_fp8_e4m3fn.safetensors -O models/clip/t5xxl_fp8_e4m3fn.safetensors
```
🛠️ Refactor suggestion

Enhance security and clarity of model download instructions.

- Consider using environment variables or a configuration file for token management.
- The commented-out wget command should either be removed or explained.

```diff
-export ACCESS_TOKEN="User Access Tokens"
+# Load the token from the environment or a .env file
+if [ -z "$ACCESS_TOKEN" ]; then
+    echo "Please set the ACCESS_TOKEN environment variable"
+    exit 1
+fi
 wget --header="Authorization: Bearer $ACCESS_TOKEN" \
     https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors -O models/checkpoints/flux1-dev.safetensors

 wget --header="Authorization: Bearer $ACCESS_TOKEN" \
     https://huggingface.co/stabilityai/stable-diffusion-3-medium/resolve/main/text_encoders/clip_l.safetensors -O models/clip/clip_l.safetensors

-# wget --header="Authorization: Bearer $ACCESS_TOKEN" \
-#     https://huggingface.co/stabilityai/stable-diffusion-3-medium/resolve/main/text_encoders/t5xxl_fp16.safetensors -O models/clip/t5xxl_fp16.safetensors
+# Note: the FP16 model is optional and can be used instead of the FP8 model if higher precision is needed
 wget --header="Authorization: Bearer $ACCESS_TOKEN" \
     https://huggingface.co/stabilityai/stable-diffusion-3-medium/resolve/main/text_encoders/t5xxl_fp8_e4m3fn.safetensors -O models/clip/t5xxl_fp8_e4m3fn.safetensors
```
> | Data update date(yyyy-mm-dd) | 2024-11-19 |
> | PyTorch E2E time | 12.44 s |
> | OneDiff E2E time | 7.12 s (-42%) |
> | PyTorch Max Mem Used | 20.510 GiB |
> | OneDiff Max Mem Used | 23.094 GiB |
> | PyTorch Warmup with Run time | 49.09 s |
> | OneDiff Warmup with Compilation time<sup>1</sup> | 113.73 s |
> | OneDiff Warmup with Cache time | 20.27 s |
>
> <sup>1</sup> OneDiff Warmup with Compilation time is tested on AMD EPYC 7543 32-Core Processor CPU. Note this is just for reference, and it varies a lot on different CPU.
Fix date and grammar issues in performance comparison.
- The data update date appears to be incorrect (shows future date).
- Use plural form "CPUs" in the footnote.
- Consider adding more context about test conditions.
```diff
-| Data update date(yyyy-mm-dd) | 2024-11-19 |
+| Data update date(yyyy-mm-dd) | 2023-11-19 |

-<sup>1</sup> OneDiff Warmup with Compilation time is tested on AMD EPYC 7543 32-Core Processor CPU. Note this is just for reference, and it varies a lot on different CPU.
+<sup>1</sup> OneDiff Warmup with Compilation time is tested on AMD EPYC 7543 32-Core Processor. Note this is just for reference, and it varies a lot on different CPUs.
```
🧰 Tools
🪛 LanguageTool
[uncategorized] ~68-~68: The grammatical number of this noun doesn’t look right. Consider replacing it.
Context: ...rence, and it varies a lot on different CPU. ## Quality The following table sh...
(AI_EN_LECTOR_REPLACEMENT_NOUN_NUMBER)
"widgets_values": [ | ||
"The reference sampling implementation auto adjusts the shift value based on the resolution, if you don't want this you can just bypass (CTRL-B) this ModelSamplingFlux node.\n" | ||
], |
🛠️ Refactor suggestion
Move setup instructions to proper documentation.
Currently, setup instructions and tips are embedded in Note nodes within the workflow. This makes them:
- Hard to find for new users
- Not searchable in documentation
- Not versioned properly with the code
Consider:
- Moving these instructions to a proper README or documentation file
- Adding links to the documentation in the workflow file
- Including version compatibility information
Also applies to: 648-650
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (3)
onediff_comfy_nodes/__init__.py (1)

Lines 50-53: Consider adding inline documentation.

Since this is part of adding flux documentation, consider adding a brief comment explaining why flux speedup is tied to nexfort availability. This would help future maintainers understand the dependency relationship.

```diff
 if is_nexfort_available():
+    # Flux speedup requires nexfort backend for optimal performance
     from .extras_nodes import nodes_nexfort_booster, nodes_flux_speedup
```
onediff_comfy_nodes/extras_nodes/nodes_flux_speedup.py (2)

Lines 56-68: Consider encapsulating environment variable changes within a context manager.

Setting the environment variable `CUBLASLT_WORKSPACE_SIZE` globally can lead to unintended side effects, especially in environments where multiple processes or threads might be affected. It's recommended to encapsulate the environment variable change within a context manager to limit its scope to the necessary operations.

You can define a context manager to temporarily set the environment variable:

```python
import os
from contextlib import contextmanager

@contextmanager
def set_env_var(key, value):
    # Remember the previous value so it can be restored afterwards.
    old_value = os.environ.get(key)
    os.environ[key] = value
    try:
        yield
    finally:
        if old_value is None:
            del os.environ[key]
        else:
            os.environ[key] = old_value
```

Then, modify the `speedup` function to use this context manager:

```diff
 def speedup(model: ModelPatcher, enable_quantize: bool, enable_transform: bool):
-    os.environ["CUBLASLT_WORKSPACE_SIZE"] = str(1024)
-    if enable_quantize:
-        quantize(
-            model.model.diffusion_model,
-            quant_type="fp8_e4m3_e4m3_dynamic_per_tensor",
-        )
-    if enable_transform:
-        transform_model(model.model.diffusion_model)
+    with set_env_var("CUBLASLT_WORKSPACE_SIZE", "1024"):
+        if enable_quantize:
+            quantize(
+                model.model.diffusion_model,
+                quant_type="fp8_e4m3_e4m3_dynamic_per_tensor",
+            )
+        if enable_transform:
+            transform_model(model.model.diffusion_model)
```
Lines 70-94: Review default values for quantization and transformation.

In the `OneDiffFluxSpeedup` class, the default values for `quantize_model` and `transform` are set to `True`. Ensure that these defaults align with typical user expectations. If quantization or transformation may not be desired in all cases, consider setting the defaults to `False` or prompting users to make an explicit choice, as sketched below.
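A hypothetical sketch of how the node's inputs might expose explicit opt-in defaults (the actual `INPUT_TYPES` definition in the PR is not shown here, so the names and structure are assumptions following ComfyUI conventions):

```python
class OneDiffFluxSpeedup:
    @classmethod
    def INPUT_TYPES(cls):
        # Defaults set to False so users opt in to each optimization explicitly.
        return {
            "required": {
                "model": ("MODEL",),
                "quantize_model": ("BOOLEAN", {"default": False}),
                "transform": ("BOOLEAN", {"default": False}),
            }
        }
```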
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)

- `onediff_comfy_nodes/__init__.py` (1 hunks)
- `onediff_comfy_nodes/extras_nodes/nodes_flux_speedup.py` (1 hunks)
🔇 Additional comments (1)
onediff_comfy_nodes/__init__.py (1)
Lines 50-53: LGTM! Verify the nexfort dependency.

The integration of flux speedup nodes alongside nexfort booster nodes looks correct. The conditional import under `is_nexfort_available()` suggests that flux speedup depends on nexfort.
Let's verify this dependency relationship:
✅ Verification successful
Conditional import of flux_speedup with nexfort is justified
The verification confirms that flux_speedup has a direct dependency on nexfort:
- It imports multiple core components from nexfort (RMSNorm4Comfy4D, MLPEmbedder, Modulation, etc.)
- Uses nexfort's transformation and quantization capabilities
- The documentation shows it's part of the nexfort optimization workflow
The conditional import under `is_nexfort_available()` is correct and necessary.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check for nexfort dependency in flux speedup implementation
# Expected: Find references to nexfort in the flux speedup implementation

# Search for nexfort imports or usage in flux speedup
rg -l "nexfort" "onediff_comfy_nodes/extras_nodes/nodes_flux_speedup.py" || echo "No direct nexfort references found"

# Check if there are any other files that might define the relationship
rg -l "flux.*nexfort|nexfort.*flux" || echo "No relationship found in other files"
```

Length of output: 235
Script:

```shell
#!/bin/bash
# Let's check the actual nexfort usage in flux speedup implementation
rg "nexfort" "onediff_comfy_nodes/extras_nodes/nodes_flux_speedup.py" -C 2

# Also check the relationship documented in the README
cat "onediff_comfy_nodes/docs/FLUX/README.md"
```

Length of output: 4483
```python
def load_unet(self, unet_name, enable_quant, enable_transform):
    model_options = {}
    unet_path = folder_paths.get_full_path("diffusion_models", unet_name)
    with comfy_cpu_context():
        model = comfy.sd.load_diffusion_model(
            unet_path, model_options=model_options
        )
    model.offload_device = comfy.model_management.unet_offload_device()
    speedup(model, enable_quant, enable_transform)

    return (model,)
```
🛠️ Refactor suggestion

Ensure proper handling of model loading exceptions

When loading the diffusion model using `comfy.sd.load_diffusion_model`, any exceptions that occur during the loading process may not be handled, which could lead to unhandled exceptions and potentially crash the application. Consider adding try-except blocks to handle exceptions gracefully and provide informative error messages to the user.

Apply this diff to enhance error handling:

```diff
 def load_unet(self, unet_name, enable_quant, enable_transform):
     model_options = {}
     unet_path = folder_paths.get_full_path("diffusion_models", unet_name)
     with comfy_cpu_context():
-        model = comfy.sd.load_diffusion_model(
-            unet_path, model_options=model_options
-        )
+        try:
+            model = comfy.sd.load_diffusion_model(
+                unet_path, model_options=model_options
+            )
+        except Exception as e:
+            print(f"Error loading diffusion model: {e}")
+            return (None,)
     model.offload_device = comfy.model_management.unet_offload_device()
     speedup(model, enable_quant, enable_transform)
     return (model,)
```
add flux doc and related comfyui nodes
Summary by CodeRabbit

Release Notes

New Features

- Added the `OneDiffFluxSpeedup` and `OneDiffFluxLoader` classes.

Documentation

- Added a README for the FLUX project with setup instructions, usage examples, and performance comparisons.

Chores