Error occurred when executing I2VGenXL_Generation_Zho: #88

Open
grayfruit opened this issue Feb 21, 2024 · 0 comments

Any help, please
Error occurred when executing I2VGenXL_Generation_Zho:

CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

File "c:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "c:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "c:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-I2VGenXL-------\I2VGenXL.py", line 76, in i2vgenxl_generate_image
output = pipe(
File "c:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "c:\ComfyUI_windows_portable\python_embeded\lib\site-packages\diffusers\pipelines\i2vgen_xl\pipeline_i2vgen_xl.py", line 804, in call
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
File "c:\ComfyUI_windows_portable\python_embeded\lib\site-packages\diffusers\schedulers\scheduling_ddim.py", line 407, in step
alpha_prod_t = self.alphas_cumprod[timestep]
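
The error text above already points at the next debugging step: with CUDA_LAUNCH_BLOCKING=1 set, kernel launches run synchronously, so the traceback points at the call that actually failed instead of a later API call. A minimal sketch of how that could be wired in is below; the placement (top of ComfyUI's main.py or the launcher script) is an assumption, not something from this report.

```python
# Hedged sketch: enable synchronous CUDA launches for debugging.
# CUDA_LAUNCH_BLOCKING must be set before torch initializes CUDA, so this
# would go at the very top of ComfyUI's main.py or into the launcher script
# (placement is an assumption, not part of the original report).
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # imported only after the environment variable is in place

if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
```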

cmd

Prompt executed in 45.75 seconds
Exception in thread Thread-8 (prompt_worker):
Traceback (most recent call last):
File "threading.py", line 1016, in _bootstrap_inner
File "threading.py", line 953, in run
File "c:\ComfyUI_windows_portable\ComfyUI\main.py", line 143, in prompt_worker
comfy.model_management.soft_empty_cache()
File "c:\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 822, in soft_empty_cache
torch.cuda.empty_cache()
File "c:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\cuda\memory.py", line 162, in empty_cache
torch._C._cuda_emptyCache()
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
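
The second traceback shows that once the launch has timed out, even the cleanup call torch.cuda.empty_cache() raises the same error and kills the prompt_worker thread. A hedged sketch of guarding that cleanup is below; it only keeps the worker thread alive and does not address the underlying launch timeout.

```python
# Hedged sketch: guard the cleanup so a poisoned CUDA context does not also
# crash the worker thread. This is a workaround for the secondary failure
# shown above, not a fix for the launch timeout itself.
import torch

def soft_empty_cache_guarded():
    try:
        torch.cuda.empty_cache()
    except RuntimeError as exc:
        # Once a launch has timed out, most subsequent CUDA calls re-raise
        # the same error until the process (and CUDA context) is restarted.
        print(f"Skipping empty_cache, CUDA context unusable: {exc}")
```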
