This repository has been archived by the owner on Dec 14, 2023. It is now read-only.

TypeError: Linear.forward() got an unexpected keyword argument 'scale' #134

Open
kenkenissocool opened this issue Sep 21, 2023 · 6 comments

@kenkenissocool

Hi, I have been trying to fine-tune with stable LoRA, following the manual. I can only do the basics, so I haven't modified stable_lora_config.yaml beyond the dataset folder paths and the video settings. The code itself should therefore be untouched, yet this error comes up every time. Does anyone have ideas for solving this?
Error message:

/root/venv/work2/lib/python3.11/site-packages/diffusers/configuration_utils.py:134: FutureWarning: Accessing config attribute `num_train_timesteps` directly via 'DDPMScheduler' object attribute is deprecated. Please access 'num_train_timesteps' over 'DDPMScheduler's config object instead, e.g. 'scheduler.config.num_train_timesteps'.
  deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
/root/venv/work2/lib/python3.11/site-packages/diffusers/configuration_utils.py:134: FutureWarning: Accessing config attribute `prediction_type` directly via 'DDPMScheduler' object attribute is deprecated. Please access 'prediction_type' over 'DDPMScheduler's config object instead, e.g. 'scheduler.config.prediction_type'.
  deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
09/21/2023 07:42:50 - INFO - models.unet_3d_condition - Forward upsample size to force interpolation output size.
Traceback (most recent call last):
  File "/root/another/Text-To-Video-Finetuning/train.py", line 986, in <module>
    main(**OmegaConf.load(args.config))
  File "/root/another/Text-To-Video-Finetuning/train.py", line 848, in main
    loss, latents = finetune_unet(batch, train_encoder=train_text_encoder)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/another/Text-To-Video-Finetuning/train.py", line 821, in finetune_unet
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states=encoder_hidden_states).sample
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 636, in forward
    return model_forward(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 624, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/another/Text-To-Video-Finetuning/models/unet_3d_condition.py", line 409, in forward
    sample = transformer_g_c(self.transformer_in, sample, num_frames)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/another/Text-To-Video-Finetuning/models/unet_3d_blocks.py", line 75, in transformer_g_c
    sample = g_c(custom_checkpoint(transformer, mode='temp'), 
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 251, in checkpoint
    return _checkpoint_without_reentrant(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 432, in _checkpoint_without_reentrant
    output = function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/another/Text-To-Video-Finetuning/models/unet_3d_blocks.py", line 63, in custom_forward
    inputs = module(
             ^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/diffusers/models/transformer_temporal.py", line 156, in forward
    hidden_states = block(
                    ^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/diffusers/models/attention.py", line 197, in forward
    attn_output = self.attn1(
                  ^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 426, in forward
    return self.processor(
           ^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 1013, in __call__
    query = attn.to_q(hidden_states, scale=scale)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/venv/work2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Linear.forward() got an unexpected keyword argument 'scale'
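If I'm reading the last frame right, the crash reduces to diffusers passing `scale=` into a projection that is a plain `torch.nn.Linear`, which accepts no such keyword:

```python
import torch
import torch.nn as nn

# A plain nn.Linear, like the to_q that ends up in the traceback above.
linear = nn.Linear(4, 4)
x = torch.randn(1, 4)

linear(x)             # fine: forward(input) takes no extra keywords
linear(x, scale=1.0)  # TypeError: Linear.forward() got an unexpected keyword argument 'scale'
```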
@kenkenissocool
Author

my_stable_lora_config.txt
This is the stable_lora_config.yaml I modified.

@Samran-Elahi
Contributor

Any updates on this? I am facing the same issue.

@ykarmesh

ykarmesh commented Oct 4, 2023

I am also getting the same error.

@ExponentialML
Owner

Hey, sorry for the late response @kenkenissocool! Recent versions of Diffusers implement their own LoRA layers, and that change is what triggers this error.

I will look into resolving this very soon.
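For context, newer Diffusers routes attention projections through `LoRACompatibleLinear`, whose `forward(hidden_states, scale=1.0)` accepts the extra keyword; the plain linear layers injected by this repo's LoRA path do not. Until a proper fix lands, one untested stopgap sketch is to wrap those layers so they drop the keyword (the helper name `ignore_scale_kwarg` is made up here, not part of the repo):

```python
import inspect

import torch.nn as nn


def ignore_scale_kwarg(model: nn.Module) -> None:
    """Untested stopgap: wrap linear layers whose forward() lacks a `scale`
    parameter so they silently drop the keyword that newer diffusers
    attention processors pass in."""
    for module in model.modules():
        if not isinstance(module, nn.Linear):
            continue
        if "scale" in inspect.signature(module.forward).parameters:
            continue  # e.g. diffusers' LoRACompatibleLinear already accepts it
        orig_forward = module.forward

        def forward(x, _orig=orig_forward, **kwargs):
            kwargs.pop("scale", None)  # discard what this layer can't take
            return _orig(x)

        module.forward = forward
```

Called once on the UNet after LoRA injection (e.g. `ignore_scale_kwarg(unet)`), this keeps training moving, though pinning a compatible diffusers version is the cleaner route.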

@Rbrq03

Rbrq03 commented Oct 30, 2023

For now, this issue can be worked around by downgrading diffusers:
pip uninstall diffusers
pip install diffusers==0.18.1

It works for me; hope it helps.
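As a quick sanity check before relaunching training, you can assert the pin took effect (a trivial sketch; 0.18.x attention processors call `attn.to_q(hidden_states)` without the `scale` keyword, which is presumably why the downgrade helps):

```python
import diffusers

# Confirm the downgraded version is what this venv actually imports.
assert diffusers.__version__ == "0.18.1", diffusers.__version__
```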

@ShashwatNigam99

Is there any update on this?
