This repository has been archived by the owner on Dec 14, 2023. It is now read-only.

webui Lora Might be causing errors in checkpoint models. #101

Open
justinwking opened this issue Jul 15, 2023 · 3 comments

Comments

@justinwking

justinwking commented Jul 15, 2023

Some weights of the model checkpoint were not used when initializing UNet3DConditionModel:
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Has anyone else had similar issues? I believe it has to do with the LoRA training, because I only notice this behavior on models created while also training the new webui LoRA. The most recent model did not use the LoRAs and had no such issues.
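As a way to narrow this down, the warning above is produced when the keys in the saved checkpoint don't exactly match the keys the model expects. A minimal sketch of that comparison is below; the key names and the `lora_down`/`lora_up` naming are assumptions for illustration, not this repo's actual code:

```python
# Hypothetical sketch: compare a checkpoint's keys against the keys the model
# expects. Extra keys trigger the "weights ... were not used" warning; missing
# keys trigger the "newly initialized" warning. All key names are made up.
model_keys = {
    "conv_in.weight",
    "conv_in.bias",
    "time_embedding.linear_1.weight",
}
checkpoint_keys = {
    "conv_in.weight",
    "conv_in.bias",
    "time_embedding.linear_1.weight",
    "conv_in.lora_down.weight",  # leftover LoRA weight (assumed naming)
    "conv_in.lora_up.weight",
}

unexpected = checkpoint_keys - model_keys  # in checkpoint, unknown to model
missing = model_keys - checkpoint_keys     # expected by model, absent in file

print(sorted(unexpected))
print(sorted(missing))
```

If stray LoRA adapter weights were saved into the base checkpoint, they would show up in `unexpected` exactly like the warning in the log.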

@ExponentialML
Owner

Hello. I cannot reproduce this issue. I would check to see that your model path is correct. If it is, then could you please post the following? (You can remove any personally identifiable information).

  • The config.json in your model's directory.
  • The log leading up to this point (the one you have in your post).
  • The .yaml config you're using for training.

@justinwking
Author

justinwking commented Jul 15, 2023

Here is a link to one of the problematic models...

https://www.dropbox.com/sh/ttvqyfddlq0mvjl/AAAjeXguhPXSanFA2x_--4xLa?dl=0

Which is based on this model...

https://www.dropbox.com/sh/247hj87lcvewsb5/AADeZsqTDTAE1mI2WlsclcU7a?dl=0

Which is based on the original diffuser model.

I'm not able to get to the .yaml or log file at the moment, but maybe you will notice something here. The error message occurred when loading the model for inference using inference.py.

I believe the error message could be related to this, since it's a similar error message, but in my case it lists many of the model's layers.

@justinwking
Author

This is a link to a config.json file that was created.

https://www.dropbox.com/scl/fi/eoq4byu2ap3f6k96ld396/config.yaml?rlkey=8izitbdc1vgvlbhzvf367sypw&dl=0

Since disabling the LoRA training, I haven't had issues with that error message. It could be a glitch in the version of the software I used, but I wanted to raise it to confirm whether something is truly going on. Is anyone else able to reproduce the error message with this model? Or is there something wrong in the model's configuration that could be easily fixed so that I can use the model?
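If stray LoRA weights in the checkpoint are indeed the cause, one possible workaround is to filter them out of the state dict before loading, so only the base model weights remain. This is a hedged sketch, not this repo's code; matching on the substring `lora` is an assumption about how the adapter keys are named:

```python
# Hypothetical workaround: drop LoRA-specific entries from a saved state dict
# so only the base model weights are loaded. Key naming is an assumption.
def strip_lora_keys(state_dict: dict) -> dict:
    """Return a copy of state_dict without LoRA adapter weights."""
    return {k: v for k, v in state_dict.items() if "lora" not in k.lower()}


ckpt = {
    "conv_in.weight": [0.1, 0.2],
    "conv_in.lora_down.weight": [0.0],  # assumed LoRA key naming
    "conv_in.lora_up.weight": [0.0],
}
clean = strip_lora_keys(ckpt)
print(sorted(clean))  # only the base weight keys remain
```

Whether this yields a usable model depends on how the LoRA weights were saved (merged into the base weights versus stored as separate adapter keys), which only the maintainer could confirm.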
