[LoRA] fix vanilla fine-tuned lora loading. #8691
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks for fixing this. Could you please give a short explanation why this fails with legacy LoRA adapters? Also, would it be possible to catch this in a test?
It failed because of this check: `src/diffusers/loaders/lora.py`, line 400 (at commit 4ad7a1f).

Unfortunately, it won't be possible to catch this in a fast test because the format is about 1.5 years old, and almost no one uses it anymore.
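For context, here is a minimal sketch of the kind of prefix check that rejects legacy checkpoints. The predicate and the key names are illustrative assumptions, not the literal code at that line:

```python
# Illustrative sketch only -- the actual check lives in
# src/diffusers/loaders/lora.py (line 400 at commit 4ad7a1f).
def looks_like_new_format(state_dict: dict) -> bool:
    # Assumption: the newer serialization prefixes every key with
    # "unet." or "text_encoder.", and the loader keys off that prefix.
    return all(key.startswith(("unet.", "text_encoder.")) for key in state_dict)


# A vanilla (legacy) fine-tuned LoRA stores raw attention-processor keys
# without those prefixes, so a check like the above misclassifies it.
legacy_state_dict = {
    "down_blocks.0.attentions.0.transformer_blocks.0"
    ".attn1.processor.to_q_lora.down.weight": None,  # placeholder tensor
}
assert not looks_like_new_format(legacy_state_dict)
```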
Thanks for explaining. LGTM.
It would probably be a good idea to add the explanation to the commit message, in case there is ever a need to understand the history of this part of the code.
fix vanilla fine-tuned lora loading.
What does this PR do?
Fixes vanilla (legacy) fine-tuned LoRA loading. The bug was introduced in #8316.
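As a hedged repro sketch of the affected call path (the checkpoint directory and weight file name below are hypothetical placeholders for a vanilla fine-tuned LoRA saved in the legacy format):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical checkpoint: a vanilla fine-tuned LoRA whose keys lack
# the "unet."/"text_encoder." prefixes; this call failed before the fix.
pipe.load_lora_weights(
    "path/to/legacy_lora_dir", weight_name="pytorch_lora_weights.safetensors"
)

image = pipe("an astronaut riding a horse").images[0]
```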