[LoRA] pop the LoRA scale so that it doesn't get propagated to the weeds #7338
Conversation
cross_attention_kwargs (`dict`, *optional*):
    A kwargs dictionary that if specified is passed along to the [`AttnProcessor`].
added_cond_kwargs: (`dict`, *optional*):
    A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that
    are passed along to the UNet blocks.
down_block_additional_residuals (`tuple` of `torch.Tensor`, *optional*):
    additional residuals to be added to UNet long skip connections from down blocks to up blocks, for
    example from ControlNet side model(s)
mid_block_additional_residual (`torch.Tensor`, *optional*):
    additional residual to be added to UNet mid block output, for example from ControlNet side model
Duplicates.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
# we're popping the `scale` instead of getting it because otherwise `scale` will be propagated
# to the internal blocks and will raise deprecation warnings. this will be confusing for our users.
if cross_attention_kwargs is not None and "scale" in cross_attention_kwargs:
    lora_scale = cross_attention_kwargs.pop("scale", 1.0)
If "scale" in cross_attention_kwargs
, we don't need a default for pop
, right?
Yeah that is correct.
thanks!
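For context, here is a minimal sketch of the agreed-upon pattern. This is a hypothetical standalone example, not the exact diffusers source; the `forward` signature and return values are assumptions for illustration. Since the `if` already checks that `"scale"` is present, `pop` needs no default value.

```python
# Hypothetical sketch of the pattern discussed above, not the actual UNet forward.
def forward(sample, cross_attention_kwargs=None):
    lora_scale = 1.0
    if cross_attention_kwargs is not None and "scale" in cross_attention_kwargs:
        # pop instead of get, so "scale" is not forwarded to the inner blocks
        # (forwarding it there would trigger deprecation warnings)
        lora_scale = cross_attention_kwargs.pop("scale")
    # `lora_scale` is applied at the top level; `cross_attention_kwargs` (now
    # without "scale") is what gets passed down to the attention processors.
    return sample, lora_scale, cross_attention_kwargs
```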
@yiyixuxu do you want me to do a patch release for this? That way the users won't get confused.
@sayakpaul sure
Cool, I will take care of it.
Co-authored-by: YiYi Xu <yixu310@gmail.com>
…eds (#7338)
* pop scale from the top-level unet instead of getting it.
* improve readability.
* Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* fix a little bit.
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
What does this PR do?
The changes should be self-explanatory, but let me know in case anything needs further explanation.
I think we should maybe do a patch release for this.