add self.use_ada_layer_norm_* params back to BasicTransformerBlock #6841
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@@ -538,7 +538,7 @@ def hack_CrossAttnDownBlock2D_forward(

     return hidden_states, output_states

-    def hacked_DownBlock2D_forward(self, hidden_states, temb=None):
+    def hacked_DownBlock2D_forward(self, hidden_states, temb=None, **kwargs):
Why is `kwargs` needed?
`hacked_DownBlock2D_forward` is a function they wrote to replace the `forward` method of `DownBlock2D`, hence the signature has to match. We added a new argument `scale` for the lora refactor, which causes an error here without `**kwargs`.
I think they should write their own custom blocks instead.
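For context, here is a minimal sketch of the failure mode being described; the class and argument names are illustrative, not the exact pipeline code, and `scale` stands in for the keyword added by the lora refactor:

```python
# The community reference pipeline monkey-patches the block's forward method.
# The UNet now passes extra keyword arguments (e.g. `scale`), so a patched
# forward with a fixed signature raises TypeError unless it accepts **kwargs.

class DownBlock2D:  # stand-in for the real diffusers block
    def forward(self, hidden_states, temb=None, scale=1.0):
        return hidden_states

def hacked_DownBlock2D_forward(self, hidden_states, temb=None, **kwargs):
    # `scale` (and any future keyword) lands in kwargs instead of raising
    # "got an unexpected keyword argument 'scale'".
    return hidden_states

block = DownBlock2D()
block.forward = hacked_DownBlock2D_forward.__get__(block)  # bind replacement
block.forward(hidden_states=None, temb=None, scale=1.0)    # works thanks to **kwargs
```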
@@ -634,7 +634,9 @@ def hacked_CrossAttnUpBlock2D_forward(

     return hidden_states

-    def hacked_UpBlock2D_forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
+    def hacked_UpBlock2D_forward(
+        self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None, **kwargs
Same.
self.use_ada_layer_norm_zero = (num_embeds_ada_norm is not None) and norm_type == "ada_norm_zero"
self.use_ada_layer_norm = (num_embeds_ada_norm is not None) and norm_type == "ada_norm"
self.use_ada_layer_norm_single = norm_type == "ada_norm_single"
self.use_layer_norm = norm_type == "layer_norm"
self.use_ada_layer_norm_continuous = norm_type == "ada_norm_continuous"
I am okay with this.
Sorry for the late reply. I'd maybe just add a comment to state that they are kept for backward-compatibility reasons.
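To illustrate why these flags need to stay, here is a hedged sketch of the kind of downstream access that breaks when they are removed; the exact branch bodies in the reference pipeline's hacked forward are assumptions, only the attribute lookups matter:

```python
# Hypothetical downstream usage (illustrative only): external code that
# monkey-patches BasicTransformerBlock.forward still branches on these flags,
# so dropping them raises AttributeError in the community reference pipelines.
def hacked_basic_transformer_forward(self, hidden_states, timestep=None, **kwargs):
    if self.use_ada_layer_norm:          # AttributeError if the flag was removed
        norm_hidden_states = self.norm1(hidden_states, timestep)
    elif self.use_ada_layer_norm_zero:   # same for the other use_* flags
        norm_hidden_states = self.norm1(hidden_states, timestep)
    else:
        norm_hidden_states = self.norm1(hidden_states)
    return norm_hidden_states
```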
@@ -603,7 +603,9 @@ def hacked_CrossAttnUpBlock2D_forward(

     return hidden_states

-    def hacked_UpBlock2D_forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
+    def hacked_UpBlock2D_forward(
+        self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None, **kwargs
Same.
Thanks for the swift action. Just a question.
huggingface#6841) fix sd reference community pipeline Co-authored-by: yiyixuxu <yixu310@gmail.com>
fix #6838
I also fixed some other errors in the sd reference community pipelines while I'm at it (so this also fixes #5028 (comment)).
We should deprecate these params too.
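One way that deprecation could look, if the flags were later turned into properties that warn on access; this is a hypothetical sketch using Python's `warnings` module, not what the PR implements:

```python
import warnings

class BasicTransformerBlock:
    def __init__(self, norm_type="layer_norm", num_embeds_ada_norm=None):
        self.norm_type = norm_type
        self.num_embeds_ada_norm = num_embeds_ada_norm

    @property
    def use_ada_layer_norm(self):
        # Hypothetical deprecation path: warn callers to branch on `norm_type` instead.
        warnings.warn(
            "`use_ada_layer_norm` is deprecated; check `norm_type == 'ada_norm'` instead.",
            FutureWarning,
        )
        return (self.num_embeds_ada_norm is not None) and self.norm_type == "ada_norm"
```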