
a problem about the code, thanks #30

Open
aosong01 opened this issue Sep 26, 2023 · 3 comments

Comments

@aosong01

It seems that you already modify every BasicTransformerBlock in the down_blocks, mid_block, and up_blocks. Why are the up_blocks of the UNet changed again?

def register_extended_attention(model):
    for _, module in model.unet.named_modules():
        if isinstance_str(module, "BasicTransformerBlock"):
            module.attn1.forward = sa_forward(module.attn1)

    res_dict = {1: [1, 2], 2: [0, 1, 2], 3: [0, 1, 2]}
    # we are injecting attention in blocks 4 - 11 of the decoder, so not in the first block of the lowest resolution
    for res in res_dict:
        for block in res_dict[res]:
            module = model.unet.up_blocks[res].attentions[block].transformer_blocks[0].attn1
            module.forward = sa_forward(module)
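For reference, here is a quick diagnostic sketch (not from the repo; the helper name is mine) that lists which attn1 modules each loop would patch, assuming model.unet is the standard diffusers UNet used here:

    def list_patched_attn1(model):
        # modules touched by the first loop: every BasicTransformerBlock's attn1
        # (down_blocks, mid_block and all up_blocks)
        first_loop = [
            name + ".attn1"
            for name, module in model.unet.named_modules()
            if module.__class__.__name__ == "BasicTransformerBlock"
        ]
        # modules touched by the second loop: only the up_blocks listed in res_dict
        res_dict = {1: [1, 2], 2: [0, 1, 2], 3: [0, 1, 2]}
        second_loop = [
            f"up_blocks.{res}.attentions.{block}.transformer_blocks.0.attn1"
            for res, blocks in res_dict.items()
            for block in blocks
        ]
        print("second loop is a subset of the first:", set(second_loop) <= set(first_loop))
        return first_loop, second_loop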
@ShashwatNigam99

I have the same query

The first for loop modifies the following blocks:

 down_blocks.0.attentions.0.transformer_blocks.0
 down_blocks.0.attentions.1.transformer_blocks.0
 down_blocks.1.attentions.0.transformer_blocks.0
 down_blocks.1.attentions.1.transformer_blocks.0
 down_blocks.2.attentions.0.transformer_blocks.0
 down_blocks.2.attentions.1.transformer_blocks.0
 up_blocks.1.attentions.0.transformer_blocks.0
 up_blocks.1.attentions.1.transformer_blocks.0
 up_blocks.1.attentions.2.transformer_blocks.0
 up_blocks.2.attentions.0.transformer_blocks.0
 up_blocks.2.attentions.1.transformer_blocks.0
 up_blocks.2.attentions.2.transformer_blocks.0
 up_blocks.3.attentions.0.transformer_blocks.0
 up_blocks.3.attentions.1.transformer_blocks.0
 up_blocks.3.attentions.2.transformer_blocks.0
 mid_block.attentions.0.transformer_blocks.0

The second for loop modifies:

up_blocks.1.attentions.1.transformer_blocks.0.attn1
up_blocks.1.attentions.2.transformer_blocks.0.attn1
up_blocks.2.attentions.0.transformer_blocks.0.attn1
up_blocks.2.attentions.1.transformer_blocks.0.attn1
up_blocks.2.attentions.2.transformer_blocks.0.attn1
up_blocks.3.attentions.0.transformer_blocks.0.attn1
up_blocks.3.attentions.1.transformer_blocks.0.attn1
up_blocks.3.attentions.2.transformer_blocks.0.attn1

This is a subset of the blocks modified by the first loop.

According to the comment, the first block of the lowest resolution shouldn't have extended attention registered, yet the first for loop registers extended attention for that block as well.
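You can verify this directly after calling register_extended_attention; a quick check (not code from the repo):

    # the block the comment says should be skipped has been patched by the first loop anyway
    attn = model.unet.up_blocks[1].attentions[0].transformer_blocks[0].attn1
    # an unpatched module resolves forward from its class; a patched one carries it
    # as an instance attribute set by `module.attn1.forward = sa_forward(module.attn1)`
    print("up_blocks.1.attentions.0 attn1 patched:", "forward" in vars(attn))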

@Zeldalina

Same question.

@edward3862

I think the function that actually takes effect is register_extended_attention_pnp, where a list injection_schedule is defined:

for _, module in model.unet.named_modules():
    if isinstance_str(module, "BasicTransformerBlock"):
        module.attn1.forward = sa_forward(module.attn1)
        setattr(module.attn1, 'injection_schedule', [])

res_dict = {1: [1, 2], 2: [0, 1, 2], 3: [0, 1, 2]}
# we are injecting attention in blocks 4 - 11 of the decoder, so not in the first block of the lowest resolution
for res in res_dict:
    for block in res_dict[res]:
        module = model.unet.up_blocks[res].attentions[block].transformer_blocks[0].attn1
        module.forward = sa_forward(module)
        setattr(module, 'injection_schedule', injection_schedule)

The injection is then only activated on timesteps in injection_schedule:

if self.injection_schedule is not None and (self.t in self.injection_schedule or self.t == 1000):
    # inject unconditional
    q[n_frames:2 * n_frames] = q[:n_frames]
    k[n_frames:2 * n_frames] = k[:n_frames]
    # inject conditional
    q[2 * n_frames:] = q[:n_frames]
    k[2 * n_frames:] = k[:n_frames]

if self.injection_schedule is not None and (self.t in self.injection_schedule or self.t == 1000):
    source_batch_size = int(hidden_states.shape[0] // 3)
    # inject unconditional
    hidden_states[source_batch_size:2 * source_batch_size] = hidden_states[:source_batch_size]
    # inject conditional
    hidden_states[2 * source_batch_size:] = hidden_states[:source_batch_size]
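For context, injection_schedule is just the subset of sampling timesteps on which injection should happen (self.t is set to the current timestep elsewhere in the pipeline). A rough sketch of how such a schedule could be built and registered, assuming register_extended_attention_pnp takes the model and the schedule as the snippet above suggests; the step count and pnp_attn_t fraction here are illustrative values, not necessarily the repo's defaults:

    from diffusers import DDIMScheduler

    # illustrative values; the repo reads these from its config
    n_timesteps = 50
    pnp_attn_t = 0.5  # fraction of the denoising steps that receive attention injection

    scheduler = DDIMScheduler()          # assuming a DDIM-style scheduler
    scheduler.set_timesteps(n_timesteps)

    # timesteps are ordered from highest noise to lowest, so this injects during
    # the first half of sampling
    injection_schedule = scheduler.timesteps[:int(n_timesteps * pnp_attn_t)]

    register_extended_attention_pnp(model, injection_schedule)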

BTW, I tried removing the first loop (L203-L206) and the result did not change. However, when I removed the second loop (L208-L214), the result got worse.
