While reading the training code, I found a learnable positional_embedding that is passed to the first block of Unet1D's downs, mid_blocks, and ups.
For example, here is the code for Unet1D's downs:
for block0, block1, attncross, block2, attn, downsample in self.downs:
    x = block0(x, context)
    x = block1(x, t)
    h.append(x)
    x = attncross(x, context_cross) if self.text_condition else attncross(x)
    x = block2(x, t)
    x = attn(x)
    h.append(x)
    x = downsample(x)
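To show how I currently read this, here is a simplified sketch of what I imagine block0 doing with the per-point context (the class name, the fusion by channel concatenation, and all shapes are my own assumptions, not the repo's actual code):

import torch
import torch.nn as nn

class ContextResBlock(nn.Module):
    # Hypothetical block: fuses a per-point context embedding with the
    # feature map by concatenating along the channel dimension, then
    # applies a small residual convolution.
    def __init__(self, dim, context_dim):
        super().__init__()
        self.proj_in = nn.Conv1d(dim + context_dim, dim, kernel_size=3, padding=1)
        self.proj_out = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.act = nn.SiLU()

    def forward(self, x, context):
        # x:       (batch, dim, num_points)
        # context: (batch, num_points, context_dim) -> channel-first for Conv1d
        c = context.transpose(1, 2)
        h = torch.cat([x, c], dim=1)
        h = self.act(self.proj_in(h))
        return x + self.proj_out(h)

# e.g. ContextResBlock(dim=64, context_dim=128)(torch.randn(2, 64, 1024), torch.randn(2, 1024, 128))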
The context is the instan_condition_f from the following code:
instance_indices = torch.arange(self.sample_num_points).long().to(self.device)[None, :].repeat(batch_size, 1)
instan_condition_f = self.positional_embedding[instance_indices, :]
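To make my question concrete, this is how I read these two lines: the parameter is a table with one learnable vector per point index, and the indexing just broadcasts that table across the batch. A self-contained sketch (embedding_dim, batch_size, and sample_num_points are placeholder values I chose, and the declaration as nn.Parameter is my guess):

import torch
import torch.nn as nn

sample_num_points, embedding_dim, batch_size = 1024, 128, 4

# Guessed declaration: one learnable embedding vector per point index.
positional_embedding = nn.Parameter(torch.randn(sample_num_points, embedding_dim))

# Same indexing pattern as in the training code (minus .to(self.device)).
instance_indices = torch.arange(sample_num_points).long()[None, :].repeat(batch_size, 1)
instan_condition_f = positional_embedding[instance_indices, :]
print(instan_condition_f.shape)  # torch.Size([4, 1024, 128])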
I wonder what the function of this positional_embedding is. Thanks for your help.