How to implement local gradient in a tensor #13428
Unanswered
xiaolang01 asked this question in code help: NLP / ASR / TTS
Replies: 1 comment
-
You should be able to achieve it by overriding `on_after_backward`:

```python
def on_after_backward(self):
    # `...` stands for the path to whichever submodule holds the
    # embedding weights; substitute your actual attribute path.
    for p in self.model...parameters():
        p.grad[:original_vocab_size, :] = 0.0
```

See https://pytorch-lightning.readthedocs.io/en/1.6.4/common/lightning_module.html#on-after-backward
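The same masking idea can be demonstrated in plain PyTorch, without Lightning. This is a minimal sketch under illustrative assumptions: the vocabulary sizes and embedding dimension below are made-up values, and `emb` stands in for whatever module holds the embedding weights in the real model. The key step is zeroing the gradient rows of the original vocabulary after `backward()` and before the optimizer step, which is exactly what placing the loop in `on_after_backward` achieves.

```python
import torch
import torch.nn as nn

# Illustrative sizes (not from the original thread).
original_vocab_size = 3   # rows whose gradients should be zeroed
extended_vocab_size = 5   # total rows after adding new tokens
embedding_dim = 4

emb = nn.Embedding(extended_vocab_size, embedding_dim)

# Forward/backward over tokens from both the old and the new vocabulary.
tokens = torch.tensor([0, 1, 3, 4])
loss = emb(tokens).sum()
loss.backward()

# The masking step the reply places in on_after_backward():
# zero the gradient rows belonging to the original vocabulary so a
# subsequent optimizer.step() leaves those embedding rows untouched.
for p in emb.parameters():
    p.grad[:original_vocab_size, :] = 0.0

# Rows 0..2 now have zero gradient; the newly added rows keep theirs.
print(p.grad[:original_vocab_size].abs().sum().item())      # 0.0
print(p.grad[original_vocab_size:].abs().sum().item() > 0)  # True
```

Note that the gradients are only zeroed, not detached: the rows still receive gradient during `backward()`, they are simply wiped before the optimizer sees them.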
-
I want to achieve such an operation; how should the code be written?