Hi, I am trying to learn training sample weights by doing MAML-style optimization on a validation set. That is, I want to backprop through a one-step update to compute the gradients of the validation loss w.r.t. the training sample weights.
The weighted training loss looks like this:
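(The exact formula wasn't shown; what follows assumes the standard per-sample weighted sum, with $w_i$ the learnable weight on training example $i$ and $\ell$ the per-sample loss:)

$$\mathcal{L}_{\text{train}}(\theta, w) \;=\; \sum_{i=1}^{N} w_i \,\ell\big(f_\theta(x_i),\, y_i\big)$$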
The issue is that the sample weight gradients I recover are all zero. I would guess this should be doable, since nesting grad ops and doing "normal" MAML on regular model parameters works just fine. Are the training weights being treated as constants at some point?
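For concreteness, here is a minimal self-contained sketch of the setup in JAX (the linear model, shapes, and names are placeholders, not the actual code from this thread):

```python
import jax
import jax.numpy as jnp


def per_sample_losses(params, x, y):
    # Toy linear model with squared error; one loss per sample, shape (N,).
    preds = x @ params
    return (preds - y) ** 2


def weighted_train_loss(params, weights, x, y):
    # weights and per-sample losses should both be shape (N,); a silent
    # broadcast here (e.g. (N, 1) against (N,)) corrupts the objective.
    return jnp.sum(weights * per_sample_losses(params, x, y))


def val_loss_after_one_step(weights, params, x_tr, y_tr, x_val, y_val, lr=0.1):
    # Inner step: one SGD update on the weighted training loss.
    grads = jax.grad(weighted_train_loss)(params, weights, x_tr, y_tr)
    params_new = params - lr * grads
    # Outer objective: unweighted validation loss at the updated params.
    return jnp.mean(per_sample_losses(params_new, x_val, y_val))


x_tr = jax.random.normal(jax.random.PRNGKey(0), (8, 3))
y_tr = jax.random.normal(jax.random.PRNGKey(1), (8,))
x_val = jax.random.normal(jax.random.PRNGKey(2), (4, 3))
y_val = jax.random.normal(jax.random.PRNGKey(3), (4,))
params = jnp.zeros(3)
weights = jnp.ones(8)

# Hypergradient of the validation loss w.r.t. the sample weights;
# this is nonzero as long as the weights actually enter the inner update.
w_grads = jax.grad(val_loss_after_one_step)(weights, params, x_tr, y_tr, x_val, y_val)
print(w_grads)
```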
Replies: 1 comment
Embarrassed to say that it looks like I mis-broadcast things.
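For anyone hitting the same symptom: the exact bug isn't shown above, but a typical broadcasting slip in this setup is mixing (N, 1) weights with an (N,) per-sample loss vector, which silently broadcasts to an (N, N) matrix and yields a wrong objective, and therefore wrong hypergradients. A hypothetical illustration:

```python
# Hypothetical illustration of a shape/broadcast slip (not necessarily the
# exact bug from this thread): (N, 1) weights against (N,) losses.
import jax.numpy as jnp

losses = jnp.array([0.5, 1.0, 2.0, 4.0])   # per-sample losses, shape (4,)
weights = jnp.ones((4, 1))                 # intended as (4,), actually (4, 1)

bad = weights * losses         # silently broadcasts to shape (4, 4)
good = weights[:, 0] * losses  # elementwise product, shape (4,)

print(bad.shape, good.shape)        # (4, 4) (4,)
print(jnp.sum(bad), jnp.sum(good))  # 30.0 vs 7.5 -- different objectives
```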
Feel free to remove this.