Adversarial training on the inputs #1964
Replies: 2 comments 1 reply
@andreaderetti you can pass
I tried to deploy the output, which has the following shape: When I try to use the first tensor to compute the loss and then backpropagate, I get the error: 'one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 3, 20, 20, 85]], which is output 0 of SigmoidBackward0, is at version 2; expected version 0 instead.' Thanks for the time, Andrea.
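This error means a tensor that autograd saved for the backward pass (here, the output of a sigmoid) was modified in place before `backward()` ran. A minimal sketch reproducing it, and the usual fix of working on a `.clone()`, is below; the small tensor is a hypothetical stand-in for the YOLO head output, not the actual model:

```python
import torch

# Stand-in for the YOLO head output: sigmoid saves its *output* for backward,
# so editing that output in place invalidates the graph.
x = torch.randn(1, 3, requires_grad=True)
y = torch.sigmoid(x)
y *= 2.0                      # in-place op bumps y's autograd version counter
try:
    y.sum().backward()
except RuntimeError as e:
    print("reproduced:", e)   # "...modified by an inplace operation..."

# Fix: clone before any in-place edit, so the tensor saved by
# SigmoidBackward0 stays at version 0.
x2 = torch.randn(1, 3, requires_grad=True)
y2 = torch.sigmoid(x2).clone()
y2 *= 2.0                     # modifies the clone only
y2.sum().backward()           # succeeds; gradients reach x2
print(x2.grad.shape)
```

The same idea applies to the `[1, 3, 20, 20, 85]` tensor in the error message: clone it (or avoid in-place ops like `+=` on it) before building the loss.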
Hey there!
I'm trying to use this implementation of YOLO to perform adversarial training on the inputs. To do so I would like to adopt a white-box approach, so I need the derivatives of the loss with respect to the inputs in order to backpropagate and update the input. I can't compute the derivative starting from the results of the model (for instance, results.xywhn). Any idea how I could do it? Is there something really basic that I'm missing here? Do I need to use the raw output of the network (i.e. the last layer), and in that case, how can I obtain this last layer rather than the results given by results.'something'?
The implementation of YOLO is super straightforward with this repo; it would be amazing to be able to compute the derivatives with respect to the inputs.
Thanks for the time, Andrea.
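Since the goal is only d(loss)/d(input), the general white-box recipe is to make the input image a leaf tensor with `requires_grad=True`, compute a differentiable loss from the raw network output (post-processed results like `results.xywhn` are typically detached from the graph), and read `img.grad` after `backward()`. A hedged FGSM-style sketch is below; the tiny stand-in network and cross-entropy loss are illustrative assumptions, not the actual YOLO model or detection loss:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the detector: any differentiable model works,
# as long as the loss is built from raw outputs still attached to the graph.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.Flatten(),
    nn.Linear(8 * 20 * 20, 2),
)
model.eval()

img = torch.rand(1, 3, 20, 20, requires_grad=True)  # leaf input tensor
target = torch.tensor([1])

loss = nn.functional.cross_entropy(model(img), target)
loss.backward()                  # d(loss)/d(input) lands in img.grad

# FGSM step: perturb the input along the sign of its gradient.
eps = 8 / 255
adv = (img + eps * img.grad.sign()).clamp(0, 1).detach()
print(adv.shape)
```

For iterative attacks (e.g. PGD), the same loop runs several times, re-enabling `requires_grad` on the updated image each step.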