Question about code #1
> I tried to implement a version that follows the article exactly, but so far it is still not working, even though I did everything the way they did in the article. Here is the code if you want to take a look: https://github.com/mokeddembillel/Amortized-SVGD-GAN
Hi, thank you for your code, which helped me understand some of the concepts from the paper better.
I have a few remaining questions about the implementation; it would be great if you could clarify them.
```python
g_optim.zero_grad()
autograd.backward(-z_i, grad_tensors=svgd)  # why minus, and why z_i?
g_optim.step()
```
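To make sure I'm reading these lines correctly, here is a self-contained toy version of what I think `grad_tensors` does here (the stand-in generator and the numbers are made up by me, not from the repo):

```python
import torch

# Toy check: backward(-z, grad_tensors=phi) followed by a gradient-descent
# step moves z along +phi, i.e. it ascends the supplied SVGD direction.
theta = torch.zeros(2, requires_grad=True)
z = theta * 3.0                      # stand-in "generator": z = 3 * theta
phi = torch.tensor([1.0, -2.0])      # some precomputed SVGD direction for z

torch.autograd.backward(-z, grad_tensors=phi)
# theta.grad = d(-z)/dtheta contracted with phi = -3 * phi
assert torch.allclose(theta.grad, -3.0 * phi)
# A descent step theta -= lr * theta.grad then gives theta += lr * 3 * phi,
# so z = 3 * theta moves along +phi.
```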
I was a little confused about why we take the gradient with respect to -z_i rather than z_i in the lines above, and also why the kernel is computed between two different batches of particles (z_i and z_j) rather than within a single batch. Is that to help with something like training stability?
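For context, this is my current understanding of the SVGD direction being amortized, as a small PyTorch sketch; `rbf_kernel`, `svgd_direction`, the fixed bandwidth `h`, and the `score_fn` placeholder are my own names and simplifications, not taken from the repo:

```python
import torch

def rbf_kernel(x, y, h=1.0):
    """Pairwise RBF kernel k(x, y) = exp(-||x - y||^2 / h), shape (n_x, n_y)."""
    return torch.exp(-torch.cdist(x, y) ** 2 / h)

def svgd_direction(z_i, z_j, score_fn, h=1.0):
    """SVGD direction for particles z_i, driven by particles z_j:
    phi(z_i) = (1/n) sum_j [ k(z_j, z_i) * score(z_j) + grad_{z_j} k(z_j, z_i) ].
    """
    n = z_j.shape[0]
    k = rbf_kernel(z_j, z_i)                 # (n_j, n_i)
    score = score_fn(z_j)                    # (n_j, d): grad log p at z_j
    # attraction: kernel-weighted average of the scores
    attract = k.t() @ score / n              # (n_i, d)
    # repulsion: analytic RBF gradient wrt z_j is k * 2 * (z_i - z_j) / h
    diff = z_i.unsqueeze(1) - z_j.unsqueeze(0)          # (n_i, n_j, d)
    repulse = (k.t().unsqueeze(-1) * diff).sum(dim=1) * 2.0 / (h * n)
    return attract + repulse

# Sanity check: with all particles at the same point, diff = 0 so the
# repulsion vanishes and phi reduces to the score itself.
z = torch.ones(3, 2)
phi = svgd_direction(z, z, lambda t: -t)     # standard-Gaussian score: -z
assert torch.allclose(phi, -torch.ones(3, 2))
```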
Thanks!