Returning KL divergence #75
Comments
Related question, @DavidMChan: is the average gradient norm that's reported during optimization the same thing as the KL divergence?
The average gradient norm is essentially the norm of the gradient of the KL divergence with respect to the particle positions. It can therefore serve as a proxy for how stable the optimization is, but it is not the same quantity as the KL divergence itself.
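To make the distinction concrete, here is a minimal NumPy sketch (not tsnecuda's internals) of the two quantities for a t-SNE embedding: the KL divergence between the high-dimensional affinities P and the low-dimensional Student-t affinities Q, and the gradient of that KL with respect to the embedding. P is assumed to be a precomputed, symmetrized joint-probability matrix with a zero diagonal, and "average gradient norm" is read here as the mean Euclidean norm of the per-point gradients:

```python
import numpy as np

def kl_and_avg_grad_norm(P, Y):
    """KL(P || Q) for a t-SNE embedding Y, plus the average gradient norm."""
    # Student-t affinities in the embedding: q_ij proportional to (1 + ||y_i - y_j||^2)^-1
    diff = Y[:, None, :] - Y[None, :, :]            # (n, n, d) pairwise differences
    num = 1.0 / (1.0 + np.sum(diff ** 2, axis=-1))  # unnormalized affinities
    np.fill_diagonal(num, 0.0)
    Q = num / num.sum()

    # KL(P || Q) = sum over i != j of p_ij * log(p_ij / q_ij)
    mask = P > 0
    kl = np.sum(P[mask] * np.log(P[mask] / np.maximum(Q[mask], 1e-12)))

    # Standard t-SNE gradient: dKL/dy_i = 4 * sum_j (p_ij - q_ij) * num_ij * (y_i - y_j)
    grad = 4.0 * np.einsum('ij,ijk->ik', (P - Q) * num, diff)
    avg_grad_norm = np.linalg.norm(grad, axis=1).mean()
    return kl, avg_grad_norm
```

The gradient norm can shrink toward zero as the layout settles even while the KL itself plateaus at a nonzero value, which is why the reported norm tracks stability rather than embedding quality.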
Thanks for the explanation! That makes sense. So, just to confirm: there's currently no way at all to see the KL divergence value?
Currently, no, but I'll consider working it into the next version, and PRs are always welcome if anyone wants to contribute! Here would be a good place in the code to start looking: Line 513 in b740a7d.
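In the meantime, for anyone who needs the number today, one workaround (CPU-only, so it bypasses tsnecuda entirely) is scikit-learn's t-SNE, which stores the final KL divergence of the run on the fitted estimator:

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(500, 50)                    # toy data for illustration
tsne = TSNE(n_components=2, perplexity=30.0, random_state=0)
Y = tsne.fit_transform(X)
print(tsne.kl_divergence_)                     # final KL(P || Q) of the run
```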
Thank you for this fantastic work!
Would it be possible for the fit_transform() method to return the KL divergence of the run?
Thanks!
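Purely as an illustration of the request, the call could look something like this; the return_kl keyword is hypothetical and does not exist in tsnecuda's current API:

```python
import numpy as np
from tsnecuda import TSNE

X = np.random.rand(1000, 50)
tsne = TSNE(n_components=2, perplexity=30.0)
# Hypothetical signature: also return the final KL divergence of the run
Y, kl = tsne.fit_transform(X, return_kl=True)
```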