Best way to compute mean training loss (or some metric) based on weights at END of epoch? #18797
Unanswered · johnmarktaylor91 asked this question in code help: CV · 0 replies
Suppose I want to compute the mean training loss, or the mean of some metric, using the model's weights at the END of an epoch (as opposed to logging the loss each batch, during which the weights are changing). The goal is a better "apples to apples" comparison of training and validation loss, i.e., one computed with the same set of weights. Is there a "preferred" way to do this in PyTorch Lightning? I understand it requires an extra pass over the training set after each training epoch; I'm just curious whether there's a natural way to set this up, e.g., along the lines of the sketch below.
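For illustration only (this thread has no accepted answer), here is a minimal sketch of one possible approach: pass the training set as a second validation dataloader, so `validation_step` re-evaluates it with the frozen end-of-epoch weights. All names here (`LitModel`, `train_eval_loader`, the logged metric keys, the toy data) are placeholders, not from Lightning's docs or this discussion.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(10, 1)
        self.loss_fn = nn.MSELoss()

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self.net(x), y)
        # Per-batch loss: the weights change between batches, so averaging
        # this over the epoch mixes many different weight states.
        self.log("train_loss_step", loss)
        return loss

    def validation_step(self, batch, batch_idx, dataloader_idx=0):
        x, y = batch
        loss = self.loss_fn(self.net(x), y)
        # Dataloader 0 is the real validation set; dataloader 1 is the
        # training set, re-evaluated with the fixed end-of-epoch weights.
        name = "val_loss" if dataloader_idx == 0 else "train_loss_epoch_end"
        self.log(name, loss, add_dataloader_idx=False)  # epoch-averaged by default in validation

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2)


# Toy data, just to make the sketch runnable.
train_ds = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
val_ds = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))

train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=32)
train_eval_loader = DataLoader(train_ds, batch_size=32)  # extra pass over the training set; no shuffling needed

trainer = pl.Trainer(max_epochs=3, logger=False, enable_checkpointing=False)
trainer.fit(LitModel(), train_loader, val_dataloaders=[val_loader, train_eval_loader])
```

This does add a full extra pass over the training data per epoch, which is the cost the question already anticipates. An alternative along the same lines would be a callback that loops over a (non-shuffled) training dataloader in `on_train_epoch_end` with the model in eval mode and logs the resulting mean.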