About total token #8
It's still a requirement for the precision of token generation: we need to ensure the probability of other tokens stays higher than the `<eos>` token until the output reaches the length of the baseline tokens. But the baseline token length isn't equal to the given max_output_len, since the related hyperparameter can't be changed. So can we get the length of the baseline tokens after the baseline generation finishes?
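One common way to keep generation running until a target length is to mask the `<eos>` logit until that length is reached, so no other token needs to outscore it naturally. A minimal sketch of that idea; the function name, plain-list logits, and token ids here are illustrative, not the benchmark's actual API:

```python
import math

def suppress_eos(logits, generated_len, target_len, eos_id):
    """Return a copy of the logits with <eos> masked to -inf
    while the sequence is still shorter than the target length."""
    out = list(logits)
    if generated_len < target_len:
        out[eos_id] = -math.inf  # <eos> can never win the argmax
    return out

# Toy greedy loop: <eos> (id 2) has the highest raw logit, but it is
# suppressed until 5 tokens have been generated.
logits = [1.0, 2.0, 9.0]
seq = []
while True:
    step = suppress_eos(logits, len(seq), target_len=5, eos_id=2)
    tok = max(range(len(step)), key=step.__getitem__)
    if tok == 2:
        break
    seq.append(tok)
# seq now holds exactly 5 tokens
```

This forces the generated length up to the target, but it only helps if the per-example baseline length (rather than a single global max_output_len) is actually available.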
Yes, I have encountered the same issue. When the test data is limited, the discrepancy isn't apparent. However, when testing on the full dataset, there always seems to be some deviation from the baseline. Do you think it would be feasible to accept results within a certain tolerance for the final evaluation? Or, as @Thewillman suggested, use the length obtained from the baseline as max_output_len?
It seems that many frameworks can't meet this condition. As the dataset grows larger, the deviation in output length expands considerably.
Can we get the number of tokens for each example in the baseline, so that we can force our own framework to emit the same number of tokens?
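If the baseline outputs were released, the per-example counts would just be one tokenizer pass over them, and the resulting lengths could serve as a per-example max_output_len. A sketch of that bookkeeping; the whitespace tokenizer is a stand-in for whatever tokenizer the benchmark actually uses:

```python
def baseline_token_counts(baseline_outputs, tokenize):
    """Map each baseline output string to its token count."""
    return [len(tokenize(text)) for text in baseline_outputs]

# Stand-in tokenizer for illustration; the benchmark's real tokenizer
# (e.g. the model's own) would be substituted here.
whitespace_tokenize = str.split

baselines = ["the quick brown fox", "hello world"]
per_example_max = baseline_token_counts(baselines, whitespace_tokenize)
# per_example_max could then be fed to generation as a per-example length target
```

Whether this is acceptable depends on the maintainers exposing the baseline outputs (or at least their lengths), which is exactly what this thread is asking for.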