QA Chinese model result does not match python version #108
Tonghua-Li changed the title from "QA model result does not match python version" to "QA Chinese model result does not match python version" on Jan 2, 2022.
Thanks @Tonghua-Li for experimenting with spaGO on Chinese models! Let me take a look. In the meantime, did you already check whether the output of the tokenization matches the one in Python/Rust?
@matteo-grella, I am new to NLP and TensorFlow. Here is the comparison between spaGO and Python; the lengths are different. Anything else I should check?

spago

python
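Since the token counts differ between the two runs, one plausible culprit is how each side splits Chinese text. BERT-style tokenizers treat every CJK character as its own token before WordPiece runs, so Chinese inputs produce far more tokens than whitespace splitting would. The sketch below is illustrative only (it is not the spaGO or Hugging Face implementation) and shows the per-character behavior worth comparing first:

```python
# Minimal sketch of BERT-style "basic" tokenization for Chinese text:
# every CJK character is padded with spaces so it becomes its own token,
# while non-CJK text is split on whitespace as usual. If spaGO and the
# Python tokenizer disagree on lengths, compare this step first.

def is_cjk(ch: str) -> bool:
    """True for characters in the main CJK Unified Ideographs block."""
    return 0x4E00 <= ord(ch) <= 0x9FFF

def basic_chinese_tokenize(text: str) -> list:
    """Split CJK characters individually; whitespace-split the rest."""
    spaced = []
    for ch in text:
        spaced.append(f" {ch} " if is_cjk(ch) else ch)
    return "".join(spaced).split()

print(basic_chinese_tokenize("我叫克拉拉"))
# -> ['我', '叫', '克', '拉', '拉']
print(basic_chinese_tokenize("my name is Clara"))
# -> ['my', 'name', 'is', 'Clara']
```

If the two implementations already diverge at this stage, the downstream QA spans cannot match regardless of the model weights.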
Using this Chinese model.
The model runs correctly in Python locally, but the output from spaGO does not match.
This looks similar to #101, but I cannot find the equivalent bool parameter for QA. How do I turn off the behavior where the output is forced to be a distribution (sum must be 1), whereas with Python the output is free?

Translated QA:
Context: My name is Clara, I live in Berkeley
Q: what is my name?
A: Clara
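The distribution constraint mentioned above can be illustrated with a small numeric sketch. Forcing the answer scores through a softmax makes them sum to 1, while raw logits (which a Python pipeline may expose directly) are unbounded; the argmax is the same, but the magnitudes and any thresholding differ. The numbers here are hypothetical, not taken from either implementation:

```python
# Why normalizing scores into a distribution changes what you see:
# softmax preserves the ranking of the logits but rescales them so
# they sum to 1, so values compared against a confidence threshold
# behave very differently from raw logits.
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

raw = [2.0, 0.5, -1.0]   # hypothetical raw span logits
probs = softmax(raw)     # normalized scores: sum to 1

print(probs)
print(sum(probs))        # 1.0 (up to float rounding)
# The best-scoring span is the same either way, but a threshold tuned
# on raw logits will not transfer to the normalized scores.
```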
The output is supposed to be 克拉拉 but got
./bert-server server --repo=~/.spago --model=luhua/chinese_pretrain_mrc_roberta_wwm_ext_large --tls-disable