Performance for the Rationale predictor demo #1
For both these cases, the ideal way would be to train the rationale predictor model on such datapoints. That said, the first statement is very ambiguous and the target is not specified; it might be said as friendly banter.
Dear Hate-Alert/Tutorial-Resources,
Thanks for your reply. May I ask what data format the training process expects, and whether there are any other considerations for training?
Thank you.
Best regards,
Chai
On Fri, 16 Feb 2024 at 14:52, Punyajoy Saha ***@***.***> wrote:
For both these cases, the ideal way would be to train the rationale predictor model on such datapoints. The first statement is very ambiguous and the target is not specified; it might be said as friendly banter. In the second one, the target does not represent any vulnerable group, hence the model might misclassify it.
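Regarding the data-format question above, here is a hypothetical sketch of what a rationale-annotated training datapoint might look like. The field names (`post_tokens`, `label`, `rationale`) are assumptions loosely modelled on HateXplain-style annotation, not the repository's confirmed schema; the exact format should be confirmed with the maintainers.

```python
# Hypothetical rationale-annotated datapoint. The schema below is an
# assumption (HateXplain-style token-level rationales), not the
# repository's documented training format.
datapoint = {
    "post_tokens": ["i", "will", "kill", "you"],  # tokenized post
    "label": "abusive",                            # gold class label
    "rationale": [0, 0, 1, 1],                     # 1 marks tokens that justify the label
}

def validate(dp):
    """Basic sanity checks before adding a datapoint to a training set."""
    assert len(dp["rationale"]) == len(dp["post_tokens"]), "mask/token length mismatch"
    assert set(dp["rationale"]) <= {0, 1}, "rationale mask must be binary"
    return True

print(validate(datapoint))  # → True
```

The key constraint in any such format is that the rationale mask is token-aligned with the post, so the predictor can be supervised on which spans support the label.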
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Great work on the hate speech detection. However, the performance of the model in the Rationale Predictor demo seems less accurate than desired. I tried a few sentences, and the results were labeled as the normal class instead of the abusive class. I suspect that the model is biased toward the normal class. Could you please suggest ways to improve the model's performance?
Here are some examples of sentences and their results:
I will kill you. {'Normal': 0.51631415, 'Abusive': 0.48368585}
I hate the rich people. {'Normal': 0.8278808, 'Abusive': 0.17211922}
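Borderline outputs like the first example can also be handled at inference time with a decision threshold rather than a plain argmax. A minimal sketch (the scores are copied from the examples above; the 0.4 threshold is an illustrative assumption, not a recommended value, and threshold tuning only mitigates rather than fixes the underlying bias):

```python
def classify(scores, abusive_threshold=0.5):
    """Label a post 'Abusive' if its abusive probability clears the threshold.

    Lowering the threshold trades precision for recall, which can flip
    near-tie cases like the first example in this issue.
    """
    return "Abusive" if scores["Abusive"] >= abusive_threshold else "Normal"

# Scores reported in this issue:
s1 = {"Normal": 0.51631415, "Abusive": 0.48368585}  # "I will kill you."
s2 = {"Normal": 0.8278808, "Abusive": 0.17211922}   # "I hate the rich people."

print(classify(s1))                           # → Normal  (argmax-equivalent)
print(classify(s1, abusive_threshold=0.4))    # → Abusive (near-tie flips)
print(classify(s2, abusive_threshold=0.4))    # → Normal  (still confident)
```

Note that thresholding cannot rescue the second example, whose abusive score is far from the boundary; as the maintainer's reply suggests, cases whose target falls outside the annotated vulnerable groups likely need additional training data.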
Hope to hear from you soon.
Thank you.