You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
I see the code and find that in the HH-RLHF dataset you use the red-team data for test. I want to know how the test scores are calculated? I didnt find ground-truth in the red-team dataset. How are the scores for harmless and helpful calculated in the paper?
The text was updated successfully, but these errors were encountered:
We use GPT-4's evaluation as the ground-truth. We also show that GPT-4 and human share similar evaluation results in the paper.
I got an output file named res_0.json which contains outputs of LLM. Do I need to put the outputs into GPT4 API to get the evaluation as the groundtruth? It means that there isn't an evaluation process in the code now, right?
Thank you for your code and effort and hope for your reply!
I see the code and find that in the HH-RLHF dataset you use the red-team data for test. I want to know how the test scores are calculated? I didnt find ground-truth in the red-team dataset. How are the scores for harmless and helpful calculated in the paper?
The text was updated successfully, but these errors were encountered: