Eval details 📑
Eval name
gpt4facts
Eval description
Evaluate the model's ability to recall and provide accurate facts about GPT-4.
What makes this a useful eval?
2309240522114RI6KBA7_gpt-3.5-turbo_gpt4facts.jsonl
[2023-09-23 23:24:29,221] [oaieval.py:245] Final report:
[2023-09-23 23:24:29,222] [oaieval.py:247] counts/B: 48
[2023-09-23 23:24:29,222] [oaieval.py:247] counts/D: 30
[2023-09-23 23:24:29,222] [oaieval.py:247] counts/A: 24
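To put the counts above in perspective (102 graded samples in total), the fully-consistent share can be computed directly from the log:

```python
# Breakdown of the final report above: counts copied from the oaieval log.
counts = {"A": 24, "B": 48, "D": 30}
total = sum(counts.values())            # 102 graded samples
consistent = counts["A"] + counts["B"]  # "A" and "B" are fully consistent grades
print(total, round(consistent / total, 3))  # -> 102 0.706
```

So roughly 70% of completions were graded as fully consistent with the expert answer, while about 30% disagreed with it.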
The above results were produced when grading with `fact.yaml`: a factual consistency eval which, given a completion *a* and a reference answer *b*, returns:

- "A" if *a* ⊆ *b*, i.e., the submitted answer is a subset of the expert answer and is fully consistent with it.
- "B" if *a* ⊇ *b*, i.e., the submitted answer is a superset of the expert answer and is fully consistent with it.
- "C" if *a* = *b*, i.e., the submitted answer contains all the same details as the expert answer.
- "D" if *a* ≠ *b*, i.e., there is a disagreement between the submitted answer and the expert answer.
- "E" if *a* ≈ *b*, i.e., the answers differ, but these differences don't matter from the perspective of factuality.

There was also interest in this eval in its last review: I was originally using the basic `Match` eval, and switching to a model-graded eval was requested.
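As a toy illustration of the rubric above (the actual grading is done by an LLM grader, not by set operations), the A–D labels correspond to set relations between the facts in the completion *a* and the reference answer *b*:

```python
# Illustrative sketch only: models each answer as a set of atomic facts.
# A real modelgraded eval asks an LLM to judge consistency; this just shows
# what the A-D labels mean. "E" (immaterial differences) needs judgment a
# pure set comparison cannot express, so it is folded into "D" here.
def fact_label(a: set, b: set) -> str:
    if a == b:
        return "C"  # same details as the expert answer
    if a < b:
        return "A"  # proper subset, fully consistent
    if a > b:
        return "B"  # proper superset, fully consistent
    return "D"      # disagreement (or differences needing an "E" judgment)

print(fact_label({"born 1984"}, {"born 1984", "physicist"}))  # -> A
```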
Criteria for a good eval ✅
`Basic` evals or the `Fact` Model-graded eval, or an exhaustive rubric for evaluating answers for the `Criteria` Model-graded eval.

Eval structure 🏗️
`evals/registry/data/{name}`
`evals/registry/evals/{name}.yaml`
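For reference, a registry entry for a model-graded eval of this shape might look like the following sketch (file contents are illustrative assumptions, not the actual merged file; the `modelgraded_spec: fact` value selects the `fact.yaml` rubric described above):

```yaml
# evals/registry/evals/gpt4facts.yaml -- illustrative sketch
gpt4facts:
  id: gpt4facts.dev.v0
  metrics: [accuracy]
gpt4facts.dev.v0:
  class: evals.elsuite.modelgraded.classify:ModelBasedClassify
  args:
    samples_jsonl: gpt4facts/samples.jsonl
    eval_type: cot_classify
    modelgraded_spec: fact
```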
Final checklist 👀
Submission agreement
By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).
Email address validation
If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request.
Limited availability acknowledgment
We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.
Submit eval
I have run `pip install pre-commit; pre-commit install` and have verified that `mypy`, `black`, `isort`, and `autoflake` are running when I commit and push.

Failure to fill out all required fields will result in the PR being closed.
Eval JSON data
Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:
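Before pasting samples here, each JSONL line can be sanity-checked with a few lines of Python (the `input`/`ideal` field names follow the usual modelgraded sample schema; the sample content below is a made-up placeholder, not taken from the submitted data):

```python
import json

# One illustrative sample line in the shape modelgraded evals typically expect:
# a chat-formatted "input" prompt and an "ideal" expert answer.
samples = [
    '{"input": [{"role": "user", "content": "When was GPT-4 released?"}], '
    '"ideal": "March 14, 2023"}',
]

for line in samples:
    sample = json.loads(line)          # each line must be standalone JSON
    assert "input" in sample and "ideal" in sample
    print(sample["ideal"])             # -> March 14, 2023
```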