While I was reading some of OpenAI's code and documents, I noticed little typos/bugs. In my view, only the first of the three listed merits further scrutiny, as the code may not align with the author's original intention. This is not an actual pull request.
1. `eval_sample(self, sample: Any, *_)`

Commit: Code flow nit in translate eval

Case when `expected` is `None`

In the case when `expected` is `None`, the code (line 45) will leave `expected` holding `[None]` (a list with the single value `None`). The code later checks `expected is not None` (line 57), which will therefore always be true; I don't think that was the intended behavior.

Case when `score` is `None`

When the score is evaluated (line 58): if `score` is `None`, the line `match = score > 30` (line 61) raises `TypeError: '>' not supported between instances of 'NoneType' and 'int'`, so the following check `score is not None` (line 63) will never be reached.
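To make the first point concrete, here is a minimal sketch of what I take the intended control flow to be. The function name, the `30` threshold, and the sample structure are assumptions for illustration, not the eval's actual code:

```python
from typing import Any, Optional

def eval_sample(sample: dict, score: Optional[float]) -> Optional[bool]:
    """Hypothetical sketch of the intended control flow, not the actual eval code."""
    expected = sample.get("ideal")
    # Keep None as None instead of wrapping it into [None],
    # so a later `expected is not None` check behaves as intended.
    if expected is not None and not isinstance(expected, list):
        expected = [expected]

    # Guard against a None score *before* comparing, so we never hit
    # TypeError: '>' not supported between instances of 'NoneType' and 'int'.
    if score is None:
        return None
    return score > 30
```

With this ordering, the `None` guard runs before the comparison, so the `TypeError` described above cannot occur.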
2. `find_top_k_closest_embeddings`

Commit: Factor the normalization in cosine similarity fn
Really a nit here; this is not a performance issue in practical terms. The normalization of the two input vectors can be combined into a single division of the dot product by the product of their norms.
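For illustration, a sketch of the combined normalization (the function and argument names here are assumptions, not the repository's actual signatures): rather than normalizing the query and each embedding separately before taking dot products, divide the raw dot products once by the product of the norms.

```python
import numpy as np

def cosine_similarities(query: np.ndarray, embeddings: np.ndarray) -> np.ndarray:
    """Cosine similarity between a query vector and each row of an embedding matrix."""
    dots = embeddings @ query                                   # shape: (n,)
    # One combined division instead of two separate normalizations.
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query)
    return dots / norms

def find_top_k_closest(query, embeddings, k=3):
    """Indices of the k rows most similar to the query."""
    sims = cosine_similarities(np.asarray(query), np.asarray(embeddings))
    return np.argsort(-sims)[:k]
```

The result is identical to normalizing each vector first, since `(a / ||a||) · (b / ||b||) == (a · b) / (||a|| ||b||)`.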
3. Typo in the post Introducing Triton: Open-source GPU programming for neural networks
The section "Fused softmax with the Torch JIT" provides a code sample.
I believe `numerator = torch.exp(x)` should be `numerator = torch.exp(z)` (i.e. `z`, not `x`).
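With that one-character fix, a numerically stable row-wise softmax would read roughly like this; this is a sketch, not the blog's verbatim code:

```python
import torch

@torch.jit.script
def naive_softmax(x: torch.Tensor) -> torch.Tensor:
    """Row-wise softmax; subtracting the row max keeps exp() from overflowing."""
    x_max = x.max(dim=1)[0]
    z = x - x_max[:, None]
    numerator = torch.exp(z)  # exp of the shifted values, not of x itself
    denominator = numerator.sum(dim=1)
    return numerator / denominator[:, None]
```

Using `exp(x)` instead of `exp(z)` defeats the purpose of subtracting the row max, which exists precisely so the exponentials stay in a safe range.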
Final checklist 👀
Submission agreement
By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).
Email address validation
If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request.
Limited availability acknowledgment
We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.
Submit eval
I have run `pip install pre-commit; pre-commit install` and have verified that `black`, `isort`, and `autoflake` are running when I commit and push.

Failure to fill out all required fields will result in the PR being closed.