FEAT: Auto-Evaluation of batch of tests #31

Open
MiNeves00 opened this issue Oct 25, 2023 · 1 comment
Labels
enhancement New feature or request

Comments

MiNeves00 (Contributor) commented Oct 25, 2023

Feature Request

Tests run in a batch, but they are not evaluated against the ground-truth (GT) answer.

The evaluation could be done using similarity metrics, an LLM, or even plain mathematical checks.
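A minimal sketch of what similarity-based auto-evaluation of a batch could look like is shown below. It is not tied to any existing API in this repo; the `evaluate_batch` helper and the field names (`prompt`, `prediction`, `ground_truth`) are hypothetical, and the character-level similarity could be swapped for an embedding metric or an LLM-based judge.

```python
from difflib import SequenceMatcher
from statistics import mean

def similarity(prediction: str, ground_truth: str) -> float:
    # Character-level similarity in [0, 1] via difflib's SequenceMatcher.
    return SequenceMatcher(None, prediction.strip().lower(),
                           ground_truth.strip().lower()).ratio()

def evaluate_batch(results: list[dict], threshold: float = 0.8) -> dict:
    # `results` items are assumed (hypothetically) to look like:
    #   {"prompt": ..., "prediction": ..., "ground_truth": ...}
    scores = [similarity(r["prediction"], r["ground_truth"]) for r in results]
    return {
        "per_test": scores,
        "mean_similarity": mean(scores),
        "pass_rate": sum(s >= threshold for s in scores) / len(scores),
    }

# Toy usage
batch = [
    {"prompt": "2+2?", "prediction": "4", "ground_truth": "4"},
    {"prompt": "Capital of France?", "prediction": "Paris, France", "ground_truth": "Paris"},
]
print(evaluate_batch(batch))
```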

Motivation

This would make it quicker to test models and prompts, with the results quantified immediately.

Your contribution

Discussion

MiNeves00 added the enhancement label on Oct 25, 2023
MiNeves00 self-assigned and then unassigned this issue on Oct 25, 2023
drkarthi commented Mar 4, 2024

Hi @MiNeves00, I would like to keep track of the precision and recall associated with different prompts on a test dataset. Is this issue meant to solve that problem, and what is its current status?
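For context, "tracking precision and recall per prompt" could be as simple as the sketch below, assuming each test case yields a boolean prediction and a boolean label; the `per_prompt_precision_recall` helper and its input shape are hypothetical, not an existing API.

```python
from collections import defaultdict

def per_prompt_precision_recall(rows: list[dict]) -> dict:
    # rows: [{"prompt_id": str, "prediction": bool, "label": bool}, ...] (assumed shape)
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for r in rows:
        c = counts[r["prompt_id"]]
        if r["prediction"] and r["label"]:
            c["tp"] += 1
        elif r["prediction"] and not r["label"]:
            c["fp"] += 1
        elif not r["prediction"] and r["label"]:
            c["fn"] += 1
    return {
        pid: {
            "precision": c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else 0.0,
            "recall": c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else 0.0,
        }
        for pid, c in counts.items()
    }
```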
