Issues: explodinggradients/ragas
#986: is_async missing in context_relevancy in ragas 0.1.8
Labels: bug (Something isn't working), module-metrics (this is part of metrics module)
Opened May 23, 2024 by abetatos

#985: Can this part of the code be applied to Chinese scenarios?
Labels: question (Further information is requested)
Opened May 22, 2024 by w666x

#983: Contradiction between the evaluate() is_async parameter docstring and the code
Labels: documentation (Improvements or additions to documentation), question
Opened May 22, 2024 by dschwalm

[R-259] Which is the best LLM for evaluation?
Labels: linear (Created by Linear-GitHub Sync), question

#978: [R-256] Make a better example dataset for getting started
Labels: Improvement (Created by Linear-GitHub Sync), module-testsetgen (Module testset generation)
Opened May 21, 2024 by jjmachan

#977: Test set generation with Together APIs and Hugging Face embeddings
Labels: bug
Opened May 20, 2024 by Eknathabhiram

#971: Grammar and punctuation improvements in critique prompts
Labels: bug, module-metrics
Opened May 20, 2024 by ruankie

#968: Un-deprecate multiple ground truth answers?
Labels: enhancement (New feature or request)
Opened May 17, 2024 by athewsey

#967: Is it possible to add an argument to the evaluate() function to configure the group name?
Labels: enhancement
Opened May 17, 2024 by zzzmc

#966: Testset generation: ValueError: invalid literal for int() with base 10
Labels: bug
Opened May 17, 2024 by choshiho

#965: Answer Correctness giving wrong results for batches and single records
Labels: bug
Opened May 17, 2024 by aravindpai

#964: Adapted output keys set(output.keys())={'深度', '相关性', '清晰度', '结构'} do not match the original output keys: output_keys[i]={'structure', 'clarity', 'depth', 'relevance'}
Labels: bug
Opened May 17, 2024 by qism

#963: TestsetGenerator -> RuntimeError: ... got Future <..> attached to a different loop
Labels: bug
Opened May 16, 2024 by abetatos

#962: embedding nodes: 0%| Segmentation fault (core dumped)
Labels: bug
Opened May 16, 2024 by WGS-note

#960: AttributeError: 'PhiForCausalLM' object has no attribute 'generate_prompt'
Labels: bug
Opened May 16, 2024 by TheDominus

#957: Random RuntimeError: Tool context error detected. This can occur due to parallelization
Labels: bug
Opened May 15, 2024 by franck-cussac

#956: Issue with metric evaluation when an exception is raised
Labels: bug
Opened May 15, 2024 by mukuls-zeta

#955: [R-254] Issue in evaluation using a local LLM
Labels: linear, question
Opened May 15, 2024 by sheetalkamthe55

#954: Ragas llama_index integration as shown doesn't work for custom LLMs
Labels: question
Opened May 14, 2024 by pliablepixels

#945: Failed to parse output. Returning None. (SimpleEvolution, TestsetGenerator)
Labels: bug
Opened May 10, 2024 by JPonsa

#944: evaluate() function gets unexpected arguments
Labels: question
Opened May 9, 2024 by nelagamy

#940: httpcore.ProxyError: 407 Proxy Authentication Required
Labels: question
Opened May 8, 2024 by Chihuahua12345

#938: RAGAS compatibility with Mistral models
Labels: bug
Opened May 7, 2024 by 0Falli0

#928: Question about computing Context Relevancy
Labels: question
Opened May 2, 2024 by ShuangLI59