[ ] I checked the documentation and related resources and couldn't find an answer to my question.
Your Question
“WARNING:ragas.llms.output_parser:Failed to parse output. Returning None.”
I traced the requests and responses using LangSmith. For the given input prompt, I believe this is a context-length issue, because I get a blank response. I tried different LLMs, but the error remains the same.
Code Examples
Hosted a Llama 2 model with llama.cpp (via the llama-cpp-python server). Below is the command I used:

```shell
python3 -m llama_cpp.server --model /tmp/llama_index/models/llama-13b.Q5_K_M.gguf --port 8009 --host 129.69.217.24 --chat_format llama-2
```
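One way to test the context-length hypothesis is to compare the prompt's approximate token count against the server's context window. This is a rough sketch only: the ~4-characters-per-token heuristic and the 2048-token default window are assumptions (llama-cpp-python exposes the window via its `n_ctx` setting; check your installed version's `--help` for the exact flag and default).

```python
# Rough sanity check: does the prompt plausibly fit in the context window?
# CONTEXT_WINDOW is a placeholder; match it to the server's n_ctx value.
CONTEXT_WINDOW = 2048  # assumed default window for llama-cpp-python


def approx_token_count(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English."""
    return len(text) // 4


def fits_in_context(prompt: str, max_new_tokens: int = 256) -> bool:
    """True if the prompt plus the generation budget likely fits."""
    return approx_token_count(prompt) + max_new_tokens <= CONTEXT_WINDOW


print(fits_in_context("What is the capital of France?"))  # short prompt: True
print(fits_in_context("word " * 4000))  # ~5000 tokens: False
```

If a prompt overflows, restarting the server with a larger window (e.g. an `--n_ctx 4096` option, assuming your llama-cpp-python version supports it and the model was trained for that length) is worth trying before switching models.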
Below is a sample test set I am using:
Ragas_dataset.csv
You can ignore the dataset part; I tried the same with

```python
fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")
```

but the same issue persists.
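For reference, a malformed evaluation dataset can also make ragas' parser return None, so it is worth verifying the row shape before blaming the model. The column names below (`question`, `answer`, `contexts`, `ground_truths`) are my assumption from the ragas documentation of that era, and the content is dummy data:

```python
# Minimal sketch of the row shape ragas' evaluate() is assumed to expect.
# Each top-level value is a column; contexts/ground_truths hold a list per row.
sample = {
    "question": ["How to deposit a cheque issued to an associate?"],
    "answer": ["Have the payee endorse the cheque over to you."],
    "contexts": [[
        "Just have the associate sign the back and deposit it.",
    ]],
    "ground_truths": [[
        "The associate must endorse the cheque before deposit.",
    ]],
}

# Basic structural checks before handing the data to ragas:
assert set(sample) == {"question", "answer", "contexts", "ground_truths"}
assert all(isinstance(ctx, list) for ctx in sample["contexts"])
```

A dict like this can be turned into the object ragas consumes with `datasets.Dataset.from_dict(sample)`. Since a blank completion is what typically trips the `Failed to parse output. Returning None.` warning, inspecting the raw response in LangSmith, as you did, is the right first step either way.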
Code:
Additional context
Please let me know if I should provide more information.
R-254