
Supercharge Your LLM Application Evaluations πŸš€

Objective metrics, intelligent test generation, and data-driven insights for LLM apps

Ragas is your toolkit for evaluating and optimizing Large Language Model (LLM) applications. Say goodbye to time-consuming, subjective assessments and hello to data-driven, efficient evaluation workflows. Don't have a test dataset ready? Ragas can also generate production-aligned test sets for you.

Key Features

  • 🎯 Objective Metrics: Evaluate your LLM applications with precision using both LLM-based and traditional metrics.
  • πŸ§ͺ Test Data Generation: Automatically create comprehensive test datasets covering a wide range of scenarios.
  • πŸ”— Seamless Integrations: Works flawlessly with popular LLM frameworks like LangChain and major observability tools.
  • πŸ“Š Build feedback loops: Leverage production data to continually improve your LLM applications.

πŸ›‘οΈ Installation

PyPI:

pip install ragas

Alternatively, from source:

pip install git+https://github.com/explodinggradients/ragas

πŸ”₯ Quickstart

Evaluate your LLM App

The core evaluation takes just a few lines:

from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import AspectCritic
from langchain_openai import ChatOpenAI

test_data = {
    "user_input": "summarise given text\nThe company reported an 8% rise in Q3 2024, driven by strong performance in the Asian market. Sales in this region have significantly contributed to the overall growth. Analysts attribute this success to strategic marketing and product localization. The positive trend in the Asian market is expected to continue into the next quarter.",
    "response": "The company experienced an 8% increase in Q3 2024, largely due to effective marketing strategies and product adaptation, with expectations of continued growth in the coming quarter.",
}
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))
metric = AspectCritic(name="summary_accuracy", llm=evaluator_llm, definition="Verify if the summary is accurate.")
await metric.single_turn_ascore(SingleTurnSample(**test_data))

Find the complete Quickstart Guide
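Note that `single_turn_ascore` is a coroutine: the bare `await` above works in a notebook, but in a plain script you need an event loop. A minimal sketch of the pattern, using a stand-in coroutine in place of the real metric call (which requires a configured evaluator LLM and API key):

```python
import asyncio

async def score_sample() -> float:
    # Stand-in for `metric.single_turn_ascore(SingleTurnSample(**test_data))`;
    # the real call makes an LLM request and returns the metric score.
    return 1.0  # placeholder score, not a real metric result

# In a script (rather than a notebook), drive the coroutine with asyncio.run.
score = asyncio.run(score_sample())
```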

Analyze your Evaluation

Sign up for app.ragas.io to review, share, and analyze your evaluations.

See how to use it

πŸ«‚ Community

If you want to get more involved with Ragas, check out our Discord server. It's a fun community where we geek out about LLMs, retrieval, production issues, and more.

Contributors

+----------------------------------------------------------------------------+
|     +----------------------------------------------------------------+     |
|     | Developers: Those who build with `ragas`.                      |     |
|     | (You have `import ragas` somewhere in your project)            |     |
|     |     +----------------------------------------------------+     |     |
|     |     | Contributors: Those who make `ragas` better.       |     |     |
|     |     | (You make PRs to this repo)                        |     |     |
|     |     +----------------------------------------------------+     |     |
|     +----------------------------------------------------------------+     |
+----------------------------------------------------------------------------+

We welcome contributions from the community! Whether it's bug fixes, feature additions, or documentation improvements, your input is valuable.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ” Open Analytics

At Ragas, we believe in transparency. We collect minimal, anonymized usage data to improve our product and guide our development efforts.

βœ… No personal or company-identifying information

βœ… Open-source data collection code

βœ… Publicly available aggregated data

To opt out, set the RAGAS_DO_NOT_TRACK environment variable to true.
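For example, in a POSIX shell:

```shell
# Disable Ragas usage tracking for the current shell session.
export RAGAS_DO_NOT_TRACK=true
```

Add the line to your shell profile (e.g. ~/.bashrc) to make the opt-out persistent.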