FEAT: Adapt to Rate Limit instead of Failure #37
Comments
Hi @MiNeves00, I'm the maintainer of LiteLLM. We let you maximize throughput and throttle requests by load balancing between multiple LLM endpoints. I thought it might be helpful for your use case; I'd love feedback if not. Here's the quick start for the LiteLLM load balancer (works with 100+ LLMs).

Step 1: Create a config.yaml:

```yaml
model_list:
  - model_name: openhermes
    litellm_params:
      model: openhermes
      temperature: 0.6
      max_tokens: 400
      custom_llm_provider: "openai"
      api_base: http://192.168.1.23:8000/v1
  - model_name: openhermes
    litellm_params:
      model: openhermes
      custom_llm_provider: "openai"
      api_base: http://192.168.1.23:8001/v1
  - model_name: openhermes
    litellm_params:
      model: openhermes
      custom_llm_provider: "openai"
      frequency_penalty: 0.6
      api_base: http://192.168.1.23:8010/v1
```

Step 2: Start the LiteLLM proxy:
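(The start command itself wasn't captured in this thread; per the LiteLLM docs the proxy is typically started with `litellm --config /path/to/config.yaml`, where the path points at the file from Step 1.)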
Step 3: Make a request to the LiteLLM proxy:
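(The request example also didn't survive the copy here; a minimal sketch using the OpenAI Python SDK pointed at the proxy, where the proxy address/port and the placeholder API key are assumptions to adjust to wherever the proxy is listening:)

```python
import openai

# Point the OpenAI client at the LiteLLM proxy instead of api.openai.com.
# The base_url/port and dummy api_key below are assumptions for illustration.
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

response = client.chat.completions.create(
    model="openhermes",  # the model_name defined in the config above
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```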
Hey @ishaan-jaff, thanks for the info! Load balancing between different endpoints might end up spinning off into an issue of its own; in that case we will be sure to take a look at LiteLLM, since it seems simple to try out. For now, though, the focus of this issue is a use case where the user wants a specific provider: the user does not want failures due to rate limiting, but also wants to maximize the rate they actually use.
@MiNeves00 our router should allow you to maximize the throughput you get from your rate limits: https://docs.litellm.ai/docs/routing. Happy to make a PR on this.
@ishaan-jaff I appreciate your availability for a PR. However, I just read the documentation again, and my understanding is that throughput is maximized by routing between several models. For a single model and a single provider, the only mechanism I found in LiteLLM that applies is the cooldown feature, and it seems to behave naively: once a deployment hits the allowed number of errors per minute, it is cooled down for a whole minute, even when a much shorter pause might have been enough. Am I interpreting it right?
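For context, a minimal sketch of the cooldown settings being discussed, assuming the `allowed_fails`, `cooldown_time`, and `num_retries` parameters described in the LiteLLM routing docs (the exact parameter names and defaults are assumptions here, not confirmed in this thread):

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            # Single provider, single deployment; reads OPENAI_API_KEY from the env.
            "litellm_params": {"model": "gpt-3.5-turbo"},
        }
    ],
    num_retries=2,     # retries before a call counts as failed
    allowed_fails=3,   # failures tolerated per minute before cooling down
    cooldown_time=60,  # seconds the deployment is skipped once it trips
)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
```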
Feature Request
Providers like OpenAI enforce rate limits (for example, a cap on requests per minute).
This feature would let LLMstudio wait (or keep retrying) when necessary, so that the call does not fail even if the response takes longer.
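As an illustration of the basic behaviour (a generic sketch, not LLMstudio code; it assumes the OpenAI Python SDK, where rate-limit failures raise `openai.RateLimitError`):

```python
import random
import time

import openai

client = openai.OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def chat_with_backoff(messages, model="gpt-3.5-turbo", max_retries=6):
    """Retry on rate-limit errors with exponential backoff instead of failing."""
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except openai.RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after max_retries attempts
            # Sleep with jitter, then double the delay before retrying.
            time.sleep(delay + random.uniform(0, delay))
            delay *= 2
```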
Advanced feature:
If it were aware of the user's exact rate limit (which depends on their OpenAI tier, for example), it could also decide which prompts to send at what time, maximizing use of the rate limit without overstepping it (when requests are made in parallel).
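A rough sketch of the client-side pacing this implies, assuming the application knows its requests-per-minute budget (the 500 RPM figure below is only a hypothetical tier):

```python
import threading
import time
from collections import deque


class RequestPacer:
    """Allow at most `max_requests` calls per `window` seconds (sliding window)."""

    def __init__(self, max_requests: int, window: float = 60.0):
        self.max_requests = max_requests
        self.window = window
        self._sent: deque = deque()   # timestamps of recently sent requests
        self._lock = threading.Lock()

    def acquire(self) -> None:
        """Block until one more request can be sent without exceeding the budget."""
        while True:
            with self._lock:
                now = time.monotonic()
                # Forget requests that have aged out of the window.
                while self._sent and now - self._sent[0] >= self.window:
                    self._sent.popleft()
                if len(self._sent) < self.max_requests:
                    self._sent.append(now)
                    return
                wait = self.window - (now - self._sent[0])
            time.sleep(wait)


# Hypothetical tier: 500 requests per minute.
pacer = RequestPacer(max_requests=500, window=60.0)
# pacer.acquire() would be called before each (possibly parallel) LLM request.
```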
Motivation
Makes LLM calls more robust. The user does not need to worry about their application breaking when it makes too many requests per minute.
Your contribution
Discussion