
FEAT: Adapt to Rate Limit instead of Failure #37

Open
MiNeves00 opened this issue Oct 25, 2023 · 4 comments
Labels
enhancement (New feature or request), priority:low (Low priority. Could take 1+ month to resolve)

Comments

@MiNeves00
Contributor

Feature Request

Providers like OpenAI enforce rate limits (such as a cap on requests per minute).
This feature would allow LLMstudio to wait it out (or keep retrying) when necessary, so that the response does not error even if it takes longer.

Advanced feature:
By being aware of the user's exact rate limit (which depends on their OpenAI tier, for example), it could also decide which prompts to send at what time, maximizing use of the rate limit without exceeding it (in cases where prompts are sent in parallel).
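As a rough illustration of the basic "wait it out" behavior (not LLMstudio code; RateLimitError and call_provider are placeholder names for whatever the provider client actually exposes):

import random
import time

class RateLimitError(Exception):
    """Placeholder for the provider-specific rate-limit error (typically an HTTP 429)."""

def call_provider(prompt: str) -> str:
    """Placeholder for the actual provider call LLMstudio would make."""
    raise NotImplementedError

def call_with_backoff(prompt: str, max_retries: int = 6, base_delay: float = 1.0) -> str:
    # Instead of surfacing the rate-limit error, retry with exponential
    # backoff plus jitter until the provider accepts the request again.
    for attempt in range(max_retries):
        try:
            return call_provider(prompt)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))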

Motivation

Adds robustness to LLM calls. The user does not need to worry about their application breaking when it makes too many requests per minute.

Your contribution

Discussion

@MiNeves00 MiNeves00 added the enhancement New feature or request label Oct 25, 2023
@MiNeves00 MiNeves00 moved this to Backlog in LLMstudio Oct 25, 2023
@MiNeves00 MiNeves00 moved this from Backlog to Priority Backlog in LLMstudio Oct 25, 2023
@MiNeves00 MiNeves00 moved this from Priority Backlog to In Progress in LLMstudio Oct 26, 2023
@ishaan-jaff

Hi @MiNeves00, I'm the maintainer of LiteLLM, and we allow you to maximize throughput + throttle requests by load balancing between multiple LLM endpoints.

I thought it might be helpful for your use case; I'd love feedback if not.

Here's the quick start for using the LiteLLM load balancer (it works with 100+ LLMs).
doc: https://docs.litellm.ai/docs/simple_proxy#model-alias

Step 1: Create a config.yaml

model_list:
- model_name: openhermes
  litellm_params:
      model: openhermes
      temperature: 0.6
      max_tokens: 400
      custom_llm_provider: "openai"
      api_base: http://192.168.1.23:8000/v1
- model_name: openhermes
  litellm_params:
      model: openhermes
      custom_llm_provider: "openai"
      api_base: http://192.168.1.23:8001/v1
- model_name: openhermes
  litellm_params:
      model: openhermes
      custom_llm_provider: "openai"
      frequency_penalty : 0.6
      api_base: http://192.168.1.23:8010/v1

Step 2: Start the litellm proxy:

litellm --config /path/to/config.yaml

Step 3: Make a request to the LiteLLM proxy:

curl --location 'http://0.0.0.0:8000/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
      "model": "openhermes",
      "messages": [
        {
          "role": "user",
          "content": "what llm are you"
        }
      ]
    }
'

@MiNeves00
Contributor Author

Hey @ishaan-jaff, thanks for the info! The case of load balancing between different endpoints might end up spinning into an issue of its own; if so, we will be sure to take a look at and explore LiteLLM, since it seems pretty simple to test out.

For now, though, the focus of this issue is on the use case where the user wants a specific provider. The user does not want failures due to rate limiting, but also wants to maximize the rate used.

@ishaan-jaff

@MiNeves00 our router should allow you to maximize your throughput from your rate limits

https://docs.litellm.ai/docs/routing

happy to make a PR on this
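A minimal sketch of what that looks like with the Router (based on the linked docs; the api_key placeholder and the rpm value below are illustrative, not a tested setup):

from litellm import Router

# A single deployment of one provider/model; rpm declares the request budget
# the router should respect when spacing out calls.
model_list = [
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",
            "api_key": "sk-...",  # placeholder; normally read from the environment
            "rpm": 60,
        },
    }
]

router = Router(model_list=model_list)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what llm are you"}],
)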

@MiNeves00
Contributor Author

@ishaan-jaff I appreciate your availability to make a PR. However, I just read the documentation again, and my understanding is that you maximize throughput by routing between several models.

With just one model and one provider, the only LiteLLM feature I found useful for this scenario is the Cooldown function.

It seems to behave in a naive manner, though: when a model hits the failed-calls-per-minute limit, it is cooled down for a whole minute, even when it might not have needed to cool down for that long. Am I interpreting it right?
From the docs: "Cooldowns - Set the limit for how many calls a model is allowed to fail in a minute, before being cooled down for a minute."
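To illustrate the more adaptive behavior this issue is after, a rough sketch that honors the standard Retry-After header on an HTTP 429 (using the requests library; assuming the header is given in seconds, with exponential backoff as the fallback when it is missing):

import time
import requests

def post_with_adaptive_wait(url, payload, headers, max_retries=5):
    # On HTTP 429, sleep for exactly as long as the provider reports via
    # Retry-After instead of a fixed one-minute cooldown.
    for attempt in range(max_retries):
        resp = requests.post(url, json=payload, headers=headers)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        time.sleep(float(resp.headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError("still rate limited after retries")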

@claudiolemos claudiolemos removed this from LLMstudio Mar 19, 2024
@MiNeves00 MiNeves00 added the priority:low Low priority. Could take 1+ month to resolve label Nov 12, 2024