Pre-call hooks?
#2873
-
Ok, found it at https://litellm.vercel.app/docs/observability/custom_callback#callback-functions
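For reference, the callback-function pattern described on that page looks roughly like this (a minimal sketch based on the linked doc; the exact contents of `kwargs` may vary by version, and the completion call needs provider credentials set in your environment):

```python
import litellm


def custom_callback(kwargs, completion_response, start_time, end_time):
    # kwargs holds the original request args (model, messages, ...),
    # completion_response is the model's reply
    print("model:", kwargs.get("model"))
    print("response:", completion_response)


# Register the callback so it runs on every successful completion
litellm.success_callback = [custom_callback]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi"}],
)
```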
-
In a sense, LiteLLM (the proxy) is a way to create an OpenAI-compatible endpoint. I'd like to deploy my own OpenAI-compatible endpoint on some serverless provider, but in such a way that whenever I prompt my own endpoint, "something happens" and only then is the request routed to the LLM.
For example, I might want to do some prompt pruning, or make my own code-based choice of which model to call.
Or maybe I want to call my own RAG. So basically I want an OAI-compatible endpoint that calls something beyond just the LLM. A simple way to do this might be exposing some pre-call hooks: my endpoint accepts a prompt, but it actually calls my own RAG first and only then calls the LLM with the enriched prompt.
Is this already possible?
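For concreteness, here's a rough sketch of the kind of hook I have in mind, written against LiteLLM's proxy `CustomLogger` pre-call hook interface (hook name and signature assumed from the proxy docs; `my_rag_lookup` is a hypothetical placeholder):

```python
# custom_callbacks.py -- a rough sketch; hook names/signatures may differ across LiteLLM versions
from litellm.integrations.custom_logger import CustomLogger


def my_rag_lookup(query: str) -> str:
    # Hypothetical placeholder for a real retrieval call
    return f"(documents retrieved for: {query})"


class PromptEnricher(CustomLogger):
    async def async_pre_call_hook(self, user_api_key_dict, cache, data, call_type):
        # Runs on the proxy before the request is forwarded to the underlying LLM
        if call_type == "completion" and data.get("messages"):
            user_msg = data["messages"][-1]["content"]
            context = my_rag_lookup(user_msg)
            # Prepend the retrieved context as a system message
            data["messages"].insert(0, {"role": "system", "content": f"Context:\n{context}"})
            # Custom model routing could also happen here, e.g. data["model"] = "..."
        return data


proxy_handler_instance = PromptEnricher()
```

The handler would then be registered in the proxy config, e.g. `litellm_settings: callbacks: custom_callbacks.proxy_handler_instance` (keys assumed from the proxy docs).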