feat: Enable API Logging with Helicone #598
Conversation
API requests will be logged via Helicone when the `HELICONE_API_KEY` variable is set.
interpreter/core/respond.py (Outdated)

```python
if os.getenv("HELICONE_API_KEY"):
    litellm.api_base = "https://oai.hconeai.com/v1"
    litellm.headers = {"Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}"}
```
I like the idea of making it simple to use, but this approach actually breaks the existing `--api_base` parameter functionality and prevents users from also testing out a local model or another API proxy.

I think we should consider making third-party integrations like this opt-in via a CLI parameter or a flag in the user's `config.yaml` file that can be overridden. Maybe we could add some sort of `integrations` array to the `config.yaml`, and then if the user has `helicone` as an entry in that array, we can enable Helicone and try to pick up the API key? It might need to follow something similar to the functionality we use in `interpreter/terminal_interface/validate_llm_settings.py` to make sure we can find the `HELICONE_API_KEY` if Helicone is enabled. A sketch of what that opt-in check could look like is below.

Overall, this is a solid idea to add, and it would be nice to be able to track Open Interpreter's token usage and other datapoints in the Helicone dashboard.
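A minimal sketch of the opt-in check described above, assuming a hypothetical `integrations` key in `config.yaml` and a hypothetical `setup_integrations` helper; this is an illustration of the suggestion, not actual Open Interpreter code:

```python
import os

import litellm


def setup_integrations(config: dict) -> None:
    """Enable opt-in third-party integrations listed in config.yaml."""
    integrations = config.get("integrations", [])

    if "helicone" in integrations:
        api_key = os.getenv("HELICONE_API_KEY")
        if not api_key:
            # Mirror the spirit of validate_llm_settings.py: fail loudly
            # when an enabled integration is missing its credentials.
            raise ValueError(
                "Helicone is enabled in config.yaml, but HELICONE_API_KEY is not set."
            )
        # Route requests through Helicone's OpenAI-compatible proxy
        # (endpoint taken from the diff above).
        litellm.api_base = "https://oai.hconeai.com/v1"
        litellm.headers = {"Helicone-Auth": f"Bearer {api_key}"}
```

Because the key is only picked up when `helicone` appears in the array, the default `--api_base` behavior stays intact for everyone else.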
Thanks for the feedback! I implemented this approach and pushed a new commit!

Right now, I added the validation for integrations in the `setup_llm.py` file, but do you think it would be better to have a separate module for it? I've only added Helicone for now, but I am thinking of adding support for other LLM monitoring tools like LangSmith and llm.report.
@Kabilan108 we expose callbacks through litellm - you can set a custom callback to log data to Helicone if you'd like: https://docs.litellm.ai/docs/observability/custom_callback
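A minimal sketch of what such a custom callback might look like, following the signature shown in the linked litellm docs; the logging body here is illustrative and does not actually ship data to Helicone:

```python
import litellm


def log_request(kwargs, completion_response, start_time, end_time):
    # kwargs holds the original request parameters; completion_response
    # is the model's reply. Replace the print with a real logging call.
    duration = (end_time - start_time).total_seconds()
    print(f"model={kwargs.get('model')} duration={duration:.2f}s")


# Register the callback so litellm invokes it after each successful call.
litellm.success_callback = [log_request]
```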
- Add support for specifying external integrations in the user's config file.
- Currently supports API request logging with Helicone.
```yaml
temperature: 0
integrations: [helicone]
```
This might be good to put in an example in the docs, but we won't want to enable this by default in the core `config.yaml`.
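For illustration, a docs example along those lines might look like the following; this is a hypothetical snippet (the `model` line is a placeholder, and the `integrations` key comes from this PR, not a shipped default):

```yaml
# Example config.yaml: opt in to Helicone request logging (not enabled by default)
model: gpt-4
temperature: 0
integrations: [helicone]
```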
It’s hard to tell, but I don’t think there are any actual changes to this file that are relevant to this PR.
Hey Kabilan! We're now approaching integrations with simple guides like this one. Let me know if you'd like to reopen this PR, or make a new one into our docs showing how to integrate Open Interpreter with Helicone. Thanks!
API requests will be logged via Helicone when the `HELICONE_API_KEY` variable is set.

Describe the changes you have made:

Sets `litellm.api_base` and `litellm.headers` when `HELICONE_API_KEY` is provided.

Reference any relevant issue (Fixes #000)
I have tested the code on the following OS:
AI Language Model (if applicable)