Add groq API #1407
Comments
+1 for this ^
It can be as simple as letting Cursor respect the 'OPENAI_API_BASE_URL' env variable.
But it will make you lose access to OpenAI, and you also won't have codebase-aware coding.
This would be awesome. Has anyone figured out a workaround for now? It would be a game changer, probably 10x my dev speed.
Yes! Just override the OpenAI base URL to Groq's: https://api.groq.com/openai/v1
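To illustrate the base-URL override outside of Cursor: a minimal sketch, assuming Groq's OpenAI-compatible endpoint and a key in the `GROQ_API_KEY` environment variable. The helper name below is illustrative, not part of any SDK.

```python
import os

# Hedged sketch: Groq exposes an OpenAI-compatible endpoint, so any
# OpenAI-style client can be pointed at it by overriding the base URL.
def openai_compatible_config(base_url: str, key_env: str) -> dict:
    """Build keyword arguments for an OpenAI-style client."""
    return {
        "base_url": base_url.rstrip("/"),
        "api_key": os.environ.get(key_env, ""),
    }

cfg = openai_compatible_config("https://api.groq.com/openai/v1/", "GROQ_API_KEY")

# With the official `openai` package (v1+), this would be used as:
#   from openai import OpenAI
#   client = OpenAI(**cfg)
#   client.chat.completions.create(model="llama3-8b-8192", ...)
```

Cursor's UI takes the same two values (base URL and key), which is why swapping them in trades OpenAI access for Groq access.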
@yangcheng Thank you! For some reason the speed seems about the same as the current GPT-4 in Cursor. Is this your experience?
For me the chat is noticeably faster. Command+K is about the same; maybe the bottleneck is somewhere else. Maybe you can try the 8B models to be sure?
It's about the same for small models on Groq :/
Today I made llm-router, enabling the use of
@yangcheng Thanks! But how do we pass our API key? https://console.groq.com/docs/api-keys says:
@spikecodes Set the base URL and API key to Groq's. However, you will lose access to OpenAI models until you remove the base URL and replace your Groq key with your OpenAI key. The llm-router option I mentioned above lets you provide the Groq API key via an environment variable, so you can use OpenAI models and others, like Ollama, simultaneously.
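A simplified sketch of the idea behind a local router like llm-router: pick a backend (base URL plus key env var) from a model-name prefix, so Groq and OpenAI keys can coexist behind one endpoint. The prefixes and backend table here are illustrative assumptions, not llm-router's actual configuration.

```python
import os

# Illustrative routing table: model-name prefix -> (base URL, key env var).
BACKENDS = {
    "groq/": ("https://api.groq.com/openai/v1", "GROQ_API_KEY"),
    "ollama/": ("http://localhost:11434/v1", ""),  # local Ollama, no key needed
}
DEFAULT = ("https://api.openai.com/v1", "OPENAI_API_KEY")

def route(model: str):
    """Return (base_url, api_key, stripped_model) for a prefixed model name."""
    for prefix, (base_url, key_env) in BACKENDS.items():
        if model.startswith(prefix):
            key = os.environ.get(key_env, "") if key_env else ""
            return base_url, key, model[len(prefix):]
    base_url, key_env = DEFAULT
    return base_url, os.environ.get(key_env, ""), model
```

The editor only ever sees the router's single base URL; each request is forwarded to whichever provider the model name selects, with that provider's key read from the environment.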
Thanks @kcolemangt! I didn't realize I could pass the Groq API key the same way as the OpenAI API key. Regarding llm-router, I don't think it would help me much since I don't really need to switch back and forth between models. I just want a good, fast option.
Does anyone know why Cursor is still slow even when using Groq? I would imagine completion should be noticeably faster if the LLM part is done in 1/8th of the time... Is this a limitation of extensions built with VSCodium? Increasing the speed of Cursor seems like the next important milestone for it to become the best AI editor. @truell20 @Sanger2000 Do you know who would be the best person to ask about this?
Saw your tweet, very cool! How can we use it?
Yes please, god I need this
This is the game changer for speed with Llama 3.