This is NOT running ollama, privacy issue #24
I was also a bit misled by the README, which states: "For local processing, we integrated Ollama running *the same model* to ensure privacy in incognito mode" (line 29 in 1b46085).
There is only an implementation for handling text files with Groq; there is currently no implementation that uses Ollama (lines 189 to 195 in 1b46085).
There is a hardcoded Groq API key, and it doesn't work anymore.
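As a side note, the key could simply be read from the environment instead of being hardcoded. A minimal sketch, assuming the project uses the official groq Python SDK (which looks for GROQ_API_KEY by default):

```python
# Sketch only: load the Groq API key from the environment instead of hardcoding it.
# Assumes the official groq Python SDK; GROQ_API_KEY is its conventional variable name.
import os

from groq import Groq

# Fails loudly if the key is missing, rather than shipping a dead hardcoded key.
client = Groq(api_key=os.environ["GROQ_API_KEY"])
```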
I assume malicious intent.
Not malicious, just lazy lol. This was a hackathon project, and we swapped out Ollama for Groq because it was much faster. It works fine with Ollama, though. We don't really have the time to fix this ourselves, but if anyone raises a PR we'll gladly merge!
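For anyone picking this up, a minimal sketch of what the swap could look like, assuming the official groq and ollama Python clients. The `chat` helper, the `incognito` flag, and the model names are illustrative assumptions, not the repo's actual code:

```python
# Hypothetical sketch: route chat completions to a local Ollama server when the
# user is in incognito mode, and to Groq otherwise.
import os

import ollama                # official Ollama Python client
from groq import Groq        # official Groq Python client

groq_client = Groq(api_key=os.environ["GROQ_API_KEY"])  # no hardcoded key


def chat(messages: list[dict], incognito: bool,
         local_model: str = "llama3",
         cloud_model: str = "llama3-70b-8192") -> str:
    """Return the assistant reply, keeping incognito traffic on localhost."""
    if incognito:
        # Ollama serves http://localhost:11434 by default; nothing leaves the machine.
        response = ollama.chat(model=local_model, messages=messages)
        return response["message"]["content"]
    completion = groq_client.chat.completions.create(model=cloud_model, messages=messages)
    return completion.choices[0].message.content
```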
@areibman and @Bardo-Konrad, please take a look at #44, a PR that is supposed to fix this issue.
Can you at least update the README.md so that it isn't false advertising? |
When running in incognito mode, why do I get a groq.RateLimitError?
groq.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for model llama3-70b-8192 in organization ... on tokens per minute (TPM): Limit 6000, Used 0, Requested ~24996. Please try again in 3m9.96s. Visit https://console.groq.com/docs/rate-limits for more information.', 'type': 'tokens', 'code': 'rate_limit_exceeded'}}
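The error itself shows that incognito mode is still calling Groq's cloud API, and the single request (~24996 tokens) exceeds the free tier's 6000 tokens-per-minute limit, so retrying the same request will keep failing. The proper fix is routing incognito traffic to Ollama as discussed above; if Groq must be used, the input would need to be chunked. A rough sketch, where the 4-chars-per-token estimate and the `process_chunk` callback are illustrative assumptions rather than code from this repo:

```python
# Hypothetical sketch: keep each Groq request under the free-tier
# tokens-per-minute limit by splitting long text into smaller pieces.
import time

CHARS_PER_TOKEN = 4      # rough heuristic for English text
CHUNK_TOKENS = 4000      # leave headroom under the 6000 TPM limit quoted above


def chunk_text(text: str, chunk_tokens: int = CHUNK_TOKENS) -> list[str]:
    """Split text into pieces that should each fit inside one request."""
    size = chunk_tokens * CHARS_PER_TOKEN
    return [text[i:i + size] for i in range(0, len(text), size)]


def process_long_text(text: str, process_chunk) -> list[str]:
    """Call process_chunk on each piece, pausing so the per-minute budget refills."""
    results = []
    for piece in chunk_text(text):
        results.append(process_chunk(piece))
        time.sleep(60)   # crude pacing: one chunk per minute stays under the TPM cap
    return results
```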