This is NOT running ollama, privacy issue #24

Open
Bardo-Konrad opened this issue May 30, 2024 · 5 comments · May be fixed by #44
Comments

@Bardo-Konrad

When running in incognito mode, why do I get a groq.RateLimitError?

groq.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for model llama3-70b-8192 in organization ... on tokens per minute (TPM): Limit 6000, Used 0, Requested ~24996. Please try again in 3m9.96s. Visit https://console.groq.com/docs/rate-limits for more information.', 'type': 'tokens', 'code': 'rate_limit_exceeded'}}

@barakplasma

barakplasma commented Jun 16, 2024

I was also a bit misled by the README, which states, "For local processing, we integrated Ollama running *the same model* to ensure privacy in incognito mode":

We built LlamaFS on a Python backend, leveraging the Llama3 model through Groq for file content summarization and tree structuring. For local processing, we integrated Ollama running the same model to ensure privacy in incognito mode. The frontend is crafted with Electron, providing a sleek, user-friendly interface that allows users to interact with the suggested file structures before finalizing changes.

There is only an implementation for handling text files using Groq; there is currently no implementation that uses Ollama.

llama-fs/src/loader.py

Lines 189 to 195 in 1b46085

client = AsyncGroq(
    api_key=os.environ.get("GROQ_API_KEY"),
)
summaries = await asyncio.gather(
    *[dispatch_summarize_document(doc, client) for doc in documents]
)
return summaries

There is also a hardcoded Groq API key, and it no longer works.
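For anyone picking this up: here is a minimal sketch of what a local, Ollama-backed code path could look like, assuming the `ollama` Python package and a locally pulled llama3 model. The helper name, prompt, and wiring are hypothetical illustrations, not code from this repo.

import asyncio

from ollama import AsyncClient  # pip install ollama; requires a running local Ollama server


async def summarize_document_locally(doc: str, client: AsyncClient) -> str:
    # Hypothetical helper: send the file contents to a locally running
    # llama3 model instead of the Groq cloud API.
    response = await client.chat(
        model="llama3",
        messages=[
            {"role": "system", "content": "Summarize the following file contents in a few sentences."},
            {"role": "user", "content": doc},
        ],
    )
    return response["message"]["content"]


async def summarize_documents_locally(documents: list[str]) -> list[str]:
    # Mirrors the Groq code path quoted above, but keeps all data on the local machine.
    client = AsyncClient()  # defaults to http://localhost:11434
    summaries = await asyncio.gather(
        *[summarize_document_locally(doc, client) for doc in documents]
    )
    return summaries

Nothing here leaves the machine except requests to the local Ollama server, which is the behavior the README promises for incognito mode.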

@Bardo-Konrad
Author

I assume malicious intent

@areibman
Collaborator

> I assume malicious intent

Not malicious-- just lazy lol. This was a hackathon project, and we swapped out Ollama for Groq because it was much faster. Works fine with Ollama though.

We don't really have the time to fix this ourselves, but if anyone raises a PR we'll gladly merge!

@barakplasma barakplasma linked a pull request Jun 17, 2024 that will close this issue
@barakplasma

@areibman and @Bardo-Konrad please take a look at #44, a PR that is supposed to fix this issue.
But please test it and review it carefully; I didn't test it thoroughly.

@barakplasma

Can you at least update the README.md so that it isn't false advertising?
