
Adding the 'Computer' destroyed Open Interpreter, which was the best product I used #1255

Open
6rz6 opened this issue May 4, 2024 · 9 comments

Comments


6rz6 commented May 4, 2024

Describe the bug

  1. The default system message is irrationally long; no model except GPT-4 Preview can even load it. Sending this ~10,000-token message on every request costs about $1 per minute of use (a hedged workaround sketch follows this list).
  2. After a complete fresh install, the very first thing you run, --profiles, throws a dependency error related to opening files.
  3. Complete inability to integrate any other model without litellm 'crying' about the format it expects. Using the CLI with (model=huggingface..), even when I match its expected format one-to-one and use Hugging Face Hub instead of Hugging Face, it does not recognize the Hub key, neither from the env nor via -ak in the CLI.
  4. The rapid changes to the config-file format from version to version make it nearly impossible to adapt a previous version's system/profile YAML that worked perfectly fine before the update. Worst of all, it deletes the contents of the config while, ironically, claiming the content will not be affected during the migration to the new format.
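
As a workaround for item 1, something like this should shrink the per-request cost. A minimal sketch, assuming the 0.2.x Python API (interpreter.system_message is, I believe, the attribute that carries the default prompt):

from interpreter import interpreter

# Replace the very long default system message with a short one before chatting.
interpreter.system_message = (
    "You are Open Interpreter, a capable coding assistant. "
    "Write and run code to complete the user's task, step by step."
)
interpreter.chat("list the files in the current directory")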

Reproduce

  1. Do a complete install on a fresh machine and run --profiles. Bug.
  2. Try to use a default.yaml from version 0.21.0 and watch it get deleted instead of migrated to the new 0.25.0 format.
  3. Try to use an Ollama or Hugging Face Hub model and watch litellm raise unhandled errors that open-interpreter also fails to handle (it's a simple string manipulation; handle it, don't raise a kernel error).
  4. Use a model like GPT-3.5 Turbo and see how, from being a super-capable tool in v0.19.0, it has become lazy, borderline stupid, and unreliable, with zero memory. It says the task is done after every command when the task hasn't even started; even right after it recaps and changes directory as step 1, it has already forgotten (context window full) the second item in its recap.

Expected behavior

  1. Fully operational default system and profile files, not exceeding 1,000 to 2,000 tokens.
  2. Add an Option 4 to the installer, fully adapted to local / custom non-OpenAI models.
  3. Fix the YAML migration of the config files to really migrate rather than delete, and to back up the old version instead of overwriting(!) it, which makes it unrestorable (see the sketch after this list).
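
For item 3, the fix could be as small as copying the old profile aside before the migration touches it. A sketch follows; the config path here is an assumption, not the project's actual location:

import shutil
import time
from pathlib import Path

# Hypothetical location of the old profile; adjust to wherever your install keeps it.
cfg = Path.home() / "open-interpreter" / "profiles" / "default.yaml"

if cfg.exists():
    # Timestamped copy, so repeated migrations never overwrite an earlier backup.
    backup = cfg.with_name(f"default.{int(time.time())}.yaml.bak")
    shutil.copy2(cfg, backup)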

Screenshots

No response

Open Interpreter version

0.2.0 and above

Python version

3.11

Operating System name and version

WSL2 on Windows 10

Additional context

I can gladly help; I dug very deep into all the core files trying to fix these issues.
Thank you.

6rz6 closed this as completed May 4, 2024
6rz6 reopened this May 4, 2024

i-hodl commented May 5, 2024

Instructions for Setting Up a Modelfile on Ollama:

You can address some of the current system issues by creating and using a custom model file. Here's a simple example of how to set one up with Ollama. Run these commands in your terminal:

# Pull your chosen base model from OLLAMA
ollama pull <base_model_name>

# Create a Modelfile specifying the base model and setting up system attributes
echo "FROM <base_model_name>" > Modelfile
echo "SYSTEM You are a friendly assistant. You embody the characteristics of a helpful, knowledgeable assistant with a strong emphasis on user interaction and problem-solving." >> Modelfile

# Create the new model from the Modelfile
ollama create <your_model_name> -f Modelfile

# (Optional) Push the new model to the Ollama registry
ollama push <your_model_name>

Customization Instructions:

  • Replace <base_model_name> with the name of the model you wish to use as your base (e.g., llava or wizard-vicuna).
  • Replace <your_model_name> with the name you choose for your newly created model.

This approach helps bypass issues related to system message length and dependency errors by allowing you to configure a simplified, efficient setup that's tailored to your needs. Running these commands in the terminal will create a more manageable environment, potentially reducing operational costs and enhancing performance stability.

Additional Tips:

  • Ensure your terminal session can reach Ollama’s registry before pushing.
  • Verify that all placeholders are correctly replaced with actual values before executing the commands.

By following these steps, you can create a more streamlined and effective model setup that addresses specific bugs and improves overall system responsiveness.
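
To drive the new model from Open Interpreter itself, something like the following should work. A sketch assuming the 0.2.x attribute names (interpreter.llm.model, interpreter.llm.api_base) and litellm's ollama/ prefix:

from interpreter import interpreter

interpreter.offline = True                           # don't assume an OpenAI backend
interpreter.llm.model = "ollama/<your_model_name>"   # litellm routes ollama/* locally
interpreter.llm.api_base = "http://localhost:11434"  # default Ollama endpoint
interpreter.chat()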

@hossain666

package-lock.json


6rz6 commented May 5, 2024

comparison.gif

I see now the working format is :parameter:, which is good to know, and the size of the prompt looks totally rational. I wish the base install came with an extra example on top of the fast, empty, OS, and way-too-long ones it ships with.
Would you be able to post yours as text? Even if it's just the infrastructure template, I will be grateful 🙏
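
In the meantime, here is my own guess at a minimal profile, written out as YAML from Python; every key is an assumption about the 0.2.x schema rather than a confirmed format:

import yaml  # pip install pyyaml
from pathlib import Path

# Hypothetical minimal profile: a local Ollama model plus short custom instructions.
profile = {
    "llm": {
        "model": "ollama/mistral",
        "api_base": "http://localhost:11434",
        "context_window": 8000,
        "max_tokens": 1000,
    },
    "custom_instructions": "Keep answers short; prefer shell commands.",
}

Path("minimal.yaml").write_text(yaml.safe_dump(profile, sort_keys=False))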


6rz6 commented May 5, 2024

(quoting i-hodl's Ollama Modelfile instructions above)

Honestly, Ollama is the only provider I'm able to use with a CLI call like -ab ip:11434 -o -m=ollama/model.
I wish litellm would accept all other models like that, or at least that the formatting for litellm would be done on our side once open-interpreter gets the params.
I see no reason why any model shouldn't use exactly the same syntax that works perfectly with OpenAI and Ollama.
Try to use Cohere, Hugging Face Hub, or Mistral AI in the same format in the CLI and it breaks. 🤔 The config files help, but even with them I find myself struggling for hours instead of just using Open Interpreter as an extension of myself like it used to be (a litellm sanity-check sketch follows below).
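
One way to see exactly what litellm expects, before open-interpreter is involved at all, is to call it directly. A sketch, with the model id and token as placeholders:

import litellm

# Call a Hugging Face-hosted model through litellm directly to check the format.
response = litellm.completion(
    model="huggingface/mistralai/Mistral-7B-Instruct-v0.2",  # provider/model-id
    messages=[{"role": "user", "content": "hello"}],
    api_key="hf_...",  # your Hugging Face token
)
print(response.choices[0].message.content)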

@6rz6
Copy link
Author

6rz6 commented May 5, 2024

When called from Python, for example driven by crewAI, it works far better than the way I use it from the CLI. Maybe the solution, instead of the aliases I have for every model in the CLI, is to have Python code for each, though if I have Python code for each, then instead of me distributing it we should merge it into the base, as initially suggested (a sketch of the alias-to-function idea follows).
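
The alias-to-Python idea might look like this. A sketch with a hypothetical model name, assuming the same 0.2.x attributes as above:

from interpreter import interpreter

def use_ollama(model: str, base: str = "http://localhost:11434"):
    """Point the interpreter at a local Ollama model (replaces a shell alias)."""
    interpreter.llm.model = f"ollama/{model}"
    interpreter.llm.api_base = base
    return interpreter

use_ollama("mistral").chat("summarize the files in this directory")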


6rz6 commented May 5, 2024

github-linguist/linguist#6811 (comment)

I wish the code were as strict as the rules ))


i-hodl commented May 5, 2024

Absolutely, I resonate with your points on CLI usability across different models. I primarily use oh-my-zsh and spaceship for OLLAMA models, which simplifies things, but I’ve also noticed inconsistencies when trying to apply similar CLI formats with COHERE or Hugging Face. It often feels like instead of leveraging the interpreter as an extension of my capabilities, I'm stuck configuring endlessly.

I completely agree that having a uniform syntax for model interaction could vastly improve usability. It would reduce the learning curve and make tool integration more seamless across different platforms.

Regarding config files, they sometimes help but often add another layer of complexity. Simplifying CLI interaction to make it as intuitive as using Python scripts directly might be a better approach for consistency and efficiency. Perhaps advocating for integrating these scripts into the base model, as you suggested, could be a step towards standardizing model interactions.

I wish the CLI interactions were as strict and standardized as coding best practices, making our work much more straightforward!

