Runs infinitely on local mode #264

Open

mshehrozsajjad opened this issue May 5, 2024 · 4 comments

Comments

@mshehrozsajjad

Describe the bug
In local mode with the current latest version, it just continues infinitely, and it also sometimes starts writing code for Windows even though I am using a Mac.

To Reproduce
Steps to reproduce the behavior:

  1. Use local mode.

Expected behavior
After successful execution it should stop and wait for the next input.

Screenshots
Screenshot 2024-05-05 at 6.06.54 PM.png (upload did not complete)

Desktop (please complete the following information):

  • OS: macOS 14.1.1 (M1)
  • Python version: 3.11
@rowaidy

rowaidy commented May 9, 2024

Exact same issue, but with these differences:

  • OS: macOS 14.1.1 (M2)
  • Python version: 3.10

@Mikoube

Mikoube commented May 15, 2024

Same here, on another OS:

  • OS: Ubuntu 22.04
  • Python version: 3.10.12

@JansenSmith

JansenSmith commented May 17, 2024

This appears to be a pervasive problem; it happens every time I use it.
I think my issue is similar:
(it repeated this about a dozen times before I got bored)

(openinterpreter) house@chonkers:~$ interpreter --api_base http://192.168.0.201:11434 --model ollama/codegemma

▌ Model set to ollama/codegemma

Open Interpreter will require approval before running code.

Use interpreter -y to bypass this.

Press CTRL-C to exit.

>
> what version of linux am i running?

We were unable to determine the context window of this model. Defaulting to 3000.

If your model can handle more, run interpreter --context_window {token limit} --max_tokens {max tokens per response}.

Continuing...



  import os

  print(os.uname())


  Would you like to run this code? (y/n)

  y


  import os

  print(os.uname())


  posix.uname_result(sysname='Linux', nodename='chonkers', release='5.15.0-102-generic', version='#112-Ubuntu SMP Tue Mar 5 16:50:32 UTC 2024',
  machine='x86_64')


  Plan:

   1 Use the os.uname function to get the Linux version.

  Code:



  import os

  print(os.uname())


  Would you like to run this code? (y/n)

  y


  import os

  print(os.uname())


  posix.uname_result(sysname='Linux', nodename='chonkers', release='5.15.0-102-generic', version='#112-Ubuntu SMP Tue Mar 5 16:50:32 UTC 2024',
  machine='x86_64')



  Plan:

   1 Use the os.uname function to get the Linux version.

  Code:



  import os

  print(os.uname())


  Would you like to run this code? (y/n)

  y


  import os

  print(os.uname())


  posix.uname_result(sysname='Linux', nodename='chonkers', release='5.15.0-102-generic', version='#112-Ubuntu SMP Tue Mar 5 16:50:32 UTC 2024',
  machine='x86_64')



  Plan:

   1 Use the os.uname function to get the Linux version.

  Code:



  import os

  print(os.uname())


  Would you like to run this code? (y/n)

  y
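
Since the hint above suggests interpreter --context_window {token limit} --max_tokens {max tokens per response}, here is a minimal sketch of setting the same limits through the Python API instead (assuming the 0.2.x interpreter package; the 8192/1024 values are illustrative placeholders, not codegemma's confirmed limits):

  from interpreter import interpreter

  # Same local Ollama endpoint and model as the CLI invocation above
  interpreter.llm.model = "ollama/codegemma"
  interpreter.llm.api_base = "http://192.168.0.201:11434"

  # Illustrative limits; check the model card for the real context size
  interpreter.llm.context_window = 8192
  interpreter.llm.max_tokens = 1024

  interpreter.chat("what version of linux am i running?")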

@JansenSmith

Tried the following, but it did not improve functionality:

interpreter --api_base http://192.168.0.201:11434 --model ollama/codegemma --no-llm_supports_function
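
In case it helps, the function-calling toggle can (as far as I can tell) also be set from the Python API rather than the CLI flag; a sketch under the same 0.2.x assumption:

  from interpreter import interpreter

  interpreter.llm.model = "ollama/codegemma"
  interpreter.llm.api_base = "http://192.168.0.201:11434"

  # Disable OpenAI-style function calling, which most Ollama-served
  # models do not implement; mirrors the CLI flag attempted above
  interpreter.llm.supports_functions = False

  interpreter.chat()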
