
Commit

KillianLucas committed Oct 27, 2023
2 parents 7f8867f + cf945d2 commit 42c3176
Show file tree
Hide file tree
Showing 11 changed files with 94 additions and 40 deletions.
17 changes: 8 additions & 9 deletions README.md
@@ -2,8 +2,7 @@

<p align="center">
<a href="https://discord.gg/6p3fD6rBVm">
<img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white"/>
</a>
<img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white"/></a>
<a href="README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"/></a>
<a href="README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"/></a>
<a href="README_IN.md"><img src="https://img.shields.io/badge/Hindi-white.svg" alt="IN doc"/></a>
@@ -92,7 +91,7 @@ However, OpenAI's service is hosted, closed-source, and heavily restricted:

---

Open Interpreter overcomes these limitations by running on your local environment. It has full access to the internet, isn't restricted by time or file size, and can utilize any package or library.
Open Interpreter overcomes these limitations by running in your local environment. It has full access to the internet, isn't restricted by time or file size, and can utilize any package or library.

This combines the power of GPT-4's Code Interpreter with the flexibility of your local development environment.
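
For orientation, here is a minimal sketch of that combination from the Python side, assuming the package-level API shown elsewhere in this README (`interpreter.chat`, `interpreter.model`); the prompt is only an example:

```python
import interpreter

# Optional: point it at a hosted model (local mode is covered below).
interpreter.model = "gpt-3.5-turbo"

# Starts a stateful chat in your terminal; generated code runs on your machine
# only after you approve each block (unless auto_run is enabled).
interpreter.chat("Plot the last 10 days of AAPL closing prices.")
```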

@@ -196,9 +195,9 @@ interpreter.model = "gpt-3.5-turbo"

### Running Open Interpreter locally

**Issues running locally?** Read our new [GPU setup guide](./docs/GPU.md) and [Windows setup guide](./docs/WINDOWS.md).
**Issues running locally?** Read our new [GPU setup guide](./docs/GPU.md), [Windows setup guide](./docs/WINDOWS.md) or [MacOS (Apple Silicon only) setup guide](./docs/MACOS.md).

You can run `interpreter` in local mode from the command line to use `Code Llama`:
You can run `interpreter` in local mode from the command line to use `Mistral 7B`:

```shell
interpreter --local
@@ -214,7 +213,7 @@ interpreter --local --model tiiuae/falcon-180B

You can easily modify the `max_tokens` and `context_window` (in tokens) of locally running models.

Smaller context windows will use less RAM, so we recommend trying a shorter window if GPU is failing.
Smaller context windows will use less RAM, so we recommend trying a shorter window if the GPU is failing.
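
The same limits can also be set from Python before starting a chat. This is a minimal sketch, assuming the attributes mirror the CLI flags shown in the shell command below (`interpreter.local` here is an assumption):

```python
import interpreter

interpreter.local = True            # assumption: mirrors the --local flag
interpreter.context_window = 16000  # tokens of context; smaller windows use less RAM
interpreter.max_tokens = 2000       # cap on tokens generated per response

interpreter.chat()
```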

```shell
interpreter --max_tokens 2000 --context_window 16000
@@ -242,7 +241,7 @@ In the interactive mode, you can use the below commands to enhance your experience
`%debug [true/false]`: Toggle debug mode. Without arguments or with 'true', it
enters debug mode. With 'false', it exits debug mode.
`%reset`: Resets the current session.
`%undo`: Remove previous messages and its response from the message history.
`%undo`: Remove the previous user message and the AI's response from the message history.
`%save_message [path]`: Saves messages to a specified JSON path. If no path is
provided, it defaults to 'messages.json'.
`%load_message [path]`: Loads messages from a specified JSON path. If no path
@@ -348,7 +347,7 @@ You can run `interpreter -y` or set `interpreter.auto_run = True` to bypass this

- Be cautious when requesting commands that modify files or system settings.
- Watch Open Interpreter like a self-driving car, and be prepared to end the process by closing your terminal.
- Consider running Open Interpreter in a restricted environment like Google Colab or Replit. These environments are more isolated, reducing the risks associated with executing arbitrary code.
- Consider running Open Interpreter in a restricted environment like Google Colab or Replit. These environments are more isolated, reducing the risks of executing arbitrary code.
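
As a sketch of the cautious configuration this section recommends (treating `safe_mode` as a Python attribute is an assumption based on the `cli.py` change later in this commit, and the prompt is hypothetical):

```python
import interpreter

interpreter.auto_run = False   # keep the per-block confirmation prompt
interpreter.safe_mode = "ask"  # assumption: mirrors the new --safe_mode CLI option

interpreter.chat("Tidy up the files in ~/Downloads.")  # hypothetical request
```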

## How Does it Work?

@@ -364,7 +363,7 @@ Please see our [Contributing Guidelines](./CONTRIBUTING.md) for more details on

## License

Open Interpreter is licensed under the MIT License. You are permitted to use, copy, modify, distribute, sublicense and sell copies of the software.
Open Interpreter is licensed under the MIT License. You are permitted to use, copy, modify, distribute, sublicense, and sell copies of the software.

**Note**: This software is not affiliated with OpenAI.

2 changes: 2 additions & 0 deletions docs/MACOS.md
@@ -1,5 +1,7 @@
# Code-Llama on MacOS (Apple Silicon)

> __ATTENTION: This tutorial is intended for Apple Silicon Macs only. Intel-based Macs cannot use GPU mode.__

When running Open Interpreter on macOS with Code-Llama (either because you did
not enter an OpenAI API key or you ran `interpreter --local`) you may want to
make sure it works correctly by following the instructions below.
2 changes: 1 addition & 1 deletion docs/WINDOWS.md
@@ -39,7 +39,7 @@ To resolve this issue, perform the following steps.
pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
```
Alternatively, if you want to include GPU suppport, follow the steps in [Local Language Models with GPU Support](./GPU.md)
Alternatively, if you want to include GPU support, follow the steps in [Local Language Models with GPU Support](./GPU.md)
6. Make sure you close and re-launch any cmd windows that were running interpreter
3 changes: 2 additions & 1 deletion interpreter/cli/cli.py
@@ -76,6 +76,7 @@
"help_text": "optionally enable safety mechanisms like code scanning; valid options are off, ask, and auto",
"type": str,
"choices": ["off", "ask", "auto"],
"default": "off"
},
{
"name": "gguf_quality",
@@ -203,7 +204,7 @@ def cli(interpreter):
setattr(interpreter, attr_name, attr_value)

# if safe_mode and auto_run are enabled, safe_mode disables auto_run
if interpreter.auto_run and not interpreter.safe_mode == "off":
if interpreter.auto_run and (interpreter.safe_mode == "ask" or interpreter.safe_mode == "auto"):
setattr(interpreter, "auto_run", False)

# Default to Mistral if --local is on but --model is unset
2 changes: 1 addition & 1 deletion interpreter/core/core.py
@@ -45,7 +45,7 @@ def __init__(self):

# LLM settings
self.model = ""
self.temperature = 0
self.temperature = None
self.system_message = ""
self.context_window = None
self.max_tokens = None
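
The switch from `0` to `None` matters because the downstream checks in `setup_openai_coding_llm.py` and `setup_text_llm.py` previously used truthiness, so an explicit `temperature = 0` was silently dropped. A quick standalone illustration:

```python
temperature = 0

if temperature:                    # old check: 0 is falsy, so the parameter was never sent
    print("temperature included")
else:
    print("temperature dropped")   # this branch runs

if temperature is not None:        # new check: 0 is a valid, explicit setting
    print("temperature included")  # this branch runs
```
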
20 changes: 17 additions & 3 deletions interpreter/core/respond.py
@@ -2,6 +2,7 @@
from ..utils.merge_deltas import merge_deltas
from ..utils.display_markdown_message import display_markdown_message
from ..utils.truncate_output import truncate_output
from ..code_interpreters.language_map import language_map
import traceback
import litellm

@@ -113,9 +114,22 @@ def respond(interpreter):

# Get a code interpreter to run it
language = interpreter.messages[-1]["language"]
if language not in interpreter._code_interpreters:
interpreter._code_interpreters[language] = create_code_interpreter(language)
code_interpreter = interpreter._code_interpreters[language]
if language in language_map:
if language not in interpreter._code_interpreters:
interpreter._code_interpreters[language] = create_code_interpreter(language)
code_interpreter = interpreter._code_interpreters[language]
else:
# This still prints the code but doesn't allow it to run. Lets Open Interpreter know through the output message.
error_output = f"Error: Open Interpreter does not currently support {language}."
print(error_output)

interpreter.messages[-1]["output"] = ""
output = "\n" + error_output

# Truncate output
output = truncate_output(output, interpreter.max_output)
interpreter.messages[-1]["output"] = output.strip()
break

# Yield a message, such that the user can stop code execution if they want to
try:
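
To make the intent of that new guard concrete, here is a simplified, self-contained sketch; the real `language_map` and message handling live in `interpreter/code_interpreters/`, and the names below are illustrative only:

```python
# Stand-in for the repo's language_map: supported language name -> interpreter class.
language_map = {"python": object, "shell": object, "javascript": object}

def run_block(language, code):
    if language in language_map:
        return f"running {language} block..."
    # Unsupported language: report it as the block's output instead of raising.
    return f"Error: Open Interpreter does not currently support {language}."

print(run_block("python", "print(1)"))    # running python block...
print(run_block("cobol", "DISPLAY '1'"))  # Error: ...does not currently support cobol.
```
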
3 changes: 3 additions & 0 deletions interpreter/llm/convert_to_coding_llm.py
@@ -52,6 +52,9 @@ def coding_llm(messages):
# Default to python if not specified
if language == "":
language = "python"
else:
# Removes hallucinations containing spaces or non-letter characters.
language = ''.join(char for char in language if char.isalpha())

output = {"language": language}
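
A rough illustration of what that sanitization does to hallucinated language tags (a standalone sketch; the inputs are made up):

```python
for raw in ["python", " python\n", "``shell``", "python 3"]:
    cleaned = ''.join(char for char in raw if char.isalpha())
    print(repr(raw), "->", repr(cleaned))
# ' python\n' -> 'python', '``shell``' -> 'shell', 'python 3' -> 'python'
```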

68 changes: 50 additions & 18 deletions interpreter/llm/setup_openai_coding_llm.py
@@ -78,7 +78,7 @@ def coding_llm(messages):
params["api_key"] = interpreter.api_key
if interpreter.max_tokens:
params["max_tokens"] = interpreter.max_tokens
if interpreter.temperature:
if interpreter.temperature is not None:
params["temperature"] = interpreter.temperature

# These are set directly on LiteLLM
@@ -99,6 +99,9 @@

for chunk in response:

if interpreter.debug_mode:
print("Chunk from LLM", chunk)

if ('choices' not in chunk or len(chunk['choices']) == 0):
# This happens sometimes
continue
@@ -108,31 +111,60 @@
# Accumulate deltas
accumulated_deltas = merge_deltas(accumulated_deltas, delta)

if interpreter.debug_mode:
print("Accumulated deltas", accumulated_deltas)

if "content" in delta and delta["content"]:
yield {"message": delta["content"]}

if ("function_call" in accumulated_deltas
and "arguments" in accumulated_deltas["function_call"]):

arguments = accumulated_deltas["function_call"]["arguments"]
arguments = parse_partial_json(arguments)

if arguments:

if (language is None
and "language" in arguments
and "code" in arguments # <- This ensures we're *finished* typing language, as opposed to partially done
and arguments["language"]):
language = arguments["language"]
if ("name" in accumulated_deltas["function_call"] and accumulated_deltas["function_call"]["name"] == "execute"):
arguments = accumulated_deltas["function_call"]["arguments"]
arguments = parse_partial_json(arguments)

if arguments:
if (language is None
and "language" in arguments
and "code" in arguments # <- This ensures we're *finished* typing language, as opposed to partially done
and arguments["language"]):
language = arguments["language"]
yield {"language": language}

if language is not None and "code" in arguments:
# Calculate the delta (new characters only)
code_delta = arguments["code"][len(code):]
# Update the code
code = arguments["code"]
# Yield the delta
if code_delta:
yield {"code": code_delta}
else:
if interpreter.debug_mode:
print("Arguments not a dict.")

# 3.5 REALLY likes to hallucinate a function named `python` and you can't really fix that, it seems.
# We just need to deal with it.
elif ("name" in accumulated_deltas["function_call"] and accumulated_deltas["function_call"]["name"] == "python"):
if interpreter.debug_mode:
print("Got direct python call")
if (language is None):
language = "python"
yield {"language": language}
if language is not None and "code" in arguments:
# Calculate the delta (new characters only)
code_delta = arguments["code"][len(code):]

if language is not None:
# Pull the code string straight out of the "arguments" string
code_delta = accumulated_deltas["function_call"]["arguments"][len(code):]
# Update the code
code = arguments["code"]
code = accumulated_deltas["function_call"]["arguments"]
# Yield the delta
if code_delta:
yield {"code": code_delta}

yield {"code": code_delta}

else:
if interpreter.debug_mode:
print("GOT BAD FUNCTION CALL: ", accumulated_deltas["function_call"])


return coding_llm
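
The branching above distinguishes two shapes of streamed `function_call`: the intended `execute` call, whose `arguments` are (partial) JSON containing `language` and `code`, and GPT-3.5's hallucinated `python` call, whose `arguments` are the raw code itself. A toy, non-streaming illustration of the difference (the data is made up; the real code accumulates deltas with `merge_deltas` and parses them with `parse_partial_json`):

```python
import json

# Intended shape: the model calls "execute" with JSON arguments.
execute_call = {"name": "execute",
                "arguments": '{"language": "python", "code": "print(2 + 2)"}'}

# Hallucinated shape: the model calls a function literally named "python",
# and the arguments string is just the code.
python_call = {"name": "python", "arguments": "print(2 + 2)"}

def extract(function_call):
    if function_call["name"] == "execute":
        args = json.loads(function_call["arguments"])
        return args["language"], args["code"]
    if function_call["name"] == "python":
        return "python", function_call["arguments"]
    raise ValueError(f"unrecognized function call: {function_call}")

print(extract(execute_call))  # ('python', 'print(2 + 2)')
print(extract(python_call))   # ('python', 'print(2 + 2)')
```
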
2 changes: 1 addition & 1 deletion interpreter/llm/setup_text_llm.py
@@ -101,7 +101,7 @@ def base_llm(messages):
params["api_key"] = interpreter.api_key
if interpreter.max_tokens:
params["max_tokens"] = interpreter.max_tokens
if interpreter.temperature:
if interpreter.temperature is not None:
params["temperature"] = interpreter.temperature

# These are set directly on LiteLLM
11 changes: 6 additions & 5 deletions interpreter/rag/get_relevant_procedures_string.py
@@ -19,11 +19,12 @@ def get_relevant_procedures_string(interpreter):
# Update the procedures database to reflect any changes in interpreter.procedures
if interpreter._procedures_db.keys() != interpreter.procedures:
updated_procedures_db = {}
for key in interpreter.procedures:
if key in interpreter._procedures_db:
updated_procedures_db[key] = interpreter._procedures_db[key]
else:
updated_procedures_db[key] = interpreter.embed_function(key)
if interpreter.procedures is not None:
for key in interpreter.procedures:
if key in interpreter._procedures_db:
updated_procedures_db[key] = interpreter._procedures_db[key]
else:
updated_procedures_db[key] = interpreter.embed_function(key)
interpreter._procedures_db = updated_procedures_db

# Assemble the procedures query string. Last two messages
4 changes: 3 additions & 1 deletion interpreter/terminal_interface/validate_llm_settings.py
@@ -3,6 +3,7 @@
import time
import inquirer
import litellm
import getpass

def validate_llm_settings(interpreter):
"""
@@ -71,7 +72,8 @@
---
""")

response = input("OpenAI API key: ")
response = getpass.getpass("OpenAI API key: ")
print(f"OpenAI API key: {response[:4]}...{response[-4:]}")

if response == "":
# User pressed `enter`, requesting Mistral-7B
