[WIP] feat: allow editing of code blocks before execution #612
base: main
Conversation
```python
interpreter.messages.append({
    **old_message,
    "code": edited_code,
})
```
This doesn't seem to trigger getting back into the same `Do you want to scan this code?` / `Do you want to run this code?` loop and skips straight to execution.
It also doesn't render the code with syntax highlighting, which is a bummer.
interpreter/utils/open_file.py
Outdated
This is the same logic that powered `interpreter --config`, but since we're opening other files now, it seemed logical to extract it into a utility that we can call whenever we need to open a specific path in the user's default application.
```
@@ -1,34 +1,9 @@
import os
import subprocess
from yaspin import yaspin
from yaspin.spinners import Spinners
```
This was just an extraneous import.
```python
open_file(temp_file)

with yaspin(text=f" Editing {language_name} code...").green.right.dots as loading:
```
There's probably a better way to handle this. The big problem right now is that since we don't really get any kind of useful signal when the user is done, we can't automatically clean up the spinner like we do with code scanning.

This probably needs to shift to an imperative call to a new `yaspin` instance instead of using `with yaspin()`, so we have more control over the loading indicator.
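To illustrate the imperative pattern the comment suggests, here is a sketch using a minimal stand-in spinner (yaspin instances expose the same `start()`/`stop()` methods, so the control flow would look the same with the real library; the stand-in just keeps this example self-contained):

```python
import itertools
import sys
import threading
import time


class Spinner:
    """Tiny stand-in for yaspin: same start()/stop() control surface."""

    def __init__(self, text=""):
        self.text = text
        self._stop = threading.Event()
        self._thread = None

    def _spin(self):
        for frame in itertools.cycle("|/-\\"):
            if self._stop.is_set():
                break
            sys.stdout.write(f"\r{frame} {self.text}")
            sys.stdout.flush()
            time.sleep(0.1)
        sys.stdout.write("\r")

    def start(self):
        self._thread = threading.Thread(target=self._spin, daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()


def edit_with_indicator(wait_for_user, text="Editing code..."):
    # Imperative control: start the spinner, block on whatever "done"
    # signal we have (currently the user pressing ENTER), then clean up
    # explicitly instead of relying on a `with` block's scope.
    spinner = Spinner(text=text)
    spinner.start()
    try:
        wait_for_user()
    finally:
        spinner.stop()
```

Because the spinner is stopped in a `finally`, the indicator is cleaned up even if the wait is interrupted.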
0be3623 to 4401368
```python
import subprocess
import os
import platform
import pkg_resources
import appdirs
from ..utils.display_markdown_message import display_markdown_message
```
Cleaning up imports that are no longer needed due to the `open_file` utility, and removing `display_markdown_message` as it is no longer used in this file.
Would love any ideas, suggestions, or assistance getting the edited code block to render as a syntax-highlighted code block and asking the user for confirmation before executing.
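As a starting point for the confirmation half of that request, here is a sketch of a re-prompt loop. The function name and the plain-text rendering are illustrative, not from the PR; a real implementation would render the code with the same syntax-highlighted block the scan/run prompt uses.

```python
def confirm_run(code, language, input_fn=input, print_fn=print):
    """Show the edited code and ask before executing it.

    `input_fn` and `print_fn` are injectable so the loop is testable;
    a real implementation would swap the plain print for a
    syntax-highlighted renderer (e.g. rich's Syntax class).
    """
    print_fn(f"```{language}")
    print_fn(code)
    print_fn("```")
    while True:
        answer = input_fn("Do you want to run this code? (y/n): ").strip().lower()
        if answer in ("y", "yes"):
            return True
        if answer in ("n", "no"):
            return False
        print_fn("Please answer y or n.")
```

Returning a boolean lets the caller route back into the existing scan/run flow instead of executing unconditionally.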
```python
try:
    print(" Press `ENTER` after you've saved your edits.")

    open_file(temp_file)
```
Ideally, we'll eventually be able to incorporate some sort of in-place editing that feels nice, but for the proof of concept, it seemed more prudent to leverage the user's existing default editor.
@ericrallen think i figured this out but might have botched the push to git. will tag u in my PR
I can help with that. I'm working on updating the local LLM docs with my latest changes and a full, detailed tutorial; once that's done, let me take a look.

✅ I hope the answer is yes, because otherwise, even when you run only Open Interpreter, you clearly see Open Interpreter's white output with the prefix and suffix added to the user system prompt, and then a green block of code, which is LiteLLM forwarding the system message after rephrasing it a little before returning it to Open Interpreter.

❌ If no, and the last sync was 5 months ago, it's better to re-sync ♻ the branch first. I will submit a pull request soon for the docs, and once I get your feedback I can sync/pull the newer files and fix the intervention in the system prompt manually; in my case I just need to remove the word 'prompt', which is pushed to the system file among other chat-related data and the logs.

If we are 5 months behind on commits, there's no use investing in this version unless there is something I'm not aware of with main. IMO it's better to checkout / pull the newest version and solve it with a 'man in the middle' as I did with my integration.

@BellaBijl wdyt? CC @sbendary25 @ericrallen @Notnaton: feedback would be highly welcome.
Describe the changes you have made:

This work-in-progress introduces the ability to edit code in your default editor before running it.

When Open Interpreter asks if you want to run the provided code, there is a new option, `%edit`, which will allow you to edit the code. It currently waits for you to come back and hit `ENTER` to continue execution. Unfortunately, it seems to automatically run the edited code, and I haven't been able to get it back into the `scan`/`run` loop.

I tried using a few different file/directory watching Python packages, but it seems the file close event is not very easy to listen for and not consistent across platforms, so for the initial proof of concept, I'm relying on user intervention.
Demo
Testing Instructions

```shell
gh pr checkout https://github.com/KillianLucas/open-interpreter/pull/612
poetry run interpreter
```

Note: `auto_run` must be disabled, so don't run it with `-y` or with `auto_run: true` in your `config.yaml`.

Example:

- Use the `%edit` magic command when asked to run code
- Press `ENTER` after saving your edits
Reference any relevant issue (Fixes #537)
I have tested the code on the following OS:
AI Language Model (if applicable)