Ask to run code after improvement #722

Closed
UmerHA opened this issue Sep 19, 2023 · 6 comments
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

Comments

@UmerHA (Collaborator) commented on Sep 19, 2023

Unlike after code creation, after code improvement (i.e., with the -i flag) gpt-engineer doesn't ask if the code should be run.

Feature description

When using the -i flag, once gpt-engineer has finished improving the code, the user should be asked whether they want to run it.

Motivation/Application

In my view, a good workflow for gpt-engineer would be:

  1. I enter a prompt, gpte asks clarifying questions, I answer, gpte creates a codebase
  2. With my consent, gpte runs the code
  3. I get asked if the code worked well. If not, I'm asked if I want to improve.
  4. I enter an improvement prompt
  5. gpte improves code
  6. With my consent, gpte runs the code <- this issue
  7. Repeat
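
A minimal sketch of what step 6 could look like, assuming the improved project exposes a run.sh entry point (the helper below and its behavior are illustrative assumptions, not gpt-engineer's actual API):

```python
import subprocess
from pathlib import Path


def ask_to_run(workspace: Path) -> None:
    """Ask for consent, then execute the project's run script if one exists.

    Illustrative sketch only; the run.sh entry point and this helper are
    assumptions, not gpt-engineer's actual implementation.
    """
    answer = input("Do you want to run the improved code? [y/N] ").strip().lower()
    if answer not in ("y", "yes"):
        return
    run_script = workspace / "run.sh"
    if run_script.exists():
        # Execute the generated run script inside the project workspace.
        subprocess.run(["bash", str(run_script)], cwd=workspace, check=False)
    else:
        print("No run.sh found in the workspace; nothing to execute.")
```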
UmerHA added the enhancement and good first issue labels on Sep 19, 2023
@mahimairaja

So you wish to add improvement in interactive mode? Am I right?

@UmerHA (Collaborator, Author) commented on Sep 20, 2023

> So you wish to add improvement in interactive mode? Am I right?

Yes, I think it'd be good if that existed. Not sure if I can contribute though.

@UmerHA (Collaborator, Author) commented on Sep 22, 2023

FYI: this requires #721 to be merged first.

@Ruijian-Zha (Contributor)

I concur with @UmerHA's points. Additionally, automating the environment setup to handle dependencies would be beneficial, saving users from manual installations, especially when new dependencies are introduced post-improvement. This would streamline the process, making the -i flag for code improvement more user-friendly and efficient.

@ATheorell (Collaborator)

@Ruijian-Zha I'm not sure I understand. If successful, the generated run.sh script should create a virtual environment and install the dependencies. Hypothetically, by including the run.sh/requirements.txt/whatever in the improvement prompt, the agent should already be able to suggest adding new dependencies. I may be missing something here.

@Ruijian-Zha (Contributor)

@ATheorell The key idea is to run run.sh, requirements.txt, or similar scripts iteratively until the problem is resolved. The recently released Autogen can automatically install the necessary requirements in the current Conda sub-environment, aligning perfectly with our objectives. For more details, including my test code and results, you can visit my personal GitHub page: Autogen Direct Run & Iterative Debug.
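
If it helps to make the idea concrete, here is a rough sketch of such a run-until-clean loop. The improve callback is a stand-in for whatever agent step rewrites the code (gpt-engineer's improve flow, Autogen, etc.), so none of this reflects either tool's real API:

```python
import subprocess
from pathlib import Path
from typing import Callable


def run_until_clean(workspace: Path,
                    improve: Callable[[str], None],
                    max_iterations: int = 3) -> bool:
    """Run the generated run.sh repeatedly, feeding failures back to an
    improvement step until it exits cleanly or the iteration budget runs out.
    """
    for _ in range(max_iterations):
        result = subprocess.run(["bash", "run.sh"], cwd=workspace,
                                capture_output=True, text=True)
        if result.returncode == 0:
            return True  # the code ran without errors
        # Hand the failure output to the improvement step and try again.
        improve(result.stderr)
    return False
```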
