gpt-engineer lets you:
- Specify software in natural language
- Sit back and watch as an AI writes and executes the code
- Ask the AI to implement improvements
For stable release:

```sh
python -m pip install gpt-engineer
```

For development:

```sh
git clone https://github.com/gpt-engineer-org/gpt-engineer.git
cd gpt-engineer
poetry install
poetry shell  # activates the virtual environment
```
We actively support Python 3.10 - 3.12. The last version to support Python 3.8 - 3.9 was 0.2.6.
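A quick way to confirm your interpreter is in the supported range before installing:

```sh
python --version  # should report 3.10, 3.11, or 3.12
```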
Choose one of:
- Export env variable (you can add this to .bashrc so that you don't have to do it each time you start the terminal):

  ```sh
  export OPENAI_API_KEY=[your api key]
  ```

- .env file (an example is shown after this list):
  - Create a copy of `.env.template` named `.env`
  - Add your OPENAI_API_KEY in `.env`
- Custom model:
  - See docs; supports local models, Azure, etc.
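For reference, the line you add to `.env` looks like the following; the value is a placeholder, not a real key:

```sh
# .env (copied from .env.template)
OPENAI_API_KEY=sk-your-key-here
```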
Check the Windows README for Windows usage.
Other ways to run:
- Use Docker (instructions)
- Do everything in your browser:
Create new code (default usage):
- Create an empty folder for your project anywhere on your computer
- Create a file called `prompt` (no extension) inside your new folder and fill it with instructions
- Run `gpte <project_dir>` with a relative path to your folder
  - For example: `gpte projects/my-new-project` from the gpt-engineer directory root with your new folder in `projects/` (a full example session follows below)
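Put together, the steps above look like the following shell session; the project name and the prompt text are made-up examples, and the commands are run from the gpt-engineer repository root:

```sh
# Create the project folder and a plain-text prompt file, then run gpt-engineer on it.
mkdir -p projects/my-new-project
echo "A CLI tool that prints the current weather for a given city" > projects/my-new-project/prompt
gpte projects/my-new-project
```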
Improve existing code:
- Locate a folder with code that you want to improve anywhere on your computer
- Create a file called `prompt` (no extension) inside the folder and fill it with instructions for how you want to improve the code
- Run `gpte <project_dir> -i` with a relative path to your folder
  - For example: `gpte projects/my-old-project -i` from the gpt-engineer directory root with your folder in `projects/` (a full example session follows below)
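In the same spirit, improve mode might look like this; the folder name and the prompt text are made-up examples, and the folder is assumed to already contain the code you want changed:

```sh
# Describe the desired improvement in the prompt file, then run gpt-engineer in improve mode.
echo "Add type hints and docstrings to every public function" > projects/my-old-project/prompt
gpte projects/my-old-project -i
```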
- gpt-engineer installs the binary `bench`, which gives you a simple interface for benchmarking your own agent implementations against popular public datasets; a quick sanity check is sketched after this list.
- The easiest way to get started with benchmarking is by checking out the template repo, which contains detailed instructions and an agent template.
- Currently supported benchmark:
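As a sanity check that the benchmarking entry point is installed (the actual subcommands and dataset options live in the template repo and are not documented here):

```sh
# Only verifies that the binary is on your PATH and prints its usage;
# see the template repo for real benchmark invocations.
bench --help
```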
By running gpt-engineer you agree to our terms.
gptengineer.app is a commercial project for the automatic generation of web apps. It features a UI for non-technical users connected to a git-controlled codebase. The gptengineer.app team is actively supporting the open source community.
You can specify the "identity" of the AI agent by overriding the `preprompts` folder with your own version of the preprompts, using the `--use-custom-preprompts` argument.

Editing the preprompts is how you make the agent remember things between projects.
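A minimal sketch of what this can look like on the command line; whether `--use-custom-preprompts` is a bare flag or takes a path is an assumption here, so confirm the exact form with `gpte --help`:

```sh
# Assumption: the flag is passed on its own; verify the exact usage with `gpte --help`.
gpte projects/my-new-project --use-custom-preprompts
```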
By default, gpt-engineer expects text input via a `prompt` file. It can also accept image inputs for vision-capable models. This can be useful for adding UX or architecture diagrams as additional context for GPT Engineer. You can do this by specifying an image directory with the `--image_directory` flag and setting a vision-capable model in the second CLI argument.

E.g. `gpte projects/example-vision gpt-4-vision-preview --prompt_file prompt/text --image_directory prompt/images -i`
By default, gpt-engineer supports OpenAI models via the OpenAI API or Azure OpenAI API, as well as Anthropic models.

With a little extra setup, you can also run with open source models like WizardCoder. See the documentation for example instructions.
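Following the pattern of the vision example above, the model is chosen with the second positional CLI argument; `<model-name>` below is a placeholder for whatever model your configured provider serves, not a recommendation:

```sh
# Replace <model-name> with a model identifier supported by your provider.
gpte projects/my-new-project <model-name>
```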
The gpt-engineer community mission is to maintain tools that coding agent builders can use and facilitate collaboration in the open source community.
If you are interested in contributing to this, we are interested in having you.
If you want to see our broader ambitions, check out the roadmap, and join discord to get input on how you can contribute to it.
gpt-engineer is governed by a board of long-term contributors. If you contribute routinely and have an interest in shaping the future of gpt-engineer, you will be considered for the board.