A command-line interface for LLMs written in Bash.
Video: basic usage of ell (basic.webm)
- Ask LLMs from your terminal
- Pipe friendly
- Bring your terminal context to the LLMs and ask questions
- Chat with LLMs in your terminal
- Function calling and more, supported via templates
To use ell, you need the following:
- bash
- jq (For parsing JSON)
- curl (For sending HTTPS requests)
- perl (For PCRE. Bash's POSIX regular expressions don't support look-ahead and look-behind. Not necessary if you don't use record mode)
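A quick way to check that these dependencies are installed before continuing (this snippet is just a convenience sketch, not part of ell):

```bash
# Report any of the required tools that are missing from PATH.
for cmd in bash jq curl perl; do
  command -v "$cmd" > /dev/null || echo "missing dependency: $cmd"
done
```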
git clone https://github.com/simonmysun/ell.git ~/.ellrc.d
echo 'export PATH="${HOME}/.ellrc.d:${PATH}"' >> ~/.bashrc
or
git clone [email protected]:simonmysun/ell.git ~/.ellrc.d
echo 'export PATH="${HOME}/.ellrc.d:${PATH}"' >> ~/.bashrc
This will clone the repository into .ellrc.d in your home directory and add it to your PATH.
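After cloning, open a new shell (or re-source your ~/.bashrc) and confirm that the ell command is found:

```bash
# Reload the shell configuration and check that ell is on PATH.
source ~/.bashrc
command -v ell   # should print a path inside ~/.ellrc.d
```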
See Configuration.
Here's an example configuration to use gemini-1.5-flash from Google. You need to set these variables in your ~/.ellrc:
ELL_API_STYLE=gemini
ELL_LLM_MODEL=gemini-1.5-flash
ELL_TEMPLATE=default-gemini
ELL_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ELL_API_URL=https://generativelanguage.googleapis.com/v1beta/models/
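For reference, a request in the gemini API style has roughly the shape of the curl sketch below. This shows the public Gemini REST API, not necessarily how ell assembles its request internally; appending the model name to ELL_API_URL is only an illustration of why the URL ends at models/.

```bash
# Illustrative only: rough shape of a Gemini generateContent request.
curl -s "${ELL_API_URL}${ELL_LLM_MODEL}:generateContent?key=${ELL_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"contents": [{"parts": [{"text": "What is the capital of France?"}]}]}' \
  | jq -r '.candidates[0].content.parts[0].text'
```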
Here's an example configuration to use gpt-4o-mini from OpenAI:
ELL_API_STYLE=openai
ELL_LLM_MODEL=gpt-4o-mini
ELL_TEMPLATE=default-openai
ELL_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ELL_API_URL=https://api.openai.com/v1/chat/completions
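For comparison, a request in the openai API style looks roughly like this curl sketch. Again, this is the public OpenAI chat completions API, not ell's exact implementation.

```bash
# Illustrative only: rough shape of an OpenAI chat completions request.
curl -s "${ELL_API_URL}" \
  -H "Authorization: Bearer ${ELL_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "What is the capital of France?"}]
      }' \
  | jq -r '.choices[0].message.content'
```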
Make sure you have configured ell correctly.
Ask a question:
ell "What is the capital of France?"
Specify a model and use a file as input:
ell -m gpt-4o -f user_prompt.txt
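Since ell is pipe friendly, you can also feed it the output of another command. The exact behavior depends on your template and configuration; the following is only a sketch:

```bash
# Hypothetical pipe usage: ask ell about the output of another command.
git diff | ell "Summarize the changes in this diff"
```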
Record terminal input and output and use it as context:
ell -r
# do random stuff
ell What does the error code mean?
ell How to fix it?
Run in interactive mode:
ell -i
If you were in record mode via ell -r, the context of the shell will be used. The two modes can be combined: ell -r -i.
Specify a template and start in record mode and interactive mode:
ell -r -i -t ctf-gemini
or
ell -r -i -t ctf-openai
depending on which API you are using.
Video: example application (ctf.webm)
See Templates.
See Styling.
See Plugins.
See Risks Consideration.
- Q: Why is it called "ell"?
- A: "ell" is a combination of shell and LLM. It is a shell script for using LLM backends. "shellm" was once considered, but it was dropped because it could be misread as "she llm". "ell" is shorter, easy to type, and easy to remember, and it does not conflict with any active software. Note that shell scripts take their name from the shell being the outer layer of the operating system exposed to the user; the name doesn't imply a CLI or a GUI. Unfortunately, the name cannot be shortened to "L", which has the same pronunciation, because that would conflict with too many things.
- Q: Why is it written in Bash?
- A: Because Bash is the most common shell on Unix-like systems, and there is just no need to use a more complex language for this.
- Q: What is the difference between ell and other similar projects?
- A: ell is written in almost pure Bash, which makes it very lightweight and easy to install. It is also easy to extend and modify, and it is pipe friendly, which means it is designed to be used in combination with other tools.
- https://github.com/kardolus/chatgpt-cli - A CLI for ChatGPT written in Go.
- https://github.com/kharvd/gpt-cli - A CLI for various LLM backends written in Python.
- https://github.com/JohannLai/gptcli - A CLI for OpenAI LLMs written in TypeScript.
- https://github.com/x-cmd/x-cmd - A powerful collection of tools that includes a CLI for LLM APIs.
Contributions are welcome! If you have any ideas, suggestions, or bug reports, please open an issue or submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for more details.