SuperPrompter is a Python-based application that utilises the SuperPrompt-v1 model to expand user prompts into optimised text prompts for AI image generation (for use with Stable Diffusion etc.).
See Brian Fitzgerald's blog for a detailed explanation of the SuperPrompt-v1 model and its capabilities and limitations.
- Utilises the SuperPrompt-v1 model for text generation
- Basic graphical user interface built with tkinter
- Customisable generation parameters (max new tokens, repetition penalty, temperature, top p, top k, seed); a sketch of how these are used follows this list
- Optional logging of input parameters and generated outputs
- Bundling options to include or exclude pre-downloaded model files
- Unloads the models when the application is idle to free up memory
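
Under the hood, generation boils down to a Transformers `generate()` call against the SuperPrompt-v1 model. The following is a minimal sketch of that flow, assuming the model loads as a T5-style seq2seq model; the example prompt and parameter values are illustrative, not the application's defaults.

```python
# Minimal sketch of the kind of call SuperPrompter makes via the
# Transformers library. Assumes SuperPrompt-v1 loads as a T5-style
# seq2seq model; parameter values below are illustrative only.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "roborovski/superprompt-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

user_prompt = "a cat sitting on a windowsill at sunset"
inputs = tokenizer(user_prompt, return_tensors="pt")

# The same knobs the GUI exposes: max new tokens, repetition penalty,
# temperature, top p, top k.
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    repetition_penalty=1.2,
    temperature=0.7,
    top_p=0.9,
    top_k=50,
    do_sample=True,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```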
Check the releases page to see if there are any prebuilt binaries available for your platform.
- Python 3.x
- Required Python packages (listed in `requirements.txt`)
- python-tk (`brew install python-tk`)
- Clone the repository: `git clone https://github.com/sammcj/SuperPrompter.git`
- Navigate to the project directory: `cd SuperPrompter`
- Create a virtual environment (optional but recommended): `make venv`
- Install the required packages: `make install`
- Run the application: `make run`
- The application window will open, displaying a splash screen while checking for the SuperPrompt-v1 model files. If the model files are not found, they will be automatically downloaded.
- Once the model is loaded, the main application window will appear. Enter your prompt in the "Your Prompt" text area.
- Adjust the generation parameters (max new tokens, repetition penalty, temperature, top p, top k, seed) as desired.
- Note that if seed is set to `0`, a random seed will be used (see the sketch after these steps).
- Click the "Generate" button or press Enter to generate text based on the provided prompt and parameters.
- The generated output will be displayed in the "Output" text area.
- Optionally, enable logging by checking the "Enable Logging" checkbox. When enabled, the input parameters and generated outputs will be saved to a log file at `~/.superprompter/superprompter_log.txt` in the user's home directory.
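
The seed and logging behaviour described above amounts to roughly the following. This is a hypothetical sketch rather than the application's actual code; the function names and log format are assumptions.

```python
# Hypothetical sketch of the seed and logging behaviour described above;
# the application's actual implementation may differ.
import os
import random
import time

import torch


def resolve_seed(seed: int) -> int:
    # A seed of 0 means "pick a random seed".
    if seed == 0:
        seed = random.randint(1, 2**32 - 1)
    torch.manual_seed(seed)
    return seed


def log_generation(params: dict, output: str) -> None:
    # Append the input parameters and generated output to the log file
    # in the user's home directory.
    log_path = os.path.expanduser("~/.superprompter/superprompter_log.txt")
    os.makedirs(os.path.dirname(log_path), exist_ok=True)
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(f"{time.ctime()} | {params} | {output}\n")
```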
SuperPrompter can be bundled into a standalone executable using PyInstaller. The bundling process is automated with a Makefile and a `bundle.py` script.
To bundle the application:
- Install deps: `make venv` and `make install`
- Check it runs: `make run`
- Run the bundling command: `make bundle`
This command will download the SuperPrompt-v1 model files and bundle the application with the model files included. Alternatively, if you want to bundle the application without including the model files (they will be downloaded at runtime), run `make bundleWithOutModels`. A sketch of the kind of PyInstaller invocation involved is shown below.
- The bundled executable will be available in the `dist` directory.
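
For reference, the bundling targets wrap PyInstaller. The snippet below is a rough, hypothetical equivalent of what `make bundle` might run; the actual flags, entry point, and data paths are defined in the Makefile and `bundle.py`, so treat the details here (including the `models` directory name) as assumptions.

```python
# Hypothetical equivalent of the bundling step; the real options are
# defined in the project's Makefile and bundle.py.
import PyInstaller.__main__

PyInstaller.__main__.run([
    "superprompter.py",        # entry point
    "--windowed",              # GUI app, no console window
    "--name", "SuperPrompter",
    # Ship pre-downloaded model files with the executable. Omit this
    # option for the "without models" variant, which downloads the
    # model at runtime instead. (The "models" path is an assumption.)
    "--add-data", "models:models",
])
```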
The following steps have been contributed; I am not able to test on Windows, so YMMV.
```
git clone https://github.com/sammcj/superprompter
cd SuperPrompter
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
pip3 install torch --index-url https://download.pytorch.org/whl/cu121
python superprompter.py
```
Contributions are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.
This project is licensed under the MIT License.
- The SuperPrompt-v1 model is developed by Roborovski and can be found at https://huggingface.co/roborovski/superprompt-v1.
- The application uses the Transformers library by Hugging Face for working with the SuperPrompt-v1 model.
- Acephaliax on Reddit for the Windows instructions.