Okos is a Telegram AI Assistant built with TypeScript, LangGraph, and multiple AI model providers. It maintains conversation context and provides summaries of interactions.
- Multiple AI model support (OpenAI, Google Gemini, Groq, Ollama)
- Conversation context management
- Automatic conversation summarization
- Support for multiple image inputs
- Internet search
- Redis for state persistence
- Docker support for both local and cloud deployments
- Node.js 20+ (for development only)
- Docker and Docker Compose (for containerized deployment)
- Telegram Bot Token from BotFather
- API keys for chosen AI providers
- Redis server
- Ollama with a Llama model installed (if using the Ollama provider)
Prebuilt Docker images are available at `ghcr.io/johnnybui/okos` (platforms: amd64 and arm64).
- Clone the repository
- Install dependencies:
yarn install
- Copy the example environment file:
# For local development
cp .env.example .env
# For Docker deployment
cp .env.docker.example .env.docker
- Configure environment variables in `.env` or `.env.docker`:
  - `TELEGRAM_BOT_TOKEN`: Your Telegram bot token
  - `MODEL_PROVIDER`: Choose from 'ollama', 'google', 'groq', or 'openai'
  - Provider-specific API keys and model names
  - Redis URL
  - (Optional) LangSmith credentials for monitoring
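As a sketch, a minimal `.env` for the OpenAI provider might look like the following (all values are placeholders; only variables documented in this README are used):

```shell
# Telegram bot token from BotFather (placeholder)
TELEGRAM_BOT_TOKEN=123456:ABC-your-token-here

# AI provider: 'ollama', 'google', 'groq', or 'openai'
MODEL_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL_NAME=gpt-4o-mini

# Redis connection URL
REDIS_URL=redis://localhost:6379
```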
Development mode with hot reload:
yarn dev
Production mode:
yarn build
yarn start
You can deploy using one of two options:
For local LLM inference:
- Build containers (optional):
  Use the command below to build the containers. Alternatively, to use a prebuilt image, edit the `docker-compose` file, replacing the `build: .` line with `image: ghcr.io/johnnybui/okos`.
  Run the build:
  yarn build:ollama
- Start Services:
yarn up:ollama
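To illustrate the prebuilt-image swap described above, the relevant part of the `docker-compose` file would change roughly like this (the service name and surrounding keys are assumptions; adapt them to the actual compose file):

```yaml
services:
  okos:
    # Replace this line...
    # build: .
    # ...with the prebuilt image:
    image: ghcr.io/johnnybui/okos
    # Assumed: environment is loaded from the Docker env file
    env_file:
      - .env.docker
```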
For cloud-based AI providers (OpenAI, Google, Groq):
- Build containers (optional):
  Similar to local deployment, replace `build: .` in the `docker-compose` file with the prebuilt image if desired: `image: ghcr.io/johnnybui/okos`.
  Run the build:
  yarn build:cloud
- Start Services:
yarn up:cloud
- `TELEGRAM_BOT_TOKEN`: Telegram bot token
- `MODEL_PROVIDER`: AI model provider ('ollama', 'google', 'groq', or 'openai')
- `SEARCH_PROVIDER`: Search provider ('tavily' or 'brave')
- `TAVILY_API_KEY`: Tavily API key for internet search
- `BRAVE_SEARCH_API_KEY`: Brave Search API key for internet search
- `REDIS_URL`: Redis connection URL
- OpenAI: `OPENAI_API_KEY`, `OPENAI_MODEL_NAME` (default: gpt-4o-mini)
- Google: `GOOGLE_API_KEY`, `GOOGLE_MODEL_NAME` (default: gemini-1.5-flash)
- Groq: `GROQ_API_KEY`, `GROQ_MODEL_NAME` (default: llama-3.3-70b-versatile)
- Ollama: `OLLAMA_API_URL`, `OLLAMA_MODEL_NAME` (default: llama3.2)
- `LANGCHAIN_TRACING_V2`: Enable LangSmith tracing
- `LANGCHAIN_ENDPOINT`: LangSmith endpoint
- `LANGCHAIN_API_KEY`: LangSmith API key
- `LANGCHAIN_PROJECT`: LangSmith project name
MIT