
Releases: sigoden/aichat

v0.23.0-rc1

20 Oct 13:46
Pre-release

Breaking Changes

Respect XDG_CONFIG_HOME for placing config dir
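A minimal shell sketch of the new lookup; the `aichat` subdirectory name is an assumption based on the project name, not confirmed by these notes:

```shell
# Hedged sketch: when XDG_CONFIG_HOME is set, the config dir is derived from it.
export XDG_CONFIG_HOME="$HOME/.config"
config_dir="${XDG_CONFIG_HOME}/aichat"
echo "$config_dir"
```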

Drop replicate client and remove support for octoai

REPL Changes

- .clear messages          Erase messages in the current session
+ .empty session           Erase messages in the current session
+ .compress session        Compress messages in the current session

CLI Changes

+ --empty-session          Ensure the session is empty

New Features

  • add batch_size to RAG yaml (#876)
  • add retry logic to embedding/rerank api calls (#879)
  • add AICHAT_EMBEDDINGS_RETRY_LIMIT (#882)
  • respect XDG_CONFIG_HOME for placing config dir (#889)
  • change prompt for choosing shell command actions (#898)
  • abandon replicate client (#900)
  • remove support for octoai (#901)
  • when saving input to message.md, use file paths instead of file contents (#905)
  • add .compress session REPL command (#907)
  • prelude supports : (#913)
  • session persists role name (#914)
  • webui supports pasting images (#921)
  • add CLI option --empty-session (#922)
  • rename .clear messages to .empty session (#923)
  • add shell action copy (#926)

Bug Fixes

  • allow reading from special files (device, fifo, etc) (#886)
  • unexpected REPL without tty (#911)
  • prelude works only if the state is empty (#920)
  • unexpected error while piping to shell execution on macOS (#930)

v0.22.0

18 Sep 00:21
029058c

RAG Changes

Store RAG in YAML format instead of bin format

We used to store RAG data in bin format at <aichat-config-dir>/rags/<name>.bin. However, the bin format has various drawbacks, so RAGs are now stored in YAML format.

All RAGs in bin format will be ignored. Please recreate them in YAML format.

Support for RAG-scoped top_k and reranker_model options

Now, users can set the top_k and reranker_model parameters individually for each RAG.

.set rag_top_k 5
.set rag_reranker_model cohere:rerank-english-v3.0

New REPL Commands

.delete                  Delete roles/sessions/RAGs/agents
.save agent-config       Save the current agent config to file
.sources rag             View the RAG sources in the last query

New Features

  • add config serve_addr & env $SERVE_ADDR for specifying serve addr (#839)
  • better html to markdown converter (#840)
  • add role %create-prompt% (#843)
  • tolerate failure to load some rag files (#846)
  • support RAG-scoped rag_top_k and rag_reranker_model (#847)
  • save rag in YAML instead of bin (#848)
  • chat-completions api supports tools (#850)
  • support rerank api (#851)
  • abandon config rag_min_score_rerank (#852)
  • add .delete repl command (#862)
  • do not delete the existing role/session when saving with a new name (#863)
  • specify shell via $AICHAT_SHELL (#866)
  • role/session/agent should not inherit the global use_tools (#868)
  • add .save agent-config repl command (#870)
  • add .sources rag repl command (#871)
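For the shell-selection feature above (#866), a hedged sketch; /bin/bash is just an example value:

```shell
# AICHAT_SHELL selects the shell aichat uses when executing commands (#866).
export AICHAT_SHELL="/bin/bash"
echo "$AICHAT_SHELL"
```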

Bug Fixes

  • render stream failed due to read cursor position timeout (#835)

v0.21.1

04 Sep 00:03

What's Changed

  • fix: ':' cannot be used as a separator for role arguments #830
  • feat: add siliconflow client #831

Breaking Changes in v0.21.0

Replace roles.yaml with roles/<name>.md (see #804)

Migrate ollama/qianwen/cloudflare clients to openai-compatible

clients:

-  - type: ollama
-    api_base: http://localhost:11434
+  - type: openai-compatible
+    name: ollama
+    api_base: http://localhost:11434/v1

-  - type: qianwen
+  - type: openai-compatible
+    name: qianwen

-  - type: cloudflare
-    account_id: xxx
-    api_base: https://api.cloudflare.com/client/v4
+  - type: openai-compatible
+    name: cloudflare
+    api_base: https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/v1

v0.21.0

03 Sep 09:04
d57f114

Breaking Changes

Replace roles.yaml with roles/<name>.md (see #804)

Migrate ollama/qianwen/cloudflare clients to openai-compatible

clients:

-  - type: ollama
-    api_base: http://localhost:11434
+  - type: openai-compatible
+    name: ollama
+    api_base: http://localhost:11434/v1

-  - type: qianwen
+  - type: openai-compatible
+    name: qianwen

-  - type: cloudflare
-    account_id: xxx
-    api_base: https://api.cloudflare.com/client/v4
+  - type: openai-compatible
+    name: cloudflare
+    api_base: https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/v1

Clients Changes

  • migrate ollama to openai-compatible
  • migrate qianwen to openai-compatible
  • migrate cloudflare to openai-compatible
  • add github
  • add ai21
  • add huggingface

New Features

  • support builtin website crawling (recursive_url) (#786)
  • don't check the model's support for function calls (#791)
  • enable custom api_base for most clients (#793)
  • support github client (#798)
  • support ai21 client (#800)
  • replace roles.yaml with roles/<name>.md (#810)
  • save temp session with temp-<timestamp> if save_session: true (#811)
  • webui uses querystring for settings (#814)
  • webui support RAG (#815)
  • migrate ollama/qianwen clients to openai-compatible (#816)
  • migrate cloudflare client to openai-compatible (#821)
  • add huggingface client (#822)
  • use dynamic batch size for embedding (#826)

Bug Fixes

  • incorrect function call handling with session in non-REPL (#777)
  • claude fails to run tools with zero arguments (#780)
  • invalid model error while switching roles if the model_id is the same as the current one (#788)
  • incomplete stream response in proxy LLM api (#796)

v0.20.0

02 Aug 10:03
514a368

Patch Client API

AIChat supports patching the API request URL, headers, and body.

For example, we can patch claude:claude-3-5-sonnet-20240620 to use the beta 8192 max output tokens.

clients:
  - type: claude
    ...
    patch:                                # Patch api request
      chat_completions:                   # Api type, one of chat_completions, embeddings, and rerank
        'claude-3-5-sonnet-20240620':     # The regex to match model names, e.g. '.*' 'gpt-4o' 'gpt-4o|gpt-4-.*'
          headers:
            anthropic-beta: max-tokens-3-5-sonnet-2024-07-15
          body:
            max_tokens: 8192

More flexible using tools

AIChat introduces the use_tools configuration to manage which tools are included. This configuration works across global, role, session, and agent levels.
AIChat also introduces mapping_tools for managing aliases for a tool or toolset.

mapping_tools:
  fs: 'fs_cat,fs_ls,fs_mkdir,fs_rm,fs_write'
use_tools: 'execute_command,fs'

Configuration Changes

- buffer_editor: null
+ editor: null

- dangerously_functions_filter: null
- agents:
-   - name: todo
-     ...

The tool itself determines whether an operation is dangerous and whether to ask for confirmation, so dangerously_functions_filter is unnecessary.

Each AI agent has its own config.yaml file, so there is no need for a central agents configuration.

Environment Variable Changes

  • AIChat supports env file (<aichat-config-dir>/.env) for managing environment variables.

  • All config items have related environment variables to override their values.

    For example, we can use AICHAT_MODEL to override the default LLM and AICHAT_LIGHT_THEME to switch to light theme.

  • AIChat supports the env var AICHAT_PATCH_{client}_CHAT_COMPLETIONS for patching the chat completions API request URL, headers and body.

    For example, set AICHAT_PATCH_OPENAI_CHAT_COMPLETIONS='{"gpt-4o":{"body":{"seed":666,"temperature":0}}}' to make gpt-4o more deterministic.
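As a hedged sketch of the override convention, using the two variables named above:

```shell
# Config keys map to AICHAT_-prefixed environment variables.
export AICHAT_MODEL="openai:gpt-4o"   # overrides the default LLM
export AICHAT_LIGHT_THEME="true"      # switches to the light theme
echo "$AICHAT_MODEL"
```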

Client Changes

  • The vertexai client supports claude/mistral models; the vertexai-claude client was abandoned.
  • The bedrock client switched to the converse API and supports llama3.1/mistral-large-v2/cohere.command-r models.
  • The rag-dedicated client was abandoned; use the openai-compatible client instead.

CLI Changes

-  -w, --wrap <WRAP>          Control text wrapping (no, auto, <max-width>)
-  -H, --no-highlight         Turn off syntax highlighting
-      --light-theme          Use light theme

Use environment variables AICHAT_WRAP, AICHAT_HIGHLIGHT and AICHAT_LIGHT_THEME instead.
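A hedged sketch of the flag-to-env-var migration (the example values are illustrative):

```shell
# Each removed CLI flag has an environment-variable replacement:
export AICHAT_WRAP="auto"          # was: -w, --wrap auto
export AICHAT_HIGHLIGHT="false"    # was: -H, --no-highlight
export AICHAT_LIGHT_THEME="true"   # was: --light-theme
echo "$AICHAT_WRAP"
```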

New REPL Commands

.variable <name> <value>
.set stream false
.set use_tools <tools>

New Features

  • load env vars from file (#685)
  • enhanced flexibility for using tools (#688)
  • agent can reuse tools (#690)
  • support agent variables (#692)
  • --file/.file can load dirs (#693)
  • adjust the way of obtaining function call results (#695)
  • webui supports text to speech for messages (#712)
  • enhance logger (#731)
  • move agent config to separate file (#741)
  • webui add autofocus to chat-input in textarea (#742)
  • merge vertexai-claude with vertexai (#745)
  • vertexai support mistral models (#746)
  • ollama support tools and new embeddings api (#748)
  • all config fields have related environment variables (#751)
  • bedrock client switch to converse api and support cohere models (#747)
  • support patching request url, headers and body (#756)
  • abandon rag_dedicated client and improve (#757)
  • abandon cli options --wrap, --no-highlight and --light-theme (#758)
  • add config.stream and .set stream repl command (#759)
  • export agent variable as LLM_AGENT_VAR_* (#766)
  • add agent-scoped agent_prelude config (#770)
  • rename config.buffer_editor to config.editor (#773)

Bug Fixes

  • .starter tab completion (#709)
  • webui input-panel exceeds viewport on mobile (#714)
  • problem with input token limit (#737)
  • invalid tool_calls of qianwen client (#740)
  • unable to rebuild agent rag (#763)

v0.19.0

03 Jul 23:21
4edf14f
Compare
Choose a tag to compare

Support RAG

Seamlessly integrates document interactions into your chat experience.

[screenshot: aichat-rag]

Support AI Agent

AI Agent = Prompt (Role) + Tools (Function Calling) + Knowledge (RAG). It's similar to OpenAI's GPTs.

[screenshot: aichat-agent]

New Platforms

  • lingyiwanwu(01ai)
  • voyageai
  • jina

New Models

  • claude:claude-3-5-sonnet-20240620
  • vertexai:gemini-1.5-pro-001
  • vertexai:gemini-1.5-flash-001
  • vertexai-claude:claude-3-5-sonnet@20240620
  • bedrock:anthropic.claude-3-5-sonnet-20240620-v1:0
  • zhipuai:glm-4-0520
  • lingyiwanwu:yi-large*
  • lingyiwanwu:yi-medium*
  • lingyiwanwu:yi-spark

All embedding/reranker models are ignored

New Configuration

repl_prelude: null               # Overrides the `prelude` setting specifically for conversations started in REPL
agent_prelude: null              # Set a session to use when starting an agent (e.g. temp, default)

# Regex for selecting dangerous functions
# User confirmation is required when executing these functions
# e.g. 'execute_command|execute_js_code' 'execute_.*'
dangerously_functions_filter: null
# Per-Agent configuration
agents:
  - name: todo-sh
    model: null
    temperature: null
    top_p: null
    dangerously_functions_filter: null

# Define document loaders to control how RAG and `.file`/`--file` load files of specific formats.
document_loaders:
  # You can add custom loaders using the following syntax:
  #   <file-extension>: <command-to-load-the-file>
  # Note: Use `$1` for input file and `$2` for output file. If `$2` is omitted, use stdout as output.
  pdf: 'pdftotext $1 -'                         # Load .pdf file, see https://poppler.freedesktop.org
  docx: 'pandoc --to plain $1'                  # Load .docx file
  # xlsx: 'ssconvert $1 $2'                     # Load .xlsx file
  # html: 'pandoc --to plain $1'                # Load .html file
  recursive_url: 'rag-crawler $1 $2'            # Load websites, see https://github.com/sigoden/rag-crawler

# ---- RAG ----
rag_embedding_model: null         # Specifies the embedding model to use
rag_reranker_model: null          # Specifies the rerank model to use
rag_top_k: 4                      # Specifies the number of documents to retrieve
rag_chunk_size: null              # Specifies the chunk size
rag_chunk_overlap: null           # Specifies the chunk overlap
rag_min_score_vector_search: 0    # Specifies the minimum relevance score for vector-based searching
rag_min_score_keyword_search: 0   # Specifies the minimum relevance score for keyword-based searching
rag_min_score_rerank: 0           # Specifies the minimum relevance score for reranking
rag_template: ...

clients:
  - name: localai
    models:
      - name: xxxx                                  # Embedding model
        type: embedding
        max_input_tokens: 2048
        default_chunk_size: 2000                        
        max_batch_size: 100
      - name: xxxx                                  # Reranker model
        type: reranker 
        max_input_tokens: 2048

New REPL Commands

.edit session            Edit the current session with an editor

.rag                     Init or use the RAG
.info rag                View RAG info
.rebuild rag             Rebuild the RAG to sync document changes
.exit rag                Leave the RAG

.agent                   Use an agent
.info agent              View agent info
.starter                 Use the conversation starter
.exit agent              Leave the agent

.continue                Continue the response
.regenerate              Regenerate the last response

New CLI Options

  -a, --agent <AGENT>        Start an agent
  -R, --rag <RAG>            Start a RAG
      --list-agents          List all agents
      --list-rags            List all RAGs

Break Changing

Some client fields have changed

clients:
  - name: myclient
    patches: 
      <regex>:
-       request_body:
+       chat_completions_body:           
    models:
    - name: mymodel
      max_output_tokens: 4096
-     pass_max_tokens: true
+     require_max_tokens: true

The way to identify dangerous functions has changed

Previously, we treated function names starting with may_ as execute type (dangerous). This approach required modifying function names, which was inflexible.

Now it is configurable: in config.yaml, you can define which functions are considered dangerous and require user confirmation.

dangerously_functions_filter: 'execute_.*'

New Features

  • support RAG (#560)
  • customize more file/dir paths with environment variables (#565)
  • support agent (#579)
  • add config dangerously_functions (#582)
  • add config repl_prelude and agent_prelude (#584)
  • add .starter repl command (#594)
  • add .edit session repl command (#606)
  • abandon auto_copy (#607)
  • add .continue repl command (#608)
  • add .regenerate repl command (#610)
  • support lingyiwanwu client (#613)
  • qianwen support function calling (#616)
  • support rerank (#620)
  • cloudflare support embeddings (#623)
  • serve embeddings api (#624)
  • ernie support embeddings and rerank (#630)
  • ernie support function calling (#631)
  • support rag-dedicated clients (jina and voyageai) (#645)
  • custom rag document loaders (#650)
  • rag load websites (#655)
  • implement native rag url loader (#660)
  • .file/--file support URLs (#665)
  • support .rebuild rag repl command (#672)

Bug Fixes

  • infinite loop of function calls on poor LLM (#585)
  • cohere tool use (#605)
  • gemini with functions that have empty parameters (#666)

v0.18.0

01 Jun 02:55
38797e3

Breaking Changes

Add custom request parameters via patch instead of extra_fields

We used to add request parameters to models using extra_fields, but this approach lacked flexibility. We've now switched to a patch mechanism, which allows for customizing request parameters for one or more models.

The following example enables web search functionality for all Cohere models.

  - type: cohere
    patches:
      ".*":
        request_body:
          connectors:
            - id: web-search

Remove all tokenizers

Different platforms may utilize varying tokenizers for their models, even across versions of the same model. For example, gpt-4o uses o200k_base while gpt-4-turbo and gpt-3.5-turbo employ cl100k_base.

AIChat supports 100+ models; it's impossible to support every tokenizer, so we are removing them entirely and switching to an estimation algorithm.
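As an illustration only (aichat's actual estimation algorithm may differ), a common heuristic approximates one token per four characters:

```shell
# Hypothetical chars/4 token estimate; not aichat's exact algorithm.
text="Hello, world! This is a rough token estimate."
estimated_tokens=$(( ${#text} / 4 ))
echo "$estimated_tokens"
```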

Function Calling

Function calling supercharges LLMs by connecting them to external tools and data sources. This unlocks a world of possibilities, enabling LLMs to go beyond their core capabilities and tackle a wider range of tasks.

We have created a new repository to help you make the most of this feature: https://github.com/sigoden/llm-functions

Here's a glimpse of what function calling can do for you:

[screenshot: function calling demo]

New Models

  • gemini:gemini-1.5-flash-latest
  • gemini-1.5-flash-preview-0514
  • qianwen:qwen-long

Features

  • Allow binding model to the role (#505)
  • Remove tiktoken (#506)
  • Support function calling (#514)
  • Webui adds toolbox (copy-btn/regenerate-btn) to messages (#521)
  • Webui operates independently from aichat (#527)
  • Allow patching req body with client config (#534)

Bug Fixes

  • No builtin roles if no roles.yaml (#509)
  • Unexpectedly entering REPL with pipe-in but no text args (#512)
  • Panic when checking api error (#520)
  • Webui issue with images (#523)
  • Webui message body sometimes does not autoscroll to bottom (#525)
  • JSON stream parser and refine client modules (#538)
  • Bedrock issues (#544)

Full Changelog: v0.17.0...v0.18.0

v0.17.0

13 May 22:43
154c1e0

Break Changing

  • always use stream unless set --no-stream explicitly (#415)
  • vertexai config changed: replace api_base with project_id/location

Self-Hosted Server

AIChat comes with a built-in lightweight web server:

  • Provide access to all LLMs using OpenAI format API
  • Host LLM playground/arena web applications
$ aichat --serve
Chat Completions API: http://127.0.0.1:8000/v1/chat/completions
LLM Playground:       http://127.0.0.1:8000/playground
LLM ARENA:            http://127.0.0.1:8000/arena

New Clients

bedrock, vertexai-claude, cloudflare, groq, perplexity, replicate, deepseek, zhipuai, anyscale, deepinfra, fireworks, openrouter, octoai, together

New REPL Command

.prompt                  Create a temporary role using a prompt
.set max_output_tokens   Set the max output tokens

> .prompt you are a js console

%%> Date.now()
1658333431437

> .set max_output_tokens 4096

New CLI Options

--serve [<ADDRESS>]    Serve the LLM API and WebAPP
--prompt <PROMPT>      Use the system prompt

New Configuration Fields

# Set default top-p parameter
top_p: null
# Command that will be used to edit the current line buffer with ctrl+o
# if unset fallback to $EDITOR and $VISUAL
buffer_editor: null

New Features

  • add completion scripts (#411)
  • shell commands support revision
  • add .prompt repl command (#420)
  • customize model's max_output_tokens (#428)
  • builtin models can be overwritten by models config (#429)
  • serve all LLMs as OpenAI-compatible API (#431)
  • support customizing top_p parameter (#434)
  • run without config file by setting AICHAT_CLIENT (#452)
  • add --prompt option (#454)
  • non-streaming returns tokens usage (#458)
  • .model repl completions show max tokens and price (#462)

v0.16.0

11 Apr 00:41
a3f63a5

New Models

  • openai:gpt-4-turbo
  • gemini:gemini-1.0-pro-latest (replace gemini:gemini-pro)
  • gemini:gemini-1.0-pro-vision-latest (replace gemini:gemini-pro-vision)
  • gemini:gemini-1.5-pro-latest
  • vertexai:gemini-1.5-pro-preview-0409
  • cohere:command-r
  • cohere:command-r-plus

New Config

ctrlc_exit: false                # Whether to exit REPL when Ctrl+C is pressed

New Features

  • use ctrl+enter to newline in REPL (#394)
  • support cohere (#397)
  • -f/--file take one value and do not enter REPL (#399)

Full Changelog: v0.15.0...v0.16.0

v0.15.0

07 Apr 14:12
78d6e1b

Breaking Changes

Rename client localai to openai-compatible (#373)

clients:
-   type: localai
+   type: openai-compatible
+   name: localai

Gemini/VertexAI clients add block_threshold configuration (#375)

block_threshold: BLOCK_ONLY_HIGH # Optional field

New Models

  • claude:claude-3-haiku-20240307
  • ernie:ernie-4.0-8k
  • ernie:ernie-3.5-8k
  • ernie:ernie-3.5-4k
  • ernie:ernie-speed-8k
  • ernie:ernie-speed-128k
  • ernie:ernie-lite-8k
  • ernie:ernie-tiny-8k
  • moonshot:moonshot-v1-8k
  • moonshot:moonshot-v1-32k
  • moonshot:moonshot-v1-128k

New Config

save_session: null              # Whether to save the session; if null, ask

CLI Changes

New REPL Commands

.save session [name]                  
.set save_session <null|true|false>   
.role <name> <text...>          # Works in session

New CLI Options

--save-session                  Whether to save the session

Bug Fixes

  • erratic behaviour when using temp role in a session (#347)
  • color on non-truecolor terminal (#363)
  • not dirty session when updating properties (#379)
  • incorrectly rendering text containing tabs (#384)

Full Changelog: v0.14.0...v0.15.0