Releases: letta-ai/letta
v0.3.20
💪 Performance improvements for `gpt-4o` and `gpt-4o-mini`
Improved compatibility with the `gpt-4o` and `gpt-4o-mini` models: we updated the prompt format for weaker models so that the inner thoughts of the agents are properly generated (previously, these models could only generate `None` for inner thoughts).
🐛 Bugfixes for the CLI
Fixed issues with creating, listing and deleting humans and personas.
What's Changed
- fix: fix bug in migration script memory for 0.3.18 by @sarahwooders in #1548
- fix: create source by @sarahwooders in #1553
- feat: patch missing inner thoughts on new openai models by @cpacker in #1562
- fix: fix CLI commands by migrating to Python client by @sarahwooders in #1563
- feat: add character limits for persona/human to /config response by @goetzrobin in #1546
- chore: bump version 0.3.20 by @sarahwooders in #1568
Full Changelog: 0.3.19...0.3.20
v0.3.19
Support for custom memory classes
MemGPT now supports customizable memory classes by extending the `BaseMemory` class. This allows developers both to define custom memory fields (instead of just human/persona) and to define custom memory editing functions (rather than `core_memory_append`/`core_memory_replace`). Note that custom memory editing functions need properly formatted docstrings so that they can be added as custom tools to the agent.
Default `ChatMemory` class
Agents default to using the `ChatMemory` class, which provides the original human/persona memory fields and memory editing functions in MemGPT:
```python
from typing import Optional

# MemoryModule is assumed to be importable alongside BaseMemory
from memgpt.memory import BaseMemory, MemoryModule


class ChatMemory(BaseMemory):

    def __init__(self, persona: str, human: str, limit: int = 2000):
        self.memory = {
            "persona": MemoryModule(name="persona", value=persona, limit=limit),
            "human": MemoryModule(name="human", value=human, limit=limit),
        }

    def core_memory_append(self, name: str, content: str) -> Optional[str]:
        """
        Append to the contents of core memory.

        Args:
            name (str): Section of the memory to be edited (persona or human).
            content (str): Content to write to the memory. All unicode (including emojis) are supported.

        Returns:
            Optional[str]: None is always returned as this function does not produce a response.
        """
        self.memory[name].value += "\n" + content
        return None

    def core_memory_replace(self, name: str, old_content: str, new_content: str) -> Optional[str]:
        """
        Replace the contents of core memory. To delete memories, use an empty string for new_content.

        Args:
            name (str): Section of the memory to be edited (persona or human).
            old_content (str): String to replace. Must be an exact match.
            new_content (str): Content to write to the memory. All unicode (including emojis) are supported.

        Returns:
            Optional[str]: None is always returned as this function does not produce a response.
        """
        self.memory[name].value = self.memory[name].value.replace(old_content, new_content)
        return None
```
Improved agent creation interface
Custom tools and memory classes can now both be specified in the agent creation API:
```python
from memgpt.memory import ChatMemory

# create agent with default tools/memory
basic_agent = client.create_agent()

# create agent with custom tools and memory
tool = client.create_tool(...)
# CustomMemory is a user-defined BaseMemory subclass (see the sketch below)
memory = CustomMemory(human="I am Sarah", persona="I am Sam", organization="MemGPT")
custom_agent = client.create_agent(
    name="my_agent", memory=memory, tools=[tool.name]
)
```
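The `CustomMemory` class in the example above is user-defined and not shipped with MemGPT. A minimal, hypothetical sketch of such a class, assuming it extends `BaseMemory` with an extra `organization` field (the `MemoryModule` import location is also an assumption):

```python
from typing import Optional

from memgpt.memory import BaseMemory, MemoryModule  # MemoryModule import path assumed


class CustomMemory(BaseMemory):
    """Hypothetical memory class with an extra 'organization' section."""

    def __init__(self, persona: str, human: str, organization: str, limit: int = 2000):
        self.memory = {
            "persona": MemoryModule(name="persona", value=persona, limit=limit),
            "human": MemoryModule(name="human", value=human, limit=limit),
            # custom field beyond the default human/persona sections
            "organization": MemoryModule(name="organization", value=organization, limit=limit),
        }

    def organization_memory_replace(self, old_content: str, new_content: str) -> Optional[str]:
        """
        Replace the contents of the organization section of core memory.

        Args:
            old_content (str): String to replace. Must be an exact match.
            new_content (str): Content to write to the memory.

        Returns:
            Optional[str]: None is always returned as this function does not produce a response.
        """
        self.memory["organization"].value = self.memory["organization"].value.replace(old_content, new_content)
        return None
```

As described above, the docstring is required so that `organization_memory_replace` can be registered as a custom tool for the agent.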
The memory editing functions from the extended `BaseMemory` class are automatically added as tools for the agent to use.
Deprecation of Presets
Since specification of tools, memory, and the system prompt is now moving into the agent creation interface, we are no longer supporting presets as a mechanism to create agents.
Migration Script
We provide a migration script for migrating agents from v0.3.18 to this version (due to changes in the `AgentState` schema).
What's Changed
- chore: bump version to 0.3.18 by @sarahwooders in #1483
- fix: fix main.yml test by @sarahwooders in #1484
- feat: move tool functions to user by @sarahwooders in #1487
- feat: migration script for 0.3.17 by @sarahwooders in #1489
- feat: refactor `CoreMemory` to support generalized memory fields and memory editing functions by @sarahwooders in #1479
- fix: use timestamp passed by request for user message created date by @goetzrobin in #1503
- fix: patch type error by @cpacker in #1506
- fix: dos2unix text files by @cpacker in #1507
- chore: `.gitattributes` by @cpacker in #1511
- fix: bug fixing for #1455 - not able to serialize json for Azure by @ljhskyso in #1495
- chore: update dev portal by @cpacker in #1514
- fix: remove duplicate `send_message` functions by @sarahwooders in #1519
- fix: use params in get_all_users(GET) endpoint by @yuleisheng in #1518
- fix: update docs to say 'stream_steps = True' by @yuleisheng in #1531
- fix: Fixed issue #1523 by @Vinayak21574 in #1526
- fix: fix example scripts by @sarahwooders in #1536
- fix: add memory tools from dev portal by @sarahwooders in #1540
- feat: migration script for version 0.3.18 by @sarahwooders in #1541
- chore: bump version 0.3.19 by @sarahwooders in #1542
New Contributors
- @ljhskyso made their first contribution in #1495
- @yuleisheng made their first contribution in #1518
- @Vinayak21574 made their first contribution in #1526
Full Changelog: 0.3.18...0.3.19
v0.3.18
This release introduces tool creation from inside Python scripts, returning usage statistics, and many bug fixes.
🔧 Tool creation in the Python Client
We added support for directly creating tools in Python:
```python
def print_tool(message: str):
    """
    Print a message.

    Args:
        message (str): The message to print.

    Returns:
        str: The message that was printed.
    """
    print(message)
    return message

tool = client.create_tool(print_tool, tags=["extras"])
agent_state = client.create_agent(tools=[tool.name])
```
📊 Usage Statistics
Sending a message to an agent now also returns usage statistics for computing cost metrics:
```python
from pydantic import BaseModel

class MemGPTUsageStatistics(BaseModel):
    completion_tokens: int
    prompt_tokens: int
    total_tokens: int
    step_count: int
```
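As a rough usage sketch (the `send_message` keyword arguments and the `usage` attribute on the response are assumptions that may differ between client versions), the statistics could be read like this:

```python
# Hypothetical example: send a message and inspect the returned usage statistics.
# `client` and `agent_state` are from the tool-creation example above.
response = client.send_message(
    agent_id=agent_state.id,
    role="user",
    message="Remember that my favorite color is blue.",
)

usage = response.usage  # assumed to be a MemGPTUsageStatistics instance
print(f"prompt tokens:     {usage.prompt_tokens}")
print(f"completion tokens: {usage.completion_tokens}")
print(f"total tokens:      {usage.total_tokens}")
print(f"agent steps:       {usage.step_count}")
```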
What's Changed
- feat: Qdrant storage connector by @Anush008 in #1023
- fix: remove server locking on agents by @sarahwooders in #1442
- fix: various fixes to python client and add tutorial notebooks by @sarahwooders in #1377
- feat: cursor pagination of get_all_users in /admin/users route by @ajanitshimanga in #1441
- fix: allow concurrent processing for `async def send_message` function by @sarahwooders in #1445
- fix: update `rdme-openapi.yml` to correct Python + poetry version by @sarahwooders in #1446
- feat: Migrating CLI to run on MemGPT Client for `memgpt [list/add/delete]` (#1428) by @sarahwooders in #1449
- fix: update `Dockerfile` to Python 3.12 by @sarahwooders in #1456
- fix: debug logs in server (#1452) by @sarahwooders in #1457
- feat: add tools from the Python client by @sarahwooders in #1463
- fix: simple_summary_wrapper function_call KeyError by @djkazic in #1265
- fix: add `ensure_ascii` to missing `json.dumps` calls by @cpacker in #1466
- feat: add more tool calling support to `LocalClient` by @sarahwooders in #1465
- fix: fix ugly dev tool print in cli by @cpacker in #1469
- fix: patch `/pop`, `/retry`, and `/rethink` by @cpacker in #1471
- fix: check tool call type by @sarahwooders in #1451
- fix: drop print from #1465 by @cpacker in #1472
- feat: dev portal fixes for server changes by @sarahwooders in #1474
- fix: more server patches for dev portal by @sarahwooders in #1475
- feat: add token streaming to the MemGPT API by @cpacker in #1280
- feat: include usage statistics in message response by @sarahwooders in #1482
New Contributors
- @Anush008 made their first contribution in #1023
- @ajanitshimanga made their first contribution in #1441
- @djkazic made their first contribution in #1265
Full Changelog: 0.3.17...0.3.18
v0.3.17
🦙 You can now use MemGPT with the Ollama embeddings endpoint!
What's Changed
- fix: Append encoding='utf-8' to open by @bear0330 in #1423
- fix: #1532 upload sources file error "I/O operation on closed file" by @scenaristeur in #1425
- fix: add back `memgpt/configs` folder by @sarahwooders in #1431
- feat: split up endpoint tests and remove OpenAI dependency for main pytest tests by @sarahwooders in #1432
- feat: Ollama embeddings api + Ollama tests by @KrishnaM251 @sarahwooders in #1433
- docs: update compat checklist by @cpacker in #1434
- chore: bump version 0.3.17 by @sarahwooders in #1435
New Contributors
- @bear0330 made their first contribution in #1423
- @KrishnaM251 made their first contribution in #1433
Full Changelog: 0.3.16...0.3.17
v0.3.16
🧿 Milvus integration: you can now use Milvus to back the MemGPT vector database! For more information, see: https://memgpt.readme.io/docs/storage#milvus
What's Changed
- docs: Update README.md by @KPCOFGS in #1396
- fix: creation of invalid tool in tool builder by @VigroX in #1402
- docs: Update README.md by @alexpdev in #1403
- feat(JSON Response): Enable JSON Response format for all Openai Calls… by @lenaxia in #1401
- fix: patch #1401 by @cpacker in #1406
- docs: Update python_client.md by @scenaristeur in #1413
- fix: get_keys_response is a list not a dict by @scenaristeur in #1412
- docs: update quickstart-server instructions by @MEllis-github in #1409
- feat: `resend` example by @cpacker in #1416
- fix: add missing attribution for #1416 by @cpacker in #1417
- feat: Milvus storage connector (#1198) by @sarahwooders in #1400
- chore: bump version by @cpacker in #1421
New Contributors
- @KPCOFGS made their first contribution in #1396
- @VigroX made their first contribution in #1402
- @alexpdev made their first contribution in #1403
- @lenaxia made their first contribution in #1401
- @scenaristeur made their first contribution in #1413
- @MEllis-github made their first contribution in #1409
Full Changelog: 0.3.15...0.3.16
v0.3.15
🦙 Llama 3 support and bugfixes
What's Changed
- fix: Update static_files.py by @madgrizzle in #1340
- fix: fix summarizer for tool_calls by @sarahwooders in #1350
- fix: patch typos in system prompt by @D-Octopus in #1348
- fix: update docker test by @cpacker in #1354
- docs: Update git clone link in README.md by @untilhamza in #1368
- docs: Update README.md by @ykamakazi in #1367
- fix: various breaking bugs with local LLM implementation and postgres docker. by @madgrizzle in #1355
- fix: patch #1355 by @cpacker in #1373
- feat: add more tool functionality for python client by @sarahwooders in #1361
- feat: update portal by @cpacker in #1376
- fix: make auth endpoint work with user API keys by @cpacker in #1385
- docs: update README: Twitter Button, Consolidate call-to-action, Reorganize Content by @WIND-D in #1387
- feat: Llama3 by @kir-gadjello in #1316
- docs: Update storage.md by @madgrizzle in #1359
- chore: bump version by @cpacker in #1390
New Contributors
- @madgrizzle made their first contribution in #1340
- @D-Octopus made their first contribution in #1348
- @ykamakazi made their first contribution in #1367
- @WIND-D made their first contribution in #1387
- @kir-gadjello made their first contribution in #1316
Full Changelog: 0.3.14...0.3.15
v0.3.14
🐜 Bug-fix release
What's Changed
- docs: add install instructions for docker by @cpacker in #1317
- chore: bump version by @cpacker in #1318
- chore: update pypi description by @sarahwooders in #1321
- fix: hardcoded version in server_config.yaml by @cpacker in #1323
- docs: update README.md by @eltociear in #1328
- fix: allow passing full postgres URI and only override config URI if env variables provided by @sarahwooders in #1327
- fix: modify quickstart config paths by @sarahwooders in #1329
- docs: update readme with service diagram + dev portal teaser by @cpacker in #1332
- fix: remove unnecessary openai print by @sarahwooders in #1333
- fix: cleanup stray prints by @cpacker in #1335
- fix: remove requirement to specify version in `~/.memgpt/config` by @sarahwooders in #1337
- chore: bump version to `0.3.14` + strip version from server yaml by @cpacker in #1334
- docs: Update README.md by @cpacker in #1338
Full Changelog: 0.3.13...0.3.14
v0.3.13
🖥️ MemGPT Dev Portal (alpha build)
Please note the dev portal is in alpha and this is not an official release!
This adds support for viewing the dev portal when the MemGPT service is running. You can view the dev portal at `memgpt.localhost` (if running with docker) or `localhost:8283` (if running with `memgpt server`).
Make sure you install MemGPT with `pip install pymemgpt` and run `memgpt quickstart [--backend openai]` or `memgpt configure` before running the server.
There are two options to deploy the server:
Option 1: Run with docker compose
- Install and run docker
- Clone the repo: `git clone git@github.com:cpacker/MemGPT.git`
- Run `docker compose up`
- Go to `memgpt.localhost` in the browser to view the developer portal
Option 2: Run with the CLI
- Run `memgpt server`
- Go to `localhost:8283` in the browser to view the developer portal
What's Changed
- fix: hardcode MemGPT version in `config/server_config.yaml` by @sarahwooders in #1292
- feat: Add personal assistant demo code from meetup by @cpacker in #1294
- chore: better database errors by @cpacker in #1299
- ci: update workflows (add `autoflake` and `isort`) by @cpacker in #1300
- fix: patch tests by @cpacker in #1304
- fix: patch `embedding_model` null issue in tests by @cpacker in #1305
- feat: update portal by @cpacker in #1306
- fix: refactor `create(..)` call to LLMs to not require `AgentState` by @sarahwooders in #1307
- feat: add testing for LLM + embedding endpoints by @sarahwooders in #1308
- docs: Documentation Typo in Storage URL by @sanegaming in #1298
- feat: code cleanup + make server password print green by @sarahwooders in #1312
- chore: fix README to reflect current project status by @sarahwooders in #1313
Full Changelog: 0.3.12...0.3.13
0.3.12
🐳 Cleaned up workflow for creating a MemGPT service with `docker compose up`:
- Reverse proxy added so you can open the dev portal at http://memgpt.localhost
- Docker development with `docker compose -f dev-compose.yaml up --build` (built from local code)
- Postgres data mounted to the `.pgdata` folder
- OpenAI keys passed to the server via environment variables (in `compose.yaml`)
🪲 Bugfixes for Groq API and server
What's Changed
- fix: Clean up and simplify docker entrypoint (#1235) by @norton120 in #1259
- fix: add DB prefill for default user, preset, humans, and persona for server by @sarahwooders in #1273
- feat: misc server updates by @cpacker in #1275
- feat: use background tasks for processing uploaded files to REST API by @sarahwooders in #1263
- fix: misc bugs by @cpacker in #1276
- chore: run autoflake + isort by @cpacker in #1279
- feat: disallow creation of tools with the same name by @sarahwooders in #1285
- feat: add workflow to build + test docker container by @sarahwooders in #1278
- fix: hardcoded stop tokens to patch Groq API's new 4 stop token limit for `/completions` by @cpacker in #1288
New Contributors
- @norton120 made their first contribution in #1235
Full Changelog: 0.3.11...0.3.12
0.3.11
🚰 We now support streaming in the CLI when using OpenAI (+ OpenAI proxy) endpoints! You can turn on streaming mode with `memgpt run --stream`.
What's Changed
- fix: remove default persona/human from `memgpt configure` and add functionality for modifying humans/presets more clearly by @sarahwooders in #1253
- fix: update `ChatCompletionResponse` to make `model` field optional by @sarahwooders in #1258
- fix: Fixed NameError: name 'attach' is not defined by @taddeusb90 in #1255
- fix: push/pull container from `memgpt/memgpt-server:latest` by @sarahwooders in #1267
- fix: remove message UTC validation temporarily to fix dev portal + add `-d` flag to `docker compose up` for tests by @sarahwooders in #1268
- chore: bump version by @sarahwooders in #1269
- feat: add streaming support for OpenAI-compatible endpoints by @cpacker in #1262
New Contributors
- @taddeusb90 made their first contribution in #1255
Full Changelog: 0.3.10...0.3.11