Releases: letta-ai/letta

v0.3.20

25 Jul 16:58
3504a02

💪 Performance improvements for gpt-4o and gpt-4o-mini

Improved compatibility with the gpt-4o and gpt-4o-mini models: we updated the prompt format for weaker models so that agents' inner thoughts are properly generated (previously, these models would only generate None for inner thoughts).

🐛 Bugfixes for the CLI

Fixed issues with creating, listing, and deleting humans and personas.

Full Changelog: 0.3.19...0.3.20

v0.3.19

14 Jul 23:48
5a30f7e

Support for custom memory classes

MemGPT now supports customizable memory classes created by extending the BaseMemory class. This allows developers both to define custom memory fields (instead of just human/persona) and to define custom memory editing functions (rather than core_memory_append/core_memory_replace). Note that custom memory editing functions will need properly formatted docstrings so that the functions can be added as custom tools to the agent.

Default ChatMemory class

Agents will default to using the ChatMemory class, which has the original human/persona fields and memory editing functions in MemGPT:

from typing import Optional

from memgpt.memory import BaseMemory, MemoryModule

class ChatMemory(BaseMemory):

    def __init__(self, persona: str, human: str, limit: int = 2000):
        self.memory = {
            "persona": MemoryModule(name="persona", value=persona, limit=limit),
            "human": MemoryModule(name="human", value=human, limit=limit),
        }

    def core_memory_append(self, name: str, content: str) -> Optional[str]:
        """
        Append to the contents of core memory.

        Args:
            name (str): Section of the memory to be edited (persona or human).
            content (str): Content to write to the memory. All unicode (including emojis) are supported.

        Returns:
            Optional[str]: None is always returned as this function does not produce a response.
        """
        self.memory[name].value += "\n" + content
        return None

    def core_memory_replace(self, name: str, old_content: str, new_content: str) -> Optional[str]:
        """
        Replace the contents of core memory. To delete memories, use an empty string for new_content.

        Args:
            name (str): Section of the memory to be edited (persona or human).
            old_content (str): String to replace. Must be an exact match.
            new_content (str): Content to write to the memory. All unicode (including emojis) are supported.

        Returns:
            Optional[str]: None is always returned as this function does not produce a response.
        """
        self.memory[name].value = self.memory[name].value.replace(old_content, new_content)
        return None

Improve agent creation interface

Custom tools and memory classes can now both be specified in the agent creation API:

from memgpt.memory import ChatMemory

# create agent with default tools/memory 
basic_agent = client.create_agent()

# create agent with custom tools and memory
tool = client.create_tool(...)
# CustomMemory is a user-defined BaseMemory subclass with an extra "organization" field
memory = CustomMemory(human="I am Sarah", persona="I am Sam", organization="MemGPT")
custom_agent = client.create_agent(
    name="my_agent", memory=memory, tools=[tool.name]
)

The memory editing functions from the extended BaseMemory class are automatically added as tools for the agent to use.
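The CustomMemory class used in the example above is not defined in these notes; the sketch below shows what such a BaseMemory subclass might look like, with an extra organization field and a matching editing function. The MemoryModule and BaseMemory classes here are simplified stand-ins so the snippet is self-contained; in MemGPT you would import the real classes instead.

```python
from typing import Dict, Optional


# Simplified stand-ins for memgpt.memory.MemoryModule / BaseMemory,
# included only so this sketch runs on its own.
class MemoryModule:
    def __init__(self, name: str, value: str, limit: int = 2000):
        self.name = name
        self.value = value
        self.limit = limit


class BaseMemory:
    def __init__(self):
        self.memory: Dict[str, MemoryModule] = {}


class CustomMemory(BaseMemory):
    """ChatMemory-style class with an extra 'organization' field."""

    def __init__(self, persona: str, human: str, organization: str, limit: int = 2000):
        self.memory = {
            "persona": MemoryModule(name="persona", value=persona, limit=limit),
            "human": MemoryModule(name="human", value=human, limit=limit),
            "organization": MemoryModule(name="organization", value=organization, limit=limit),
        }

    def organization_memory_replace(self, new_org: str) -> Optional[str]:
        """
        Replace the contents of the organization memory section.

        Args:
            new_org (str): New organization name to store.

        Returns:
            Optional[str]: None is always returned as this function does not produce a response.
        """
        self.memory["organization"].value = new_org
        return None


memory = CustomMemory(human="I am Sarah", persona="I am Sam", organization="MemGPT")
```

Because organization_memory_replace carries a properly formatted docstring, MemGPT can expose it to the agent as a tool alongside the default editing functions.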

Deprecation of Presets

Since specification of tools, memory, and the system prompt is now moving into the agent creation interface, we are no longer supporting presets as a mechanism to create agents.

Migration Script

We provide a migration script for migrating agents from v0.3.18 to this version (due to changes in the AgentState schema).

Full Changelog: 0.3.18...0.3.19

v0.3.18

27 Jun 03:54
8880792

This release introduces tool creation from inside Python scripts, returning usage statistics, and many bug fixes.

🔧 Tool creation in the Python Client

We added support for directly creating tools in Python:

def print_tool(message: str):
    """
    Print a message and return it.

    Args:
        message (str): The message to print.

    Returns:
        str: The message that was printed.
    """
    print(message)
    return message

tool = client.create_tool(print_tool, tags=["extras"])
agent_state = client.create_agent(tools=[tool.name])

📊 Usage Statistics

Sending a message to an agent now also returns usage statistics for computing cost metrics:

from pydantic import BaseModel

class MemGPTUsageStatistics(BaseModel):
    completion_tokens: int
    prompt_tokens: int
    total_tokens: int
    step_count: int
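As a sketch of how these fields can feed a cost estimate: multiply each token count by its per-token price. The dataclass below is a stand-in for the pydantic model so the snippet is dependency-free, and the prices are placeholders, not real model pricing.

```python
from dataclasses import dataclass


# Stand-in for MemGPTUsageStatistics (a pydantic model in MemGPT).
@dataclass
class UsageStats:
    completion_tokens: int
    prompt_tokens: int
    total_tokens: int
    step_count: int


def estimate_cost(usage: UsageStats,
                  prompt_price_per_1k: float,
                  completion_price_per_1k: float) -> float:
    """Estimate the dollar cost of a request from its token counts."""
    return (usage.prompt_tokens / 1000 * prompt_price_per_1k
            + usage.completion_tokens / 1000 * completion_price_per_1k)


usage = UsageStats(completion_tokens=150, prompt_tokens=2000,
                   total_tokens=2150, step_count=2)
# Placeholder prices: $0.005 / 1K prompt tokens, $0.015 / 1K completion tokens.
cost = estimate_cost(usage, 0.005, 0.015)
```

Note that step_count matters when an agent takes several internal steps per user message: the usage totals cover all steps, so per-step averages need dividing by step_count.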

Full Changelog: 0.3.17...0.3.18

v0.3.17

05 Jun 06:18
e1cbe64

🦙 You can now use MemGPT with the Ollama embeddings endpoint!

Full Changelog: 0.3.16...0.3.17

v0.3.16

26 May 23:05
ec894cd

🧿 Milvus integration: you can now use Milvus to back the MemGPT vector database! For more information, see: https://memgpt.readme.io/docs/storage#milvus

Full Changelog: 0.3.15...0.3.16

v0.3.15

16 May 22:56
c6325fe

🦙 Llama 3 support and bugfixes

Full Changelog: 0.3.14...0.3.15

v0.3.14

03 May 22:52
0a4adcb

🐜 Bug-fix release

Full Changelog: 0.3.13...0.3.14

v0.3.13

01 May 20:43
dfb4224

🖥️ MemGPT Dev Portal (alpha build)

Please note the dev portal is in alpha and this is not an official release!

This release adds support for viewing the dev portal while the MemGPT service is running. You can view the dev portal at memgpt.localhost (if running with Docker) or localhost:8283 (if running with memgpt server).

Make sure you install MemGPT with pip install pymemgpt and run memgpt quickstart [--backend openai] or memgpt configure before running the server.

There are two options to deploy the server:

Option 1: Run with docker compose

  1. Install and run docker
  2. Clone the repo: git clone git@github.com:cpacker/MemGPT.git
  3. Run docker compose up
  4. Go to memgpt.localhost in the browser to view the developer portal

Option 2: Run with the CLI:

  1. Run memgpt server
  2. Go to localhost:8283 in the browser to view the developer portal

Full Changelog: 0.3.12...0.3.13

0.3.12

23 Apr 04:42
274596c

🐳 Cleaned up workflow for creating a MemGPT service with docker compose up:

  • Reverse proxy added so you can open the dev portal at http://memgpt.localhost
  • Docker development with docker compose -f dev-compose.yaml up --build (built from local code)
  • Postgres data mounted to .pgdata folder
  • OpenAI keys passed to server via environment variables (in compose.yaml)

🪲 Bugfixes for Groq API and server

Full Changelog: 0.3.11...0.3.12

0.3.11

19 Apr 03:48
aeb4a94

🚰 We now support streaming in the CLI when using OpenAI (+ OpenAI proxy) endpoints! You can turn on streaming mode with memgpt run --stream

What's Changed

  • fix: remove default persona/human from memgpt configure and add functionality for modifying humans/presets more clearly by @sarahwooders in #1253
  • fix: update ChatCompletionResponse to make model field optional by @sarahwooders in #1258
  • fix: Fixed NameError: name 'attach' is not defined by @taddeusb90 in #1255
  • fix: push/pull container from memgpt/memgpt-server:latest by @sarahwooders in #1267
  • fix: remove message UTC validation temporarily to fix dev portal + add -d flag to docker compose up for tests by @sarahwooders in #1268
  • chore: bump version by @sarahwooders in #1269
  • feat: add streaming support for OpenAI-compatible endpoints by @cpacker in #1262

Full Changelog: 0.3.10...0.3.11