Merge branch 'main' into fix/groupchat-model-registration
marklysze authored Aug 28, 2024
2 parents 40131dc + 4f9383a commit f4a4ecd
Showing 144 changed files with 9,322 additions and 1,871 deletions.
40 changes: 40 additions & 0 deletions .github/workflows/contrib-tests.yml
@@ -612,3 +612,43 @@ jobs:
        with:
          file: ./coverage.xml
          flags: unittests

  BedrockTest:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest, windows-2019]
        python-version: ["3.9", "3.10", "3.11", "3.12"]
        exclude:
          - os: macos-latest
            python-version: "3.9"
    steps:
      - uses: actions/checkout@v4
        with:
          lfs: true
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install packages and dependencies for all tests
        run: |
          python -m pip install --upgrade pip wheel
          pip install pytest-cov>=5
      - name: Install packages and dependencies for Amazon Bedrock
        run: |
          pip install -e .[boto3,test]
      - name: Set AUTOGEN_USE_DOCKER based on OS
        shell: bash
        run: |
          if [[ ${{ matrix.os }} != ubuntu-latest ]]; then
            echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV
          fi
      - name: Coverage
        run: |
          pytest test/oai/test_bedrock.py --skip-openai
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          flags: unittests
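
Note (not part of the diff): this job exercises AutoGen's boto3-backed Amazon Bedrock client via `test/oai/test_bedrock.py`. As a rough sketch of how such a client is driven from user code, the snippet below shows an illustrative LLM config entry; the exact keys (`api_type`, `aws_region`, `aws_access_key`, `aws_secret_key`), the model id, and the credential placeholders are assumptions, not something shown in this commit.

```
import autogen

# Assumed configuration keys for a Bedrock-backed model client; values are placeholders.
config_list = [
    {
        "api_type": "bedrock",
        "model": "anthropic.claude-3-sonnet-20240229-v1:0",
        "aws_region": "us-east-1",
        "aws_access_key": "<access-key>",
        "aws_secret_key": "<secret-key>",
    }
]

# An agent built on this config would route completions through Bedrock.
assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
```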
7 changes: 4 additions & 3 deletions CITATION.cff
@@ -5,7 +5,7 @@ preferred-citation:
given-names: "Qingyun"
affiliation: "Penn State University, University Park PA USA"
- family-names: "Bansal"
given-names: "Gargan"
given-names: "Gagan"
affiliation: "Microsoft Research, Redmond WA USA"
- family-names: "Zhang"
given-names: "Jieyu"
@@ -43,6 +43,7 @@ preferred-citation:
- family-names: "Wang"
given-names: "Chi"
affiliation: "Microsoft Research, Redmond WA USA"
booktitle: "ArXiv preprint arXiv:2308.08155"
booktitle: "COLM"
title: "AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework"
year: 2023
year: 2024
url: "https://aka.ms/autogen-pdf"
15 changes: 10 additions & 5 deletions CONTRIBUTORS.md
@@ -18,13 +18,18 @@
| Xiaoyun Zhang | [LittleLittleCloud](https://github.com/LittleLittleCloud) | Microsoft | AutoGen.Net, group chat | Yes | [Backlog - AutoGen.Net](https://github.com/microsoft/autogen/issues) - Available most of the time (PST) |
| Yiran Wu | [yiranwu0](https://github.com/yiranwu0) | Penn State University | alt-models, group chat, logging | Yes | |
| Beibin Li | [BeibinLi](https://github.com/BeibinLi) | Microsoft Research | alt-models | Yes | |
| Gagan Bansal | [gagb](https://github.com/gagb) | Microsoft Research | Complex Tasks | | |
| Gagan Bansal | [gagb](https://github.com/gagb) | Microsoft Research | All | | |
| Adam Fourney | [afourney](https://github.com/afourney) | Microsoft Research | Complex Tasks | | |
| Ricky Loynd | [rickyloynd-microsoft](https://github.com/rickyloynd-microsoft) | Microsoft Research | Teachability | | |
| Eric Zhu | [ekzhu](https://github.com/ekzhu) | Microsoft Research | Infra | | |
| Jack Gerrits | [jackgerrits](https://github.com/jackgerrits) | Microsoft Research | Infra | | |
| David Luong | [David Luong](https://github.com/DavidLuong98) | Microsoft | AutoGen.Net | | |

| Eric Zhu | [ekzhu](https://github.com/ekzhu) | Microsoft Research | All, Infra | | |
| Jack Gerrits | [jackgerrits](https://github.com/jackgerrits) | Microsoft Research | All, Infra | | |
| David Luong | [DavidLuong98](https://github.com/DavidLuong98) | Microsoft | AutoGen.Net | | |
| Davor Runje | [davorrunje](https://github.com/davorrunje) | airt.ai | Tool calling, IO | | Available most of the time (Central European Time) |
| Friederike Niedtner | [Friderike](https://www.microsoft.com/en-us/research/people/fniedtner/) | Microsoft Research | PM | | |
| Rafah Hosn | [Rafah](https://www.microsoft.com/en-us/research/people/raaboulh/) | Microsoft Research | PM | | |
| Robin Moeur | [Robin](https://www.linkedin.com/in/rmoeur/) | Microsoft Research | PM | | |
| Jingya Chen | [jingyachen](https://github.com/JingyaChen) | Microsoft | UX Design, AutoGen Studio | | |
| Suff Syed | [suffsyed](https://github.com/suffsyed) | Microsoft | UX Design, AutoGen Studio | | |

## I would like to join this list. How can I help the project?
> We're always looking for new contributors to join our team and help improve the project. For more information, please refer to our [CONTRIBUTING](https://microsoft.github.io/autogen/docs/contributor-guide/contributing) guide.
36 changes: 30 additions & 6 deletions README.md
@@ -1,20 +1,35 @@
<a name="readme-top"></a>


<div align="center">

<img src="https://microsoft.github.io/autogen/img/ag.svg" alt="AutoGen Logo" width="100">


[![PyPI version](https://badge.fury.io/py/pyautogen.svg)](https://badge.fury.io/py/pyautogen)
[![Build](https://github.com/microsoft/autogen/actions/workflows/python-package.yml/badge.svg)](https://github.com/microsoft/autogen/actions/workflows/python-package.yml)
![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue)
[![Downloads](https://static.pepy.tech/badge/pyautogen/week)](https://pepy.tech/project/pyautogen)

[![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core)


[![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://aka.ms/autogen-dc)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40pyautogen)](https://twitter.com/pyautogen)

[![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core)
</div>

# AutoGen

[📚 Cite paper](#related-papers).
<!-- <p align="center">
<img src="https://github.com/microsoft/autogen/blob/main/website/static/img/flaml.svg" width=200>
<br>
</p> -->
:fire: June 6, 2024: WIRED publishes a new article on AutoGen: [Chatbot Teamwork Makes the AI Dream Work](https://www.wired.com/story/chatbot-teamwork-makes-the-ai-dream-work/) based on interview with [Adam Fourney](https://github.com/afourney).

:fire: June 4th, 2024: Microsoft Research Forum publishes new update and video on [AutoGen and Complex Tasks](https://www.microsoft.com/en-us/research/video/autogen-update-complex-tasks-and-agents/) presented by [Adam Fourney](https://github.com/afourney).

:fire: May 29, 2024: DeepLearning.ai launched a new short course [AI Agentic Design Patterns with AutoGen](https://www.deeplearning.ai/short-courses/ai-agentic-design-patterns-with-autogen), made in collaboration with Microsoft and Penn State University, and taught by AutoGen creators [Chi Wang](https://github.com/sonichi) and [Qingyun Wu](https://github.com/qingyun-wu).

:fire: May 24, 2024: Foundation Capital published an article on [Forbes: The Promise of Multi-Agent AI](https://www.forbes.com/sites/joannechen/2024/05/24/the-promise-of-multi-agent-ai/?sh=2c1e4f454d97) and a video [AI in the Real World Episode 2: Exploring Multi-Agent AI and AutoGen with Chi Wang](https://www.youtube.com/watch?v=RLwyXRVvlNk).
@@ -244,16 +259,25 @@ In addition, you can find:

## Related Papers

[AutoGen Studio](https://www.microsoft.com/en-us/research/publication/autogen-studio-a-no-code-developer-tool-for-building-and-debugging-multi-agent-systems/)

```
@inproceedings{dibia2024studio,
title={AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems},
author={Victor Dibia and Jingya Chen and Gagan Bansal and Suff Syed and Adam Fourney and Erkang (Eric) Zhu and Chi Wang and Saleema Amershi},
year={2024},
booktitle={Pre-Print}
}
```

[AutoGen](https://aka.ms/autogen-pdf)

```
@inproceedings{wu2023autogen,
title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework},
author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Beibin Li and Erkang Zhu and Li Jiang and Xiaoyun Zhang and Shaokun Zhang and Jiale Liu and Ahmed Hassan Awadallah and Ryen W White and Doug Burger and Chi Wang},
year={2023},
eprint={2308.08155},
archivePrefix={arXiv},
primaryClass={cs.AI}
year={2024},
booktitle={COLM},
}
```

@@ -351,7 +375,7 @@ may be either trademarks or registered trademarks of Microsoft in the United Sta
The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks.
Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.

Privacy information can be found at https://privacy.microsoft.com/en-us/
Privacy information can be found at https://go.microsoft.com/fwlink/?LinkId=521839

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents,
or trademarks, whether by implication, estoppel, or otherwise.
2 changes: 2 additions & 0 deletions TRANSPARENCY_FAQS.md
@@ -31,6 +31,8 @@ While AutoGen automates LLM workflows, decisions about how to use specific LLM o
- Current version of AutoGen was evaluated on six applications to illustrate its potential in simplifying the development of high-performance multi-agent applications. These applications are selected based on their real-world relevance, problem difficulty and problem solving capabilities enabled by AutoGen, and innovative potential.
- These applications involve using AutoGen to solve math problems, question answering, decision making in text world environments, supply chain optimization, etc. For each of these domains AutoGen was evaluated on various success based metrics (i.e., how often the AutoGen based implementation solved the task). And, in some cases, AutoGen based approach was also evaluated on implementation efficiency (e.g., to track reductions in developer effort to build). More details can be found at: https://aka.ms/AutoGen/TechReport
- The team has conducted tests where a “red” agent attempts to get the default AutoGen assistant to break from its alignment and guardrails. The team has observed that out of 70 attempts to break guardrails, only 1 was successful in producing text that would have been flagged as problematic by Azure OpenAI filters. The team has not observed any evidence that AutoGen (or GPT models as hosted by OpenAI or Azure) can produce novel code exploits or jailbreak prompts, since direct prompts to “be a hacker”, “write exploits”, or “produce a phishing email” are refused by existing filters.
- We also evaluated [a team of AutoGen agents](https://github.com/microsoft/autogen/tree/gaia_multiagent_v01_march_1st/samples/tools/autogenbench/scenarios/GAIA/Templates/Orchestrator) on the [GAIA benchmarks](https://arxiv.org/abs/2311.12983), and got [SOTA results](https://huggingface.co/spaces/gaia-benchmark/leaderboard) as of
March 1, 2024.

## What are the limitations of AutoGen? How can users minimize the impact of AutoGen’s limitations when using the system?
AutoGen relies on existing LLMs. Experimenting with AutoGen would retain common limitations of large language models; including:
3 changes: 1 addition & 2 deletions autogen/agentchat/contrib/capabilities/transform_messages.py
@@ -1,9 +1,8 @@
import copy
from typing import Dict, List

from autogen import ConversableAgent

from ....formatting_utils import colored
from ...conversable_agent import ConversableAgent
from .transforms import MessageTransform


92 changes: 92 additions & 0 deletions autogen/agentchat/contrib/capabilities/transforms.py
@@ -445,3 +445,95 @@ def _compress_text(self, text: str) -> Tuple[str, int]:
    def _validate_min_tokens(self, min_tokens: Optional[int]):
        if min_tokens is not None and min_tokens <= 0:
            raise ValueError("min_tokens must be greater than 0 or None")


class TextMessageContentName:
    """A transform for including the agent's name in the content of a message."""

    def __init__(
        self,
        position: str = "start",
        format_string: str = "{name}:\n",
        deduplicate: bool = True,
        filter_dict: Optional[Dict] = None,
        exclude_filter: bool = True,
    ):
        """
        Args:
            position (str): The position to add the name to the content. The possible options are 'start' or 'end'. Defaults to 'start'.
            format_string (str): The f-string to format the message name with. Use '{name}' as a placeholder for the agent's name. Defaults to '{name}:\n' and must contain '{name}'.
            deduplicate (bool): Whether to deduplicate the formatted string so it doesn't appear twice (sometimes the LLM will add it to new messages itself). Defaults to True.
            filter_dict (None or dict): A dictionary to filter out messages that you want/don't want to compress.
                If None, no filters will be applied.
            exclude_filter (bool): If exclude filter is True (the default value), messages that match the filter will be
                excluded from compression. If False, messages that match the filter will be compressed.
        """

        assert isinstance(position, str) and position is not None
        assert position in ["start", "end"]
        assert isinstance(format_string, str) and format_string is not None
        assert "{name}" in format_string
        assert isinstance(deduplicate, bool) and deduplicate is not None

        self._position = position
        self._format_string = format_string
        self._deduplicate = deduplicate
        self._filter_dict = filter_dict
        self._exclude_filter = exclude_filter

        # Track the number of messages changed for logging
        self._messages_changed = 0

    def apply_transform(self, messages: List[Dict]) -> List[Dict]:
        """Applies the name change to the message based on the position and format string.

        Args:
            messages (List[Dict]): A list of message dictionaries.

        Returns:
            List[Dict]: A list of dictionaries with the message content updated with names.
        """
        # Make sure there is at least one message
        if not messages:
            return messages

        messages_changed = 0
        processed_messages = copy.deepcopy(messages)
        for message in processed_messages:
            # Some messages may not have content.
            if not transforms_util.is_content_right_type(
                message.get("content")
            ) or not transforms_util.is_content_right_type(message.get("name")):
                continue

            if not transforms_util.should_transform_message(message, self._filter_dict, self._exclude_filter):
                continue

            if transforms_util.is_content_text_empty(message["content"]) or transforms_util.is_content_text_empty(
                message["name"]
            ):
                continue

            # Get and format the name in the content
            content = message["content"]
            formatted_name = self._format_string.format(name=message["name"])

            if self._position == "start":
                if not self._deduplicate or not content.startswith(formatted_name):
                    message["content"] = f"{formatted_name}{content}"

                    messages_changed += 1
            else:
                if not self._deduplicate or not content.endswith(formatted_name):
                    message["content"] = f"{content}{formatted_name}"

                    messages_changed += 1

        self._messages_changed = messages_changed
        return processed_messages

    def get_logs(self, pre_transform_messages: List[Dict], post_transform_messages: List[Dict]) -> Tuple[str, bool]:
        if self._messages_changed > 0:
            return f"{self._messages_changed} message(s) changed to incorporate name.", True
        else:
            return "No messages changed to incorporate name.", False