Added Azure AI Chat Completion Client #4723
base: main
Conversation
@yanivvak can you review this?
@ekzhu @rohanthacker
@@ -50,6 +50,9 @@ video-surfer = [
grpc = [
    "grpcio~=1.62.0", # TODO: update this once we have a stable version.
]
azure-ai-inference = [
    "azure-ai-inference>=1.0.0b6",
Is there a specific reason for this version lower bound?
No, this is not intentional; this is what gets added when I run `uv add "azure-ai-inference" --optional azure-ai-inference`. Is there another command I should use? I'm not experienced with uv.
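For reference, `uv add` records a lower bound pinned to whatever version it resolves at the time (here the latest pre-release, `1.0.0b6`). If a different bound is wanted, it can be passed as an explicit PEP 508 specifier; a sketch (the specifier shown is an example choice, not a project decision):

```shell
# uv records the constraint you give it verbatim instead of
# defaulting to ">=<latest resolved version>":
uv add "azure-ai-inference>=1.0.0b6" --optional azure-ai-inference
```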
    usage=usage,
    cached=False,
)
yield result
Update object-level usage data.
    usage=usage,
    cached=False,
)
return response
Update object-level usage data.
raise ValueError("Model does not support JSON output")

if json_output is True:
    create_args["response_format"] = ChatCompletionsResponseFormatJSON()
I think we need to check the existing `response_format` value to make sure we aren't overwriting it.
return name


class AzureAIChatCompletionClient(ChatCompletionClient):
A docstring is required. See examples: https://microsoft.github.io/autogen/dev/reference/python/autogen_ext.agents.openai.html
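For illustration, a docstring in the style of the linked examples might look like the following (the parameter names and endpoint URL are placeholders, not the final API):

```python
class AzureAIChatCompletionClient:
    """Chat completion client for models hosted on Azure AI (sketch).

    Args:
        endpoint (str): The Azure AI model inference endpoint.
        credential: An Azure credential, e.g. ``AzureKeyCredential``.
        model_capabilities (dict): Capabilities such as JSON output support.

    Example:

        .. code-block:: python

            client = AzureAIChatCompletionClient(
                endpoint="https://<resource>.inference.ai.azure.com",
                credential=AzureKeyCredential("<key>"),
            )
    """
```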
@lspinheiro could you help review this PR?
class AzureAIChatCompletionClient(ChatCompletionClient):
    def __init__(self, **kwargs: Unpack[AzureAIChatCompletionClientConfig]):
        if "endpoint" not in kwargs:
I think this part could benefit from some better separation of concerns between config validation and instantiation, e.g.:

    class AzureAIChatCompletionClient(ChatCompletionClient):
        def __init__(self, **kwargs: Unpack[AzureAIClientConfiguration]):
            config = self._validate_config(kwargs)
            self._client = self._create_client(config)
            self._create_args = self._prepare_create_args(config)
            # ...

        @staticmethod
        def _validate_config(config: Mapping[str, Any]) -> AzureAIClientConfiguration:
            # Validation logic here
            return config
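A runnable sketch of that validate-then-construct split (the required config keys and error message are assumptions for illustration):

```python
from typing import Any, Mapping


class AzureAIChatCompletionClient:
    # Assumed required fields; the real client may differ.
    REQUIRED_KEYS = ("endpoint", "credential", "model")

    def __init__(self, **kwargs: Any):
        config = self._validate_config(kwargs)
        # Stand-in for client creation / create-args preparation.
        self._create_args = dict(config)

    @staticmethod
    def _validate_config(config: Mapping[str, Any]) -> Mapping[str, Any]:
        # Fail fast with a clear message before any client is constructed.
        missing = [k for k in AzureAIChatCompletionClient.REQUIRED_KEYS if k not in config]
        if missing:
            raise ValueError(f"Missing required config: {', '.join(missing)}")
        return config
```

Keeping validation in a static method keeps `__init__` short and makes the validation logic testable on its own.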
Looks quite good and is consistent with the openai client. I have a minor comment about the config validation. @jackgerrits may have more opinions, since a lot of the design decisions here are driven by his original implementation of the openai client. If anything doesn't make sense in this context, he would be the best person to evaluate.
Related issue number
#4683 Adds initial support for Azure AI Chat Completion Client
Checks