Describe the bug
The latest version of the SemanticKernel library does not support Azure OpenAI o1 series models.
Reason for the error: Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.
To Reproduce
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;

static async Task Main(string[] args)
{
    var kernel = Kernel.CreateBuilder()
        .AddAzureOpenAIChatCompletion(
            "deploy",    // deployment name
            "endpoint",  // Azure OpenAI endpoint
            "key"        // API key
        ).Build();

    // Setting MaxTokens makes the connector send 'max_tokens', which o1 deployments reject.
    KernelArguments arguments = new(new AzureOpenAIPromptExecutionSettings() { MaxTokens = 4096 });

    Console.WriteLine(await kernel.InvokePromptAsync(
        "What is the Semantic Kernel? Include citations to the relevant information where it is referenced in the response.",
        arguments));
}
github-actions bot changed the title from "Bug: SemanticKernel does not support Azure o1 series models" to ".Net: Bug: SemanticKernel does not support Azure o1 series models" on Nov 19, 2024
If it helps with the issue above: one way to avoid this error is to not set MaxTokens in the execution settings, after which the prompt should execute.
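As a minimal sketch of that first workaround (reusing the kernel from the repro above), the only change is leaving MaxTokens unset in AzureOpenAIPromptExecutionSettings so the request body never contains max_tokens:

// Sketch: same repro as above, but MaxTokens is left unset so 'max_tokens' is never sent.
KernelArguments arguments = new(new AzureOpenAIPromptExecutionSettings());
Console.WriteLine(await kernel.InvokePromptAsync(
    "What is the Semantic Kernel? Include citations to the relevant information where it is referenced in the response.",
    arguments));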
Another approach is to intercept the HTTP request with a custom HttpHandler and rewrite the request body to send max_completion_tokens instead of max_tokens when targeting an o1 model, as sketched below.
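Here is a minimal sketch of that second approach. It assumes the AddAzureOpenAIChatCompletion overload that accepts a custom HttpClient; the MaxCompletionTokensHandler name and the JSON rewriting are illustrative, not part of Semantic Kernel.

using System.Text;
using System.Text.Json.Nodes;

// Rewrites 'max_tokens' to 'max_completion_tokens' in outgoing chat completion requests.
internal sealed class MaxCompletionTokensHandler : DelegatingHandler
{
    public MaxCompletionTokensHandler() : base(new HttpClientHandler()) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Content is not null)
        {
            string body = await request.Content.ReadAsStringAsync(cancellationToken);
            if (body.Contains("\"max_tokens\""))
            {
                var json = JsonNode.Parse(body)!.AsObject();
                if (json.TryGetPropertyValue("max_tokens", out var value))
                {
                    json.Remove("max_tokens");             // detach the node...
                    json["max_completion_tokens"] = value; // ...and re-add it under the new name
                    request.Content = new StringContent(
                        json.ToJsonString(), Encoding.UTF8, "application/json");
                }
            }
        }

        return await base.SendAsync(request, cancellationToken);
    }
}

The handler is then wired into the kernel through an HttpClient passed to the builder:

using Microsoft.SemanticKernel;

// Pass an HttpClient that wraps the handler so every chat completion request is rewritten.
using var httpClient = new HttpClient(new MaxCompletionTokensHandler());

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        "deploy",
        "endpoint",
        "key",
        httpClient: httpClient)
    .Build();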
Platform
Additional context
Microsoft.SemanticKernel.HttpOperationException
HResult=0x80131500
Message=HTTP 400 (invalid_request_error: unsupported_parameter)
Parameter: max_tokens
Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.
Source=Microsoft.SemanticKernel.Connectors.OpenAI
Stack trace:
   at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.d__73`1.MoveNext()
   at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.<GetChatMessageContentsAsync>d__16.MoveNext()
   at Microsoft.SemanticKernel.KernelFunctionFromPrompt.<GetChatCompletionResultAsync>d__25.MoveNext()
   at Microsoft.SemanticKernel.KernelFunctionFromPrompt.<InvokeCoreAsync>d__6.MoveNext()
   at System.Threading.Tasks.ValueTask`1.get_Result()
   at Microsoft.SemanticKernel.KernelFunction.<>c__DisplayClass27_0.<b__0>d.MoveNext()
   at Microsoft.SemanticKernel.Kernel.d__34.MoveNext()
   at Microsoft.SemanticKernel.Kernel.d__33.MoveNext()
   at Microsoft.SemanticKernel.KernelFunction.d__27.MoveNext()
   at AOAI.Program.<Main>d__0.MoveNext() in D:\sandbox\AOAI\Program.cs: line 35
This exception was originally thrown at this call stack:
   [External Code]
Inner Exception 1:
ClientResultException: HTTP 400 (invalid_request_error: unsupported_parameter)
Parameter: max_tokens
Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.