feat(tests): Add Tools to the chat #97
Conversation
- Add streaming and non-streaming mode tests
- Implement error handling tests for both modes
- Add tooling options support with integration tests
- Update type definitions for ToolingOptions integration

The test suite now covers core functionality, error cases, and tooling features, ensuring robust handling of LLM interactions across different scenarios.
    type: "tool";
    toolName: keyof Record<string, CoreTool>;
  };
};
I'm not adding all the tooling support, just the basics. Ping me if you'd like me to add more options.
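A minimal sketch of the "basics" shape referenced in the type snippet above. The `CoreTool` and `ToolingOptions` stand-ins, the `runToolCall` helper, and the `getWeather` demo tool are all hypothetical names for illustration, not the PR's actual definitions:

```typescript
// Hypothetical minimal stand-ins for the ai SDK's CoreTool and the PR's
// tooling options; illustration only, not the real types.
type CoreTool = {
  description: string;
  execute: (args: Record<string, unknown>) => Promise<unknown>;
};

type ToolingOptions = {
  tools: Record<string, CoreTool>;
};

// A tool-call part, keyed so toolName must match a key of the tool map.
type ToolCallPart = {
  type: "tool";
  toolName: keyof Record<string, CoreTool>;
};

// Look up and run the tool named by a tool-call part.
async function runToolCall(
  options: ToolingOptions,
  call: ToolCallPart,
  args: Record<string, unknown>
): Promise<unknown> {
  const tool = options.tools[call.toolName as string];
  if (!tool) throw new Error(`Unknown tool: ${String(call.toolName)}`);
  return tool.execute(args);
}

const demoOptions: ToolingOptions = {
  tools: {
    getWeather: {
      description: "Return a fake temperature",
      execute: async () => ({ temperature: 21 }),
    },
  },
};

runToolCall(demoOptions, { type: "tool", toolName: "getWeather" }, {}).then(
  (result) => console.log(result)
);
```

Note that `keyof Record<string, CoreTool>` widens to `string`; a stricter design would make the tool map generic so `toolName` is constrained to the caller's actual keys.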
@@ -32,8 +32,8 @@ export class LLMService {
     debug?.startLLMResponse();
     return (
       optionsWithDefault.streaming
-        ? this.makeStreamingLLMRequest(prompt, callbacks)
-        : this.makeLLMRequest(prompt, callbacks.onComplete)
+        ? this.makeStreamingLLMRequest(prompt, callbacks, optionsWithDefault.toolingOptions)
Those are the main changes: a third parameter.
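The dispatch change in the diff above can be sketched as follows. The class and method bodies below are simplified stand-ins for the real `LLMService`, written only to show how the new third argument flows into the streaming path:

```typescript
// Simplified stand-ins; not the actual service implementation.
type ToolingOptions = { tools: Record<string, { description: string }> };
type Callbacks = { onComplete?: (text: string) => void };
type ChatOptions = { streaming: boolean; toolingOptions?: ToolingOptions };

class LLMServiceSketch {
  chat(prompt: string, callbacks: Callbacks, optionsWithDefault: ChatOptions): string {
    return optionsWithDefault.streaming
      ? // The streaming path now receives the new third parameter.
        this.makeStreamingLLMRequest(prompt, callbacks, optionsWithDefault.toolingOptions)
      : this.makeLLMRequest(prompt, callbacks.onComplete);
  }

  private makeStreamingLLMRequest(
    prompt: string,
    _callbacks: Callbacks,
    tooling?: ToolingOptions
  ): string {
    const toolNames = tooling ? Object.keys(tooling.tools).join(",") : "none";
    return `streaming(${prompt}) tools=${toolNames}`;
  }

  private makeLLMRequest(prompt: string, onComplete?: (text: string) => void): string {
    const out = `complete(${prompt})`;
    onComplete?.(out);
    return out;
  }
}

const svc = new LLMServiceSketch();
console.log(
  svc.chat("hi", {}, {
    streaming: true,
    toolingOptions: { tools: { search: { description: "demo" } } },
  })
); // streaming(hi) tools=search
```

Keeping `toolingOptions` optional means the non-tooling call sites stay source-compatible.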
@@ -0,0 +1,476 @@
import { describe, test, expect, mock, beforeEach } from "bun:test";
I've tried to write a few tests here, but it's kind of hard without a full understanding of the service.
- Introduced toolingOptions in the RAGChat configuration to allow for more flexible tool usage.
- Expanded the ToolingOptions type to include detailed step results, tool calls, and response metadata.
- Updated type definitions to improve clarity and usability for developers.
  LanguageModelResponseMetadataWithHeaders,
  LanguageModelUsage,
  ProviderMetadata,
} from "ai";

declare const __brand: unique symbol;
type Brand<B> = { [__brand]: B };
export type Branded<T, B> = T & Brand<B>;
type OptionalAsync<T> = T | Promise<T>;
`ai` does not export this interface. I've tried other ways to import it without duplicating code; let me know if you have any ideas.
Last comment, just a note: not sure if it is because of the
Update: this is the intended behavior. When we are using tools, we pass messages, and the context is retrieved by calling it as a tool as well.
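A hedged sketch of that behavior: with tooling enabled, context is not inlined into the prompt; instead the model fetches it by invoking a retrieval tool. The `getContext` tool name, the fake store, and the `CoreTool` shape below are all hypothetical:

```typescript
// Hypothetical stand-in for a tool definition; not the PR's actual types.
type CoreTool = {
  description: string;
  execute: (args: { query: string }) => Promise<string>;
};

// Fake lookup table standing in for the real vector-store retrieval layer.
const fakeStore: Record<string, string> = {
  upstash: "Upstash provides serverless Redis and vector databases.",
};

const getContext: CoreTool = {
  description: "Retrieve RAG context relevant to the query",
  execute: async ({ query }) =>
    fakeStore[query.toLowerCase()] ?? "No context found.",
};

// With tooling, the chat call would pass messages plus a tool map like this;
// the model decides when to invoke getContext instead of receiving inlined context.
const tools = { getContext };

tools.getContext.execute({ query: "Upstash" }).then((ctx) => console.log(ctx));
```

The design trade-off: retrieval happens only when the model decides it is needed, at the cost of an extra round trip per tool call.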
The idea is to catch up with other AI SDKs by allowing tool use. This implementation uses OpenAI only; other models are still implementing it with different APIs.
Limitations:
onToolCall has not been called from the frontend; we might need to update our streamText wrapper and the Next.js connector.
https://github.com/vercel/ai/blob/b9cba49c0db07890e46aa9ea6e53023f1f051a2a/content/cookbook/20-rsc/21-stream-text-with-chat-prompt.mdx#L4
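A sketch of what the missing wiring would roughly look like on the client: tool-call parts in the stream need to be routed to an `onToolCall` handler instead of being dropped. The stream-part shape and `consumeStream` helper below are hypothetical, not the actual wrapper or connector API:

```typescript
// Hypothetical stream-part union; the real wire format may differ.
type StreamPart =
  | { type: "text-delta"; text: string }
  | { type: "tool-call"; toolName: string; args: Record<string, unknown> };

type OnToolCall = (call: { toolName: string; args: Record<string, unknown> }) => void;

// Walk stream parts, accumulating text and forwarding tool calls to the handler.
function consumeStream(parts: StreamPart[], onToolCall?: OnToolCall): string {
  let text = "";
  for (const part of parts) {
    if (part.type === "text-delta") {
      text += part.text;
    } else if (part.type === "tool-call") {
      // This is the hook that is currently never reached from the frontend.
      onToolCall?.({ toolName: part.toolName, args: part.args });
    }
  }
  return text;
}

const calls: string[] = [];
const out = consumeStream(
  [
    { type: "text-delta", text: "Checking " },
    { type: "tool-call", toolName: "getContext", args: { query: "upstash" } },
    { type: "text-delta", text: "done." },
  ],
  (call) => calls.push(call.toolName)
);
console.log(out, calls);
```

The linked cookbook entry shows the streaming chat-prompt pattern this would slot into.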