
feat(tests): Add Tools to the chat #97

Closed
wants to merge 2 commits into from

Conversation


@ronal2do ronal2do commented Dec 13, 2024

The idea is to catch up with other AI SDKs by allowing tool use. This implementation uses OpenAI only; other providers are still implementing tools with different APIs.

For example

import { RAGChat } from "@/lib/rag-chat";
import { aiUseChatAdapter } from "@upstash/rag-chat/nextjs";
import type { Message } from "ai";
import { z } from "zod";
import { tool } from 'ai';
import { openai } from "@ai-sdk/openai";

// 👇 allow streaming responses up to 30 seconds
export const maxDuration = 30;

const ragChat = new RAGChat({
  model: openai("gpt-4-turbo"), // <- import { openai } from "@ai-sdk/openai";
});

export async function POST(request: Request) {
  const { messages } = await request.json();

  const question = (messages as Message[]).at(-1)?.content;
  if (!question) throw new Error("No question in the request");

  const myAbortSignal = new AbortController().signal;

  const response = await ragChat.chat(question, 
    { 
      streaming: true,

      toolingOptions: {
        abortSignal: myAbortSignal,
        tools: {
          weather: tool({
            description: 'Get the weather in a location',
            parameters: z.object({ location: z.string() }),

            execute: async ({ location }, { abortSignal }) => {
              const res = await fetch(
                // you might need an API key
                `https://api.weatherapi.com/v1/current.json?q=${location}`,
                { signal: abortSignal }, // forward the abort signal to fetch
              );
              // return parsed JSON so the tool result is serializable for the model
              return res.json();
            },
          }),
        },
        maxSteps: 5,
        // "auto" lets the model decide; use "required" to force a tool call
        toolChoice: "auto",
        // Handle step completion
        onStepFinish: ({ text, toolCalls, toolResults, finishReason }) => {
          console.log("onStepFinish", {
            text,
            toolCalls,
            toolResults,
            finishReason,
          });
        },
        // Handle completion
        onFinish: ({ text, toolCalls, toolResults }) => {
          console.log("onFinish", {
            text,
            toolCalls,
            toolResults,
          });
        },
      }
    }
  );

  return aiUseChatAdapter(response);
}
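For testing, the `onStepFinish` payload can be asserted against directly. A dependency-free sketch of the shape (the field names follow the AI SDK's step result; the harness and the recorded step are hypothetical):

```typescript
// Minimal stand-in for the step result delivered to onStepFinish.
type StepResult = {
  text: string;
  toolCalls: { toolName: string }[];
  toolResults: { toolName: string; result: unknown }[];
  finishReason: "stop" | "tool-calls";
};

// Collect steps so a test can inspect them after the chat call.
const seen: StepResult[] = [];
const onStepFinish = (step: StepResult) => seen.push(step);

// Simulate a single tool-call step, the kind the callback receives.
onStepFinish({
  text: "",
  toolCalls: [{ toolName: "weather" }],
  toolResults: [{ toolName: "weather", result: { tempC: 21 } }],
  finishReason: "tool-calls",
});

console.log(seen[0].finishReason); // tool-calls
```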
  • Add streaming and non-streaming mode tests
  • Implement error handling tests for both modes
  • Add tooling options support with integration tests
  • Update type definitions for ToolingOptions integration

The test suite now covers core functionality, error cases, and tooling features, ensuring robust handling of LLM interactions across different scenarios.

Screenshot 2024-12-13 at 12:56 (dev server log output):
@rkt/app:dev:  POST /api/public/leads 200 in 1735ms
@rkt/app:dev: Step finished: {
@rkt/app:dev:   text: '',
@rkt/app:dev:   toolCalls: [
@rkt/app:dev:     {
@rkt/app:dev:       type: 'tool-call',
@rkt/app:dev:       toolCallId: 'call_L8V9K27KegppPNexhCBbcO6F',
@rkt/app:dev:       toolName: 'discountCupomCode',
@rkt/app:dev:       args: [Object]
@rkt/app:dev:     }
@rkt/app:dev:   ],
@rkt/app:dev:   toolResults: [
@rkt/app:dev:     {
@rkt/app:dev:       type: 'tool-result',
@rkt/app:dev:       toolCallId: 'call_L8V9K27KegppPNexhCBbcO6F',
@rkt/app:dev:       toolName: 'discountCupomCode',
@rkt/app:dev:       args: [Object],
@rkt/app:dev:       result: [Object]
@rkt/app:dev:     }
@rkt/app:dev:   ],
@rkt/app:dev:   finishReason: 'tool-calls'
@rkt/app:dev: }

Limitations:

onToolCall has not been called from the frontend; we might need to update our streamText wrapper and the Next.js connector:
https://github.com/vercel/ai/blob/b9cba49c0db07890e46aa9ea6e53023f1f051a2a/content/cookbook/20-rsc/21-stream-text-with-chat-prompt.mdx#L4

type: "tool";
toolName: keyof Record<string, CoreTool>;
};
};
I'm not adding full tooling support, just the basics. Ping me if you'd like me to add more options.

@@ -32,8 +32,8 @@ export class LLMService {
     debug?.startLLMResponse();
     return (
       optionsWithDefault.streaming
-        ? this.makeStreamingLLMRequest(prompt, callbacks)
-        : this.makeLLMRequest(prompt, callbacks.onComplete)
+        ? this.makeStreamingLLMRequest(prompt, callbacks, optionsWithDefault.toolingOptions)
Those are the main changes: a third parameter carrying the tooling options.
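A minimal sketch of that pattern, with illustrative names rather than the actual LLMService internals: the streaming path threads an optional toolingOptions object through as a third argument, so existing call sites are unaffected.

```typescript
// Hypothetical stand-in for the service wiring; not the real LLMService.
type ToolingOptions = { maxSteps?: number; toolChoice?: "auto" | "required" };

function makeStreamingLLMRequest(
  prompt: string,
  callbacks: { onComplete?: (text: string) => void },
  toolingOptions?: ToolingOptions, // <- the new, optional third parameter
): string {
  // The real implementation would forward toolingOptions to streamText;
  // here we only show that omitting it keeps the old behavior.
  const steps = toolingOptions?.maxSteps ?? 1;
  const result = `streamed:${prompt}:steps=${steps}`;
  callbacks.onComplete?.(result);
  return result;
}

console.log(makeStreamingLLMRequest("hi", {}, { maxSteps: 5 }));
// streamed:hi:steps=5
```

Because the parameter is optional, two-argument callers compile unchanged.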

@@ -0,0 +1,476 @@
import { describe, test, expect, mock, beforeEach } from "bun:test";
I've tried to write a few tests here, but it's hard without fully understanding the service.

@ronal2do ronal2do marked this pull request as draft December 13, 2024 12:27
- Introduced toolingOptions in RAGChat configuration to allow for more flexible tool usage.
- Expanded ToolingOptions type to include detailed step results, tool calls, and response metadata.
- Updated type definitions to improve clarity and usability for developers.
import type {
  LanguageModelResponseMetadataWithHeaders,
  LanguageModelUsage,
  ProviderMetadata,
} from "ai";

declare const __brand: unique symbol;
type Brand<B> = { [__brand]: B };
export type Branded<T, B> = T & Brand<B>;
type OptionalAsync<T> = T | Promise<T>;

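As an aside, the Brand/Branded helpers above give nominal typing on top of TypeScript's structural types. A self-contained sketch of how they are used (SessionId and lookup are illustrative, not part of the PR):

```typescript
// Same branding helpers as in the diff above.
declare const __brand: unique symbol;
type Brand<B> = { [__brand]: B };
type Branded<T, B> = T & Brand<B>;

// Example: a session id that cannot be confused with a plain string.
type SessionId = Branded<string, "SessionId">;

// The only sanctioned way to mint a SessionId.
const asSessionId = (s: string) => s as SessionId;

function lookup(id: SessionId): string {
  return `session:${id}`;
}

const id = asSessionId("abc123");
console.log(lookup(id)); // session:abc123
// lookup("abc123"); // <- type error: a plain string is not a SessionId
```

The brand exists only at compile time; at runtime the value is still a plain string.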
The `ai` package does not export this interface; I've tried other ways to import it without duplicating code. If you have any ideas, let me know.

@ronal2do ronal2do marked this pull request as ready for review December 17, 2024 11:58
@ronal2do

Last comment, just a note: I'm not sure if it's because of the openai provider from the AI SDK, but when the chat loads tools it no longer includes the retrieved context as part of the response. Maybe this is expected behavior, but I think it could be a reason to close this PR.

@ronal2do

Update: this is the intended behavior. When using tools, we pass messages, and the context is retrieved by the model calling it as a tool as well.
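That pattern can be sketched without the library: instead of injecting retrieved context into the prompt, retrieval is exposed as a tool the model can choose to call. A dependency-free sketch where the tool shape loosely mirrors the AI SDK's `tool()` objects, but getContext and the in-memory store are hypothetical:

```typescript
// Hypothetical retrieval-as-a-tool sketch; not the actual RAGChat wiring.
type ToolDef<A, R> = {
  description: string;
  execute: (args: A) => Promise<R>;
};

// Stand-in for a vector store lookup.
const contextStore: Record<string, string> = {
  pricing: "Pro plan is $20/month.",
};

const getContext: ToolDef<{ query: string }, string> = {
  description: "Retrieve context for the user's question",
  execute: async ({ query }) =>
    contextStore[query] ?? "No matching context found.",
};

// With tools enabled, the model calls getContext itself rather than
// receiving the context inline with the prompt.
getContext.execute({ query: "pricing" }).then((ctx) => {
  console.log(ctx); // Pro plan is $20/month.
});
```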

@ronal2do ronal2do closed this Dec 19, 2024