
Gpt 4o mini #92

Open · wants to merge 7 commits into base: staging

Conversation

Emmanuel-Develops (Collaborator) commented Jul 24, 2024

This PR implements the groundwork for unit testing

Adds

  • extractor.test.ts
    Ensures the gpt extractor output adheres to a predefined structure. I included mock data (mocking different chat history contexts), and the tests do in fact make calls to OpenAI against this mock data (useful if we change models or the prompt, so we have an objective test).

  • apiMessageSeparator.test.ts
    Tests the function that separates the output into its 3 components (body, follow-up questions, and links). This is a functional unit test that should be part of our CI/CD. A minimal sketch follows this list.
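For apiMessageSeparator.test.ts, here is a minimal sketch of the kind of assertion involved, assuming a hypothetical separateApiMessage helper, import path, and section markers like "FOLLOW-UP QUESTIONS:" and "LINKS:" (the actual names and delimiters in the repo may differ):

```ts
// apiMessageSeparator.test.ts — illustrative sketch; names and delimiters are assumptions
import { separateApiMessage } from "@/utils/apiMessageSeparator"; // hypothetical path

describe("apiMessageSeparator", () => {
  it("splits a raw response into body, follow-up questions, and links", () => {
    // A raw model response stitched together from the three expected sections
    const raw = [
      "Lightning channels are funded by an on-chain transaction.",
      "FOLLOW-UP QUESTIONS:",
      "- How are channels closed?",
      "LINKS:",
      "- https://example.com/lightning",
    ].join("\n");

    const { body, followUpQuestions, links } = separateApiMessage(raw);

    // Each component should come back cleanly separated
    expect(body).toContain("Lightning channels");
    expect(followUpQuestions).toHaveLength(1);
    expect(links).toEqual(["https://example.com/lightning"]);
  });
});
```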

A consequence of these tests is that they make our prompts better and the outputs relatively deterministic (at least for the extractor). For example, we usually edit the prompts for each model change; with the extractor test we now have an objective measurement that tells us whether a prompt works. This also makes the prompts more robust, since switching between models and running the tests can highlight vague instructions in the prompts.

Note: at the time of writing, every mainstream model used as OPENAI_EXTRACTOR_MODEL passes the extractor test.
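For extractor.test.ts, a minimal sketch of the structural check described above, assuming a hypothetical extractQuery function, import path, and output shape (the real names, prompt, and mock data in the repo may differ). The call to OpenAI is real, which is why these tests sit in the external-dependencies directory rather than the regular unit-test run:

```ts
// extractor.test.ts — illustrative sketch; names, shapes, and mock data are assumptions
import { extractQuery } from "@/utils/extractor"; // hypothetical import

// Mocked chat history context, in the spirit of the mock data described above
const mockChatHistory = [
  { role: "user", content: "What is a Lightning channel?" },
  { role: "assistant", content: "A Lightning channel is a 2-of-2 multisig..." },
  { role: "user", content: "How do I open one?" },
];

describe("gpt extractor", () => {
  // Makes a real call to OpenAI against OPENAI_EXTRACTOR_MODEL,
  // so it should only run on PRs to main.
  it("returns output that adheres to the predefined structure", async () => {
    const result = await extractQuery(mockChatHistory);

    // Structural checks only; the exact wording can vary between models
    expect(typeof result.question).toBe("string");
    expect(result.question.length).toBeGreaterThan(0);
    expect(Array.isArray(result.keywords)).toBe(true);
  }, 30_000); // generous timeout for the network call
});
```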

vercel bot commented Jul 24, 2024

The latest updates on your projects:

Name      Status    Updated (UTC)
chat-btc  ✅ Ready   Jul 30, 2024 3:19pm

jest.config.ts (Outdated)

Contributor:

Let's remove the unused comments in this file

Collaborator (Author):

It's autogenerated by Jest initialization. They are config options with descriptions of what they do; it looks like that will be helpful for any future changes.

Contributor:

We can always get them from the docs. No need to keep them if we aren't using them.


on:
  pull_request:
    branches: [main]
Contributor:

Shouldn't we test on staging as well?

Collaborator (Author):

This gh-action runs the external-dependencies test directory. That directory should be limited to PRs to main only, because we call OpenAI chat completion in there and that has a cost per million tokens.

I'm looking at this directory to hold tests that require API calls or dependencies you wouldn't want running too often, and running them only on PRs to main seems like a good approach.
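One way to keep these tests out of the default run, sketched under the assumption that they live in a directory such as __tests__/external-dependencies (the actual layout may differ): exclude that directory in jest.config.ts and have the gh-action target it explicitly.

```ts
// jest.config.ts — illustrative sketch; the directory name and preset are assumptions
import type { Config } from "jest";

const config: Config = {
  preset: "ts-jest",
  testEnvironment: "node",
  // Skip the costly OpenAI-backed tests in the default `npm test` run
  testPathIgnorePatterns: ["/node_modules/", "/__tests__/external-dependencies/"],
};

export default config;
```

The workflow that runs on PRs to main could then point Jest at that directory explicitly, for example via a separate Jest config or by overriding --testPathIgnorePatterns on the command line, so only the external-dependency tests execute there.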
