This repository has been archived by the owner on Apr 7, 2024. It is now read-only.

[PRO-157] Improvements to Review results #75

Open
NicHaley opened this issue Jan 17, 2024 · 0 comments
Assignees: NicHaley
Labels: Improvement · Medium priority (Created by Linear-GitHub Sync)

Comments

NicHaley (Contributor) commented Jan 17, 2024

The AI for the Reviews feature will sometimes return subpar results. For example:

  • The AI might miss clear rule violations (e.g. it does not flag a spelling mistake)
  • It returns a strange result. For example, it may say a sentence is missing a comma when it isn't
  • Results may be inconsistent for the same block of text, despite the temperature being set to 0 and the seed param being set in the GPT request
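As a point of reference, the determinism settings described above would look something like the following. This is a minimal sketch assuming the OpenAI Python SDK; the model name, seed value, and prompt wording are illustrative assumptions, not the actual values used by the Reviews feature.

```python
# Hypothetical helper that builds the parameters for a review request.
# The model, seed, and system prompt here are placeholders.
def build_review_request(content: str) -> dict:
    return {
        "model": "gpt-4",
        "temperature": 0,  # disable sampling randomness as far as possible
        "seed": 42,        # request reproducible sampling (best-effort only)
        "messages": [
            {"role": "system", "content": "Review the text for rule violations."},
            {"role": "user", "content": content},
        ],
    }

params = build_review_request("Some text to review.")
```

Note that OpenAI documents `seed` as best-effort: even with `temperature=0` and a fixed seed, responses are not guaranteed to be byte-identical across calls, which is consistent with the inconsistency observed here.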

In general, results seem to be worse when running against a lot of content in one request (e.g. when running against an entire file).

Some ideas for how this can be improved:

  • Chunk requests into even smaller chunks, perhaps dividing by paragraph or line count.
    • Subdividing will use more tokens on the first request, but should result in fewer tokens across multiple requests due to caching
  • Ask for a maximum limit on the number of results
  • Fine-tune the model
  • Update the prompt. For example, would a 'chain of thought'-style prompt yield better results?
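The first idea, chunking by paragraph, could be sketched as follows. This is a minimal illustration, not the actual implementation; the chunk size and the paragraph-splitting heuristic (blank-line separation) are assumptions.

```python
def chunk_by_paragraphs(text: str, max_paragraphs: int = 3) -> list[str]:
    """Split text into chunks of at most `max_paragraphs` paragraphs each,
    treating blank-line-separated blocks as paragraphs."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return [
        "\n\n".join(paragraphs[i : i + max_paragraphs])
        for i in range(0, len(paragraphs), max_paragraphs)
    ]

# Each chunk would then be sent as its own review request.
chunks = chunk_by_paragraphs("First.\n\nSecond.\n\nThird.\n\nFourth.", max_paragraphs=2)
# chunks == ["First.\n\nSecond.", "Third.\n\nFourth."]
```

Splitting on paragraph boundaries (rather than a fixed character count) keeps each chunk semantically coherent, which should help the model stay focused; a line-count split would be a straightforward variant.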

From SyncLinear.com | PRO-157

@NicHaley NicHaley self-assigned this Jan 17, 2024
@NicHaley NicHaley added the Improvement and Medium priority labels (Created by Linear-GitHub Sync) Jan 17, 2024