LLGuidance should emit warnings/logs when lexemes/subgrammars are cut short by max_tokens #5

Open · hudson-ai (Collaborator) opened this issue on Jul 26, 2024 · 0 comments
Any time the max_tokens constraint is "active" (i.e., it is what actually terminated a lexeme or subgrammar), there is a good chance that the model's output has just been garbled (e.g., generated JSON may no longer validate).

LLGuidance should let us know when this happens, so that we can warn the user or raise an error.
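As a sketch of what this could look like from the caller's side, here is a minimal, hypothetical Python example, assuming the parser exposes some stop-reason signal. Neither `stop_reason` nor the `"max_tokens"` value is part of the current LLGuidance API; they stand in for whatever signal this issue is asking for.

```python
import warnings

# Hypothetical: "max_tokens" stands in for whatever stop-reason signal
# LLGuidance would emit when a lexeme/subgrammar is cut short.
def warn_if_truncated(stop_reason: str, output: str) -> None:
    """Warn the user when generation was cut short by max_tokens."""
    if stop_reason == "max_tokens":
        warnings.warn(
            "Output was truncated by max_tokens and may be malformed "
            f"(e.g. invalid JSON). Partial output: {output[:80]!r}",
            UserWarning,
        )
```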
