Replies: 3 comments
-
3// Is there a way to discourage the LLM from producing some specific output? Let's say I use the word "polite" in the instructions (just an example) and observe that the LLM quotes this instruction in the output, which I do not want. Can I somehow discourage the LLM from generating the "polite" token?
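For reference, the usual lever for this is a negative logit bias on the unwanted token ids. This is not any particular project's API, just a minimal sketch using Hugging Face transformers' `LogitsProcessor` hook; the model name, penalty value, and `TokenPenaltyProcessor` class are all illustrative:

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)

class TokenPenaltyProcessor(LogitsProcessor):
    """Subtract a fixed penalty from the logits of unwanted token ids."""

    def __init__(self, token_ids, penalty=10.0):
        self.token_ids = token_ids
        self.penalty = penalty

    def __call__(self, input_ids, scores):
        # Lower the score of each unwanted token at every decoding step.
        scores[:, self.token_ids] -= self.penalty
        return scores

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "polite" can map to different ids with and without a leading space,
# so bias all variants you care about.
bad_ids = tokenizer.encode(" polite") + tokenizer.encode("polite")

inputs = tokenizer("Please answer briefly:", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=50,
    pad_token_id=tokenizer.eos_token_id,
    logits_processor=LogitsProcessorList([TokenPenaltyProcessor(bad_ids)]),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Subtracting a finite penalty only discourages the token; setting its logit to `-inf` would forbid it outright. Capitalized and multi-token variants of the word may also need to be covered.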
-
I see it in this discussion from 2020:
-
I was going to create a new thread, but since this seems close to what I wanted to add, I might as well write it here. All of these would be nice to have, and would allow even better control of local models when they go haywire during generation.
-
1// Is it possible to force the LLM to finish its generation? At some point the LLM decides to finish on its own, but can WE force it by applying growing pressure to end the output? I expect there is something like an END (EOS) token, and we could push the LLM toward emitting it. Just using stop=... is not that useful, because the output is cut off abruptly.
2// Are there any methods of enforcing an approximate length for the LLM output at all? (One possible approach to both points is sketched below.)
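One way to address 1// and 2// at once is to ramp up the logit of the EOS token as generation approaches a target length, so the model is nudged to wrap up cleanly instead of being truncated. Again a hedged sketch with transformers (not any project's built-in API; the model name, start offset, and ramp value are illustrative):

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)

class EosPressureProcessor(LogitsProcessor):
    """Linearly boost the EOS logit once the sequence passes `start`,
    so ending the output becomes increasingly attractive."""

    def __init__(self, eos_token_id, start, ramp=0.5):
        self.eos_token_id = eos_token_id
        self.start = start  # total length (prompt + new tokens) where pressure begins
        self.ramp = ramp    # logit added per token generated beyond `start`

    def __call__(self, input_ids, scores):
        cur_len = input_ids.shape[-1]
        if cur_len > self.start:
            scores[:, self.eos_token_id] += self.ramp * (cur_len - self.start)
        return scores

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Summarize the plot of Hamlet:", return_tensors="pt")
prompt_len = inputs["input_ids"].shape[-1]

out = model.generate(
    **inputs,
    max_new_tokens=200,  # hard ceiling as a fallback
    pad_token_id=tokenizer.eos_token_id,
    logits_processor=LogitsProcessorList(
        # Start pushing toward EOS about 60 new tokens in.
        [EosPressureProcessor(tokenizer.eos_token_id, start=prompt_len + 60)]
    ),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

transformers also ships a built-in variant of this idea, the `exponential_decay_length_penalty` argument to `generate()`, which boosts the EOS score after a chosen start index; combined with `min_new_tokens` and `max_new_tokens` it gives a soft target window for the output length.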