This is a great project. I really tried to get AutoGPT and SuperAGI to run locally, but they always failed to achieve anything because of their bias toward the OpenAI services. It's good to see someone taking on the task of truly building for local models.
As a suggestion, LMQL (or alternatively guidance, though it has been less active recently) might be a great library for getting the most out of 7B or (hopefully) even 3B models. These libraries guide the LLM toward clearly structured output, either by prescribing a response structure (e.g. valid JSON), constraining output types, or even limiting the output to a set of predefined choices. With LMQL I was able to have 7B models reliably output answers that met my needs, even when the underlying model was too "limited" to do it on its own. While these models have reasoning capabilities, they often fail to follow instructions; LMQL forces them to fill out a predefined template, and that works quite well.
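The "limit the output to a set of predefined choices" idea can be sketched without any particular library: at each decoding step, only allow tokens that keep the output a prefix of some valid answer, so even a weak model is forced onto a well-formed result. The code below is an illustrative toy, not the real LMQL or guidance API; `toy_score` is a stand-in for an actual language model's next-token scores.

```python
# Illustrative sketch of choice-constrained decoding (not the LMQL/guidance API).
# At each step, only tokens that keep the partial output a prefix of some
# allowed choice are legal; the "model" here is a toy scoring function.

def constrained_decode(score_fn, vocab, choices):
    """Greedy decoding restricted to prefixes of the allowed choices."""
    out = ""
    while out not in choices:
        # Tokens that extend `out` toward at least one allowed choice.
        legal = [t for t in vocab
                 if any(c.startswith(out + t) for c in choices)]
        # Of the legal tokens, pick the one the (toy) model scores highest.
        out += max(legal, key=lambda t: score_fn(out, t))
    return out

# Toy model that slightly prefers the letter "y" -- on its own it might
# ramble, but the constraint forces one of the predefined answers.
def toy_score(prefix, token):
    return 1.0 if token == "y" else 0.5

vocab = list("abcdefghijklmnopqrstuvwxyz")
choices = {"yes", "no", "maybe"}
print(constrained_decode(toy_score, vocab, choices))  # prints "yes"
```

In a real setup the masking happens over the model's logits before sampling, which is essentially what LMQL's `where` constraints and guidance's `select` do: the model never even sees illegal continuations, so instruction-following failures become impossible for the constrained part of the output.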
I'd rather have a local AGI running on a 7B (or maybe even a 3B) model, taking extra refining/correcting steps, than depend on an expensive web service. I would really encourage you to have a look; I think this could massively increase quality.