Plugin in PyCharm and local model in Windows. #95
Yes, we want CPU support, and a small inference server with few dependencies would be great. The current work is in #77.
I had something else in mind. It doesn't matter whether the model runs locally on GPU or CPU; what matters is that the plugin can work with a local model that isn't limited to a Docker container under WSL. Why require that when oobabooga already exists and can run models locally in a variety of formats? Refact also launches in oobabooga, but it's unclear how to connect the plugin to it via the API.
We'll actually solve this! The new plugins with a Rust binary will use a standard API (HF or OpenAI style).
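To make the "OpenAI-style API" idea concrete, here is a minimal sketch of what a completion request to a local backend might look like. The base URL, port, endpoint path, and model name are assumptions for illustration, not documented values for the plugin or for oobabooga.

```python
import json

# Assumed local server address; oobabooga and similar backends typically
# expose an OpenAI-compatible endpoint, but the exact port/path may differ.
BASE_URL = "http://127.0.0.1:5000/v1"

def build_completion_request(prompt: str, max_tokens: int = 50) -> dict:
    """Build an OpenAI-style /v1/completions payload for a local backend."""
    return {
        "model": "Refact-1.6B",   # placeholder model identifier
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.2,
        "stop": ["\n\n"],         # stop generation at a blank line
    }

payload = build_completion_request("def fibonacci(n):")
print(json.dumps(payload, indent=2))
```

A plugin speaking this wire format could then target any backend that accepts it, regardless of whether the model runs on GPU or CPU.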
Is it possible to connect the plugin to a locally running model (Refact-1.6b, StarCoder, etc.) via the oobabooga-webui or koboldcpp API?
If so, how? Or can local models only be used as described here?