Hello!
Finished my master's and can finally get back to working on Dot! This update is mainly focused on fixing bugs and compatibility issues, including a full rework of the backend. As always, Dot can be installed through the website, or the binaries can be found in this very release.
FEATURES:
- Fully reworked backend. llama-cpp-python has been replaced with node-llama-cpp, which should address the compatibility issues that have plagued Dot since its initial release. This rework also means many new features have been added (and some removed).
- Improved file support. Dot now supports the following files: .pdf, .docx, .pptx, .md, .js, .py, .ts, .mjs, .c, .cpp, .cs, .java, .go, .rb, .swift, .kt, .php, .html, .css, .xml, .json, .yaml, .yml, .r, .sh, .bat, .pl, .rs, .scala, .sql.
- Text streaming! Tokens will stream as they are generated:
Screen.Recording.2024-12-09.at.14.26.06.mov
- A loading bar will now appear when uploading documents:
Screen.Recording.2024-12-09.at.14.28.26.mov
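For the curious, token streaming on the UI side boils down to consuming an async stream and repainting the partial text as each chunk arrives. Here is a minimal, illustrative sketch (not Dot's actual code, and the generator stands in for node-llama-cpp's streamed generation):

```javascript
// Stand-in for a backend that yields text chunks as the model generates them.
async function* fakeTokenStream() {
  for (const chunk of ["Hel", "lo, ", "wor", "ld!"]) {
    yield chunk;
  }
}

// Accumulate chunks and notify the UI with the partial text after each one.
async function streamToOutput(stream, onChunk) {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
    onChunk(text); // e.g. update the chat bubble with the text so far
  }
  return text;
}
```

The same `for await` loop works whether the chunks come from a local generator or an IPC channel from the backend process.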
CHANGES:
- Phi-3.5 is now the default LLM used by Dot. It is not only extremely lightweight but also works surprisingly well for RAG tasks.
- For now, all Text-To-Speech and Speech Recognition features have been removed. This is due to the backend changes and the overall unreliability of these features. I will work on adding them back!
KNOWN BUGS:
- Loading a folder that contains unsupported documents will sometimes fail.
- Dot will default to the CPU, even when a valid GPU is available.
- Some weird tokens might appear from time to time, especially when using BigDot. These and the previous bug are due to the version of node-llama-cpp currently used by Dot. I am already working on implementing version 3.0.0, which will address these issues.
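Until the folder-loading bug is fixed, one possible workaround is pre-filtering a folder's contents by extension before upload. A minimal sketch (the extension set mirrors the list above; the function name is illustrative, not part of Dot):

```javascript
// Extensions Dot currently supports, per the release notes above.
const SUPPORTED_EXTENSIONS = new Set([
  ".pdf", ".docx", ".pptx", ".md", ".js", ".py", ".ts", ".mjs",
  ".c", ".cpp", ".cs", ".java", ".go", ".rb", ".swift", ".kt",
  ".php", ".html", ".css", ".xml", ".json", ".yaml", ".yml",
  ".r", ".sh", ".bat", ".pl", ".rs", ".scala", ".sql",
]);

// Keep only paths whose (case-insensitive) extension is supported.
function filterSupportedFiles(paths) {
  return paths.filter((p) => {
    const dot = p.lastIndexOf(".");
    const ext = dot === -1 ? "" : p.slice(dot).toLowerCase();
    return SUPPORTED_EXTENSIONS.has(ext);
  });
}
```

Feeding only the filtered list to Dot should avoid tripping the bug on unsupported documents.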
WORKING ON:
- Node-llama-cpp 3.0.0 implementation.
- Fixing bugs.
Anyway, hope you enjoy! Please let me know if there are any issues. :)