
Windows build #18

Open
mdegans opened this issue Jun 15, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

mdegans (Owner) commented Jun 15, 2024

Nothing should actually prevent a Windows build; it's mostly a matter of using the right compiler and dependencies and setting up CI. MSVC does not build the project out of the box. This is likely an issue for drama_llama and the llama-cpp-sys-3 bindings as well.
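As a starting point, a Windows CI job could look roughly like the sketch below. This is an assumption, not the project's actual configuration: the workflow name, step layout, and the `LIBCLANG_PATH` value are hypothetical, though bindgen-based `-sys` crates do generally need libclang available at build time.

```yaml
# Hypothetical .github/workflows/windows.yml — a sketch, not the repo's real CI.
name: windows
on: [push, pull_request]

jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # llama.cpp sources are commonly vendored as a git submodule
          submodules: recursive
      - uses: dtolnay/rust-toolchain@stable
      - name: Build
        run: cargo build --release
        env:
          # bindgen needs libclang; GitHub's Windows runners ship LLVM,
          # but the path may need to be set explicitly.
          LIBCLANG_PATH: C:\Program Files\LLVM\bin
```

Whether this passes depends on exactly where MSVC currently fails; the point of standing up the job is to surface those errors on every push.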

@mdegans mdegans added the enhancement New feature or request label Jun 15, 2024
mdegans added a commit that referenced this issue Jun 18, 2024
At least on aarch64 Windows (macOS + Parallels), Weave now builds and runs, although without any acceleration. Next is Windows + amd64 + CUDA.
mdegans (Owner, Author) commented Jun 18, 2024

Next step is Windows + amd64 + CUDA. I do have a Microsoft arm64 dev box which I believe has an NPU. I'm unsure if llama.cpp supports it or if there's a backend that does.

That's something for later. Very few people have such a setup today, although it will likely become more common with the new "Copilot" PCs, which require an NPU and may be arm64.

Projects
None yet
Development

No branches or pull requests

1 participant