Binary Update Jan 2024 #460
Conversation
Built with https://github.com/SciSharp/LLamaSharp/actions/runs/7674546118/job/20919306901, llama.cpp commit `a1d6df129bcd3d42cda38c09217d8d4ec4ea3bdd`.
Testing:
Ran the LLama.Examples using phi-2.Q5_K_M.gguf; it works perfectly on CPU.
@martindevans It works on osx-arm64 Metal.
Tested CUDA on an A2000; it works flawlessly.
Thanks for testing @jasoncouture. Was that test on Windows or Linux?
@jasoncouture has been doing some work on the binaries (renaming them in #465 and adding CLBLAST in #468). I'll re-run the build step and update this PR based on those changes soon.
Sorry, @martindevans, that was Windows.
I also tested Linux on CPU; that works fine too.
Closing this PR; various changes have been made to the binaries in other PRs. I'll open a new one soon.