Llama function calls #137953
Replies: 2 comments
-
Llama 3.1 function calling uses its own format; unless it's supported by the OpenAI library it won't work, and I don't believe it is. If we had a way to send in "raw" messages, then you could format the prompt accordingly to get tool calling to work. Feasible if you're running the model locally.
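To illustrate the "raw" messages idea, here is a minimal sketch of what hand-formatting a Llama 3.1 tool-calling prompt could look like, assuming Meta's documented chat template (`<|start_header_id|>` markers) and the convention of having the model emit tool calls as a JSON object. The helper names (`build_tool_prompt`, `parse_tool_call`) and the `get_weather` tool are hypothetical, for illustration only:

```python
import json

# Hypothetical helper: build a raw Llama 3.1 prompt that describes the
# available tools in the system message and asks the model to emit any
# tool call as a single JSON object.
def build_tool_prompt(user_message: str, tools: list[dict]) -> str:
    tool_json = "\n".join(json.dumps(t, indent=2) for t in tools)
    system = (
        "You have access to the following functions. To call a function, "
        "respond ONLY with a JSON object of the form "
        '{"name": <function-name>, "parameters": <arguments-dict>}.\n\n'
        f"{tool_json}"
    )
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Parse the model's raw completion back into a (name, parameters) pair,
# or None if the completion is plain text rather than a tool call.
def parse_tool_call(completion: str):
    try:
        data = json.loads(completion.strip())
        return data["name"], data.get("parameters", {})
    except (json.JSONDecodeError, KeyError, TypeError):
        return None

# Example: parsing a completion the model might return for a weather tool.
call = parse_tool_call('{"name": "get_weather", "parameters": {"city": "Paris"}}')
```

Running the model locally (e.g. behind a raw-completion endpoint), you would send `build_tool_prompt(...)` as the prompt and feed the completion through `parse_tool_call` to decide whether to dispatch a function.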
-
💬 Your Product Feedback Has Been Submitted 🎉
Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
-
Select Topic Area
Bug
Body
I don't think that the Llama 3.1 function calls are working at all. I followed the example available here.
I got no function calls at all when using some open models like Llama 3.1 (8B and 70B), no matter what I did. Command R Plus also throws an error with the same script, even though it actually calls the function.
Code
Outputs:
gpt-4o-mini (worked)
Mistral-nemo (worked)
Llama 3.1 8b (no function calls)
Command R Plus (Error when summarizing)
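Since the original code block is not shown above, here is a hedged sketch of the kind of reproduction the report describes: the same OpenAI-style tools schema sent to each model, then inspecting `message.tool_calls` on the response. The `get_weather` tool, the model id, and the helper names are placeholders, not taken from the report:

```python
# Sketch of the reported comparison: one tools schema, several models.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def called_tools(message) -> list[str]:
    """Names of the tools a chat-completion message asked to call (empty if none)."""
    return [tc.function.name for tc in (message.tool_calls or [])]

def reproduce(model: str) -> list[str]:
    # Requires the `openai` package and credentials in the environment.
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,  # e.g. "gpt-4o-mini" vs. a Llama 3.1 deployment id
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    return called_tools(response.choices[0].message)
```

Under the behavior described above, `reproduce(...)` would return a non-empty list for gpt-4o-mini and Mistral Nemo but an empty one for Llama 3.1, with the Llama models answering in plain text instead of emitting `tool_calls`.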