
Update docs/user-guide/openai-api/langchain.md: set the model name to NA in the API server #14

Open · wants to merge 1 commit into main
Conversation

@PeterD1524 PeterD1524 commented Jul 4, 2024

The command to start the API server should set the model name of the chat model to NA (via the option -m, --model-name <MODEL_NAME>), because the chatbot web app assumes that the model is named NA.
Otherwise, the server logs errors like the following, saying that the model NA does not exist in the chat graphs:

[2024-07-02 06:58:10.695] [wasi_logging_stdout] [error] llama_core: llama_core::chat in llama-core/src/chat.rs:1033: The model NA does not exist in the chat graphs.
[2024-07-02 06:58:10.695] [wasi_logging_stdout] [error] chat_completions: llama_api_server::backend::ggml in llama-api-server/src/backend/ggml.rs:428: Failed chat completions in non-stream mode. Reason: The model NA does not exist in the chat graphs.
[2024-07-02 06:58:10.695] [wasi_logging_stdout] [error] response: llama_api_server::error in llama-api-server/src/error.rs:25: 500 Internal Server Error: Failed chat completions in non-stream mode. Reason: The model NA does not exist in the chat graphs.
[2024-07-02 06:58:10.695] [wasi_logging_stdout] [info] chat_completions_handler: llama_api_server::backend::ggml in llama-api-server/src/backend/ggml.rs:317: Send the chat completion response.
[2024-07-02 06:58:10.695] [wasi_logging_stdout] [error] response: llama_api_server in llama-api-server/src/main.rs:522: version: HTTP/1.1, body_size: 128, status: 500, is_informational: false, is_success: false, is_redirection: false, is_client_error: false, is_server_error: true
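A sketch of the corrected start command, assuming the usual wasmedge-based LlamaEdge setup; the model file, prompt template, and paths below are placeholders, not taken from this PR:

```shell
# Start the LlamaEdge API server with the chat model registered as "NA",
# so that requests from LlamaEdgeChatService (which defaults to model "NA") match.
# Replace the .gguf file and prompt template with your own.
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template llama-2-chat \
  --model-name NA
```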

https://discord.com/channels/846973236280950824/938097393369489448/1257592298385051718

NA is the default model name in LlamaEdgeChatService:
langchain-ai/langchain/libs/community/langchain_community/chat_models/llama_edge.py#L72-L83

A better solution might be for the chatbot web app to call the /v1/models endpoint to get the list of available models and display a dropdown for selecting the model to use, but I don't know how to do this with LangChain.
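A minimal sketch of that idea, assuming the server exposes an OpenAI-style /v1/models response (an object with a "data" list of entries that each carry an "id"); the helper names and URL handling are illustrative, not part of LlamaEdge or LangChain:

```python
import json
from urllib.request import urlopen


def model_ids(payload: dict) -> list:
    """Extract model ids from an OpenAI-style /v1/models response payload."""
    return [entry["id"] for entry in payload.get("data", [])]


def list_models(base_url: str) -> list:
    """Query the API server for its available models (hypothetical helper)."""
    with urlopen(f"{base_url}/v1/models") as resp:
        return model_ids(json.load(resp))


# A chatbot front end could populate a dropdown from list_models(base_url)
# and pass the selected id as the `model` field of each chat request,
# instead of hard-coding "NA".
```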

… model in the LlamaEdge API server to `NA`

Signed-off-by: PeterD1524 <qaz246135@gmail.com>
@alabulei1 alabulei1 requested a review from apepkuss July 5, 2024 02:51
@apepkuss

apepkuss commented Jul 5, 2024

Thanks for this PR. We plan to update LlamaEdgeService in the near future, as LlamaEdge itself has received a lot of improvements in recent releases. We'll come back to review this PR after finishing the necessary updates on LlamaEdgeService, so please be patient. Thanks a lot!
