I’m trying to get Mistral to answer questions based on connected and indexed Zendesk articles within Danswer, running locally through Ollama.
My setup is as follows:
- Docker containers launched from within the docker_compose folder, including the inference server, api-server, etc.
- An Ollama container with Mistral, exposed on localhost:11434 (quick sanity check below).
- I fill in the custom model fields to get past the initial dialog box when first connecting to Danswer at localhost:3000, and it seems to connect since there's no error at that point.
- When I ask a question in Search, it pulls up the relevant articles, but the AI answer throws an error. When I ask a question in Knowledge Chat, it thinks for a minute, then resets and returns a blank, nothing.
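For reference, a quick check like this (standard Ollama HTTP API endpoints, assuming the default port mapping from my setup) confirms whether the model itself responds outside of Danswer:

```bash
# List the models the Ollama container knows about
curl http://localhost:11434/api/tags

# Ask Mistral for a short, non-streamed completion directly
curl http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Say hello in one sentence.", "stream": false}'
```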
I've tried:
- Changing environment variables.
- Allocating more resources in the containers.
- Trying the above on a different machine.
- Checking the logs of the api-server and ollama containers for any clues (commands below).
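For the log checks, this is roughly what I'm running (container names vary with the compose project name, so adjust to whatever `docker ps` shows):

```bash
# Find the exact container names for this compose project
docker ps --format '{{.Names}}'

# Tail the api-server and ollama logs while reproducing the failing question
docker logs -f <api-server-container-name>
docker logs -f <ollama-container-name>
```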
Any ideas would be appreciated, thanks.
Hey man, thanks for your response! I'm determined to get this setup working.
I'll try the .env approach suggested in the guide first, adding the variable there, and then I'll inspect the api-server logs for more detail about what's happening when I submit a question.
Will update with results.
EDIT
Additional info:
I added a PROXY_READ_TIMEOUT setting to the app.conf file in backend/data/nginx and set it to 60000 or something ridiculous like that (snippet below).
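For reference, the change is roughly this, a sketch of the relevant location block; the exact layout and upstream name in app.conf may differ between Danswer versions:

```nginx
location /api/ {
    proxy_pass http://api_server;

    # Give the backend far longer than the default 60s to answer
    proxy_connect_timeout 60000;
    proxy_send_timeout    60000;
    proxy_read_timeout    60000;
}
```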
Hopefully this rules out a resource issue: per the guide, I used a .wslconfig file in my user directory setting RAM to 20 GB and cores to 4, and I can see these settings reflected at the bottom of the Docker app (example below).
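The .wslconfig in my user directory (C:\Users\<me>\.wslconfig) is roughly just:

```ini
[wsl2]
memory=20GB
processors=4
```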
Before adding the suggested variable, the error in the api-server was either "Returned None" or "Returned empty string", and then nginx shows a timeout error in its log. I believe this narrows the issue down to something AI-related, so I'm hoping to see something new in the api-server container logs.
Got these errors when trying to get a response from the Knowledge-Mistral Chat feature:
- ERROR: Could not trace chat history.
- And in NGINX: upstream timed out (110: operation timed out) while reading upstream
I'm going to try a different model within the Ollama container, Llama 2, and see if I get the same issues (commands below). I should also add that the dataset is ~900 articles through a Zendesk connector; not sure if that's too much, it doesn't seem like it would be, but it could.
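For the model swap, I'll do something like this inside the Ollama container (standard Ollama CLI; the container name is a placeholder, adjust to whatever it is locally):

```bash
# Pull Llama 2 into the running Ollama container and smoke-test it
docker exec -it <ollama-container-name> ollama pull llama2
docker exec -it <ollama-container-name> ollama run llama2 "Say hello in one sentence."
```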