
Issues with Ollama Mistral Integration: Search Queries Failing and Knowledge Chat Returning Blank Responses #2821

Open · vorsyybl opened this issue Oct 16, 2024 · 2 comments

vorsyybl commented Oct 16, 2024

Hello,

I’m trying to get Mistral to answer questions based on connected and indexed Zendesk articles within Danswer, running locally through Ollama.

My setup is as follows:
- Docker containers launched from within the docker_compose folder, including the inference server, api-server, etc.
- An Ollama container serving Mistral on local port 11434 (a quick sanity check against the Ollama API is sketched just after this list).
- I fill in the Custom fields to get past the initial dialog box shown when you first connect to Danswer on local port 3000. It seems to connect, since there's no error at this point.
- When I ask a question in Search, it pulls up the articles, but the AI throws an error. When I ask a question in Knowledge Chat, it thinks for a minute, then resets and returns a blank, nothing.
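For reference, here's the sanity check I run against the Ollama API directly, bypassing Danswer entirely (this uses Ollama's documented REST endpoint; the model name mistral and port 11434 match my setup above):

```bash
# One-off, non-streaming completion straight from Ollama.
# If this hangs or errors, the problem is on the Ollama side, not in Danswer.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Reply with the single word: pong",
  "stream": false
}'
```

If that works from the host but Danswer still can't reach the model, the API base configured in the Custom fields may need an address reachable from inside the containers (e.g. http://host.docker.internal:11434 on Docker Desktop).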

I've tried:
- Changing environment variables.
- Allocating more resources to the containers.
- Repeating the above on a different machine.
- Checking the logs in the api-server and Ollama containers for any clues.

Any ideas would be appreciated, thanks.

rkuo-danswer (Contributor) commented

Are there any logs related to your issues in the API server?

Typically, if the error is AI-related, we want the env var LOG_DANSWER_MODEL_INTERACTIONS set to True; then look in the logs for potential clues.
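As a sketch, something like this (the .env location and compose service name depend on your deployment, so treat api_server as a placeholder and check your docker-compose file):

```bash
# 1. Add the flag to the .env file used by docker compose:
echo 'LOG_DANSWER_MODEL_INTERACTIONS=True' >> .env

# 2. Recreate the stack and follow the API server logs
#    ("api_server" is a placeholder service name; adjust to your compose file):
docker compose up -d
docker compose logs -f api_server
```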

vorsyybl (Author) commented Oct 23, 2024

Hey man, thanks for your response! I'm determined to get this setup working.

I'll first try the .env approach suggested in the guide, adding this variable there, and then inspect the api-server logs for any additional details about what's happening when I punch in a question.

Will update with results.

EDIT
Additional info:

  • I added a PROXY_READ_TIMEOUT setting to the app.conf file in backend/data/nginx and set it to 60000 or something ridiculous like that (see the sketch after this list).
  • Hopefully this rules out a resource issue: per the guide, I used a .wslconfig file in my local Users directory to set RAM to 20 GB and cores to 4, and I see these settings reflected at the bottom of the Docker app.
  • Before adding the suggested variable, the error in the API server was either "Returned None" or "Returned empty string", after which nginx shows a timeout error in its log. I believe this narrows the issue down to something AI-related, so I'm hoping to see something new in the api-server container logs.
  • Got these errors when trying to get a response from the Knowledge-Mistral Chat feature:
    - ERROR: Could not trace chat history.
    - And in nginx: upstream timed out (110: operation timed out) while reading upstream
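For concreteness, the timeout change amounts to something like this (proxy_read_timeout is a standard nginx directive; the surrounding location block and upstream name are my sketch, since the real app.conf may be structured differently):

```nginx
# backend/data/nginx/app.conf (relevant fragment, sketched)
location / {
    proxy_pass http://api_server;  # upstream name is a guess
    # Give slow LLM responses time to finish before nginx gives up with
    # "upstream timed out (110: ...) while reading upstream".
    proxy_read_timeout 60000;      # default unit is seconds; value from above
}
```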

I'm going to try a different model within the Ollama container, Llama 2, and see if I get the same issues. I should also add that the dataset is ~900 articles through a Zendesk connector; not sure if that's too much. It doesn't seem like it would be, but it could.
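Roughly, the plan for the model swap (assuming the Ollama container is literally named ollama; adjust to whatever docker ps shows):

```bash
# Pull Llama 2 into the running Ollama container and smoke-test it from the CLI.
docker exec -it ollama ollama pull llama2
docker exec -it ollama ollama run llama2 "Reply with the single word: pong"
```

Then I'll point the model name in Danswer's custom LLM fields at llama2 and retry Search and Knowledge Chat.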
🥶
