
Twinny tries ollama URL for oobabooga embeddings #336

Open
allo- opened this issue Oct 1, 2024 · 9 comments

Comments

@allo-

allo- commented Oct 1, 2024

Describe the bug
I configured oobabooga as the embedding provider:

  • Host: 127.0.0.1
  • Port: 5000
  • Path: /v1/embeddings
  • Model name: all-mpnet-base-v2

When I then select the provider and click "Embed workspace documents", VS Code still requests http://0.0.0.0:11434/api/embed.
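For reference, this is roughly the request I would expect twinny to make against the configured provider (a sketch assuming oobabooga's OpenAI-compatible embeddings API; this is not twinny code, and the function name is just illustrative):

```typescript
// Sketch: call the configured oobabooga endpoint directly with an
// OpenAI-style embeddings request (assumed API shape, not twinny code).
async function checkConfiguredEndpoint(): Promise<void> {
  const res = await fetch("http://127.0.0.1:5000/v1/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "all-mpnet-base-v2", input: "hello world" }),
  })
  const body = await res.json()
  console.log("embedding length:", body.data[0]?.embedding?.length)
}

checkConfiguredEndpoint()

// Observed instead when clicking "Embed workspace documents":
// POST http://0.0.0.0:11434/api/embed   (Ollama's default endpoint)
```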

To Reproduce
Try to use oobabooga as embedding provider.

Expected behavior
The configured provider, URL, port, and path should be used.

API Provider
Oobabooga

Chat or Auto Complete?
Embedding

Model Name
all-mpnet-base-v2

Desktop (please complete the following information):

  • OS: Linux

Additional context
Chat works as expected with oobabooga (chat) provider.

@rjmacarthy
Collaborator

Hey, thanks for the report. However, I'm sorry, but I cannot replicate this.

@allo-
Author

allo- commented Oct 3, 2024

I can test more next week. Anything I should start with?

What I did:
I configured the embedding provider, then opened the panel and pressed the button to embed the workspace. It didn't do anything and only showed a notification that didn't go away. Then I saw an error in the VS Code developer tools (Electron console), patched a console.log for the URLs into the js file, and saw that it seems to use the ollama port. By listening there with netcat I was able to verify that it indeed tries port 11434 instead of the configured port.
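In case it helps anyone reproduce the check, one way to log every outgoing URL from the extension host is a wrapper like this (a sketch only; it assumes the extension issues its requests through the global fetch):

```typescript
// Paste near the top of the bundled extension js for debugging (illustrative;
// only effective if the extension makes its requests via the global fetch).
const originalFetch = globalThis.fetch
globalThis.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.toString() : input.url
  console.log("[debug] outgoing request:", url)
  return originalFetch(input, init)
}
```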

I use ooba for chat and it works fine; I do not have a FIM model configured yet.

@rjmacarthy
Collaborator

Thanks for the detailed response. Please could you let me know what version you are using?

@allo-
Author

allo- commented Oct 9, 2024

twinny-3.17.20-linux-x64

@vkx86
Contributor

vkx86 commented Oct 9, 2024

Had the same issue with embedding with twinny <-> LM Studio.
In the end I forked the repo, commented out lines 157-160 in src/extension/provider-manager.ts, and used a local extension build; now document ingestion and embedding requests run perfectly with LM Studio. I'm still not sure it works in chat, though...
@rjmacarthy - please take note that the commented-out code overwrites the user-defined path.
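To illustrate the pattern I mean (a hypothetical sketch with made-up field names, not the actual lines from provider-manager.ts): the defaults are merged over the user's settings unconditionally, whereas they should only fill in fields the user left empty.

```typescript
// Hypothetical sketch of the pattern described above, not the real twinny source.
interface EmbeddingProviderConfig {
  apiHostname: string
  apiPort: number
  apiPath: string
}

const ollamaDefaults: EmbeddingProviderConfig = {
  apiHostname: "0.0.0.0",
  apiPort: 11434,
  apiPath: "/api/embed",
}

// Problematic: the defaults unconditionally clobber whatever the user configured.
function resolveBroken(configured: EmbeddingProviderConfig): EmbeddingProviderConfig {
  return { ...configured, ...ollamaDefaults }
}

// Expected: the defaults only apply to fields the user left unset.
function resolveExpected(configured: Partial<EmbeddingProviderConfig>): EmbeddingProviderConfig {
  return { ...ollamaDefaults, ...configured }
}
```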

@allo-
Author

allo- commented Oct 9, 2024

The question is whether the default settings shouldn't be overridden anyway when I configure my own provider.

@rjmacarthy
Collaborator

rjmacarthy commented Oct 10, 2024

Hey, sorry about this bug. I thought I'd removed that code in a previous version. I just released v3.17.24, which should address it.

Many thanks,

@allo-
Author

allo- commented Oct 10, 2024

I still have the problem with v3.17.24.

@allo-
Author

allo- commented Oct 18, 2024

I think the problem may start somewhere else. When I select the embedding provider after clicking the cylinder icon, it switches back to blank after a few seconds. It tries to reach neither the ooba port nor the ollama port.

I think trying to embed without a selected provider then defaults to ollama.

Under what conditions does the selection of the embedding provider change? Are there checks that fail for the configured provider? I wonder what could be checked, because there is no HTTP request that could fail between selecting the ooba provider and twinny switching back to the empty item in the select box.
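If the selection really is being reset, a silent fallback like the following (purely illustrative, not a claim about twinny's actual code) would explain why the Ollama URL gets used without any visible error:

```typescript
// Illustrative only: if the stored provider selection comes back undefined,
// a silent fallback like this routes every embed request to Ollama's defaults
// instead of surfacing the real problem to the user.
interface Provider {
  apiHostname: string
  apiPort: number
  apiPath: string
}

const ollamaDefault: Provider = { apiHostname: "0.0.0.0", apiPort: 11434, apiPath: "/api/embed" }

function getActiveEmbeddingProvider(selected: Provider | undefined): Provider {
  // A warning or hard error here would make the lost selection obvious.
  return selected ?? ollamaDefault
}
```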
