Hi, I'm coming back to this issue again. I have my Ollama server running at the URL used in the original code, but crawl4ai still uses the local Llama model downloaded on the machine running the code instead of calling the model at the given API base.
However, this code from the LiteLLM docs works perfectly:
```python
from litellm import completion

response = completion(
    model="ollama/llama2",
    messages=[{"content": "respond in 20 words. who are you?", "role": "user"}],
    api_base="http://localhost:11434",  # I replace this localhost URL with my server endpoint
)
print(response)
```
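As a side note, one way to rule out connectivity problems is to hit the Ollama REST API on the server directly. This is only a sanity check, not part of the original report; the endpoint URL is a placeholder for your actual server, and on a standard Ollama install GET /api/tags lists the pulled models:

```python
# Sanity check (assumption: a standard Ollama install exposing /api/tags):
# confirm the remote server is reachable and actually has the model pulled.
import requests

resp = requests.get("http://my-server:11434/api/tags")  # hypothetical endpoint; use yours
resp.raise_for_status()
print([m["name"] for m in resp.json()["models"]])
```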
I tried passing the api_base parameter to AsyncWebCrawler, but it doesn't help. Any suggestions @unclecode?
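For reference, here is roughly what I'm attempting. The api_base keyword on LLMExtractionStrategy is an assumption on my part; I'm not sure the strategy actually forwards it to litellm.completion, and the exact parameter name may depend on the crawl4ai version:

```python
# Rough sketch, with assumptions noted inline; not a confirmed crawl4ai API.
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy

async def main():
    strategy = LLMExtractionStrategy(
        provider="ollama/llama2",           # same model string LiteLLM accepts
        api_base="http://my-server:11434",  # assumed kwarg: may not reach litellm.completion
        instruction="Respond in 20 words. Who are you?",
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://example.com",
            extraction_strategy=strategy,
        )
        print(result.extracted_content)

asyncio.run(main())
```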
Praj-17 changed the title to "Ollama uses localhost as api_base instead of given api_base" on Oct 19, 2024.
Originally posted by @Praj-17 in #166 (comment)