
Error with docker URLs? #8

Open
dermot-mcg opened this issue Jan 27, 2024 · 3 comments

Comments

@dermot-mcg

I get the following error on the front end. I'm hosting on a VPS (not locally) that also runs containers for other services, so everything sits on a shared network managed by Nginx Proxy Manager:

An error occurred. Either the engine you requested does not exist or there was another issue processing your request. If this issue persists please contact us through our help center at help.openai.com.

docker-compose.yml

version: '3'

services:

  chatgpt-client:
    image: soulteary/chatgpt
    restart: always
    environment:
      APP_PORT: 8090
      # the ChatGPT client domain, keep the same with chatgpt-client: `APP_HOSTNAME` option
      APP_HOSTNAME: "http://localhost:8090"
      # the ChatGPT backend upstream, or connect a sparrow dev server `"http://host.docker.internal:8091"`
      APP_UPSTREAM: "http://sparrow:8091"
    networks:
      - nginxproxymanager_default

  sparrow:
    image: soulteary/sparrow
    restart: always
    environment:
      # [Basic Settings]
      # => The ChatGPT Web Client Domain
      WEB_CLIENT_HOSTNAME: "http://chatgpt-client:8090"
      # => Service port, default: 8091
      # APP_PORT: 8091

      # [Private OpenAI API Server Settings] *optional
      # => Enable OpenAI 3.5 API
      ENABLE_OPENAI_API: "on"
      # => OpenAI API Key
      OPENAI_API_KEY: "sk-iTsAsEcReT"
      # => Enable OpenAI API Proxy
      # OPENAI_API_PROXY_ENABLE: "on"
      # => OpenAI API Proxy Address, eg: `"http://127.0.0.1:1234"` or ""
      # OPENAI_API_PROXY_ADDR: "http://127.0.0.1:1234"
    logging:
        driver: "json-file"
        options:
            max-size: "10m"
    networks:
      - nginxproxymanager_default

networks:
  nginxproxymanager_default:
    external: true

maxsyst commented Mar 13, 2024

I'm running into the same problem.

http://localhost:8090/backend-api/conversations?offset=0&limit=2


maxsyst commented Apr 2, 2024

Can anyone suggest a fix?


officialdanielamani commented Apr 20, 2024

It seems I've found the solution.

As an example, say the VPS address is "192.168.0.10".

Change `localhost` to the VPS address.
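The fix comes down to two different kinds of address: the hostname the user's browser must reach (the VPS address, so `localhost` cannot work remotely) and the upstream address the client container reaches over the compose network (the service name). A minimal shell sketch of the distinction, using the hypothetical VPS address from this example:

```shell
# Hypothetical VPS address from the example above; substitute your own.
VPS_ADDR="192.168.0.10"

# Browser-facing URL: must be reachable from the user's machine,
# so "localhost" only works when the stack runs on that same machine.
APP_HOSTNAME="http://${VPS_ADDR}:8090"

# Upstream URL: resolved inside the compose network via the service
# name, so it stays as the container name, not the VPS address.
APP_UPSTREAM="http://sparrow:8091"

echo "APP_HOSTNAME=${APP_HOSTNAME}"
echo "APP_UPSTREAM=${APP_UPSTREAM}"
```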

version: '3'

services:

  chatgpt-client:
    image: soulteary/chatgpt
    restart: always
    ports:
      - 8090:8090
    environment:
      # service port
      APP_PORT: 8090
      # the ChatGPT client domain, keep the same with chatgpt-client: `APP_HOSTNAME` option
      APP_HOSTNAME: "http://192.168.0.10:8090"
      # the ChatGPT backend upstream, or connect a sparrow dev server `"http://host.docker.internal:8091"`
      APP_UPSTREAM: "http://sparrow:8091"

  sparrow:
    image: soulteary/sparrow
    restart: always
    environment:
      # [Basic Settings]
      # => The ChatGPT Web Client Domain
      WEB_CLIENT_HOSTNAME: "http://192.168.0.10:8090"
      # => Service port, default: 8091
      APP_PORT: 8091

      # [Advanced Settings] *optional
      # => Enable the new UI
      FEATURE_NEW_UI: "on"
      # => Enable history list
      ENABLE_HISTORY_LIST: "on"
      # => Enable i18n
      ENABLE_I18N: "on"
      # => Enable the data control
      ENABLE_DATA_CONTROL: "on"
      # => Enable the model switch
      ENABLE_MODEL_SWITCH: "on"
      # enable the openai official model (include the plugin model)
      # ENABLE_OPENAI_OFFICIAL_MODEL: "on"

      # [Plugin Settings] *optional
      # => Enable the plugin
      # ENABLE_PLUGIN: "on"
      # => Enable the plugin browsing
      # ENABLE_PLUGIN_BROWSING: "on"
      # => Enable the plugin code interpreter
      # ENABLE_PLUGIN_CODE_INTERPRETER: "on"
      # enable the plugin model dev feature
      # ENABLE_PLUGIN_PLUGIN_DEV: "on"

      # [Private OpenAI API Server Settings] *optional
      # => Enable OpenAI 3.5 API
      ENABLE_OPENAI_API: "on"
      # => OpenAI API Key
      OPENAI_API_KEY: "sk-CHANGE ME WITH OWN API KEY"
      # => Enable OpenAI API Proxy
      # OPENAI_API_PROXY_ENABLE: "on"
      # => OpenAI API Proxy Address, eg: `"http://127.0.0.1:1234"` or ""
      # OPENAI_API_PROXY_ADDR: "http://127.0.0.1:1234"

      # [Private Midjourney Server Settings] *optional
      # => Enable Midjourney
      # ENABLE_MIDJOURNEY: "on"
      # => Enable Midjourney Only
      # ENABLE_MIDJOURNEY_ONLY: "on"
      # => Midjourney API Key
      # MIDJOURNEY_API_SECRET: "your-secret"
      # => Midjourney API Address, eg: `"ws://...."`, or `"ws://host.docker.internal:8092/ws"`
      # MIDJOURNEY_API_URL: "ws://localhost:8092/ws"

      # [Private Midjourney Server Settings] *optional
      # => Enable FlagStudio
      # ENABLE_FLAGSTUDIO: "on"
      # => Enable FlagStudio only
      # ENABLE_FLAGSTUDIO_ONLY: "off"
      # => FlagStudio API Key
      # FLAGSTUDIO_API_KEY: "your-flagstudio-api-key"

      # [Private Claude Server Settings] *optional
      # => Enable Claude
      # ENABLE_CLAUDE: "on"
      # => Enable Claude Only
      # ENABLE_CLAUDE_ONLY: "on"
      # => Claude API Key
      # CLAUDE_API_SECRET: "your-secret"
      # => Claude API Address, eg: `"ws://...."`, or `"ws://host.docker.internal:8093/ws"`
      # CLAUDE_API_URL: "ws://localhost:8093/ws"

    logging:
        driver: "json-file"
        options:
            max-size: "10m"

After that, I open the VPS address where the Docker stack is running:

If ENABLE_OPENAI_OFFICIAL_MODEL is enabled, the error "The administrator has disabled the export capability of this model." will show up.

To change the model, append a query parameter to the URL:
/?model=gpt-4 for GPT-4
/?model=gpt-3 for GPT-3
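The model-switch URLs above can be built mechanically from the base address. A small shell sketch, assuming the hypothetical VPS address used earlier:

```shell
# Hypothetical base address from the example above.
BASE="http://192.168.0.10:8090"

# Append the model query parameter to pick a model.
for MODEL in gpt-4 gpt-3; do
  echo "${BASE}/?model=${MODEL}"
done
# → http://192.168.0.10:8090/?model=gpt-4
# → http://192.168.0.10:8090/?model=gpt-3
```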


Quick note: refresh the page (Ctrl+R) after updating the Docker stack. Also try an incognito tab with no extensions, to rule out an extension blocking the request.
