Commit

added missing openai values in settings (#13)
3x3cut0r committed May 22, 2024
1 parent 0c20e9b commit 5e8e0fa
Showing 4 changed files with 57 additions and 27 deletions.
11 changes: 8 additions & 3 deletions privategpt/Dockerfile
@@ -142,10 +142,15 @@ ENV PYTHONPATH="$PYTHONPATH:/private_gpt/" \
 OPENAI_API_BASE="https://api.openai.com/v1" \
 OPENAI_API_KEY="sk-1234" \
 OPENAI_MODEL="gpt-3.5-turbo" \
-OLLAMA_LLM_MODEL="mistral:latest" \
-OLLAMA_EMBEDDING_MODEL="" \
+OPENAI_REQUEST_TIMEOUT="120.0" \
+OPENAI_EMBEDDING_API_BASE="" \
+OPENAI_EMBEDDING_API_KEY="" \
+OPENAI_EMBEDDING_MODEL="text-embedding-3-small" \
 OLLAMA_API_BASE="http://localhost:11434" \
-OLLAMA_EMBEDDING_API_BASE="http://localhost:11434" \
+OLLAMA_EMBEDDING_API_BASE="" \
+OLLAMA_LLM_MODEL="mistral:latest" \
+OLLAMA_EMBEDDING_MODEL="nomic-embed-text" \
+OLLAMA_KEEP_ALIVE="5m" \
 OLLAMA_TFS_Z="1.0" \
 OLLAMA_NUM_PREDICT="128" \
 OLLAMA_TOP_K="40" \
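These ENV lines only set image defaults; any of them can be overridden per container at run time. A minimal sketch of overriding the newly added values (the image name 3x3cut0r/privategpt and the example values are assumptions, not taken from this commit):

# Override the new OpenAI/Ollama defaults when starting the container
# (image name and values below are illustrative only).
docker run -d --name privategpt \
  -e OPENAI_REQUEST_TIMEOUT="60.0" \
  -e OPENAI_EMBEDDING_MODEL="text-embedding-3-large" \
  -e OLLAMA_API_BASE="http://192.168.1.100:11434" \
  -e OLLAMA_KEEP_ALIVE="10m" \
  3x3cut0r/privategpt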
9 changes: 7 additions & 2 deletions privategpt/README.md
@@ -220,13 +220,18 @@ secret: "Basic c2VjcmV0OmtleQ=="
 - `OPENAI_API_BASE` - Base URL of OpenAI API. Example: https://api.openai.com/v1 - **Default: https://api.openai.com/v1**
 - `OPENAI_API_KEY` - Your API Key for the OpenAI API. Example: sk-1234 - **Default: sk-1234**
 - `OPENAI_MODEL` - OpenAI Model to use. (see [OpenAI Models Overview](https://platform.openai.com/docs/models/overview)). Example: gpt-4 - **Default: gpt-3.5-turbo**
+- `OPENAI_REQUEST_TIMEOUT` - Time in seconds before the openailike server times out a request. Format is float. - **Default: 120.0**
+- `OPENAI_EMBEDDING_API_BASE` - Base URL of the OpenAI Embedding API. Example: https://api.openai.com/v1 - **Default: same as OPENAI_API_BASE**
+- `OPENAI_EMBEDDING_API_KEY` - Your API Key for the OpenAI Embedding API. Example: sk-1234 - **Default: same as OPENAI_API_KEY**
+- `OPENAI_EMBEDDING_MODEL` - OpenAI embedding model to use. Example: text-embedding-3-large - **Default: text-embedding-3-small**

 ###### Ollama

+- `OLLAMA_API_BASE` - Base URL of Ollama API. Example: http://192.168.1.100:11434 - **Default: http://localhost:11434**
+- `OLLAMA_EMBEDDING_API_BASE` - Base URL of Ollama Embedding API. Example: http://192.168.1.100:11434 - **Default: same as OLLAMA_API_BASE**
 - `OLLAMA_LLM_MODEL` - Ollama model to use. (see [Ollama Library](https://ollama.com/library)). Example: 'llama2-uncensored' - **Default: mistral:latest**
 - `OLLAMA_EMBEDDING_MODEL` - Model to use. Example: 'nomic-embed-text'. - **Default: nomic-embed-text**
-- `OLLAMA_API_BASE` - Base URL of Ollama API. Example: http://192.168.1.100:11434 - **Default: http://localhost:11434**
-- `OLLAMA_EMBEDDING_API_BASE` - Base URL of Ollama Embedding API. Example: http://192.168.1.100:11434 - **Default: http://localhost:11434**
+- `OLLAMA_KEEP_ALIVE` - Time the model will stay loaded in memory after a request. Examples: 5m, 5h, '-1' - **Default: 5m**
 - `OLLAMA_TFS_Z` - Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. - **Default: 1.0**
 - `OLLAMA_NUM_PREDICT` - Maximum number of tokens to predict when generating text. (Default: 128, -1 = infinite generation, -2 = fill context) - **Default: 128**
 - `OLLAMA_TOP_K` - Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. - **Default: 40**
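The "same as OPENAI_API_BASE" and "same as OLLAMA_API_BASE" defaults above are implemented in docker-entrypoint.sh (next file) with plain shell fallback expansion, ${VAR:-$OTHER}, which substitutes the base value whenever the embedding-specific variable is unset or empty. A minimal sketch of that behaviour (values are illustrative):

# An empty embedding base URL falls back to the general base URL ...
OPENAI_API_BASE="https://api.openai.com/v1"
OPENAI_EMBEDDING_API_BASE=""
echo "${OPENAI_EMBEDDING_API_BASE:-$OPENAI_API_BASE}"   # prints https://api.openai.com/v1

# ... while a non-empty value wins.
OPENAI_EMBEDDING_API_BASE="https://embeddings.example.com/v1"
echo "${OPENAI_EMBEDDING_API_BASE:-$OPENAI_API_BASE}"   # prints https://embeddings.example.com/v1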
55 changes: 35 additions & 20 deletions privategpt/docker-entrypoint.sh
@@ -186,59 +186,74 @@ sed -i "99s|^.*$| api_key: ${OPENAI_API_KEY:-"sk-1234"}|" /home/worker/app/sett
 # OPENAI_MODEL
 sed -i "100s|^.*$| model: ${OPENAI_MODEL:-"gpt-3.5-turbo"}|" /home/worker/app/settings.yaml

-# OLLAMA_LLM_MODEL
-sed -i "103s|^.*$| llm_model: ${OLLAMA_LLM_MODEL:-"mistral:latest"}|" /home/worker/app/settings.yaml
+# OPENAI_REQUEST_TIMEOUT
+sed -i "101s|^.*$| request_timeout: ${OPENAI_REQUEST_TIMEOUT:-"120.0"}|" /home/worker/app/settings.yaml

-# OLLAMA_EMBEDDING_MODEL
-sed -i "104s|^.*$| embedding_model: ${OLLAMA_EMBEDDING_MODEL:-"nomic-embed-text"}|" /home/worker/app/settings.yaml
+# OPENAI_EMBEDDING_API_BASE
+sed -i "102s|^.*$| embedding_api_base: ${OPENAI_EMBEDDING_API_BASE:-$OPENAI_API_BASE}|" /home/worker/app/settings.yaml

+# OPENAI_EMBEDDING_API_KEY
+sed -i "103s|^.*$| embedding_api_key: ${OPENAI_EMBEDDING_API_KEY:-$OPENAI_API_KEY}|" /home/worker/app/settings.yaml
+
+# OPENAI_EMBEDDING_MODEL
+sed -i "104s|^.*$| embedding_model: ${OPENAI_EMBEDDING_MODEL:-"text-embedding-3-small"}|" /home/worker/app/settings.yaml
+
 # OLLAMA_API_BASE
-sed -i "105s|^.*$| api_base: ${OLLAMA_API_BASE:-"http://localhost:11434"}|" /home/worker/app/settings.yaml
+sed -i "107s|^.*$| api_base: ${OLLAMA_API_BASE:-"http://localhost:11434"}|" /home/worker/app/settings.yaml

 # OLLAMA_EMBEDDING_API_BASE
-sed -i "106s|^.*$| embedding_api_base: ${OLLAMA_EMBEDDING_API_BASE:-"http://localhost:11434"}|" /home/worker/app/settings.yaml
+sed -i "108s|^.*$| embedding_api_base: ${OLLAMA_EMBEDDING_API_BASE:-$OLLAMA_API_BASE}|" /home/worker/app/settings.yaml

+# OLLAMA_LLM_MODEL
+sed -i "109s|^.*$| llm_model: ${OLLAMA_LLM_MODEL:-"mistral:latest"}|" /home/worker/app/settings.yaml
+
+# OLLAMA_EMBEDDING_MODEL
+sed -i "110s|^.*$| embedding_model: ${OLLAMA_EMBEDDING_MODEL:-"nomic-embed-text"}|" /home/worker/app/settings.yaml
+
+# OLLAMA_KEEP_ALIVE
+sed -i "111s|^.*$| keep_alive: ${OLLAMA_KEEP_ALIVE:-"5m"}|" /home/worker/app/settings.yaml
+
 # OLLAMA_TFS_Z
-sed -i "107s|^.*$| tfs_z: ${OLLAMA_TFS_Z:-"1.0"}|" /home/worker/app/settings.yaml
+sed -i "112s|^.*$| tfs_z: ${OLLAMA_TFS_Z:-"1.0"}|" /home/worker/app/settings.yaml

 # OLLAMA_NUM_PREDICT
-sed -i "108s|^.*$| num_predict: ${OLLAMA_NUM_PREDICT:-"128"}|" /home/worker/app/settings.yaml
+sed -i "113s|^.*$| num_predict: ${OLLAMA_NUM_PREDICT:-"128"}|" /home/worker/app/settings.yaml

 # OLLAMA_TOP_K
-sed -i "109s|^.*$| top_k: ${OLLAMA_TOP_K:-"40"}|" /home/worker/app/settings.yaml
+sed -i "114s|^.*$| top_k: ${OLLAMA_TOP_K:-"40"}|" /home/worker/app/settings.yaml

 # OLLAMA_TOP_P
-sed -i "110s|^.*$| top_p: ${OLLAMA_TOP_P:-"0.9"}|" /home/worker/app/settings.yaml
+sed -i "115s|^.*$| top_p: ${OLLAMA_TOP_P:-"0.9"}|" /home/worker/app/settings.yaml

 # OLLAMA_REPEAT_LAST_N
-sed -i "111s|^.*$| repeat_last_n: ${OLLAMA_REPEAT_LAST_N:-"64"}|" /home/worker/app/settings.yaml
+sed -i "116s|^.*$| repeat_last_n: ${OLLAMA_REPEAT_LAST_N:-"64"}|" /home/worker/app/settings.yaml

 # OLLAMA_REPEAT_PENALTY
-sed -i "112s|^.*$| repeat_penalty: ${OLLAMA_REPEAT_PENALTY:-"1.1"}|" /home/worker/app/settings.yaml
+sed -i "117s|^.*$| repeat_penalty: ${OLLAMA_REPEAT_PENALTY:-"1.1"}|" /home/worker/app/settings.yaml

 # OLLAMA_REQUEST_TIMEOUT
-sed -i "113s|^.*$| request_timeout: ${OLLAMA_REQUEST_TIMEOUT:-"120.0"}|" /home/worker/app/settings.yaml
+sed -i "118s|^.*$| request_timeout: ${OLLAMA_REQUEST_TIMEOUT:-"120.0"}|" /home/worker/app/settings.yaml

 # AZOPENAI_API_KEY
-sed -i "116s|^.*$| api_key: ${AZOPENAI_API_KEY:-"sk-1234"}|" /home/worker/app/settings.yaml
+sed -i "121s|^.*$| api_key: ${AZOPENAI_API_KEY:-"sk-1234"}|" /home/worker/app/settings.yaml

 # AZOPENAI_AZURE_ENDPOINT
-sed -i "117s|^.*$| azure_endpoint: ${AZOPENAI_AZURE_ENDPOINT:-"https://api.myazure.com/v1"}|" /home/worker/app/settings.yaml
+sed -i "122s|^.*$| azure_endpoint: ${AZOPENAI_AZURE_ENDPOINT:-"https://api.myazure.com/v1"}|" /home/worker/app/settings.yaml

 # AZOPENAI_API_VERSION
-sed -i "118s|^.*$| api_version: \"${AZOPENAI_API_VERSION:-"2023_05_15"}\"|" /home/worker/app/settings.yaml
+sed -i "123s|^.*$| api_version: \"${AZOPENAI_API_VERSION:-"2023_05_15"}\"|" /home/worker/app/settings.yaml

 # AZOPENAI_EMBEDDING_DEPLOYMENT_NAME
-sed -i "119s|^.*$| embedding_deployment_name: ${AZOPENAI_EMBEDDING_DEPLOYMENT_NAME:-"my-azure-embedding-deployment-name"}|" /home/worker/app/settings.yaml
+sed -i "124s|^.*$| embedding_deployment_name: ${AZOPENAI_EMBEDDING_DEPLOYMENT_NAME:-"my-azure-embedding-deployment-name"}|" /home/worker/app/settings.yaml

 # AZOPENAI_EMBEDDING_MODEL
-sed -i "120s|^.*$| embedding_model: ${AZOPENAI_EMBEDDING_MODEL:-"text-embedding-3-small"}|" /home/worker/app/settings.yaml
+sed -i "125s|^.*$| embedding_model: ${AZOPENAI_EMBEDDING_MODEL:-"text-embedding-3-small"}|" /home/worker/app/settings.yaml

 # AZOPENAI_LLM_DEPLOYMENT_NAME
-sed -i "121s|^.*$| llm_deployment_name: ${AZOPENAI_LLM_DEPLOYMENT_NAME:-"my-azure-llm-deployment-name"}|" /home/worker/app/settings.yaml
+sed -i "126s|^.*$| llm_deployment_name: ${AZOPENAI_LLM_DEPLOYMENT_NAME:-"my-azure-llm-deployment-name"}|" /home/worker/app/settings.yaml

 # AZOPENAI_LLM_MODEL
-sed -i "122s|^.*$| llm_model: ${AZOPENAI_LLM_MODEL:-"gpt-4"}|" /home/worker/app/settings.yaml
+sed -i "127s|^.*$| llm_model: ${AZOPENAI_LLM_MODEL:-"gpt-4"}|" /home/worker/app/settings.yaml

 ############################
 # run app #
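docker-entrypoint.sh patches settings.yaml purely by line number: sed -i "Ns|^.*$|replacement|" overwrites whatever is on line N of the file. Because this commit inserts new OpenAI keys into the template, every later Ollama and Azure key moves down, and each sed target has to be renumbered with it (for example 105 becomes 107 and 122 becomes 127). A minimal sketch of the technique on a throwaway file (path and values are illustrative only):

# Create a tiny two-line template, then overwrite each line in place by its line number.
printf '  api_key: placeholder\n  keep_alive: placeholder\n' > /tmp/demo-settings.yaml

sed -i "1s|^.*$|  api_key: ${OPENAI_API_KEY:-"sk-1234"}|" /tmp/demo-settings.yaml
sed -i "2s|^.*$|  keep_alive: ${OLLAMA_KEEP_ALIVE:-"5m"}|" /tmp/demo-settings.yaml

cat /tmp/demo-settings.yaml
#   api_key: sk-1234
#   keep_alive: 5m

The approach is compact but brittle: any future insertion into settings_template.yaml shifts the line numbers again, which is exactly the renumbering this commit has to perform.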
9 changes: 7 additions & 2 deletions privategpt/settings_template.yaml
@@ -98,12 +98,17 @@ openai:
   api_base: https://api.openai.com/v1
   api_key: sk-1234
   model: gpt-3.5-turbo
+  request_timeout: 120.0
+  embedding_api_base: https://api.openai.com/v1
+  embedding_api_key: sk-1234
+  embedding_model: text-embedding-3-small

 ollama:
-  llm_model: mistral:latest
-  embedding_model: nomic-embed-text
   api_base: http://localhost:11434
   embedding_api_base: http://localhost:11434
+  llm_model: mistral:latest
+  embedding_model: nomic-embed-text
+  keep_alive: 5m
   tfs_z: 1.0
   num_predict: 128
   top_k: 40
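After the entrypoint has rewritten this template inside a running container, the new keys should show up in the generated file; a quick check, assuming the container is named privategpt (the name is an assumption):

# List the freshly written OpenAI/Ollama keys together with their line numbers.
docker exec privategpt grep -nE 'request_timeout|embedding_api|embedding_model|keep_alive' /home/worker/app/settings.yaml

The printed line numbers should match the sed targets used in docker-entrypoint.sh, for example request_timeout on line 101 and keep_alive on line 111.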
