feat: gpt-4o support #366
Thanks for notifying. I'll have a look |
https://github.blog/changelog/2024-07-05-github-copilot-enterprise-on-gpt-4o/ It is not yet available for personal plans |
Hi, |
gpt-4o isn't available for personal plans afaik. Are you sure it's gpt-4o in vscode? I'll need to install vscode again |
```json
{
  "data": [
    {
      "capabilities": {
        "family": "gpt-3.5-turbo",
        "limits": {
          "max_prompt_tokens": 7168
        },
        "object": "model_capabilities",
        "type": "chat"
      },
      "id": "gpt-3.5-turbo",
      "name": "GPT 3.5 Turbo",
      "object": "model",
      "version": "gpt-3.5-turbo-0613"
    },
    {
      "capabilities": {
        "family": "gpt-3.5-turbo",
        "limits": {
          "max_prompt_tokens": 7168
        },
        "object": "model_capabilities",
        "type": "chat"
      },
      "id": "gpt-3.5-turbo-0613",
      "name": "GPT 3.5 Turbo (2023-06-13)",
      "object": "model",
      "version": "gpt-3.5-turbo-0613"
    },
    {
      "capabilities": {
        "family": "gpt-4",
        "limits": {
          "max_prompt_tokens": 6144
        },
        "object": "model_capabilities",
        "type": "chat"
      },
      "id": "gpt-4",
      "name": "GPT 4",
      "object": "model",
      "version": "gpt-4-0613"
    },
    {
      "capabilities": {
        "family": "gpt-4",
        "limits": {
          "max_prompt_tokens": 6144
        },
        "object": "model_capabilities",
        "type": "chat"
      },
      "id": "gpt-4-0613",
      "name": "GPT 4 (2023-06-13)",
      "object": "model",
      "version": "gpt-4-0613"
    },
    {
      "capabilities": {
        "family": "gpt-4-turbo",
        "limits": {
          "max_prompt_tokens": 6144
        },
        "object": "model_capabilities",
        "type": "chat"
      },
      "id": "gpt-4-0125-preview",
      "name": "GPT 4 Turbo (2024-01-25 Preview)",
      "object": "model",
      "version": "gpt-4-0125-preview"
    },
    {
      "capabilities": {
        "family": "text-embedding-ada-002",
        "limits": {
          "max_inputs": 256
        },
        "object": "model_capabilities",
        "type": "embeddings"
      },
      "id": "text-embedding-ada-002",
      "name": "Embedding V2 Ada",
      "object": "model",
      "version": "text-embedding-ada-002"
    },
    {
      "capabilities": {
        "family": "text-embedding-ada-002",
        "object": "model_capabilities",
        "type": "embeddings"
      },
      "id": "text-embedding-ada-002-index",
      "name": "Embedding V2 Ada (Index)",
      "object": "model",
      "version": "text-embedding-ada-002"
    },
    {
      "capabilities": {
        "family": "text-embedding-3-small",
        "object": "model_capabilities",
        "type": "embeddings"
      },
      "id": "text-embedding-3-small",
      "name": "Embedding V3 small",
      "object": "model",
      "version": "text-embedding-3-small"
    },
    {
      "capabilities": {
        "family": "text-embedding-3-small",
        "object": "model_capabilities",
        "type": "embeddings"
      },
      "id": "text-embedding-3-small-inference",
      "name": "Embedding V3 small (Inference)",
      "object": "model",
      "version": "text-embedding-3-small"
    }
  ],
  "object": "list"
}
```
CC @deathbeam Maybe we can have a |
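A minimal sketch of filtering that response down to chat models (the `body` and `chat_models` names are hypothetical; `vim.json.decode` is a real Neovim API):

```lua
-- Minimal sketch: filter the models response above down to chat models.
-- `body` is assumed to hold the raw JSON string shown above.
local function chat_models(body)
  local decoded = vim.json.decode(body)
  local out = {}
  for _, model in ipairs(decoded.data) do
    if model.capabilities.type == "chat" then
      table.insert(out, { id = model.id, name = model.name })
    end
  end
  return out
end
```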
btw we might want to set |
Nice idea @gptlang I like it 👏 |
Hello everyone, maybe I missed some news, but I wanted to know if the … In case you are interested, I will forward this discussion I started: |
This is a great suggestion, maybe by using |
I'm not very good at Lua. I'm getting this error: … Any clue how to fix this? @jellydn Branch is here: https://github.com/CopilotC-Nvim/CopilotChat.nvim/tree/feat/model-prompt |
@gptlang You need to schedule it with vim.schedule
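For example, a minimal sketch (the callback name and buffer update are hypothetical, but `vim.schedule` and `nvim_buf_set_lines` are real Neovim APIs):

```lua
-- Sketch: defer buffer updates from a fast event (e.g. a libuv callback)
-- onto the main loop, where Neovim API calls are allowed.
local function on_response(lines)
  vim.schedule(function()
    vim.api.nvim_buf_set_lines(0, -1, -1, false, lines)
  end)
end
```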
|
Also this can be used as a source of inspiration: https://github.com/oflisback/obsidian-bridge.nvim/blob/main/lua/obsidian-bridge/network.lua |
I've attempted to use this setting:
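Presumably something along these lines (a sketch assuming lazy.nvim-style opts and the gpt-4-0125-preview id from the list above):

```lua
-- Hypothetical sketch of the kind of setting being discussed:
opts = {
  model = "gpt-4-0125-preview",
}
```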
Has anyone else done the same? I read here that the model should contain information indexed up to December 2023, but when I ask questions like "What is the latest build of Python?" and "What is the latest build of the Linux Kernel?", I respectively get the answers "Python 3.10.4" and "5.15". This is peculiar because it seems that the model's knowledge actually stops around March 2022. Have you noticed this as well? Could you conduct some tests? |
Some info about the model training data: https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4 |
AI is generally very unreliable. This was with a prompt change to say: "Knowledge Cutoff: December 2023". However, word it differently and you'll get correct, more up-to-date info (see here). |
I'm still having issues. I asked the questions using … For some reason the "new" … This is my setup:

```lua
{
  "CopilotC-Nvim/CopilotChat.nvim",
  branch = "canary",
  cmd = "CopilotChat",
  opts = function()
    local user = vim.env.USER or "User"
    user = user:sub(1, 1):upper() .. user:sub(2)
    return {
      model = "gpt-4-0125-preview",
      auto_insert_mode = true,
      show_help = true,
      question_header = " " .. user .. " ",
      answer_header = " Copilot ",
      window = {
        width = 0.4,
      },
      selection = function(source)
        local select = require("CopilotChat.select")
        return select.visual(source) or select.buffer(source)
      end,
    }
  end,
  keys = {
    { "<c-s>", "<CR>", ft = "copilot-chat", desc = "Submit Prompt", remap = true },
    { "<leader>a", "", desc = "+ai", mode = { "n", "v" } },
    {
      "<leader>aa",
      function()
        return require("CopilotChat").toggle()
      end,
      desc = "Toggle (CopilotChat)",
      mode = { "n", "v" },
    },
    {
      "<leader>ax",
      function()
        return require("CopilotChat").reset()
      end,
      desc = "Clear (CopilotChat)",
      mode = { "n", "v" },
    },
    {
      "<leader>aq",
      function()
        local input = vim.fn.input("Quick Chat: ")
        if input ~= "" then
          require("CopilotChat").ask(input)
        end
      end,
      desc = "Quick Chat (CopilotChat)",
      mode = { "n", "v" },
    },
    -- Show help actions with telescope
    -- (M.pick comes from the surrounding LazyVim spec; it is not defined in this snippet)
    { "<leader>ad", M.pick("help"), desc = "Diagnostic Help (CopilotChat)", mode = { "n", "v" } },
    -- Show prompts actions with telescope
    { "<leader>ap", M.pick("prompt"), desc = "Prompt Actions (CopilotChat)", mode = { "n", "v" } },
  },
  config = function(_, opts)
    local chat = require("CopilotChat")
    require("CopilotChat.integrations.cmp").setup()
    -- Hide line numbers in the chat buffer
    vim.api.nvim_create_autocmd("BufEnter", {
      pattern = "copilot-chat",
      callback = function()
        vim.opt_local.relativenumber = false
        vim.opt_local.number = false
      end,
    })
    chat.setup(opts)
  end,
}
```

Am I doing something wrong? |
I added the following line to the system prompt:
Yes, I know it's not yet December. However, it fails to answer the question with |
I believe there may be a misunderstanding or error in my configuration. Here is the setup I am currently using:

```lua
local COPILOT_INSTRUCTIONS = string.format(
  [[You are an AI programming assistant.
When asked for your name, you must respond with "GitHub Copilot".
Follow the user's requirements carefully & to the letter.
Follow Microsoft content policies.
Avoid content that violates copyrights.
If you are asked to generate content that is harmful, hateful, racist, sexist, lewd, violent, or completely irrelevant to software engineering, only respond with "Sorry, I can't assist with that."
Keep your answers short and impersonal.
You can answer general programming questions and perform the following tasks:
* Ask a question about the files in your current workspace
* Explain how the code in your active editor works
* Generate unit tests for the selected code
* Propose a fix for the problems in the selected code
* Scaffold code for a new workspace
* Create a new Jupyter Notebook
* Find relevant code to your query
* Propose a fix for a test failure
* Ask questions about Neovim
* Generate query parameters for workspace search
* Ask how to do something in the terminal
* Explain what just happened in the terminal
You use the GPT-4 version of OpenAI's GPT models.
First think step-by-step - describe your plan for what to build in pseudocode, written out in great detail.
Then output the code in a single code block. This code block should not contain line numbers (line numbers are not necessary for the code to be understood, they are in format number: at beginning of lines).
Minimize any other prose.
Use Markdown formatting in your answers.
Make sure to include the programming language name at the start of the Markdown code blocks.
Avoid wrapping the whole response in triple backticks.
The user works in an IDE called Neovim which has a concept for editors with open files, integrated unit test support, an output pane that shows the output of running the code as well as an integrated terminal.
The user is working on a %s machine. Please respond with system specific commands if applicable.
The active document is the source code the user is looking at right now.
You can only give one reply for each conversation turn.
Knowledge Cutoff: December 2024
]],
  vim.loop.os_uname().sysname
)

local my_opts = {
  model = "gpt-4-0125-preview", -- Model to use
  system_prompt = COPILOT_INSTRUCTIONS,
}
```

As you can see, I have also appended the knowledge-cutoff line ("Knowledge Cutoff: December 2024") to the system prompt.
However, the output I receive is as follows:
I suspect that I may be making a mistake somewhere. Could @gptlang kindly share the system_prompt, prompt, and model used in your test? I would like to replicate your setup to see if I can achieve the same results. |
You're doing it right, but the difference is in the question. If you ask it what the latest version is, it'll get it wrong. However, if you ask it when 3.12 was released, it'll get the date right (without any prior chats that would lead it to say otherwise). |
You were correct. At this point, I assume it wasn't working initially because the history of another chat was still active. However, aside from the knowledge of events, I wonder if giving it a manual knowledge cutoff (set to 2024) could also make it more "intelligent". |
CC @jellydn Do you think we should put in a "fake" knowledge cutoff? |
No, we shouldn't do that. |
@hrez I got that error too (…).

```diff
opts = {
- model = 'gpt-4o',
+ model = 'gpt-4o-2024-05-13',
}
```

You can find the latest model aliases here: https://platform.openai.com/docs/models/gpt-4o |
But is 4o currently available only for GitHub Copilot Enterprise subscribers, or also for Personal plans? |
Since nobody is reviewing it and it works for me, I'll just merge it. This will automatically rewrite gpt-4o as an alias. |
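A rough sketch of what such alias rewriting could look like (the table and function names here are hypothetical, not the merged code):

```lua
-- Hypothetical sketch: map a bare model alias to its dated version.
local aliases = {
  ["gpt-4o"] = "gpt-4o-2024-05-13",
}

local function resolve_model(name)
  return aliases[name] or name
end
```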
Hi,
Copilot Chat now supports gpt-4o. Could you add support for it? I think it would need to use https://openaipublic.blob.core.windows.net/encodings/o200k_base.tiktoken
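A minimal sketch of fetching that encoding file so a tokenizer can load it (the cache path and use of curl are assumptions, not the plugin's actual mechanism):

```lua
-- Sketch: download the o200k_base encoding for gpt-4o token counting.
-- Assumes curl is available; the destination path is an arbitrary choice.
local url = "https://openaipublic.blob.core.windows.net/encodings/o200k_base.tiktoken"
local dest = vim.fn.stdpath("cache") .. "/o200k_base.tiktoken"
if vim.fn.filereadable(dest) == 0 then
  vim.fn.system({ "curl", "-sSL", "-o", dest, url })
end
```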
Thanks.