feat: gpt-4o support #366

Closed
hrez opened this issue Jul 14, 2024 · 25 comments · Fixed by #368

@hrez

hrez commented Jul 14, 2024

Hi,
Copilot Chat now supports gpt-4o. Could you add support for it? I think it would need to use https://openaipublic.blob.core.windows.net/encodings/o200k_base.tiktoken

Thanks.
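
For reference, gpt-4o tokenizes with the o200k_base tiktoken vocabulary, while the older gpt-3.5/gpt-4 families use cl100k_base. A minimal sketch of how the encoding could be chosen per model (a hypothetical helper, not the plugin's actual API):

-- Pick the tiktoken vocabulary URL for a model id (hypothetical helper)
local function tiktoken_url(model)
  if model:find('gpt%-4o') then
    -- gpt-4o models tokenize with o200k_base
    return 'https://openaipublic.blob.core.windows.net/encodings/o200k_base.tiktoken'
  end
  -- gpt-3.5-turbo and gpt-4 models tokenize with cl100k_base
  return 'https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken'
end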

@acheong08

Thanks for notifying. I'll have a look

@acheong08

GitHub Copilot Enterprise features on GitHub.com and Copilot Chat in GitHub Mobile are now powered by the latest model from OpenAI, GPT-4o.

https://github.blog/changelog/2024-07-05-github-copilot-enterprise-on-gpt-4o/

It is not yet available for personal plans.

gptlang added a commit that referenced this issue Jul 14, 2024
@gptlang gptlang closed this as completed Jul 14, 2024
@hrez
Author

hrez commented Jul 15, 2024

Hi,
Thanks for the quick implementation.
Does gpt-4o work for you? Because I get "model not supported".
It works in vscode's latest chat plugin as far as I can tell.
So I wonder if it needs headers tweaked or it's just my neovim setup.

@gptlang gptlang reopened this Jul 15, 2024
@gptlang
Member

gptlang commented Jul 15, 2024

gpt-4o isn't available for personal plans afaik. Are you sure it's gpt-4o in vscode? I'll need to install vscode again

@gptlang
Member

gptlang commented Jul 15, 2024

{
    "data": [
        {
            "capabilities": {
                "family": "gpt-3.5-turbo",
                "limits": {
                    "max_prompt_tokens": 7168
                },
                "object": "model_capabilities",
                "type": "chat"
            },
            "id": "gpt-3.5-turbo",
            "name": "GPT 3.5 Turbo",
            "object": "model",
            "version": "gpt-3.5-turbo-0613"
        },
        {
            "capabilities": {
                "family": "gpt-3.5-turbo",
                "limits": {
                    "max_prompt_tokens": 7168
                },
                "object": "model_capabilities",
                "type": "chat"
            },
            "id": "gpt-3.5-turbo-0613",
            "name": "GPT 3.5 Turbo (2023-06-13)",
            "object": "model",
            "version": "gpt-3.5-turbo-0613"
        },
        {
            "capabilities": {
                "family": "gpt-4",
                "limits": {
                    "max_prompt_tokens": 6144
                },
                "object": "model_capabilities",
                "type": "chat"
            },
            "id": "gpt-4",
            "name": "GPT 4",
            "object": "model",
            "version": "gpt-4-0613"
        },
        {
            "capabilities": {
                "family": "gpt-4",
                "limits": {
                    "max_prompt_tokens": 6144
                },
                "object": "model_capabilities",
                "type": "chat"
            },
            "id": "gpt-4-0613",
            "name": "GPT 4 (2023-06-13)",
            "object": "model",
            "version": "gpt-4-0613"
        },
        {
            "capabilities": {
                "family": "gpt-4-turbo",
                "limits": {
                    "max_prompt_tokens": 6144
                },
                "object": "model_capabilities",
                "type": "chat"
            },
            "id": "gpt-4-0125-preview",
            "name": "GPT 4 Turbo (2024-01-25 Preview)",
            "object": "model",
            "version": "gpt-4-0125-preview"
        },
        {
            "capabilities": {
                "family": "text-embedding-ada-002",
                "limits": {
                    "max_inputs": 256
                },
                "object": "model_capabilities",
                "type": "embeddings"
            },
            "id": "text-embedding-ada-002",
            "name": "Embedding V2 Ada",
            "object": "model",
            "version": "text-embedding-ada-002"
        },
        {
            "capabilities": {
                "family": "text-embedding-ada-002",
                "object": "model_capabilities",
                "type": "embeddings"
            },
            "id": "text-embedding-ada-002-index",
            "name": "Embedding V2 Ada (Index)",
            "object": "model",
            "version": "text-embedding-ada-002"
        },
        {
            "capabilities": {
                "family": "text-embedding-3-small",
                "object": "model_capabilities",
                "type": "embeddings"
            },
            "id": "text-embedding-3-small",
            "name": "Embedding V3 small",
            "object": "model",
            "version": "text-embedding-3-small"
        },
        {
            "capabilities": {
                "family": "text-embedding-3-small",
                "object": "model_capabilities",
                "type": "embeddings"
            },
            "id": "text-embedding-3-small-inference",
            "name": "Embedding V3 small (Inference)",
            "object": "model",
            "version": "text-embedding-3-small"
        }
    ],
    "object": "list"
}

CC @deathbeam

Maybe we can have a :CopilotChatModels command to fetch https://api.githubcopilot.com/models and display/select available models. Otherwise it's not obvious what the model names are.
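
A rough sketch of what such a command could do, assuming a valid Copilot bearer token is already at hand (plenary.curl is an existing dependency; the endpoint is the one above, but the header set and helper name here are simplified and hypothetical):

local curl = require('plenary.curl')

-- Fetch the available models and let the user pick one via vim.ui.select
local function copilot_chat_models(token, on_choice)
  local res = curl.get('https://api.githubcopilot.com/models', {
    headers = { Authorization = 'Bearer ' .. token },
  })
  local ids = {}
  for _, model in ipairs(vim.json.decode(res.body).data) do
    if model.capabilities.type == 'chat' then
      table.insert(ids, model.id)
    end
  end
  -- defer to the main loop; vim.ui.select may call into Vimscript
  vim.schedule(function()
    vim.ui.select(ids, { prompt = 'Select model: ' }, on_choice)
  end)
end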

@gptlang
Member

gptlang commented Jul 15, 2024

btw we might want to set gpt-4-0125-preview as the default, since it's more recent and ranks higher on leaderboards than plain gpt-4 (an alias for GPT 4 (2023-06-13))

@jellydn
Contributor

jellydn commented Jul 15, 2024

Maybe we can have a :CopilotChatModels command to fetch https://api.githubcopilot.com/models and display/select available models. Otherwise it's not obvious what the model names are.

Nice idea @gptlang I like it 👏

@pidgeon777

Hello everyone, maybe I missed some news, but I wanted to know if the GPT-4o model is also available for the Personal plan.

In case you are interested, here is a discussion I started:

https://github.com/orgs/community/discussions/123670

@pidgeon777

Maybe we can have a :CopilotChatModels command to fetch https://api.githubcopilot.com/models and display/select available models. Otherwise it's not obvious what the model names are.

Nice idea @gptlang I like it 👏

This is a great suggestion; it could perhaps be implemented with vim.ui.select or similar.

@gptlang
Member

gptlang commented Jul 16, 2024

I'm not very good at Lua. I'm getting this error: E5560: Vimscript function must not be called in a lua loop callback

Any clue how to fix this? @jellydn

Branch is here: https://github.com/CopilotC-Nvim/CopilotChat.nvim/tree/feat/model-prompt

@jarviliam

@gptlang You need to schedule it with vim.schedule

-- Defer to the main event loop, where Vimscript functions may be called
vim.schedule(function()
  vim.ui.select(tbl, {
    prompt = 'Select model:',
  }, function(choice)
    -- handle the selected model here
  end)
end)
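
vim.schedule defers the callback to the main event loop, where calling into Vimscript is allowed; vim.ui.select's default implementation does exactly that, so invoking it directly from a libuv callback raises E5560.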

@pidgeon777

This could also be used as a source of inspiration:

https://github.com/oflisback/obsidian-bridge.nvim/blob/main/lua/obsidian-bridge/network.lua

@pidgeon777

I've attempted to use this setting:

model = "gpt-4-0125-preview"

Has anyone else done the same? I read here that the model should contain information indexed up to December 2023, but when I ask questions like:

"What is the latest build of Python?"

"What is the latest build of the Linux Kernel?"

I respectively get the answers:

"Python 3.10.4"

"5.15"

This is peculiar because it seems that the model's knowledge actually stops around March 2022.

Have you noticed this as well? Could you conduct some tests?

@pidgeon777

Some info about the model training data:

https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4

@gptlang
Member

gptlang commented Jul 16, 2024

AI is generally very unreliable.

[screenshot]

This was with a prompt change to say: "Knowledge Cutoff: December 2023".

[screenshot]

However, word it differently and you'll get correct, more up-to-date info (see here).

@pidgeon777

I'm still having issues. I asked the questions using :CopilotChat <Question>:

[screenshot]

For some reason the "new" gpt-4 model seems not to be active.

This is my setup:

{
    "CopilotC-Nvim/CopilotChat.nvim",
    branch = "canary",
    cmd = "CopilotChat",
    opts = function()
      local user = vim.env.USER or "User"
      user = user:sub(1, 1):upper() .. user:sub(2)
      return {
        model = "gpt-4-0125-preview",
        auto_insert_mode = true,
        show_help = true,
        question_header = "" .. user .. " ",
        answer_header = "  Copilot ",
        window = {
          width = 0.4,
        },
        selection = function(source)
          local select = require("CopilotChat.select")
          return select.visual(source) or select.buffer(source)
        end,
      }
    end,
    keys = {
      { "<c-s>", "<CR>", ft = "copilot-chat", desc = "Submit Prompt", remap = true },
      { "<leader>a", "", desc = "+ai", mode = { "n", "v" } },
      {
        "<leader>aa",
        function()
          return require("CopilotChat").toggle()
        end,
        desc = "Toggle (CopilotChat)",
        mode = { "n", "v" },
      },
      {
        "<leader>ax",
        function()
          return require("CopilotChat").reset()
        end,
        desc = "Clear (CopilotChat)",
        mode = { "n", "v" },
      },
      {
        "<leader>aq",
        function()
          local input = vim.fn.input("Quick Chat: ")
          if input ~= "" then
            require("CopilotChat").ask(input)
          end
        end,
        desc = "Quick Chat (CopilotChat)",
        mode = { "n", "v" },
      },
      -- Show help actions with telescope (M.pick is a picker helper defined elsewhere in this spec)
      { "<leader>ad", M.pick("help"), desc = "Diagnostic Help (CopilotChat)", mode = { "n", "v" } },
      -- Show prompts actions with telescope
      { "<leader>ap", M.pick("prompt"), desc = "Prompt Actions (CopilotChat)", mode = { "n", "v" } },
    },
    config = function(_, opts)
      local chat = require("CopilotChat")
      require("CopilotChat.integrations.cmp").setup()

      vim.api.nvim_create_autocmd("BufEnter", {
        pattern = "copilot-chat",
        callback = function()
          vim.opt_local.relativenumber = false
          vim.opt_local.number = false
        end,
      })

      chat.setup(opts)
    end,
  }

Am I doing something wrong?

@gptlang
Member

gptlang commented Jul 17, 2024

I added the following line to the system prompt:

Knowledge Cutoff: December 2024

Yes, I know it's not yet December. However, it fails to answer the question with Knowledge Cutoff: December 2023... I only just now realized my previous typo. Giving it a fake knowledge cutoff improves performance. Not sure if we should do that, though.

@pidgeon777
Copy link

I believe there may be a misunderstanding or error in my configuration. Here is the setup I am currently using:

local COPILOT_INSTRUCTIONS = string.format(
  [[You are an AI programming assistant.
When asked for your name, you must respond with "GitHub Copilot".
Follow the user's requirements carefully & to the letter.
Follow Microsoft content policies.
Avoid content that violates copyrights.
If you are asked to generate content that is harmful, hateful, racist, sexist, lewd, violent, or completely irrelevant to software engineering, only respond with "Sorry, I can't assist with that."
Keep your answers short and impersonal.
You can answer general programming questions and perform the following tasks: 
* Ask a question about the files in your current workspace
* Explain how the code in your active editor works
* Generate unit tests for the selected code
* Propose a fix for the problems in the selected code
* Scaffold code for a new workspace
* Create a new Jupyter Notebook
* Find relevant code to your query
* Propose a fix for a test failure
* Ask questions about Neovim
* Generate query parameters for workspace search
* Ask how to do something in the terminal
* Explain what just happened in the terminal
You use the GPT-4 version of OpenAI's GPT models.
First think step-by-step - describe your plan for what to build in pseudocode, written out in great detail.
Then output the code in a single code block. This code block should not contain line numbers (line numbers are not necessary for the code to be understood, they are in format number: at beginning of lines).
Minimize any other prose.
Use Markdown formatting in your answers.
Make sure to include the programming language name at the start of the Markdown code blocks.
Avoid wrapping the whole response in triple backticks.
The user works in an IDE called Neovim which has a concept for editors with open files, integrated unit test support, an output pane that shows the output of running the code as well as an integrated terminal.
The user is working on a %s machine. Please respond with system specific commands if applicable.
The active document is the source code the user is looking at right now.
You can only give one reply for each conversation turn.
Knowledge Cutoff: December 2024
]],
  vim.loop.os_uname().sysname
)

local my_opts = {
  model = "gpt-4-0125-preview", -- Model to use
  system_prompt = COPILOT_INSTRUCTIONS,
}

As you can see, I have also appended the following line to the system prompt:

Knowledge Cutoff: December 2024

However, the output I receive is as follows:

  User ───

What is the current version of Python based on your knowledge cutoff?

  Copilot ───

As of my last update in December 2024, the current version of Python would be Python 3.11.

I suspect that I may be making a mistake somewhere.

Could @gptlang kindly share the system_prompt, prompt, and model used in your test? I would like to replicate your setup to see if I can achieve the same results.

@gptlang
Member

gptlang commented Jul 18, 2024

You're doing it right, but the difference is in the question. If you ask it what the latest version is, it'll get it wrong. However, if you ask it when 3.12 was released, it'll get the date right (without any prior chats that would lead it to say otherwise).

@pidgeon777

[screenshot]

  User ───

When was Python 3.12 released?

  Copilot ───

Python 3.12 was released on October 2, 2023.

You were correct. At this point, I assume it wasn't working initially because another chat's history was still active.

However, aside from the knowledge of events, I wonder if giving it a manual knowledge cutoff (set to 2024) could also make it more "intelligent".

@gptlang
Member

gptlang commented Jul 23, 2024

However, aside from the knowledge of events, I wonder if giving it a manual knowledge cutoff (set to 2024) could also make it more "intelligent".

CC @jellydn Do you think we should put in a "fake" knowledge cutoff?

@jellydn
Contributor

jellydn commented Jul 23, 2024

Do you think we should put in a "fake" knowledge cutoff?

No, we shouldn't do that.

@thenbe

thenbe commented Jul 27, 2024

Hi, Thanks for the quick implementation. Does gpt-4o work for you? Because I get "model not supported". It works in vscode's latest chat plugin as far as I can tell. So I wonder if it needs headers tweaked or it's just my neovim setup.

@hrez I got that error too (model not supported). To fix it, I had to use the full model name instead of the alias. I'm not sure why the alias wasn't working.

opts = {
-	model = 'gpt-4o',
+	model = 'gpt-4o-2024-05-13',
}

You can find the latest model aliases here: https://platform.openai.com/docs/models/gpt-4o

@pidgeon777

Hi, Thanks for the quick implementation. Does gpt-4o work for you? Because I get "model not supported". It works in vscode's latest chat plugin as far as I can tell. So I wonder if it needs headers tweaked or it's just my neovim setup.

@hrez I got that error too (model not supported). To fix it, I had to use the full model name instead of the alias. I'm not sure why the alias wasn't working.

opts = {
-	model = 'gpt-4o',
+	model = 'gpt-4o-2024-05-13',
}

You can find the latest model aliases here: https://platform.openai.com/docs/models/gpt-4o

But is gpt-4o currently available only to GitHub Copilot Enterprise subscribers, or also on Personal plans?

@gptlang
Member

gptlang commented Jul 28, 2024

I'm not sure why the alias wasn't working.

Since nobody is reviewing it and it works for me, I'll just be merging it.

This will automatically rewrite the gpt-4o alias to the full model name.
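
Presumably something along these lines (a hypothetical sketch of the idea, not the actual merged diff):

-- Rewrite known aliases to the dated model names the API accepts
local aliases = {
  ['gpt-4o'] = 'gpt-4o-2024-05-13',
}

local function resolve_model(model)
  return aliases[model] or model
end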

WilliamHsieh added a commit to WilliamHsieh/dotfiles that referenced this issue Jul 30, 2024