██████╗██╗ ██╗ █████╗ ████████╗ ██████╗ ██████╗ ██████╗ ██████╗ ███████╗
██╔════╝██║ ██║██╔══██╗╚══██╔══╝ ╚════██╗ ██╔════╝██╔═══██╗██╔══██╗██╔════╝
██║ ███████║███████║ ██║ █████╔╝ ██║ ██║ ██║██║ ██║█████╗
██║ ██╔══██║██╔══██║ ██║ ██╔═══╝ ██║ ██║ ██║██║ ██║██╔══╝
╚██████╗██║ ██║██║ ██║ ██║ ███████╗ ╚██████╗╚██████╔╝██████╔╝███████╗
╚═════╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚═╝ ╚══════╝ ╚═════╝ ╚═════╝ ╚═════╝ ╚══════╝
中文 | English
Chat2Code is a tool that lets programmers have a conversation with their code in natural language.
Programmers often have soul-searching questions like:
🤔"What does this function do?"
🤔"What is the implementation principle behind this function?"
🤔"How can I implement this feature?"
🤔"Is there an existing function that implements this feature?"
Usually, there is no one around who can answer these questions anytime, anywhere.
Chat2Code: "I'm here to help you💻"
- Pre-analyze the code
  - Walk the files in the target directory, applying filtering rules
  - Split each file into appropriately sized chunks
  - Embed the text of each chunk
  - Store each chunk's index and embedding vector in a local cache
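The pre-analysis pipeline above can be sketched roughly as follows. This is a minimal illustration, not Chat2Code's actual implementation: the chunk size, struct names, cache file name, and the stand-in `fakeEmbed` (which replaces a real embedding API call) are all assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// ChunkRecord pairs a text chunk with its embedding vector.
// (Hypothetical structure; not Chat2Code's actual cache format.)
type ChunkRecord struct {
	Index  int       `json:"index"`
	Text   string    `json:"text"`
	Vector []float64 `json:"vector"`
}

// splitChunks cuts source text into pieces of at most `size` runes.
func splitChunks(text string, size int) []string {
	runes := []rune(text)
	var chunks []string
	for start := 0; start < len(runes); start += size {
		end := start + size
		if end > len(runes) {
			end = len(runes)
		}
		chunks = append(chunks, string(runes[start:end]))
	}
	return chunks
}

// fakeEmbed stands in for a real embedding API call; it just folds
// character codes into a tiny fixed-size vector for illustration.
func fakeEmbed(text string) []float64 {
	v := make([]float64, 4)
	for i, r := range text {
		v[i%4] += float64(r)
	}
	return v
}

func main() {
	src := "func add(a, b int) int { return a + b }"
	var records []ChunkRecord
	for i, c := range splitChunks(src, 16) {
		records = append(records, ChunkRecord{Index: i, Text: c, Vector: fakeEmbed(c)})
	}
	// Persist the index + vectors as a local cache file.
	data, _ := json.Marshal(records)
	_ = os.WriteFile("chunk_cache.json", data, 0o644)
	fmt.Println("cached chunks:", len(records)) // → cached chunks: 3
}
```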
- Q&A
  - Describing code-related questions in natural language is more accurate and intuitive; the large language model combines the relevant code to answer them.
  - The index cache avoids spending a large number of API tokens re-analyzing the code every time.
  - Using OpenAI's embeddings makes code analysis more accurate.
  - Local vector queries do not rely on external services, making them faster and safer.
- Install
go install github.com/byebyebruce/chat2code/cmd/chat2code@latest
- Set the OPENAI_API_KEY environment variable
export OPENAI_API_KEY=xxxx
If you want to set an OpenAI base URL:
export OPENAI_API_BASE=https://xxx
- Run and pass in a code directory
chat2code load {code_dir}
- OK, now chat with your code
chat2code