Commit 1ca9ba2: Merge branch 'next'
knoopx committed Nov 6, 2023
2 parents 6a72cdb + d8dc1f4
Showing 74 changed files with 2,911 additions and 1,310 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -174,3 +174,5 @@ dist
# Finder (MacOS) folder config
.DS_Store

models/
124 changes: 73 additions & 51 deletions README.md
@@ -1,71 +1,93 @@
# LLM Workbench

LLM Workbench is a user-friendly web interface designed for large language models, built with React and MobX, styled using Shadcn UI. It serves as a one-stop solution for all your large language model needs, enabling you to harness the power of free, open-source language models on your local machine.

### Getting Started

To get started, choose between a HuggingFace Text Generation Inference (TGI) endpoint or Ollama.

#### HuggingFace Text Generation Inference

```bash
docker run --gpus all --shm-size 1g -p 8080:80 -v $(pwd)/models:/data ghcr.io/huggingface/text-generation-inference:1.1.0 --trust-remote-code --model-id TheBloke/deepseek-coder-33B-instruct-AWQ --quantize awq
```

#### Ollama

```bash
OLLAMA_ORIGINS="https://knoopx.github.io" ollama serve
```

or add this line to `/etc/systemd/system/ollama.service`:

```bash
Environment=OLLAMA_ORIGINS="https://knoopx.github.io"
```
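
Alternatively, a systemd drop-in keeps the change out of the packaged unit file. The drop-in path below follows standard systemd conventions; adjust it to your distribution:

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment=OLLAMA_ORIGINS="https://knoopx.github.io"
```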

Restart Ollama using these commands:

```bash
systemctl daemon-reload
systemctl restart ollama
```

## 🎭 Features

### 💬 Chat Interface

- **Simple, clean interface**: We've designed a user-friendly interface that makes it easy for you to interact with the AI model.
- **Output streaming**: Watch the generated text appear in real time as the model responds.
- **Regenerate/Continue/Undo/Clear**: Use these buttons to control the generation process.
- **Markdown Rendering**: Responses are rendered as Markdown, so formatted content such as lists, tables, and code blocks displays correctly.
- **Generation canceling**: Stop the generation process at any time by clicking the "Cancel" button.
- **Dark mode**: Prefer working in the dark? Toggle on Dark mode for a more comfortable experience.
- **Attachments**: Attach files to your chat messages (only PDF, DOCX, and plain text are supported).

### 🛹 Playground

- **Copilot-style inline completion**: Type your prompt and let the AI suggest completions as you type.
- **Tab to accept**: Press the Tab key to accept the suggested completion.
- **Ctrl+Enter to regenerate**: Press Ctrl+Enter to regenerate the response with the same prompt.
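
The keybindings above amount to a small dispatch on the key event. A minimal sketch (the names are illustrative, not the app's actual handlers):

```typescript
type PlaygroundAction = "accept-completion" | "regenerate" | "none";

// Map a keyboard event to a playground action: Tab accepts the inline
// suggestion, Ctrl+Enter regenerates with the same prompt.
function keyAction(key: string, ctrlKey: boolean): PlaygroundAction {
  if (key === "Tab") return "accept-completion";
  if (key === "Enter" && ctrlKey) return "regenerate";
  return "none";
}
```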

### 🤖 Agents

- **Connection Adapters**: We support various connection adapters, including Ollama and HuggingFace TGI (local or remote).
- **Complete generation control**: Customize the agent behavior with system prompts, conversation history, and chat prompt templates.
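
As a sketch of how a chat prompt template might combine these pieces (the ChatML-style tags and function shape are illustrative assumptions, not the app's actual template):

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Expand a system prompt plus conversation history into a single prompt
// string, ending with an open assistant turn for the model to complete.
function renderPrompt(system: string, history: Message[]): string {
  const turns = history
    .map((m) => `<|im_start|>${m.role}\n${m.content}<|im_end|>`)
    .join("\n");
  return `<|im_start|>system\n${system}<|im_end|>\n${turns}\n<|im_start|>assistant\n`;
}
```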

# Future Ideas

- Import/Export chats - Importing and exporting chat data for convenience.
- Token Counter - A feature to count tokens in text.
- Copy fenced block to clipboard - The ability to copy a fenced code block to the clipboard with one click.
- Collapsible side panels - Side panels that can be expanded or collapsed for better organization.
- [window.ai](https://windowai.io/) integration

Code Interpreters:

- Hugging Face agents ([@huggingface/agents](https://github.com/huggingface/agents))
- Aider ([paul-gauthier/aider](https://github.com/paul-gauthier/aider))
- Functionary ([MeetKai/functionary](https://github.com/MeetKai/functionary))
- NexusRaven model ([Nexusflow/NexusRaven-13B](https://huggingface.co/Nexusflow/NexusRaven-13B))
- Open procedures database ([KillianLucas/open-procedures](https://raw.githubusercontent.com/KillianLucas/open-procedures/main/procedures_db.json))
- ReACT ([www.promptingguide.ai/techniques/react](https://www.promptingguide.ai/techniques/react))

Model management features:

- Hugging Face Hub ([@huggingface/hub](https://huggingface.co/docs/huggingface.js/hub/modules))
- GPT4All catalog ([nomic-ai/gpt4all](https://raw.githubusercontent.com/nomic-ai/gpt4all/main/gpt4all-chat/metadata/models2.json))
- LM Studio catalog [lmstudio-ai/model-catalog](https://raw.githubusercontent.com/lmstudio-ai/model-catalog/main/catalog.json)

RAG, embeddings and vector search:

- Client vector search ([yusufhilmi/client-vector-search](https://github.com/yusufhilmi/client-vector-search)) - an in-browser vector database.
- Fully local PDF chatbot ([jacoblee93/fully-local-pdf-chatbot](https://github.com/jacoblee93/fully-local-pdf-chatbot)) - a related reference project.
- SemanticFinder ([do-me.github.io/SemanticFinder](https://do-me.github.io/SemanticFinder/)) - a related in-browser semantic search demo.

Other potential pipelines to consider:

- TTS (Text-to-Speech) - convert text into speech.
- Reformatting - e.g. re-punctuation models ([ldenoue/distilbert-base-re-punctuate](https://huggingface.co/ldenoue/distilbert-base-re-punctuate)).
- Summarization - summarize long text into shorter versions ([ldenoue/distilbart-cnn-6-6](https://huggingface.co/ldenoue/distilbart-cnn-6-6)).
- Translation - convert text between languages.
- Automatic speech recognition pipeline ([transformers.js](https://huggingface.co/docs/transformers.js/api/pipelines#pipelinesautomaticspeechrecognitionpipeline)) - convert spoken words into written text.
- Named Entity Recognition (NER) - identify and classify entities in text ([Xenova/bert-base-NER](https://huggingface.co/Xenova/bert-base-NER), [wink-nlp](https://winkjs.org/wink-nlp/wink-nlp-in-browsers.html)).
Binary file modified bun.lockb
Binary file not shown.
13 changes: 6 additions & 7 deletions package.json
@@ -28,7 +28,7 @@
},
"dependencies": {
"@hookform/resolvers": "^3.3.2",
"@jongleberry/pipe": "^1.1.0",
"@huggingface/inference": "^2.6.4",
"@microflash/rehype-starry-night": "^3.0.0",
"@radix-ui/react-accordion": "^1.1.2",
"@radix-ui/react-alert-dialog": "^1.0.5",
@@ -65,16 +65,14 @@
"cmdk": "^0.2.0",
"date-fns": "^2.30.0",
"dedent": "^1.5.1",
"install": "^0.13.0",
"he": "^1.2.0",
"liquidjs": "^10.9.3",
"lucide-react": "^0.290.0",
"mammoth": "^1.6.0",
"minisearch": "^6.2.0",
"mobx": "^6.10.2",
"mobx-react": "^9.0.1",
"mobx-state-tree": "^5.3.0",
"mustache": "^4.2.0",
"papaparse": "^5.4.1",
"pdf-parse": "^1.1.1",
"pdf.js": "^0.1.0",
"pdfjs-dist": "^3.11.174",
"react": "^18.2.0",
"react-day-picker": "^8.9.1",
@@ -86,6 +84,7 @@
"rehype-highlight": "^7.0.0",
"rehype-katex": "^7.0.0",
"rehype-parse": "^9.0.0",
"rehype-remark": "^10.0.0",
"rehype-stringify": "^10.0.0",
"remark-emoji": "^4.0.1",
"remark-gfm": "^4.0.0",
@@ -95,8 +94,8 @@
"tailwind-merge": "^1.14.0",
"tailwindcss-animate": "^1.0.7",
"unified-stream": "^3.0.0",
"use-debounce": "^9.0.4",
"vite": "^4.5.0",
"voy-search": "^0.6.3",
"zod": "^3.22.4"
}
}
26 changes: 26 additions & 0 deletions scripts/pull.py
@@ -0,0 +1,26 @@
import subprocess
from argparse import ArgumentParser
from pathlib import Path

from huggingface_hub import HfApi, hf_hub_download

parser = ArgumentParser()
parser.add_argument("repo_id", type=str)
parser.add_argument("--quant", type=str, default="Q4_K_M")

args = parser.parse_args()
api = HfApi()

files = api.list_repo_files(args.repo_id)

for file in files:
    # Download only the GGUF file matching the requested quantization
    if args.quant in file and ".gguf" in file:
        target_path = Path("models") / args.repo_id
        target_path.mkdir(parents=True, exist_ok=True)

        hf_hub_download(args.repo_id, file, local_dir=target_path)

        # Write a minimal Modelfile pointing Ollama at the downloaded weights
        model_file = target_path / "Modelfile"
        model_file.write_text(f"FROM {file}")

        # Register the model with Ollama under the quant-stripped file name
        model_name = Path(file).stem.replace(f".{args.quant}", "")
        subprocess.run(["ollama", "create", model_name, "-f", str(model_file)])
22 changes: 22 additions & 0 deletions src/app/AgentConversationAccordionItem.tsx
@@ -0,0 +1,22 @@
import { observer } from "mobx-react";
import { useStore } from "@/store";
import { MdOutlineHistory } from "react-icons/md";
import { AgentHistory } from "../components/AgentConversation";
import { AppAccordionItem } from "./AppAccordionItem";

export const AgentConversationAccordionItem = observer(() => {
  const store = useStore();
  const {
    state: { resource: agent },
  } = store;

  return (
    <AppAccordionItem
      id="agent-history"
      icon={MdOutlineHistory}
      title="Conversation History"
    >
      <AgentHistory agent={agent} />
    </AppAccordionItem>
  );
});
18 changes: 18 additions & 0 deletions src/app/AgentPromptTemplate.tsx
@@ -0,0 +1,18 @@
import { Agent } from "@/store/Agent"
import { Instance } from "mobx-state-tree"
import { observer } from "mobx-react"
import { AutoTextarea } from "@/components/AutoTextarea"

export const AgentPromptTemplate: React.FC<{
  agent: Instance<typeof Agent>
}> = observer(({ agent }) => {
  return (
    <AutoTextarea
      className="flex-auto font-mono text-xs whitespace-pre"
      value={agent.promptTemplate}
      onChange={(e) => agent.update({ promptTemplate: e.target.value })}
    />
  )
})
22 changes: 22 additions & 0 deletions src/app/AgentPromptTemplateAccordionItem.tsx
@@ -0,0 +1,22 @@
import { observer } from "mobx-react";
import { useStore } from "@/store";
import { TbPrompt } from "react-icons/tb";
import { AppAccordionItem } from "./AppAccordionItem";
import { AgentPromptTemplate } from "./AgentPromptTemplate";

const AgentPromptTemplateAccordionItem = observer(() => {
  const store = useStore();
  const {
    state: { resource: agent },
  } = store;

  return (
    <AppAccordionItem
      id="prompt-template"
      icon={TbPrompt}
      title="Prompt Template"
    >
      <AgentPromptTemplate agent={agent} />
    </AppAccordionItem>
  );
});