
LLM Stream Service

Easy to deploy with just a basic knowledge of Python

Streaming API and web page for Large Language Models, built with transformers, Flask, and Gradio.

This repository contains:

  1. Flask API: real token-by-token streaming generation for LLMs, exposed through a streaming response interface.
  2. Gradio app: a simple web page for chatting with the LLM.
  3. Request client: fast back-end requests against the API.
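The streaming response interface follows the standard Flask pattern of returning a generator from the view, so each yielded string is sent to the client as soon as it is produced. The sketch below illustrates that pattern only; the `/generate` route and the stand-in token source are assumptions for illustration, not code from `llm_service.py`.

```python
from flask import Flask, Response, request

app = Flask(__name__)

def generate_tokens(prompt):
    # Stand-in for the model loop; the real service yields decoded LLM tokens here.
    for piece in ("Echo:", " ", prompt):
        yield piece

@app.route("/generate", methods=["POST"])
def generate():
    prompt = request.get_json(force=True).get("prompt", "")
    # Returning a generator makes Flask stream each yielded chunk to the client.
    return Response(generate_tokens(prompt), mimetype="text/plain")
```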

Quick Start

Take Llama 3 as an example:

  1. Follow the Llama3 download instructions to download the Meta-Llama-3-8B-Instruct model, or fetch it from huggingface / modelscope.
  2. Follow the Llama3 quick-start to install the dependencies for Llama 3.

Then start this project:

  1. Install the dependencies for this repository:

    pip install flask gradio transformers
  2. [Optional] Modify the settings in settings.py.

  3. Run the Flask service:

    python llm_service.py --host 0.0.0.0 --port 8800 --ckpts /Meta-Llama-3-8B-Instruct
  4. Run the Gradio app:

    gradio llm_app.py --address http://127.0.0.1:80/
  5. Invoke the service from the command line:

    python llm_request.py --address http://127.0.0.1:80/
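A client like `llm_request.py` can consume the service's streamed response by reading the body incrementally instead of waiting for it to complete. The sketch below shows that pattern with only the standard library; the `/generate` endpoint path and the `prompt` JSON field are assumptions, not taken from this repository's code.

```python
import json
import urllib.request

def stream_chat(address: str, prompt: str):
    """Yield text chunks from a streaming LLM endpoint as they arrive."""
    req = urllib.request.Request(
        address.rstrip("/") + "/generate",  # assumed endpoint path
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        while True:
            chunk = resp.read(1024)  # returns as soon as bytes are available
            if not chunk:
                break
            yield chunk.decode("utf-8", errors="replace")
```

Printing each yielded chunk as it arrives gives the familiar "typewriter" effect in a terminal.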

Journey and Challenges

  1. The project's initial streaming scheme was the TextIteratorStreamer that ships with the official transformers library, but generation was still very slow. After researching, I found that TextIteratorStreamer converts "print-ready text" into a streaming structure, meaning the LLM first has to finish generating a whole block of text before it is converted, which is not what I wanted. I wanted the LLM to yield each token the moment it is generated.
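The behaviour I wanted boils down to a producer/consumer pattern: generation runs in one thread and pushes each token into a queue, while the caller iterates the queue and receives tokens immediately. The toy `TokenStreamer` below illustrates that idea; it is a minimal stand-in for this pattern, not the transformers class.

```python
import queue
import threading

class TokenStreamer:
    """Toy iterator-style streamer: yields each token the moment it arrives."""
    _END = object()  # sentinel marking the end of generation

    def __init__(self):
        self.q = queue.Queue()

    def put(self, token):
        self.q.put(token)

    def end(self):
        self.q.put(self._END)

    def __iter__(self):
        while (token := self.q.get()) is not self._END:
            yield token

def fake_generate(streamer, tokens):
    # Simulates a model pushing tokens into the streamer as they are produced.
    for t in tokens:
        streamer.put(t)
    streamer.end()

streamer = TokenStreamer()
threading.Thread(target=fake_generate, args=(streamer, ["Hel", "lo", "!"])).start()
text = "".join(streamer)  # consumes tokens one by one as they arrive
```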

  2. Subsequently, I came across LowinLi's project, which truly implements streaming output for pretrained models. When I eagerly applied it to Llama 3, it threw an error. Debugging showed that Llama 3 has two eos_tokens, which caused the loop to generate negative ids. I therefore reworked that project: cleaned up redundancies, adapted it to Llama 3, and made the code easier to read and understand.
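The core of the fix is a manual decoding loop that stops when the model emits any one of a *set* of eos ids, rather than comparing against a single id. Below is a hedged sketch of that loop: `next_token` stands in for one forward pass of the model, and the token ids are illustrative, not Llama 3's real vocabulary ids.

```python
def stream_tokens(next_token, prompt_ids, eos_ids, max_new_tokens=256):
    """Yield generated token ids one at a time, stopping on any eos id."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        token = next_token(ids)   # one model step -> next token id
        if token in eos_ids:      # stop on ANY eos token, never yield it
            return
        ids.append(token)
        yield token

# Toy "model": emits 10, 11, 12, then an eos id.
script = iter([10, 11, 12, 1])
out = list(stream_tokens(lambda ids: next(script), [0], eos_ids={1, 2}))
```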

Thanks 🙇
