AI-Creators-Society/Lsmith

StableDiffusionWebUI accelerated using TensorRT

Lsmith is a fast StableDiffusionWebUI built on high-speed inference with NVIDIA TensorRT.

  1. Benchmark
  2. Installation
  3. Usage

Benchmark

(benchmark image)

Installation

Docker (all platforms) | Easy

  1. Clone the repository
git clone https://github.com/ddPn08/Lsmith.git
cd Lsmith
git submodule update --init --recursive
  2. Launch with Docker Compose
docker compose up
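TensorRT needs GPU access inside the container, which in turn requires the NVIDIA Container Toolkit on the host. The repository ships its own docker-compose.yml; the fragment below is only an illustrative sketch of what a GPU reservation in a Compose file looks like (the service name and port are assumptions, not the actual file).

```yaml
# Illustrative sketch only — the real configuration is the docker-compose.yml
# shipped in the repository. Service name and port are assumptions.
services:
  lsmith:
    build: .
    ports:
      - "8000:8000"            # WebUI port (assumed)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```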

Linux | Difficult

Requirements

  • node.js (recommended version is 18)
  • pnpm
  • Python 3.10
  • pip
  • CUDA
  • cuDNN < 8.6.0
  • TensorRT 8.5.x
  1. Follow the instructions on this page to build TensorRT OSS and obtain libnvinfer_plugin.so.
  2. Clone the Lsmith repository and move into it
git clone https://github.com/ddPn08/Lsmith.git
cd Lsmith
git submodule update --init --recursive
  3. Enter the frontend directory and build the frontend
cd frontend
pnpm i
pnpm build --out-dir ../dist
  4. Run launch.sh with the path to libnvinfer_plugin.so in the LD_PRELOAD variable, e.g.:
LD_PRELOAD="/lib/src/TensorRT/build/out/libnvinfer_plugin.so.8" bash launch.sh --host 0.0.0.0

Windows | Currently unavailable...

We are still looking for a way to support Windows natively. Please use Docker instead for now.


Usage

Once started, open <ip address>:<port number> (e.g. http://localhost:8000) in your browser to access the WebUI.

First, you need to convert an existing Diffusers model into a TensorRT engine.

Building the TensorRT engine

  1. Click the "Engine" tab.
  2. Enter a Hugging Face Diffusers model ID in "Model ID" (e.g. CompVis/stable-diffusion-v1-4).
  3. Enter your Hugging Face access token in "HuggingFace Access Token" (required for some repositories). Access tokens can be obtained or created from this page.
  4. Click the "Build" button to start building the engine.
    • There may be warnings during the engine build, but you can safely ignore them unless the build fails.
    • The build can take tens of minutes. For reference, it takes about 15 minutes on an RTX 3060 12GB.
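For context on what these two fields refer to: a Diffusers model ID names a repository on the Hugging Face Hub, and the access token is sent as a Bearer credential when a repository is gated. The sketch below only illustrates that mapping with Python's standard library — Lsmith performs the actual download and engine conversion itself, and `hf_xxx` is a placeholder, not a real token.

```python
import urllib.request

# What the WebUI fields map to (illustration only — Lsmith handles this itself):
model_id = "CompVis/stable-diffusion-v1-4"   # the "Model ID" field
token = "hf_xxx"                             # the "HuggingFace Access Token" field (placeholder)

# A Diffusers model ID is a Hugging Face Hub repository; its files resolve under
# https://huggingface.co/<model_id>/resolve/<revision>/<path>.
req = urllib.request.Request(
    f"https://huggingface.co/{model_id}/resolve/main/model_index.json",
    headers={"Authorization": f"Bearer {token}"},  # only needed for gated repositories
)
print(req.full_url)
```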

Generate images

  1. Select the model from the dropdown in the header.
  2. Click the "txt2img" tab.
  3. Click the "Generate" button.
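Because the WebUI is served by a local web server, the steps above can in principle also be scripted over HTTP. The endpoint path and payload field names below are assumptions for illustration only — they are NOT Lsmith's documented API; check the repository for the real interface.

```python
import json
import urllib.request

# Hypothetical txt2img request — endpoint and field names are assumptions,
# not Lsmith's documented API.
payload = {
    "prompt": "a mountain lake at sunrise",
    "negative_prompt": "",
    "steps": 28,
}
req = urllib.request.Request(
    "http://localhost:8000/api/txt2img",     # assumed endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with the server running
```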



Special thanks to the technical members of AI絵作り研究会 (AI Art Research Group), a Japanese AI image generation community.
