EmbeddedLLM: API server for embedded device deployment. Currently supports CUDA/OpenVINO/IpexLLM/DirectML/CPU.
An image classification model built using PyTorch with the DirectML backend. Its secondary purpose is to draw attention to AMD+DirectML and to benchmark AMD GPUs.
An actively maintained Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts using TensorFlow v1 (with AMD/Intel GPUs via DirectML).
Function C API for running Python functions on desktop, mobile, web, and in the cloud. Register at https://fxn.ai
This tutorial covers creating an object detection plugin for a Unity game engine project using ONNX Runtime and DirectML.
Export your YOLOv7 model to TensorFlow, TensorFlow.js, ONNX, OpenVINO, RKNN, and more.
GUI for upscaling ONNX models with NVIDIA TensorRT and VapourSynth.
A small console app that runs an ONNX model through ONNX Runtime via the DirectML execution provider.
GPU-accelerated JavaScript runtime for Stable Diffusion. Uses a modified ONNX Runtime to support CUDA and DirectML.
A simple Windows / Xbox app for generating AI images with Stable Diffusion.
Efficient CPU/GPU ML Runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2/v3, Real-CUGAN, RIFE, SCUNet and more!)
This repository contains a pure C++ ONNX implementation of multiple offline AI models, such as Stable Diffusion (1.5 and XL), ControlNet, MiDaS, HED, and OpenPose.
Stable Diffusion web UI