Dynamic RAG for enterprise. Ready to run with Docker, ⚡ in sync with SharePoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.
Updated Oct 9, 2024
🐢 Open-Source Evaluation & Testing for ML models & LLMs
[CCS'24] A dataset of 15,140 ChatGPT prompts collected from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
The Security Toolkit for LLM Interactions
Agentic LLM Vulnerability Scanner / AI red teaming kit
A secure, low-code honeypot framework leveraging AI for system virtualization.
An easy-to-use Python framework to generate adversarial jailbreak prompts.
Papers and resources related to the security and privacy of LLMs 🤖
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
🏴‍☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
This repository provides an implementation to formalize and benchmark prompt injection attacks and defenses.
AI-driven Threat modeling-as-a-Code (TaaC-AI)
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
Framework for LLM evaluation, guardrails and security
The fastest && easiest LLM security and privacy guardrails for GenAI apps.
A benchmark for prompt injection detection systems.
This repository contains various attacks against Large Language Models.
A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
SecGPT: An execution isolation architecture for LLM-based systems