llm-security
Here are 57 public repositories matching this topic...
🐢 Open-Source Evaluation & Testing for ML models & LLMs
Updated Oct 11, 2024 - Python
The Security Toolkit for LLM Interactions
Updated Oct 10, 2024 - Python
Litmus is a comprehensive LLM testing and evaluation tool for GenAI application development. It provides a robust platform with a user-friendly UI that streamlines building and assessing the performance of your LLM-powered applications.
Updated Oct 10, 2024 - Vue
Dynamic RAG for enterprise. Ready to run with Docker, ⚡ in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.
Updated Oct 9, 2024
[CCS'24] A dataset of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
Updated Oct 8, 2024 - Jupyter Notebook
🏴‍☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
Updated Oct 8, 2024 - Jupyter Notebook
A secure low-code honeypot framework that leverages AI for system virtualization.
Updated Oct 7, 2024 - Go
SecGPT: An execution isolation architecture for LLM-based systems
Updated Oct 7, 2024 - Python
The most comprehensive prompt hacking course available, recording our progress through prompt engineering and prompt hacking.
Updated Oct 7, 2024 - Jupyter Notebook
The fastest and easiest LLM security and privacy guardrails for GenAI apps.
Updated Oct 6, 2024 - Python
Papers related to Large Language Models in all top venues
Updated Oct 5, 2024
Whispers in the Machine: Confidentiality in LLM-integrated Systems
Updated Sep 30, 2024 - Python
Agentic LLM Vulnerability Scanner / AI red teaming kit
Updated Sep 28, 2024 - Python
Your best LLM security paper library.
Updated Sep 18, 2024
Security handbook: a knowledge base of enterprise security practices, offensive and defensive techniques, and security research.
Updated Sep 18, 2024 - CSS
LMAP (large language model mapper) is like NMAP for LLMs: an LLM vulnerability scanner and zero-day vulnerability fuzzer.
Updated Sep 29, 2024
A benchmark for prompt injection detection systems.
Updated Sep 10, 2024 - Jupyter Notebook
Checks the inputs and outputs of generative large models using classification and sensitive-word detection to identify risky content as early as possible.
Updated Sep 9, 2024 - Java