Content safety use case doc
heblasco committed Nov 16, 2023
1 parent ea9859f commit 7516e82
Showing 2 changed files with 31 additions and 4 deletions.
4 changes: 2 additions & 2 deletions docs/content/en/docs/Concepts/azure-content-safety.md
@@ -4,7 +4,7 @@ date: 2023-11-16
description: >
Used to keep your content safe. Create better online experiences for everyone with powerful AI models that detect offensive or inappropriate content in text and images quickly and efficiently.
categories: [Azure]
tags: [docs, cognitive-search]
tags: [docs, content-safety, azure, ai, content, safety]
weight: 2
---

@@ -19,7 +19,7 @@ Moderator works both for text and image content. It can be used to detect adult

[Azure AI Content Safety Studio](https://contentsafety.cognitive.azure.com/) is an online tool designed to handle potentially offensive, risky, or undesirable content using cutting-edge content moderation ML models. It provides templates and customized workflows, enabling users to choose and build their own content moderation system. Users can upload their own content or try it out with provided sample content.

<video width="320" height="240" controls>
<video width="500" height="350" controls>
<source src="https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/azure-ai-content-0x720-3266k" type="video/mp4">
Your browser does not support the video tag.
</video>
31 changes: 29 additions & 2 deletions docs/content/en/docs/Use Cases/content-safety.md
@@ -4,8 +4,35 @@ date: 2023-11-16
description: >
Analyze and moderate text or image, by adding the thresholds for different flags.
categories: [Azure, OpenAI]
tags: [docs, cognitive-search, content-moderator]
tags: [docs, text, image, content-safety, azure, ai, content, safety]
weight: 6
---

lorem ipsum dolor sit amet
In today's digital age, online platforms are increasingly hubs for user-generated content, ranging from text and images to videos. While this surge in content creation fosters a vibrant online community, it also creates challenges for content moderation and user safety. Azure AI Content Safety offers a robust solution to these concerns, providing a comprehensive set of tools to analyze and filter content for potential safety risks.

Use Case Scenario: **Social Media Platform Moderation**

Consider a popular social media platform with millions of users actively sharing diverse content daily. To maintain a positive user experience and adhere to community guidelines, the platform employs Azure AI Content Safety to automatically moderate and filter user-generated content.

**Image Moderation:**
Azure AI Content Safety analyzes images uploaded by users. The system can detect and filter out content that violates community standards, such as explicit or violent imagery. This helps prevent the dissemination of inappropriate content and ensures a safer environment for users of all ages.
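
As a rough illustration, an image check might look like the following minimal sketch. It assumes the Content Safety REST API (`image:analyze`, api-version `2023-10-01`) and uses placeholder endpoint and key values:

```python
import base64

import requests

# Placeholder resource values -- substitute your own Content Safety endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-content-safety-key>"

def analyze_image(path: str) -> dict:
    """Submit an image for analysis and return the category/severity results."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = requests.post(
        f"{ENDPOINT}/contentsafety/image:analyze",
        params={"api-version": "2023-10-01"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"image": {"content": image_b64}},
        timeout=30,
    )
    response.raise_for_status()
    # Expected shape: {"categoriesAnalysis": [{"category": "Violence", "severity": 0}, ...]}
    return response.json()
```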

**Text Moderation:**
Text moderation analyzes textual content, including comments, captions, and messages. The platform can configure filters to identify and block content containing hate speech, harassment, or other forms of harmful language. This not only protects users from offensive content but also contributes to fostering a positive online community.
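
A corresponding text check is sketched below, again assuming the REST API and reusing the placeholder `ENDPOINT` and `KEY` values from the image example:

```python
import requests

def analyze_text(text: str) -> dict:
    """Submit text for analysis and return the category/severity results."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```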

**Customization and Adaptability:**
Azure AI Content Safety allows platform administrators to customize the moderation rules based on specific community guidelines and evolving content standards. This adaptability ensures that the moderation system remains effective and relevant over time, even as online trends and user behaviors change.
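
For example, administrators might express their guidelines as per-category severity thresholds. The thresholds below are hypothetical and would be tuned per platform:

```python
# Hypothetical per-category thresholds: block when a category's severity meets
# or exceeds the configured value (the service reports severity levels from
# 0, meaning safe, upward for increasingly severe content).
THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 4, "Violence": 4}

def violates_policy(analysis: dict) -> bool:
    """Check an analyze_text/analyze_image result against the platform's rules."""
    return any(
        item["severity"] >= THRESHOLDS.get(item["category"], 4)
        for item in analysis.get("categoriesAnalysis", [])
    )
```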

**Real-time Moderation:**
The integration of Azure AI services enables real-time content moderation. As users upload content, the system quickly assesses and filters it before making it publicly available. This swift response time is crucial in preventing the rapid spread of inappropriate or harmful content.
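
Putting the pieces together, a synchronous publish gate could look like this sketch (the downstream `publish` step is hypothetical):

```python
def publish_if_safe(post_text: str) -> bool:
    """Moderate a post before it becomes publicly visible."""
    if violates_policy(analyze_text(post_text)):
        return False  # reject, or queue for human review
    # publish(post_text)  # hypothetical downstream publish step
    return True
```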

**User Reporting and Feedback Loop:**
Azure AI Content Safety facilitates a user reporting and feedback loop. If a user comes across potentially harmful content that was not automatically detected, they can report it. This feedback helps improve the system's accuracy and adaptability, creating a collaborative approach to content safety.

By implementing Azure AI Content Safety, the social media platform can significantly enhance its content moderation efforts, providing users with a safer and more enjoyable online experience while upholding community standards.

**AI Hub** uses Azure AI Content Safety to moderate both the user's query and the response generated by our LLM (ChatGPT).
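
Conceptually (this is a sketch, not AI Hub's actual implementation), moderating both sides of an LLM exchange can reuse the same helpers:

```python
def moderated_chat(user_query: str, llm) -> str:
    """Screen the user's query and the model's answer before returning it."""
    if violates_policy(analyze_text(user_query)):
        return "Your message was blocked by the content safety filters."
    answer = llm(user_query)  # hypothetical LLM call, e.g. ChatGPT
    if violates_policy(analyze_text(answer)):
        return "The generated response was blocked by the content safety filters."
    return answer
```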

Learn more in the official documentation: [What is Azure AI Content Safety?](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/)

Start right now: [Azure AI Content Safety Studio](https://contentsafety.cognitive.azure.com/)
