There’s a lot of fake, poor-quality, and harmful content on the internet. To maintain a safe environment on your website or app, it’s vital to implement content moderation to flag and remove that content. In some places, moderation can even be a legal requirement. But moderation of user-generated or third-party content has traditionally relied on user reports and manual effort from site admins, which can be costly and time-consuming.
Large language models, computer vision, optical character recognition, and other recent advancements in technology make it possible to put effective automated content management strategies in place. Cloudinary has a suite of add-ons and other tools that take advantage of these technologies to moderate your visual media.
In this blog post, we’ll explore how to use the new Cloudinary AI Vision add-on to put content management policies in place for image assets. If you need moderation for video or other assets, you can use Google AI Video Moderation, Amazon Rekognition Video Moderation, or one of Cloudinary’s many other add-ons. Cloudinary AI Vision simplifies content moderation with its Moderation Mode, allowing you to automate repetitive tasks, reduce manual effort, and keep the standards of your visual media consistent.
Working with Cloudinary add-ons is easy using upload presets, the Analyze API, or the Cloudinary SDKs. To get started:
- Sign up for Cloudinary. If you don’t already have an account, create one here for free.
- Enable the AI Vision Add-on. Go to the Settings > Add-ons > Cloudinary AI Vision page in your Cloudinary account and activate AI Vision.
- Choose your integration method. Use the Analyze API for direct analysis or integrate AI Vision into your upload pipeline for real-time moderation.
- Leverage Cloudinary SDKs. Cloudinary provides SDKs in various languages to streamline API usage, making integration quick and efficient.
- Customize for your needs. Tailor moderation questions to match your platform’s unique requirements.
At the core of AI Vision’s moderation capabilities is the `ai_vision_moderation` mode. It evaluates images against specific yes/no questions, ensuring adherence to guidelines or creative specifications.
Let’s say you’re moderating a platform to ensure no images contain alcohol or nudity.

**API request:**

```shell
curl 'https://<API_KEY>:<API_SECRET>@api.cloudinary.com/v2/analysis/<CLOUD_NAME>/analyze/ai_vision_moderation' -d '{
  "source": {
    "uri": "https://res.cloudinary.com/demo/image/upload/woman-business-suit"
  },
  "rejection_questions": [
    "Does the image contain alcohol?",
    "Does the image contain nudity?"
  ]
}'
```
**API response:**

```json
{
  "analysis": {
    "responses": [
      { "prompt": "Does the image contain alcohol?", "value": "no" },
      { "prompt": "Does the image contain nudity?", "value": "no" }
    ]
  }
}
```
In this scenario, AI Vision determines that the image doesn’t violate the platform’s guidelines. The confidence percentage isn’t included in the API response, but each returned answer exceeds the confidence threshold configured in the preset.
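In application code, a common next step is to turn that response into a pass/fail decision. Here’s a minimal sketch, assuming the response shape shown above; the `passesModeration` helper is illustrative, not part of the Cloudinary SDK:

```javascript
// Decide whether an asset passes moderation, given an AI Vision
// Analyze API response shaped like the example above. An asset passes
// only when every rejection question is answered "no".
function passesModeration(response) {
  return response.analysis.responses.every((r) => r.value === 'no');
}

// Example with the response shown above:
const response = {
  analysis: {
    responses: [
      { prompt: 'Does the image contain alcohol?', value: 'no' },
      { prompt: 'Does the image contain nudity?', value: 'no' }
    ]
  }
};
console.log(passesModeration(response)); // true
```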
You can also integrate AI Vision directly into your upload pipeline, enabling real-time evaluation of assets as they’re uploaded to your Cloudinary account. Check out this blog post to learn more about Cloudinary asset pipelines.
You can create an upload preset or use the Cloudinary SDKs to enable the `ai_vision_moderation` mode as part of the upload process.
Here’s how you would do it using the Node.js SDK:
```javascript
const cloudinary = require('cloudinary').v2;

cloudinary.config({
  cloud_name: '<CLOUD_NAME>',
  api_key: '<API_KEY>',
  api_secret: '<API_SECRET>'
});

cloudinary.uploader.upload('path/to/image.jpg', {
  moderation: 'ai_vision_moderation',
  moderation_questions: [
    "Does the image contain explicit content?",
    "Does the image contain hate symbols?"
  ]
}, function(error, result) {
  if (error) console.error(error);
  else console.log(result.moderation);
});
```
The upload response will include moderation details, allowing you to decide whether to approve or reject the content before making it publicly accessible.
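A gate in your publish flow can then check those moderation details. The exact field names can vary by account configuration, so treat this as a sketch: it assumes `result.moderation` is an array of entries like `{ kind: 'ai_vision_moderation', status: 'approved' }`, with status values such as `approved`, `pending`, or `rejected`:

```javascript
// Decide whether an uploaded asset can be published, based on the
// moderation details in the upload response. Assumes result.moderation
// is an array of { kind, status } entries.
function isApproved(result) {
  const entry = (result.moderation || []).find(
    (m) => m.kind === 'ai_vision_moderation'
  );
  return Boolean(entry) && entry.status === 'approved';
}

// Example: a result whose AI Vision moderation passed.
const uploadResult = {
  public_id: 'path/to/image',
  moderation: [{ kind: 'ai_vision_moderation', status: 'approved' }]
};
console.log(isApproved(uploadResult)); // true
```

Anything not explicitly approved (including a missing or pending entry) is treated as unpublishable, which is the safer default for user-facing content.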
AI Vision’s flexibility allows you to define custom moderation policies tailored to your business. For example:
- Retail platforms. “Does the image contain counterfeit items?”
- Social media. “Does the image display hate symbols?”
- Publishing. “Is the image safe for a general audience?”
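One lightweight way to organize such policies in application code is a lookup from platform type to its moderation questions. The structure below is purely illustrative (not part of the Cloudinary API) and just mirrors the examples above:

```javascript
// Illustrative per-platform moderation policies: each platform type
// maps to the questions it sends to AI Vision.
const moderationPolicies = {
  retail: ['Does the image contain counterfeit items?'],
  social: ['Does the image display hate symbols?'],
  publishing: ['Is the image safe for a general audience?']
};

// Look up the questions for a platform, falling back to an empty list.
function questionsFor(platform) {
  return moderationPolicies[platform] || [];
}

console.log(questionsFor('social')); // [ 'Does the image display hate symbols?' ]
```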
Here’s a workflow for identifying offensive symbols in images:
**API request:**

```shell
curl 'https://<API_KEY>:<API_SECRET>@api.cloudinary.com/v2/analysis/<CLOUD_NAME>/analyze/ai_vision_moderation' -d '{
  "source": {
    "uri": "https://res.cloudinary.com/demo/image/upload/sample-image"
  },
  "rejection_questions": [
    "Does the image display offensive symbols?"
  ]
}'
```

**API response:**

```json
{
  "analysis": {
    "responses": [
      { "prompt": "Does the image display offensive symbols?", "value": "yes" }
    ]
  }
}
```
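When an answer comes back `"yes"`, you’ll usually want to record which question failed so reviewers can see why the asset was flagged. A small helper, assuming the response shape shown above (the `flaggedQuestions` name is illustrative):

```javascript
// Return the prompts answered "yes", i.e., the reasons an asset was
// flagged, given an AI Vision moderation response.
function flaggedQuestions(response) {
  return response.analysis.responses
    .filter((r) => r.value === 'yes')
    .map((r) => r.prompt);
}

// With the response above:
const response = {
  analysis: {
    responses: [
      { prompt: 'Does the image display offensive symbols?', value: 'yes' }
    ]
  }
};
console.log(flaggedQuestions(response));
// [ 'Does the image display offensive symbols?' ]
```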
Cloudinary AI Vision is a powerful, user-friendly solution for modern visual media management. Automate compliance checks, protect brand integrity, and maintain a safe user experience to scale your digital asset management. Contact us today to learn more.