Last updated: Jul-11-2023
Cloudinary is a cloud-based service that provides an end-to-end asset management solution including uploads, storage, transformations, optimizations and delivery. Cloudinary offers a very rich set of image transformation and analysis capabilities and allows you to upload images to the cloud, transform them on the fly and deliver them to your users optimized and cached via a fast CDN.
Amazon Rekognition is a service that makes it easy to add image analysis to your applications. Cloudinary provides an add-on for Amazon Rekognition's image moderation service based on Deep Learning algorithms, fully integrated into Cloudinary's image management and transformation pipeline.
With Amazon Rekognition's AI Moderation add-on, you can extend Cloudinary's powerful cloud-based image transformation and delivery capabilities with automatic artificial intelligence based moderation of your photos. Protect your users from explicit and suggestive adult content in your user uploaded images, making sure that no offensive photos are displayed to your web and mobile viewers.
Before you can use the Amazon Rekognition AI Moderation add-on:
You must have a Cloudinary account. If you don't already have one, you can sign up for a free account.
Keep in mind that many of the examples on this page use our SDKs. For SDK installation and configuration details, see the relevant SDK guide.
If you are new to Cloudinary, you may want to take a look at How to integrate Cloudinary in your app for a walk through on the basics of creating and setting up your account, working with SDKs, and then uploading, transforming and delivering assets.
The following list describes the flow of uploading and displaying moderated images using Cloudinary and the Amazon Rekognition AI Moderation add-on:
- Your users upload an image to Cloudinary through your application.
- The uploaded image is sent to Amazon Rekognition for moderation.
- The image is either approved or rejected by Amazon Rekognition.
- An optional notification callback is sent to your application with the image moderation result.
- A rejected image does not appear in your media library, but is backed up, consuming storage, so that it can be restored if necessary.
- Moderated images can be listed programmatically using Cloudinary's Admin API or interactively using our online Media Library Web interface.
- You can manually override the automatic moderation using the Admin API or the Media Library.
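The optional notification callback in the flow above can be handled with a small webhook endpoint. The following is a minimal sketch; the payload field names (notification_type, moderation_status, public_id) are assumptions based on Cloudinary's webhook notification format, so verify them against the payloads your account actually sends:

```python
import json
from typing import Optional


def handle_notification(raw_body: str) -> Optional[str]:
    """Sketch of reacting to a moderation notification callback."""
    payload = json.loads(raw_body)
    if payload.get("notification_type") != "moderation":
        return None  # some other event type; ignore it here
    public_id = payload.get("public_id")
    status = payload.get("moderation_status")  # "approved" or "rejected"
    if status == "rejected":
        # e.g. hide the image in your application's own records
        return f"hide {public_id}"
    return f"show {public_id}"
```

In a real application this function would run inside your web framework's request handler for the notification URL you configured.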
Amazon Rekognition assigns a moderation confidence score (0 - 100) indicating the likelihood that an image belongs to an offensive content category.
Unsafe content is labeled using two levels of categories, with each top-level category containing a number of second-level categories. For example, under the 'Violence' (violence) category you have the sub-category 'Physical Violence' (physical_violence). A full list of all the latest available categories and sub-categories is provided by AWS; see Amazon Rekognition categories.
The top level categories include:
- Explicit Nudity (explicit_nudity)
- Suggestive (suggestive)
- Violence (violence)
- Visually Disturbing (visually_disturbing)
- Rude Gestures (rude_gestures)
- Drugs (drugs)
- Tobacco (tobacco)
- Alcohol (alcohol)
- Gambling (gambling)
- Hate Symbols (hate_symbols)
The default moderation confidence level to reject an image is 0.5 for all categories, unless specifically overridden (see the explanation and examples below). Any image that Amazon Rekognition classifies with a confidence value greater than the moderation confidence level in any category is marked as 'rejected'; otherwise its status is set to 'approved'.
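The rejection rule above can be sketched as a small function. Note that Rekognition reports confidences on a 0 - 100 scale while Cloudinary's levels are expressed as 0.0 - 1.0, so the sketch normalizes before comparing (an assumption made for illustration):

```python
def moderation_status(label_confidences, levels, default_level=0.5):
    """Reject if any category's confidence exceeds its configured level.

    label_confidences: {category: confidence on a 0-100 scale}
    levels: per-category overrides of the 0.0-1.0 confidence level
    """
    for category, confidence in label_confidences.items():
        level = levels.get(category, default_level)
        if confidence / 100.0 > level:
            return "rejected"
    return "approved"
```

For example, a 'Suggestive' confidence of 80 is rejected at the default 0.5 level, but approved if that category's level is raised to 0.9.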
To request moderation while uploading an image, with default moderation confidence levels, set the moderation upload API parameter to aws_rek.
Learn more: Upload presets
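As a minimal sketch, assuming Cloudinary's Python SDK and a configured account ("dog.jpg" is a placeholder filename), the actual network call would look like the commented lines below; the executable part only builds the option the SDK would send:

```python
# The real upload call (requires cloudinary to be installed and configured):
#
#   import cloudinary.uploader
#   result = cloudinary.uploader.upload("dog.jpg", moderation="aws_rek")
#
# The Rekognition AI Moderation add-on is selected by this parameter value:
upload_options = {"moderation": "aws_rek"}
```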
You can also optionally:
- Override the default moderation confidence level (0.5) on a per-category basis by including the category name and new value as part of the moderation parameter value. You can override multiple categories, separated by colons.
Note: Overriding the default moderation confidence value of a top-level category also sets all of its child categories to the same value, unless you specifically override one of the child categories as well.
- Exclude a category from the moderation check by setting the category's value to ignore.
- Return the moderation_labels array in the response, even if no offending content is found.
For example, you could:
- Set the Female Swimwear or Underwear child category to a minimum confidence level of 0.85.
- Set the Explicit Nudity top-level category to 0.7. This then becomes the confidence level for all of its child categories as well.
- Exclude the Revealing Clothes category from the check.
- Check all other categories at the default 0.5 confidence level.
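The overrides listed above can be combined into a single moderation parameter value. This sketch builds that value; the colon-separated syntax follows the description above, while the snake_case category identifiers (female_swimwear_or_underwear, explicit_nudity, revealing_clothes) are assumptions based on the naming pattern shown earlier:

```python
# Per-category overrides, in the order they should appear in the value:
overrides = {
    "female_swimwear_or_underwear": "0.85",  # stricter child category
    "explicit_nudity": "0.7",                # applies to its children too
    "revealing_clothes": "ignore",           # excluded from the check
}

# Append each "category:value" pair to the add-on identifier:
moderation = "aws_rek" + "".join(
    f":{category}:{level}" for category, level in overrides.items()
)
# moderation is then passed as the upload API's moderation parameter value.
```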
Note:
- Confidence levels must be provided as a decimal number between 0.0 and 1.0.
- Images must have a minimum height and width of 80 pixels.
The following snippet shows an example response to an upload API call that requested moderation; here the moderation result has put the image in 'rejected' status.
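As an illustrative sketch, such a response might look like the dictionary below and can be inspected as shown. The field values are invented for illustration, and the exact shape of the moderation_labels entries may differ in your account:

```python
# A hypothetical upload response containing a moderation result:
response = {
    "public_id": "sample",
    "moderation": [
        {
            "kind": "aws_rek",
            "status": "rejected",
            "response": {
                "moderation_labels": [
                    {"confidence": 96.3, "name": "Suggestive", "parent_name": ""}
                ]
            },
        }
    ],
}

# Reading the moderation verdict out of the response:
status = response["moderation"][0]["status"]
```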
Cloudinary's Admin API can be used to list all moderated images. You can list either the approved or the rejected images by specifying the second parameter of the resources_by_moderation API method. For example, to list all rejected images:
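A sketch of that call, assuming Cloudinary's Python SDK, is shown in the comments below; the executable part builds the corresponding Admin API endpoint path, whose structure (moderation kind first, then status) is assumed from the method's parameters:

```python
# The real Admin API call (requires cloudinary to be installed and configured):
#
#   import cloudinary.api
#   result = cloudinary.api.resources_by_moderation("aws_rek", "rejected")
#
# The assumed REST endpoint path behind that method:
kind, status = "aws_rek", "rejected"
endpoint = f"/resources/image/moderations/{kind}/{status}"
```

Passing "approved" as the second argument instead lists the approved images.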
As the automatic image moderation of the Amazon Rekognition AI Moderation add-on is based on a decision made by an advanced algorithm, in some cases you may want to manually override the moderation decision and either approve a previously rejected image or reject an approved one.
One way to manually override the moderation result is using Cloudinary's Media Library Web interface. From the left navigation menu, select Moderation. Then, from the moderation tools list in the top menu, select Rekognition and then select the status (Rejected or Approved) of the images you want to display.
- When displaying the images rejected by Amazon Rekognition, you can click on the green Approve button to revert the decision and recover the original rejected image.
- When displaying the images approved by Amazon Rekognition, you can click on the red Reject button to revert the decision and prevent a certain image from being publicly available to your users.
As an alternative to the Media Library interface, you can use Cloudinary's Admin API to manually override the moderation result. The following example uses the update API method, specifying the public ID of a moderated image and setting the moderation_status parameter to the new status.
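A sketch of that override, assuming Cloudinary's Python SDK ("dog" stands in for the public ID of a moderated image), with the actual network call shown in comments and the executable part limited to the parameters the call would send:

```python
# The real Admin API call (requires cloudinary to be installed and configured):
#
#   import cloudinary.api
#   result = cloudinary.api.update("dog", moderation_status="approved")
#
# The same override, expressed as the parameters the call would send:
params = {"public_id": "dog", "moderation_status": "approved"}
```

Setting moderation_status to "rejected" instead overrides an approved image in the other direction.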