Cloudinary Blog

How to Automate Image Moderation with Amazon Rekognition

Automatically moderate your user-uploaded images

Allowing users to upload their own images to your website can increase engagement, retention, and monetization. However, if users can upload any image they want, some may upload inappropriate images to your application. These images may offend other users or even cause your site to violate standards or regulations.

If ads appear on your site, you also need to protect your advertisers' brands by ensuring they don't appear alongside adult content. Some advertising networks have zero tolerance and blacklist any website that displays adult content, even if that content was submitted by users.

Cloudinary's image management solution already provides a manual moderation web interface and API to help you review uploaded images efficiently. However, manual moderation is time consuming and not instantaneous, so we wanted to provide an additional option: automatic moderation of images as your users upload them.


Introducing the Amazon Rekognition AI Moderation add-on

Cloudinary provides an add-on for Amazon Rekognition's deep-learning-based image moderation service, fully integrated into Cloudinary's image management and manipulation pipeline. With the AI Moderation add-on, you can extend Cloudinary's cloud-based media library and delivery capabilities with automatic, AI-based moderation of your photos, protecting your users from explicit and suggestive adult content and making sure no offensive photos are displayed to your web and mobile viewers.


Enabling automatic image moderation by Amazon Rekognition

To request moderation while uploading an image, simply set the moderation upload API parameter to aws_rek:

Ruby:
Cloudinary::Uploader.upload("sample_image.jpg", 
  :moderation => "aws_rek")
PHP:
\Cloudinary\Uploader::upload("sample_image.jpg", 
  array("moderation" => "aws_rek"));
Python:
cloudinary.uploader.upload("sample_image.jpg",
  moderation = "aws_rek")
Node.js:
cloudinary.v2.uploader.upload("sample_image.jpg", 
  {moderation: "aws_rek"},
  function(error, result){console.log(result);});
Java:
cloudinary.uploader().upload("sample_image.jpg", 
  ObjectUtils.asMap("moderation", "aws_rek"));

The uploaded image is automatically sent to Amazon Rekognition for moderation. Amazon Rekognition assigns a moderation confidence score indicating the likelihood that the image belongs to an offensive content category. Unless specifically overridden (see below), the moderation confidence level defaults to 0.5 (i.e., 50%): any image that Amazon Rekognition classifies with a score above this level is marked as 'rejected'; otherwise, its status is set to 'approved'. The full results are included in the upload response.

This means your users can be alerted immediately if their image is rejected, improving the end-user experience (no waiting for approval, and no discovering later that the image was removed by manual moderation). A rejected image is moved to a secondary backup repository and is not delivered. If you choose, you can manually review rejected images and mark them as approved where appropriate, as described later in this article.
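As a sketch in Python of how that upload response might be handled: the `moderation` list and its fields follow Cloudinary's documented response format, but the sample dict below is illustrative only, not real API output.

```python
# Minimal sketch of reading the moderation result from an upload
# response. The 'moderation' entries follow Cloudinary's documented
# response format; the sample dict is illustrative only.

def moderation_status(upload_response):
    """Return the status of the first moderation entry, or None."""
    entries = upload_response.get("moderation") or []
    return entries[0]["status"] if entries else None

# Illustrative response, shaped like what the upload API returns:
sample = {
    "public_id": "sample_image",
    "moderation": [{"kind": "aws_rek", "status": "rejected"}],
}

if moderation_status(sample) == "rejected":
    print("Sorry, this image was flagged as inappropriate.")
```

Because the status arrives in the same response as the upload itself, the uploading user can be notified within the same request cycle.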

Fine-tuning the image moderation

The automatic moderation can be configured per content category, allowing you to fine-tune which kinds of images you deem acceptable or objectionable. By default, any image that Amazon Rekognition determines to be adult content with a confidence score over 0.5 (50%) is automatically rejected. You can override this minimum confidence score for any of the categories (see the documentation for a breakdown of the available categories). For example, the following requests moderation for a supermodel image with the confidence level set to 0.75 for the 'Female Swimwear or Underwear' sub-category and 0.6 for the 'Explicit Nudity' top-level category (overriding the default for all its child categories as well), while excluding the 'Revealing Clothes' category entirely:

Ruby:
Cloudinary::Uploader.upload("supermodel.jpg", 
  :moderation => "aws_rek:female_underwear:0.75:explicit_nudity:0.6:revealing_clothes:ignore")
PHP:
\Cloudinary\Uploader::upload("supermodel.jpg", 
  array("moderation" => "aws_rek:female_underwear:0.75:explicit_nudity:0.6:revealing_clothes:ignore"));
Python:
cloudinary.uploader.upload("supermodel.jpg",
  moderation = "aws_rek:female_underwear:0.75:explicit_nudity:0.6:revealing_clothes:ignore")
Node.js:
cloudinary.v2.uploader.upload("supermodel.jpg", 
  {moderation: "aws_rek:female_underwear:0.75:explicit_nudity:0.6:revealing_clothes:ignore"},
  function(error, result){console.log(result);});
Java:
cloudinary.uploader().upload("supermodel.jpg", 
  ObjectUtils.asMap("moderation", "aws_rek:female_underwear:0.75:explicit_nudity:0.6:revealing_clothes:ignore"));
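If you set these overrides dynamically, the colon-delimited parameter string can be built programmatically. Here is a small Python helper, a sketch only: the category names come from the example above, and it assumes each override is either a 0-1 confidence level or the literal string 'ignore'.

```python
def rek_moderation_param(overrides):
    """Build the 'moderation' parameter for the aws_rek add-on.

    overrides maps category names to a confidence level (0-1) or the
    string 'ignore' to exclude that category from moderation.
    """
    parts = ["aws_rek"]
    for category, level in overrides.items():
        parts += [category, str(level)]
    return ":".join(parts)

# Reproduces the parameter string used in the samples above:
param = rek_moderation_param({
    "female_underwear": 0.75,
    "explicit_nudity": 0.6,
    "revealing_clothes": "ignore",
})
print(param)
```

Dict insertion order is preserved (Python 3.7+), so the categories appear in the string in the order you specify them.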

Manage moderated images

No matter how powerful and reliable an artificial intelligence algorithm is, it can never be 100% accurate, and in some moderation cases, "accuracy" can be subjective. Furthermore, you may configure your moderation to err on the conservative side, which can occasionally reject an image that would have been acceptable.

While automated moderation minimizes effort and provides instantaneous results, you may sometimes want to adjust the moderation status of a specific image manually.

You can use the API or the web interface to alter the automatic moderation decision: browse rejected or approved images, then manually approve or reject them as needed. If you approve a previously rejected image, the original version is restored from backup; if you reject a previously approved image, cache invalidation is performed so the image is erased from all CDN cache servers.
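As a hedged sketch of the API route in Python: `resources_by_moderation` and `api.update` are documented Cloudinary Admin API methods, but the helper names and the deferred imports (so the functions can be defined without an account configured) are choices made here for illustration.

```python
def list_by_moderation(status, kind="aws_rek"):
    """Browse images by moderation kind and status
    ('rejected', 'approved', or 'pending')."""
    import cloudinary.api
    return cloudinary.api.resources_by_moderation(kind, status)

def set_moderation_status(public_id, status):
    """Manually override the automatic decision. Approving a rejected
    image restores the original from backup; rejecting an approved
    image triggers CDN cache invalidation."""
    import cloudinary.api
    return cloudinary.api.update(public_id, moderation_status=status)
```

A typical review flow would call `list_by_moderation("rejected")`, present the results to a human moderator, and then call `set_moderation_status(public_id, "approved")` for any false positives.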

For more information, see the detailed documentation.

Moderation made easy

Moderating your user-generated images is important to protect your brand and keep your users and advertisers happy. With the Amazon Rekognition AI Moderation add-on, you can provide feedback within the upload stream and fine-tune the filters that you use to determine what kinds of images you deem acceptable or objectionable. Automated moderation can help you to improve photo sharing sites, forums, dating apps, content platforms for children, eCommerce platforms and marketplaces, and more.


The add-on is available with all Cloudinary plans, with a free tier for you to try it out. If you don't have a Cloudinary account yet, sign up for a free account here.
