Amazon S3 Image Optimization with Cloudinary

Images play an important role in almost every digital project today. They boost user experience by quickly conveying information visually, increasing engagement, and clarifying complex ideas more efficiently than text. However, a major challenge developers face is optimizing images for websites and applications to ensure fast load times and a seamless user experience.

Optimized images improve user experience and website performance while reducing storage costs. Amazon S3 is a cloud-based enterprise storage solution for media assets, and when combined with Cloudinary’s powerful optimization capabilities, it can significantly improve your workflows. In this article, we’ll explore how Amazon S3 and Cloudinary can work together to make your life easier.

The Importance of Image Optimization in Media Workflows

Large, unoptimized images can slow down website loading times, leading to a poor user experience. They also consume more storage space and bandwidth, increasing costs. In fact, studies show that a one-second delay in page load time can lead to a 7% reduction in conversions.

One of the challenges we often face as developers is managing multiple image versions for different devices while ensuring quick delivery without compromising quality. In media-heavy projects, automating image optimization becomes essential to maintain efficiency and consistency. By resizing, compressing, and transforming images, you can ensure they load quickly, look great, and adapt seamlessly to various screen sizes.

What Is Amazon S3 Image Optimization?

Amazon Simple Storage Service (S3) is a cloud-based storage platform designed to handle large volumes of static files, including images, videos, text, and audio. While S3 offers excellent scalability and efficiency, it lacks built-in optimization for media assets. As a result, integrating image optimization techniques with S3 becomes essential to enhance performance, reduce storage costs, and improve the user experience.

S3 image optimization involves processing images stored in Amazon S3 to reduce their file sizes through compression, resizing, and format conversion. This ensures that images are tailored for specific use cases, such as different screen resolutions or device types.

How S3 Image Optimization Works with Cloudinary

While Amazon S3 is an excellent choice for media storage, it doesn’t directly handle advanced image optimizations or transformations. With a service like Cloudinary, however, you can combine S3’s media storage with powerful image optimization tools. Cloudinary is a cloud-based media management platform that integrates seamlessly with S3, allowing you to fetch, transform, and deliver optimized images with ease.

Here’s a high-level overview of how your request gets handled when you optimize and deliver your S3 images via Cloudinary:

  1. Fetch images from S3: Images are pulled directly from your S3 bucket and uploaded to Cloudinary’s servers.
  2. Transform images on the fly: Using Cloudinary’s URL-based transformations, you can resize, compress, and format these images dynamically.
  3. Deliver optimized images with Cloudinary: Cloudinary serves the optimized images to your users via a dynamic multi-CDN network, ensuring quick delivery to users worldwide.
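
For simple cases, these three steps can even collapse into a single URL: Cloudinary’s fetch delivery type retrieves a remote image on first request, optimizes it, and caches it at the CDN edge. As a rough sketch (the cloud name, bucket, and file name below are placeholders), such a URL can be built like this:

```python
from urllib.parse import quote

cloud_name = "demo"  # placeholder; replace with your Cloudinary cloud name
s3_url = "https://cloudinary-s3-integration.s3.eu-north-1.amazonaws.com/sample.jpg"

# f_auto,q_auto let Cloudinary choose the best format and quality per request
fetch_url = (
    f"https://res.cloudinary.com/{cloud_name}/image/fetch/f_auto,q_auto/"
    + quote(s3_url, safe=":/")
)
print(fetch_url)
```

Requesting this URL makes Cloudinary pull the image from S3, apply the transformations, and serve the cached result on subsequent requests. Depending on your account’s security settings, fetching remote URLs may need to be explicitly allowed.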

Setting up Amazon S3 and Cloudinary Integration

To get started, sign up for a free AWS account and a free Cloudinary account if you haven’t already. You can follow this guide to get your Cloudinary credentials, which include your cloud name, API key, and API secret.

Create a New AWS User

When accessing your S3 buckets programmatically, it’s best practice to create a dedicated user rather than using the root user. In your dashboard, head over to the IAM (Identity and Access Management) dashboard. You can navigate to it by typing “IAM” into the console’s search bar.

Next, click on Users in the left panel as shown below:

On the following page, give the user a name. In our case, we’re calling it cloudinary-s3-user:

On the next page, select Attach policies directly under Permissions options. IAM policies let you control which actions and resources a user can access, so that only users with the appropriate permissions can reach the objects in your buckets.

In the search bar, type S3, then select AmazonS3FullAccess. This policy grants full access to all S3 buckets, allowing a user to read, write, delete, and manage all objects and buckets without any restrictions.

Note: Use the AmazonS3FullAccess policy with caution in production applications and make sure you fully understand its implications. We’re only using it here for demo purposes. You can read more about Amazon S3 bucket policies here.

On the next page, review and confirm you applied the right permissions to the newly created user.

When you’re done, you should see a success message and the new user created successfully as shown below:

Get User Credentials

Now that we’ve created a user, we need credentials to connect to the S3 bucket programmatically from our application. Click on the newly created user, then go to Security credentials > Access keys > Create access key.

On the next page, choose where you’ll be accessing the S3 bucket from. We choose Application running outside AWS:

On the page that follows, set a description tag:

Finally, you should have your access keys created successfully:

Copy and save the keys. You’ll need them later.

Create an S3 Bucket

Before we proceed, we need a bucket to store the image files Cloudinary will process. A bucket in Amazon S3 is a container for storing objects such as files, images, videos, documents, and so on. Think of a bucket like a file folder, but in the cloud.

Using the search bar in your AWS console, search for S3. Then, navigate to Amazon S3 > Buckets and follow the instructions to create a bucket. For this tutorial, we’ve named our bucket cloudinary-s3-integration.

In the new bucket, we have uploaded a few image files we’ll be using for this demo. You can also upload a few images to the bucket for testing.

Create a Bucket Policy

Next, we need to allow access to the objects in our bucket from external sources. This way, anyone with the URL to one of the objects can view it in their browser.

Select the new bucket you created, then go to Permissions > Bucket policy > Edit:

Paste the following policy into the policy editor (make sure to replace cloudinary-s3-integration with the actual name of your bucket):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::cloudinary-s3-integration/*"
        }
    ]
}

Now, all objects in the bucket are publicly accessible.
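
If you prefer to configure this from code rather than the console, the same policy can be applied with boto3’s put_bucket_policy call. A minimal sketch (the bucket name is a placeholder, and the call itself requires credentials allowed to perform s3:PutBucketPolicy):

```python
import json

bucket_name = "cloudinary-s3-integration"  # replace with your bucket name

# The same public-read policy as above, built as a Python dict
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }
    ],
}


def apply_public_read_policy(bucket: str) -> None:
    """Attach the public-read policy to the bucket (needs valid AWS credentials)."""
    import boto3  # deferred import so the policy dict can be built without boto3

    boto3.client("s3").put_bucket_policy(
        Bucket=bucket, Policy=json.dumps(public_read_policy)
    )
```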

We’ve finished setting up everything in the AWS console. Now, let’s write some code.

Create a Python Virtual Environment

In this tutorial, we’ll use the AWS and Cloudinary Python SDKs. However, the same process and logic apply regardless of the programming language or SDK you choose for your application.

Create a new directory where you’d like to have the project and run the following command in the terminal to create a virtual environment.

On Linux, run:

python3 -m venv env
source env/bin/activate

On Windows, the command is slightly different:

python -m venv env
env\Scripts\activate

In the root directory, create a .env file with the following content and replace the placeholders with your own data:

cloudinary_cloud_name=your_cloudinary_cloud_name_here
cloudinary_api_key=your_cloudinary_api_key_here
cloudinary_api_secret=your_cloudinary_api_secret_here
aws_access_key_id=your_aws_access_key_id_here
aws_secret_access_key=your_aws_secret_access_key_here

Next, install the necessary libraries:

pip install boto3 cloudinary python-dotenv

  • boto3: The official AWS SDK for Python, allowing seamless interaction with AWS services like S3, EC2, and DynamoDB.
  • cloudinary: The Python SDK for uploading, transforming, and optimizing images and videos in Cloudinary.
  • python-dotenv: A package for loading environment variables from a .env file.

Upload Objects to Cloudinary and Generate Public URLs

Create a file named main.py and add the code below to it:

import boto3
from dotenv import load_dotenv
import os
import cloudinary
import cloudinary.uploader
import cloudinary.api


# Load environment variables from the .env file
load_dotenv()


# Access environment variables
cloudinary_cloud_name = os.getenv("cloudinary_cloud_name")
cloudinary_api_key = os.getenv("cloudinary_api_key")
cloudinary_api_secret = os.getenv("cloudinary_api_secret")
aws_access_key_id = os.getenv("aws_access_key_id")
aws_secret_access_key = os.getenv("aws_secret_access_key")
aws_default_region = "eu-north-1"

cloudinary.config(
  cloud_name = cloudinary_cloud_name,
  api_key = cloudinary_api_key,
  api_secret = cloudinary_api_secret
)

s3 = boto3.resource(
    's3',
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
)

bucket_name = "cloudinary-s3-integration" # Replace with the name of your bucket
bucket = s3.Bucket(bucket_name)

# Get the bucket's region (LocationConstraint is None for buckets in us-east-1)
region = s3.meta.client.get_bucket_location(Bucket=bucket_name).get('LocationConstraint') or aws_default_region

# List all objects in the bucket
bucket_objects = bucket.objects.all()

# Generate public URLs
for obj in bucket_objects:
    object_key = obj.key

    # Generate image public URL
    s3_url = f"https://{bucket_name}.s3.{region}.amazonaws.com/{object_key}"

    # Upload the object to Cloudinary
    upload_response = cloudinary.uploader.upload(s3_url)

    print(upload_response['secure_url'])

Here’s what is going on in the code above:

  • The AWS and Cloudinary credentials are retrieved from environment variables, and a connection to the S3 bucket is created using boto3.
  • The code then retrieves the bucket’s region, lists all objects in the bucket, uploads each image to Cloudinary, and finally prints a secure URL for each uploaded image.

Here’s the output after running the script:

https://res.cloudinary.com/cloudinarymich/image/upload/v1738848439/bdrwvjsi9h7dvthere6k.jpg
https://res.cloudinary.com/cloudinarymich/image/upload/v1738848441/tuluj8dvatn27e12cokk.jpg

Compression Techniques in S3 Image Optimization

Cloudinary offers a comprehensive suite of tools and features for image compression, allowing you to optimize your images for web use and improve website performance. Cloudinary supports both lossy and lossless compression techniques. Lossy compression reduces file size by discarding some image data, while lossless compression reduces file size without losing any image data.

To optimize images, you can choose the appropriate compression type based on your image type and use case, or let Cloudinary determine the best quality to serve images to users based on factors such as user’s device type, internet bandwidth, and others.

For example, the following code snippet applies Cloudinary’s automatic image compression to the uploaded S3 images from before:

#...

# Upload to Cloudinary with automatic compression
for obj in bucket_objects:
    object_key = obj.key

    # Generate S3 public URL
    s3_url = f"https://{bucket_name}.s3.{region}.amazonaws.com/{object_key}"

    # Upload to Cloudinary
    upload_response = cloudinary.uploader.upload(s3_url)

    # Apply Cloudinary's automatic compression
    compressed_url = cloudinary.CloudinaryImage(upload_response['public_id']).build_url(
        quality="auto", fetch_format="auto"
    )
    # Print the Cloudinary URL
    print(f"Optimized Cloudinary URL: {compressed_url}")

In this example, one of the uploaded images originally had a file size of 820 KB. After applying automatic compression, the file size was reduced to just 180 KB, with no noticeable loss in quality compared to the original.

You can read more about optimizing images with Cloudinary in the docs.

Image Transformation Capabilities in S3 and Cloudinary

Beyond image optimization and compression, Cloudinary’s image transformation features offer a wide range of powerful capabilities.

Some of these include:

  • Resizing and scaling: Use Cloudinary’s smart cropping techniques, such as face detection and auto-gravity, to adjust image dimensions for optimal display across devices and screen resolutions.
  • Cropping and aspect ratio adjustments: Tailor images to maintain focus on essential elements and enhance visual appeal.
  • Format conversion: Convert images to modern formats like WebP or AVIF for better performance without sacrificing quality.
  • Image and text overlays: Generate new images by overlaying other images or text onto a base image.
  • Apply filters and effects: Enhance images with a variety of effects, filters, and other visual enhancements to achieve the desired impact.

The Cloudinary documentation on image transformation has several examples you can try out.
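
To make these capabilities concrete, here is a sketch of what a few of them look like as URL components. The cloud name and public ID are placeholders; the parameter keys (c_fill, g_face, f_webp, l_text, e_sepia) are standard Cloudinary transformation parameters:

```python
cloud_name = "demo"       # placeholder; replace with your cloud name
public_id = "sample.jpg"  # hypothetical public ID of an uploaded image
base = f"https://res.cloudinary.com/{cloud_name}/image/upload"

# Smart-crop to a 300x300 square, keeping any detected face in frame
face_crop = f"{base}/c_fill,g_face,h_300,w_300/{public_id}"

# Convert to WebP for smaller file sizes in supporting browsers
webp = f"{base}/f_webp/{public_id}"

# Overlay the text "Sale" near the bottom of the image
caption = f"{base}/l_text:Arial_40:Sale,g_south/{public_id}"

# Apply a sepia effect
sepia = f"{base}/e_sepia/{public_id}"

for url in (face_crop, webp, caption, sepia):
    print(url)
```

The same URLs can also be produced with the SDK’s build_url helper, as shown in the compression example earlier.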

Automation and Integration in S3 Image Workflows

You can automate image optimization by setting up Cloudinary to monitor your S3 bucket. When a new image is added, Cloudinary can automatically fetch, optimize, and deliver it. You can also configure serverless workflows using AWS Lambda to trigger image processing upon uploads.

Here’s an example of how you might set this up:

import json
import os
from urllib.parse import unquote_plus

import cloudinary
import cloudinary.uploader

# Configure Cloudinary from Lambda environment variables (same names as the .env file above)
cloudinary.config(
    cloud_name=os.environ["cloudinary_cloud_name"],
    api_key=os.environ["cloudinary_api_key"],
    api_secret=os.environ["cloudinary_api_secret"]
)


def lambda_handler(event, context):
    """
    AWS Lambda handler function triggered by an S3 event.
    """
    try:

        for record in event["Records"]:
            bucket_name = record["s3"]["bucket"]["name"]
            object_key = unquote_plus(record["s3"]["object"]["key"])

            # Generate S3 File URL
            s3_url = f"https://{bucket_name}.s3.amazonaws.com/{object_key}"
            print(f"Uploading {s3_url} to Cloudinary...")

            # Upload image to Cloudinary
            upload_response = cloudinary.uploader.upload(s3_url)

            # Generate an optimized Cloudinary URL
            compressed_url = cloudinary.CloudinaryImage(upload_response["public_id"]).build_url(
                quality="auto", fetch_format="auto"
            )
            print(f"Optimized Cloudinary URL: {compressed_url}")

        return {"statusCode": 200, "body": json.dumps("Success")}

    except Exception as e:
        print(f"Error: {str(e)}")
        return {"statusCode": 500, "body": json.dumps("Error processing image")}

Take Advantage of S3 Image Optimization for Your Next Project

Optimizing images stored in Amazon S3 is essential for efficient media delivery and an enjoyable customer experience. By integrating Cloudinary with your S3 buckets, you can automate image transformations, reduce file sizes, and ensure fast delivery across devices. Exploring this integration can lead to better performance, cost savings, and an enhanced user experience.

Join thousands of businesses transforming their digital asset management with Cloudinary. Sign up for free today!

FAQs

How does Cloudinary work with S3 buckets?

You can use Cloudinary to fetch images from your S3 bucket, apply transformations, and deliver optimized versions to users. This can be achieved through any of the programming SDKs provided by both platforms.

What’s the best way to optimize images for different devices?

You can use several automated and dynamic image processing tools, such as Cloudinary’s real-time transformation features, along with automatic format and quality settings, to deliver the perfect image size and format for different devices.
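
For example, one common pattern is generating a srcset of width variants from a single image, so the browser picks the right size while f_auto/q_auto handle format and quality. A sketch with placeholder names:

```python
cloud_name = "demo"       # placeholder; replace with your cloud name
public_id = "sample.jpg"  # hypothetical public ID of an uploaded image
base = f"https://res.cloudinary.com/{cloud_name}/image/upload"

# One URL per target width; the browser chooses via the srcset width hints
srcset = ", ".join(
    f"{base}/f_auto,q_auto,w_{w}/{public_id} {w}w" for w in (480, 768, 1280)
)
print(srcset)
```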

Can I automate transformations for existing images in S3?

Yes, you can use Cloudinary’s APIs and serverless functions to automate transformations for existing images in your S3 bucket.

Last updated: Mar 5, 2025