
Which Image Compression Technique Looks Best to Human Eyes?

Cloudinary Uses Scale API to Focus on Image Compression

Here at Cloudinary, we provide a cloud-based tool that enables our users to compress images and videos for their websites and apps. Our goal is to preserve the visual integrity of the content while delivering the smallest possible file size to any device or browser, ultimately optimizing website performance and end-user satisfaction.

One of the hallmarks of the Cloudinary solution is the ability to automate many functions of image compression, so that developers don’t have to spend time tweaking each photo and making multiple copies at different sizes and resolutions to fit every possible scenario. Compression is tricky: algorithms aim to make the changes with the smallest visual impact, but different images respond differently to the same compression settings.

As we were developing the algorithm behind our “q_auto” capability – which strikes a balance between visual quality and file size – we needed to test how the resulting images looked to the human eye. Enter Scale API.
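For readers who haven’t used it: q_auto is applied as a URL transformation parameter. Here is a minimal sketch using Cloudinary’s Python SDK; the cloud name (“demo”) and image ID (“sample.jpg”) are placeholders:

```python
# Minimal sketch: requesting automatic quality selection (q_auto) with
# Cloudinary's Python SDK. "demo" and "sample.jpg" are placeholders.
import cloudinary
from cloudinary import CloudinaryImage

cloudinary.config(cloud_name="demo", secure=True)

# quality="auto" emits the q_auto URL parameter; combined with
# fetch_format="auto" (f_auto), Cloudinary picks a compression level
# and delivery format that balance visual quality against file size.
url = CloudinaryImage("sample.jpg").build_url(quality="auto", fetch_format="auto")
print(url)  # e.g. https://res.cloudinary.com/demo/image/upload/f_auto,q_auto/sample.jpg
```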

Many image compression formats – like JPEG 2000 and JPEG XR – have been tweaked to score well on particular metrics, such as peak signal-to-noise ratio (PSNR). But these metrics don’t always correlate with human perception of image quality.
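For reference, PSNR is a purely arithmetic comparison of pixel values, which is exactly why it can diverge from what humans see. A minimal sketch of the computation for 8-bit images, using NumPy:

```python
import numpy as np

def psnr(original: np.ndarray, compressed: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized images.

    PSNR = 10 * log10(MAX^2 / MSE), where MSE is the mean squared error
    between the two images. Higher is nominally better, but as noted
    above, it often disagrees with human perception.
    """
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(max_value ** 2 / mse)
```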

We leveraged Scale API to compare pairs of images and tell us which image humans preferred. With Scale API, we ran a variety of tests comparing several formats, including WebP, JPEG 2000, JPEG XR (lossy), Lepton (MozJPEG recompressed with Lepton), FLIF, BPG, Daala, and PNG8 (pngquant+optipng). We also gathered input on the difference between the uncompressed original image and a compressed version.

Scale API enabled us to create A/B comparisons that were viewed by human observers. We submitted over 4,000 image comparisons, sending at least four independent Scale API requests for each pair of images, which yielded at least eight actual human comparisons per pair. The outcomes of these comparisons were evaluated alongside other perceptual metrics: PSNR, Google’s Butteraugli, DSSIM (Structural (Dis)Similarity), and a new metric Cloudinary developed called SSIMULACRA (Structural SIMilarity Unveiling Local And Compression Related Artifacts).
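Conceptually, the aggregation step looks something like the following sketch, where the repeated A/B votes on one image pair are reduced to a majority preference plus a high-confidence flag (the threshold here is illustrative, not our exact cutoff):

```python
from collections import Counter

def aggregate_votes(votes: list[str], high_conf_threshold: float = 0.875) -> tuple[str, bool]:
    """Reduce repeated A/B judgments for one image pair to a single
    preference plus a high-confidence flag.

    votes: e.g. ["A", "A", "B", "A", ...] from ~8 human comparisons.
    The 0.875 threshold (7 of 8 agreeing) is illustrative only.
    """
    counts = Counter(votes)
    winner, winner_votes = counts.most_common(1)[0]
    share = winner_votes / len(votes)
    return winner, share >= high_conf_threshold

# Example: 7 of 8 observers preferred image A -> ("A", True)
print(aggregate_votes(["A"] * 7 + ["B"]))
```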

The results showed that overall, PSNR is “correct” in only 67 percent of the cases. Butteraugli gets it right in 80 percent of the cases, and DSSIM in 82 percent of the cases. Our new metric, SSIMULACRA, agrees with human judgments in 87 percent of the cases. Looking just at the high-confidence human judgments, we found about 78 percent agreement for PSNR, 91 percent for both Butteraugli and DSSIM, and almost 98 percent agreement for SSIMULACRA. You can read more about SSIMULACRA and these results on the Cloudinary blog. Or if you want to give it a try: SSIMULACRA is free and open-source software!
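To be concrete about what “agrees” means here: for each image pair, a metric implicitly prefers the image it scores better, and agreement is the fraction of pairs where that preference matches the human majority. A hypothetical sketch (the scores follow the lower-is-better convention of DSSIM and SSIMULACRA, and the data is made up):

```python
def metric_agreement(pairs: list[dict]) -> float:
    """Fraction of image pairs where a metric's preference matches the
    human majority vote. Each pair dict is assumed to hold the metric's
    score for both images (lower = better) and the human-preferred side.
    """
    hits = 0
    for p in pairs:
        metric_pick = "A" if p["score_a"] < p["score_b"] else "B"
        hits += metric_pick == p["human_pick"]
    return hits / len(pairs)

# Hypothetical data: the metric agrees with humans on 2 of 3 pairs.
pairs = [
    {"score_a": 0.010, "score_b": 0.015, "human_pick": "A"},
    {"score_a": 0.020, "score_b": 0.012, "human_pick": "B"},
    {"score_a": 0.008, "score_b": 0.011, "human_pick": "B"},
]
print(metric_agreement(pairs))  # 0.666...
```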

The results of the Scale API comparisons gave us useful data points to validate our metrics, and provided more insight into the compression benchmarks we run and how the various image formats compare. From these insights, we were able to improve our visual perception metrics and fine-tune our “q_auto” functionality, so we know how aggressively we can compress images.

Through this process we were impressed not only by the useful data points derived from Scale API, but also by the great support we got from the company and the product’s ease of use, all of which came at a reasonable price.

This post was originally published on the Scale API blog.
