
A story about production systems, Rails, monitoring and off-hour notifications

Cloudinary's image management service is used by thousands of websites and mobile apps worldwide. For many of our clients, Cloudinary has become a central, mission-critical component for managing image uploads, transformations and delivery.
 
This is why we've built Cloudinary from the ground up to be a very robust service. We put a lot of emphasis on availability, scalability and support and we take our users' confidence in us extremely seriously.
 
So far, we've been quite satisfied with our ability to keep Cloudinary above 99.99% average uptime.
 
However, on April 4th, the Cloudinary service experienced outages for a few hours. We wanted to explain what happened, our conclusions and the steps we've taken to make sure this won’t happen again.
 

The upgrade

Cloudinary's core service is built with Ruby on Rails. The service is tested thoroughly and upgrades are handled with the utmost care. This is why we preferred to stay with Rails v3.0 for a long time rather than rock the boat with an upgrade to the latest Rails 3.2.
 
A few weeks ago, a security vulnerability was discovered in Rails. As always, we wanted to apply the security fix as soon as possible. However, the Rails team had stopped releasing fixes for Rails 3.0, so we had to upgrade to v3.2.
 
We upgraded to Rails 3.2 in our lab and modified our code to support it (Rails upgrades tend to be non-backward-compatible and break code built against previous versions). We tested our code extensively and verified that our thousands of unit tests passed. We then successfully completed a thorough manual QA of the system in our staging environment. It all went quite smoothly.
 
We scheduled the upgrade for April 4th. As usual, we deployed the system gradually to all of our production servers. Deployment went smoothly as well. We performed additional sanity testing after the system was deployed and closely monitored the system during the working day.
 
We went to sleep happy and relaxed.
 

The issues

At about 1am, things started to shake.
 
Apparently, Rails 3.2 changed the default of one simple configuration parameter: response caching is now turned on by default when certain cache headers are returned.
 
As a result, after many hours of serving requests, the local application disk on some of our servers filled up with cached responses. This caused certain requests that required disk space to fail, depending on the exact request and the size of the response.
 
Annoyingly enough, the automatic monitoring service that regularly verifies our APIs was performing a request that required very little disk space, so it continued to operate normally. This service is configured to send notifications to our engineering team's mobile phones during the night, but since no errors were detected, no notification was sent.
 
Fortunately for us, our co-founder's toddler woke him up early in the morning. He naturally (?) checked his inbox and realized that something was very wrong. He quickly cleared the disk space and modified the Rails 3.2 cache settings, and the system was fully working again.
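
For readers running Rails 3.2 themselves, the behavior described above matches the Rack::Cache middleware that Rails 3.1+ wires in by default: responses carrying public cache headers get stored through the application's cache store, which is a file store under tmp/cache unless configured otherwise. The snippet below is an illustrative sketch only (not necessarily the exact change we made) of how that caching can be turned off or pointed away from the local disk:

    # config/environments/production.rb -- illustrative sketch only.
    # Assumes the Rack::Cache middleware that Rails 3.1+ enables by default,
    # which caches responses with public Cache-Control headers through the
    # app's cache store (a file store under tmp/cache by default), so the
    # local disk can slowly fill up with cached responses.
    YourApp::Application.configure do   # "YourApp" is a placeholder app name
      # Option 1: disable the built-in HTTP response cache entirely.
      config.action_dispatch.rack_cache = nil

      # Option 2 (alternative): keep response caching, but store it in
      # memcached instead of the local filesystem (hypothetical host/port):
      # config.action_dispatch.rack_cache = {
      #   :metastore   => "memcached://localhost:11211/meta",
      #   :entitystore => "memcached://localhost:11211/body"
      # }
    end

Either way, the lesson is the same: know where your framework puts cached data, and make sure it can't silently exhaust a production disk.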
 
It's important to note that during these ~5 hours, all existing images and transformed images were delivered successfully to users through our delivery service and tens of thousands of worldwide CDN edges (Akamai + CloudFront). Still, some of the upload API calls did fail during this time, and we are very sorry for this.
 

Going forward

Naturally, we immediately started improving our outage-prevention mechanisms.
 
We've added additional disk space tests to our QA list and added abnormal disk usage monitoring to our urgent notification service. We're also adding a wider set of API requests to our automatic service monitoring.
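
To give a feel for what such a disk-usage check amounts to, here is a minimal, hypothetical sketch (not our actual monitoring code) of a script that could run from cron and flag any filesystem crossing a usage threshold:

    #!/usr/bin/env ruby
    # Hypothetical disk-usage check, for illustration only.
    # Parses `df -P` output and flags any filesystem above a usage threshold;
    # how the alert is delivered is left as a placeholder.

    THRESHOLD_PERCENT = 85  # hypothetical threshold

    alerts = []
    `df -P`.each_line.drop(1).each do |line|
      fields = line.split
      next if fields.size < 6
      used_percent = fields[4].to_i   # e.g. "93%" -> 93
      mount_point  = fields[5]
      alerts << "#{mount_point} is at #{used_percent}%" if used_percent >= THRESHOLD_PERCENT
    end

    unless alerts.empty?
      # Hand off to whatever urgent-notification channel is in place
      # (email, pager, voice call, etc.).
      warn "DISK ALERT: #{alerts.join('; ')}"
      exit 1
    end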
 
We've integrated with Twilio to enhance our off-hour notifications. Specifically, our engineering team will now receive automatic voice calls to their mobile phones in addition to our previous notification methods.
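
For the curious, placing such a call through Twilio's API takes only a few lines of Ruby. This is a hypothetical sketch using the twilio-ruby gem; the credentials, phone numbers and TwiML URL are placeholders, not our actual integration:

    # Hypothetical sketch using the twilio-ruby gem; credentials, numbers
    # and the TwiML URL below are placeholders.
    require 'twilio-ruby'

    client = Twilio::REST::Client.new(
      ENV['TWILIO_ACCOUNT_SID'],
      ENV['TWILIO_AUTH_TOKEN']
    )

    # Place an automated voice call to an on-call engineer. The `url` points
    # to a TwiML document telling Twilio what to say when the call connects.
    client.calls.create(
      from: '+15550100000',                        # placeholder Twilio number
      to:   '+15550100001',                        # placeholder on-call number
      url:  'https://example.com/alerts/outage-message.xml'
    )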
 
To make sure we keep you in the know during outages, we've pushed up the priority of building a public status page. This page will include automatic monitoring details as well as human-written notes.
 

Summary & conclusions 

We are happy that Cloudinary has had nearly zero availability issues in almost two years of operation. On the other hand, no online service is perfect, and every service has experienced or will experience outages.
 
We will continue to enhance our service with additional image-related features. At the same time, we'll continue to work hard to keep Cloudinary's uptime as close to 100% as possible.
 
Thank you for trusting us with your images!
