
Innovator Spotlight: JPEG XL Co-Creator Jon Sneyers on Image Compression and More

Few possess the expertise of Jon Sneyers, one of the foremost authorities at the intersection of image compression and standards development.

As a computer scientist, Jon has been a guiding force behind the creation and enhancement of image compression standards, driving the industry toward more efficient, versatile, and future-ready solutions.

Among Jon’s most recent contributions is his work as one of the founding fathers of JPEG XL, a next-generation image compression format, which is having a moment thanks in large part to Apple’s announcement last June that it would support the standard in iOS 17, macOS Sonoma, and the Safari browser.

Jon’s influence doesn’t stop at his involvement in cutting-edge projects. His research and development work has appeared in numerous publications, and his insights into image compression, media formats, and computer science have been shared at conferences and in outlets like this recent article in Computer Weekly.

Recently, Juli Greenwood, Sr. Director of Communications & Customer Marketing, sat down with Jon to discuss his career, including his role as Cloudinary’s Senior Image Researcher, his latest projects, and his views on the ever-changing world of imaging technology.

Cloudinary: Can you tell us a bit about yourself; where you grew up and where you live now? 

JS: I’m from Belgium. I was born and raised in Leuven, 30km to the east of Brussels, and studied and worked at the University of Leuven, KU Leuven. Then I lived in Brussels for a while — my three daughters were born there — and now I live in Asse, 15km to the west of Brussels.

Cloudinary: You have a PhD in Computer Science. What inspired your focus on image processing and compression, and what led you to the image work you do today?

JS: My PhD was on optimizing compilation of a declarative programming language called CHR, so quite a different topic from what I do now. When I was doing postdoc research on probabilistic logic programming, at some point I got the idea of using a probabilistic model based on decision trees to guide entropy coding. I applied this idea to a pet project for lossless image compression. That pet project eventually became FLIF, the Free Lossless Image Format. I started working at Cloudinary in 2016, after they did a large evaluation of lossless compression formats and concluded that FLIF was the best format. I kept doing research on image compression and image processing. Some of the key ideas of FLIF were later integrated into JPEG XL.
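The core idea Jon describes — a decision tree choosing which adaptive probability model to use for entropy coding — can be illustrated with a toy sketch. This is not FLIF’s actual algorithm (its model, called MANIAC, is far more elaborate); it is a minimal, hypothetical example for a binary stream, where a one-split “tree” on the previous bit selects one of two adaptive estimators, and the coding cost is measured as the ideal entropy-coder output, -log2 p(symbol), per bit:

```python
import math
import random

class AdaptiveBit:
    # Adaptive binary probability estimate with Laplace smoothing
    def __init__(self):
        self.ones, self.total = 1, 2

    def cost_and_update(self, bit):
        # Ideal entropy-coder cost of this bit under the current estimate,
        # then update the estimate with the observed bit
        p1 = self.ones / self.total
        p = p1 if bit else 1 - p1
        self.ones += bit
        self.total += 1
        return -math.log2(p)

def cost_bits(bits_seq):
    # Toy "decision tree": a single split on the previous bit,
    # with each branch keeping its own adaptive model
    models = [AdaptiveBit(), AdaptiveBit()]
    total, prev = 0.0, 0
    for b in bits_seq:
        total += models[prev].cost_and_update(b)
        prev = b
    return total

random.seed(0)
runs = ([0] * 50 + [1] * 50) * 20                  # long runs: previous bit predicts next
coin = [random.getrandbits(1) for _ in range(2000)]  # no structure to exploit
print(cost_bits(runs) < cost_bits(coin))  # prints True
```

Predictable data costs far fewer bits than the fair-coin stream (which costs about one bit per symbol), showing how a context-dependent probability model translates image structure into compression gains.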

Cloudinary: For the less technical folks reading this, can you explain why JPEG needed an update and your role in creating the new standard, JPEG XL? And, more broadly, why are modern formats so important?

JS: The JPEG format was designed at the end of the 1980s. It was very well-designed and future-proof, which explains why more than 30 years later, it is still ubiquitous. But it does have some limitations. To name a few: it’s not really suitable for high dynamic range (HDR) images, it cannot represent alpha transparency, and it cannot do lossless compression. Also, in terms of compression performance, it is no longer state-of-the-art. Modern formats like AVIF and JPEG XL overcome these limitations in terms of features, and bring better compression — which means a more effective use of resources like bandwidth and storage, but also a better user experience when browsing the web: images will just load faster. That’s why, for example, Adobe and Apple are adopting these new formats: they’re needed for new display technology like HDR, and they can help save storage and bandwidth.

Cloudinary: You also co-created the Free Lossless Image Format (FLIF). What inspired that work?

JS: Well, I knew and liked the PNG format, but at the technical level, I couldn’t believe that nothing better could be done, in terms of compression. After all, in essence, the compression in PNG format is not much more than applying DEFLATE — the combination of the 1977 Lempel-Ziv algorithm with 1952 Huffman coding, also known as ZIP — to an uncompressed image buffer. Algorithmically, it has only very limited “image-specific ingredients.” This is different in FLIF, which has significantly more “ingredients” that are specific to images, so it can obtain better compression ratios. Just like lossless WebP by the way, which was created around the same time as FLIF. I remember there being a sense of competition between FLIF and lossless WebP, both trying to bring the best possible lossless compression. Interestingly, I ended up joining forces within the JPEG committee with the people who designed lossless WebP, and people at Google Research, and we ended up creating JPEG XL together [laughs].
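Jon’s point about PNG can be made concrete with a small sketch. PNG’s main image-specific step is a per-row prediction filter (this example uses PNG’s “Sub” filter, which subtracts the byte to the left); the filtered bytes are then handed to DEFLATE. This is a simplified illustration, not a full PNG encoder (real PNG chooses a filter per row and adds chunk framing):

```python
import zlib

def sub_filter(row):
    # PNG's "Sub" filter: replace each byte with its difference
    # from the byte to its left (mod 256)
    return bytes((row[i] - (row[i - 1] if i else 0)) % 256 for i in range(len(row)))

def png_style_compress(rows):
    # Roughly PNG's pipeline: filter each row, then DEFLATE the result
    filtered = b"".join(sub_filter(r) for r in rows)
    return zlib.compress(filtered, 9)

# Smooth gradient rows with different slopes: filtering turns each row
# into a near-constant run, which DEFLATE compresses far better than
# the raw pixel bytes
rows = [bytes((x * (y + 1)) % 256 for x in range(256)) for y in range(16)]
raw = zlib.compress(b"".join(rows), 9)
filtered = png_style_compress(rows)
print(len(filtered) < len(raw))  # prints True
```

The gap between the two sizes is exactly the kind of gain an image-aware model provides — and formats like FLIF push much further by modeling pixel context adaptively rather than applying a single fixed filter plus a generic byte compressor.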

Cloudinary: Fun fact time! Favorite city/country traveled thus far and/or your favorite book?

JS: Hard to pick a favorite. I loved Cape Town, Melbourne, Prague, Tel Aviv, Paris, and Porto. For books, I very much enjoy science fiction books, from “The Hitchhiker’s Guide to the Galaxy” to the works of Asimov or “The Three-Body Problem” trilogy.

