If you are anything like me, one of the things you love about this digital era is that you can be artistic and creative, even if your drawing skills never made it much past stick figures. The average smartphone can take photographs of astonishing quality. And all sorts of special effects, filters, and overlays enable users to convert their favorite photos into unique expressions of themselves.
But all of this still generally remains within the realm of photographic content.
What if there were an algorithm so smart that it could actually analyze your favorite work of art and then ‘paint’ you a replica of any other photo in the same style? Not some pixel-by-pixel analysis, but an evaluation the way a talented artist would make it: taking into account colors, brush strokes, and other artistic techniques, and then applying those styles in a way that blends with the content of the target photograph.
Yep, you got it. There is in fact such an algorithm. Well, it’s more than just an algorithm: it requires a sophisticated, multi-layered neural network known as VGG-16. The network can be used for a variety of applications, including categorization, semantic segmentation, and more. Since Gatys et al. first realized the network’s potential for separately extracting style and content information from images in 2015, there have been a few attempts to commercialize these style transfer deep learning capabilities (applying the style of one image to the content of another). In most cases, however, the practicality of these services has been quite limited.
For example, most services available today limit the source artworks to a small preset collection, cap the resolution of the output image, require unrealistic wait times, consume huge amounts of the customers’ data allotment and processing resources, and/or price high-resolution output in a way that makes real commercialization impractical.
But now here’s the real “what if”: What if you could offer all that genius neural network and style transfer deep learning capability to your web site or mobile app users with only a few lines of code and get reliable results, in seconds, for a nominal cost, with all the processing done in the cloud?
As of today, “Yes you can”. You can do it with Cloudinary’s new Neural Artworks add-on by simply adding the style_transfer effect to any delivered image while specifying the image ID of any source artwork as its overlay. And voila! Your users can generate and share their own masterpieces in seconds!
This is all it takes:
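For instance, here is a minimal sketch of composing a style_transfer delivery URL by hand, following Cloudinary’s overlay-plus-layer_apply pattern. The cloud name and public IDs (`demo`, `lighthouse`, `golden_gate`) are placeholders; substitute your own account and image IDs.

```python
# Build a Cloudinary delivery URL that applies the style of one
# uploaded artwork to the content of another image.
# "demo", "lighthouse", and "golden_gate" are placeholder IDs.

def style_transfer_url(cloud_name, artwork_id, target_id, ext="jpg"):
    """Compose a URL that paints `target_id` in the style of `artwork_id`."""
    # Overlay the source artwork, run the style_transfer effect,
    # then apply the result back onto the base image.
    transformation = f"l_{artwork_id},e_style_transfer/fl_layer_apply"
    return (f"https://res.cloudinary.com/{cloud_name}"
            f"/image/upload/{transformation}/{target_id}.{ext}")

print(style_transfer_url("demo", "lighthouse", "golden_gate"))
```

The same transformation can of course be generated through any of Cloudinary’s SDKs rather than by string concatenation.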
The applications for this functionality are endless, from marketing contests to cool profile pics to unique personalized canvas prints, and more. I’m sure your mind is already jumping with ideas.
Cloudinary is a SaaS offering that provides solutions for image and video management, including server or client-side upload, a huge range of on-the-fly image and video transformation options, quick CDN delivery, and powerful asset management options.
Cloudinary enables web and mobile developers to address all their media management needs with simple bits of code in their favorite programming languages or frameworks, leaving them free to focus primarily on their own product’s value proposition.
The style_transfer transformation effect provided with the new add-on integrates seamlessly with all other Cloudinary features.
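As a hedged illustration of that chaining, the sketch below appends an ordinary fill crop and automatic quality optimization after the style transfer, all in one delivery URL. The cloud name and image IDs are placeholders, not real assets.

```python
# Chain style_transfer with standard Cloudinary transformations
# (a 500x500 fill crop and q_auto) in a single URL.
cloud_name = "demo"          # placeholder cloud name
artwork_id = "lighthouse"    # hypothetical public ID of the source artwork
target_id = "golden_gate"    # hypothetical public ID of the target photo

components = [
    f"l_{artwork_id},e_style_transfer",  # apply the artwork's style...
    "fl_layer_apply",                    # ...onto the base image
    "c_fill,w_500,h_500,q_auto",         # then crop, resize, and optimize
]
url = (f"https://res.cloudinary.com/{cloud_name}/image/upload/"
       + "/".join(components) + f"/{target_id}.jpg")
print(url)
```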
Deep neural networks like the one used by our add-on use a type of artificial intelligence (AI) that’s designed to mimic the behaviors of the neurons in our brain. Like young children, they ‘learn’ by ‘viewing’ large amounts of data until they begin to recognize common patterns.
Neural networks that focus on style transfer attempt to synthesize textures and colors from the source artwork within constraints that still preserve the main semantic content of the target photograph, thus allowing, to some extent, a separation of image content from image style.
This very simplified diagram, taken from the paper ‘A Neural Algorithm of Artistic Style’ by Leon A. Gatys et al., provides a conceptual demonstration of how different abstractions of the source and target images can be filtered at different layers. The target photograph is downsampled so that the essential elements of the source artwork can be applied without losing the essence of the original content.
The more the neural network is allowed to process a particular source artwork, the better the replication of the style can be. But each incremental improvement comes at a huge performance cost.
Cloudinary’s algorithm takes advantage of Xun Huang and Serge Belongie’s enhancement of the Gatys algorithm, which makes it possible to use any image for both source and target, and still deliver a good quality style transfer in real time, using a single feed-forward neural network. While the style match is not as precise as some other available services, Cloudinary’s implementation is much faster, is not limited to pre-learned images, and even supports high resolution outputs that are out of scope for most similar services currently available.
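The core of Huang and Belongie’s method is adaptive instance normalization (AdaIN): the content features are simply re-scaled to match the channel-wise mean and standard deviation of the style features, which is what makes a single feed-forward pass sufficient. The toy sketch below illustrates the statistic-matching idea on plain lists of numbers; a real implementation operates on CNN feature maps, not raw values.

```python
# Conceptual sketch of AdaIN: shift and scale the content values so
# their mean and standard deviation match those of the style values.
from statistics import mean, pstdev

def adain(content, style, eps=1e-5):
    """Align the statistics of `content` to those of `style`."""
    c_mu, c_sigma = mean(content), pstdev(content)
    s_mu, s_sigma = mean(style), pstdev(style)
    # Normalize content to zero mean / unit variance, then re-apply
    # the style's statistics.
    return [s_sigma * (x - c_mu) / (c_sigma + eps) + s_mu
            for x in content]

stylized = adain([0.1, 0.5, 0.9], [2.0, 4.0, 6.0])
# `stylized` now has (approximately) the style list's mean and stdev.
```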
By now, I’m sure you are ready to get your hands on this artistic magic, so here’s a small demo for you to play with.
Note: Although the feature supports using any image for the source artwork, we’ve provided a limited set of sources here to avoid uploads of unlicensed art. When you start using this feature on your own account, do make sure that all source artworks have valid usage licenses.
The demo above shows the default style_transfer effect. If you want to retain more of the target photograph’s colors or photographic essence, you can include the Boolean preserve_color option or adjust the style_strength of the effect. Learn more about the style_transfer effect and these additional options in the Neural Artwork Style Transfer documentation.
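As a sketch of how those qualifiers might be composed, the helper below assumes the colon-delimited qualifier syntax that Cloudinary effects generally use; treat the exact form as an assumption and consult the Neural Artwork Style Transfer documentation for the authoritative syntax.

```python
# Build the e_style_transfer URL component with optional qualifiers.
# The colon-delimited qualifier form is assumed from Cloudinary's
# general effect syntax; verify against the add-on documentation.

def style_transfer_effect(preserve_color=False, style_strength=None):
    """Return the effect component, optionally with qualifiers."""
    effect = "e_style_transfer"
    if preserve_color:
        effect += ":preserve_color"       # keep the target's colors
    if style_strength is not None:
        effect += f":{style_strength}"    # lower = subtler styling
    return effect

print(style_transfer_effect(preserve_color=True, style_strength=60))
```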
Are you ready to offer Neural Artworks to your users? Or just want to take advantage of this powerful deep learning feature to create some masterpieces for your own site or blog page?
We can’t wait to see what amazing ideas you come up with! We’d love it if you would share how you envision using style transfer in your app, and better yet, screencaps of your own creations, right here in the comments for everyone to enjoy!
It’s clear that this is just the beginning. Deep learning and neural networks are undoubtedly going to fundamentally change the way we create media and art of all kinds in the foreseeable future. So jump on the bandwagon now. It’s going to be one heck of a ride!