Last updated: Sep-16-2024
Cloudinary's visual effects and enhancements are a great way to easily change the way your images look within your site or application. For example, you can change the shape of your images, blur and pixelate them, apply quality improvements, make color adjustments, change the look and feel with fun effects, apply filters, and much more. You can also apply multiple effects to an image by applying each effect as a separate chained transformation.
Some transformations use fairly simple syntax, whereas others require more explanation - examples of these types of transformations are shown in the advanced syntax examples.
Besides the examples on this page, there are many more effects available and you can find a full list of them, including examples, by checking out our URL transformation reference.
Here are some popular options for using effects and artistic enhancements:
- Add a vignette to your images
- Generate low quality image placeholders
- Add image outlines
Simple syntax examples
Here are some examples of effects and enhancements that use a simple transformation syntax. Click the links to see the full syntax for each transformation in the URL transformation reference.
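The examples on this page plug into Cloudinary's standard delivery URL structure. In the illustrative URLs that follow, the cloud name (demo) and the public IDs are placeholders; substitute your own:

```
https://res.cloudinary.com/<cloud_name>/image/upload/<transformations>/<public_id>.<extension>
```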
Artistic filters
Apply an artistic filter using the art effect, specifying one of the available filters.
See full syntax: e_art in the Transformation Reference.
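For example, applying the athena filter (illustrative URL):

```
https://res.cloudinary.com/demo/image/upload/e_art:athena/sample.jpg
```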
Cartoonify
Make an image look more like a cartoon using the cartoonify effect.
See full syntax: e_cartoonify in the Transformation Reference.
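An illustrative URL:

```
https://res.cloudinary.com/demo/image/upload/e_cartoonify/sample.jpg
```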
Opacity
Adjust the opacity of an image using the opacity transformation (o in URLs). Specify a value between 0 and 100, representing the opacity level as a percentage, where 100 means completely opaque and 0 means completely transparent. In this case the image is delivered with 30% opacity:
See full syntax: o (opacity) in the Transformation Reference.
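For example, delivering the image at 30% opacity (illustrative URL):

```
https://res.cloudinary.com/demo/image/upload/o_30/sample.png
```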
Pixelate
Pixelate an image using the pixelate effect.
See full syntax: e_pixelate in the Transformation Reference.
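For example (illustrative URL):

```
https://res.cloudinary.com/demo/image/upload/e_pixelate/sample.jpg
```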
Sepia
Change the colors of an image to shades of sepia using the sepia effect.
See full syntax: e_sepia in the Transformation Reference.
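For example (illustrative URL):

```
https://res.cloudinary.com/demo/image/upload/e_sepia/sample.jpg
```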
Vignette
Fade the edges of images into the background using the vignette effect.
See full syntax: e_vignette in the Transformation Reference.
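For example (illustrative URL):

```
https://res.cloudinary.com/demo/image/upload/e_vignette/sample.jpg
```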
Image enhancement options
Cloudinary offers various ways to enhance your images. This table explains the differences between them, and below you can see examples of each. You can also watch a video tutorial showing how to apply these in a React app.
Transformation | Purpose | Key features | Main use cases | How it works |
---|---|---|---|---|
Generative restore (e_gen_restore) | Excels in revitalizing images affected by digital manipulation and compression. | ✅ Compression artifact removal: effectively eliminates JPEG blockiness and overshoot due to compression. ✅ Noise reduction: smooths grainy images for a cleaner visual. ✅ Image sharpening: boosts clarity and detail in blurred images. | ✅ Over-compressed images. ✅ User-generated content. ✅ Restoring vintage photos. | Utilizes generative AI to recover and refine lost image details. |
Upscale (e_upscale) | Increases the resolution of an image using AI, with special attention to faces. | ✅ Enhances clarity and detail while upscaling. ✅ Specialized face detection and enhancement. ✅ Preserves the natural look of faces. | ✅ Improving the quality of low resolution images, especially those with human faces. | Analyzes the image, with additional logic applied to faces, to predict necessary pixels. |
Enhance (e_enhance) | Enhances the overall appeal of images without altering content, using AI. | ✅ Improves exposure, color balance, and white balance. ✅ Enhances the general look of an image. | ✅ Any images requiring a quality boost. ✅ User-generated content. | An AI model analyzes and applies various operators to enhance the image. |
Improve (e_improve) | Automatically improves images by adjusting colors, contrast, and lighting. | ✅ Enhances overall visual quality. ✅ Adjusts colors, contrast, and lighting. | ✅ Enhancing user-generated content. ✅ Any images requiring a quality boost. | Applies an automatic enhancement filter to the image. |
Generative restore
This example shows how the generative restore effect can enhance the details of a highly compressed image:
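A delivery URL applying the effect might look like this (the public ID old_photo is a placeholder):

```
https://res.cloudinary.com/demo/image/upload/e_gen_restore/old_photo.jpg
```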
Try it out: Generative restore in the Transformation Center.
Upscale
This example shows how the upscale effect can preserve the details of a low resolution image when upscaling:
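For example (illustrative URL; the public ID is a placeholder):

```
https://res.cloudinary.com/demo/image/upload/e_upscale/low_res_portrait.jpg
```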
Try it out: Upscale in the Transformation Center.
Enhance
This example shows how the enhance effect can improve the lighting of an underexposed image:
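For example (illustrative URL; the public ID is a placeholder):

```
https://res.cloudinary.com/demo/image/upload/e_enhance/dark_room.jpg
```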
Try it out: AI image enhancer in the Transformation Center.
Improve
This example shows how the improve effect can adjust the overall colors and contrast in an image:
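For example (illustrative URL):

```
https://res.cloudinary.com/demo/image/upload/e_improve/sample.jpg
```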
Advanced syntax examples
In general, most of the visual effects and enhancements can take an additional option to tailor the effect to your liking. For some, however, you may need to provide additional syntax and use some more complex concepts. It is important to understand how these advanced transformations work when attempting to use them. The sections below outline some of the more advanced transformations and help you to use these with your own assets.
Remember, there are many more transformations available and you can find a full list of them, including examples, by checking out our URL transformation reference.
3D LUTs
3D lookup tables (3D LUTs) are used to map one color space to another. They can be used to adjust colors, contrast, and/or saturation, so that you can correct contrast, fix a camera's inability to see a particular color shade, or give a final finished look or a particular style to your image.
After uploading a .3dl file to your product environment as a raw file, you can apply it to any image using the lut property of the layer parameter (l_lut: in URLs), followed by the LUT file name (including the .3dl extension).
Below you can see the docs/textured_handbag.jpg image file in its original color, compared to the image with different LUT files applied, followed by the code for applying one of the LUTs.
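For example, applying a LUT uploaded as a raw file (my_lut.3dl is a placeholder file name) to the handbag image referenced above:

```
https://res.cloudinary.com/demo/image/upload/l_lut:my_lut.3dl/docs/textured_handbag.jpg
```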
See full syntax: l_lut in the Transformation Reference.
Background color
Use the background parameter (b in URLs) to set the background color of the image. The image background is visible when padding is added with one of the padding crop modes, when rounding corners, when adding overlays, and with semi-transparent PNGs and GIFs.

An opaque color can be set as an RGB hex triplet (e.g., b_rgb:3e2222), a 3-digit RGB hex (e.g., b_rgb:777) or a named color (e.g., b_green). Cloudinary's client libraries also support a # shortcut for RGB (e.g., setting background to #3e2222, which is then translated to rgb:3e2222).
For example, the uploaded image named mountain_scene padded to a width and height of 300 pixels with a light blue background:
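For example (illustrative URL; lightblue stands in for whichever named color or hex value you prefer):

```
https://res.cloudinary.com/demo/image/upload/b_lightblue,c_pad,h_300,w_300/mountain_scene.jpg
```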
You can also use a 4-digit or 8-digit RGBA hex quadruplet for the background color, where the 4th hex value represents the alpha (opacity) value (e.g., b_rgb:3e222240 results in 25% opacity).
When adding an overlay, you can also set its background to predominant_contrast. This selects the strongest contrasting color to the predominant color while taking all pixels in the image into account. For example, l_text:Arial_30:foo,b_predominant_contrast.

See full syntax: b (background) in the Transformation Reference.
Try it out: Background in the Transformation Center.
Content-aware padding
You can automatically set the background color to the most prominent color in the image when applying one of the padding crop modes (pad, lpad, mpad or fill_pad) by setting the background parameter to auto (b_auto in URLs). The parameter can also accept an additional value as follows:
- b_auto:border - selects the predominant color while taking only the image border pixels into account. This is the default option for b_auto.
- b_auto:predominant - selects the predominant color while taking all pixels in the image into account.
- b_auto:border_contrast - selects the strongest contrasting color to the predominant color while taking only the image border pixels into account.
- b_auto:predominant_contrast - selects the strongest contrasting color to the predominant color while taking all pixels in the image into account.
For example, padding the purple-suit-hanky-tablet image to a width and height of 300 pixels, and with the background color set to the predominant color in the image:
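A URL along these lines would produce that result (demo is a placeholder cloud name):

```
https://res.cloudinary.com/demo/image/upload/b_auto:predominant,c_pad,h_300,w_300/purple-suit-hanky-tablet.jpg
```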
See full syntax: b_auto in the Transformation Reference.
Try it out: Background in the Transformation Center.
Gradient fade
You can also apply a padding gradient fade effect with the predominant colors in the image by adjusting the value of the b_auto parameter as follows:

b_auto:[gradient_type]:[number]:[direction]

Where:
- gradient_type - one of the following values:
  - predominant_gradient - base the gradient fade effect on the predominant colors in the image
  - predominant_gradient_contrast - base the effect on the colors that contrast the predominant colors in the image
  - border_gradient - base the gradient fade effect on the predominant colors in the border pixels of the image
  - border_gradient_contrast - base the effect on the colors that contrast the predominant colors in the border pixels of the image
- number - the number of predominant colors to select. Possible values: 2 or 4. Default: 2
- direction - if 2 colors are selected, this parameter specifies the direction to blend the 2 colors together (if 4 colors are selected, each gets interpolated between the four corners). Possible values: horizontal, vertical, diagonal_desc, and diagonal_asc. Default: horizontal
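Putting these together, a possible URL that pads an image and fills the padding with a two-color gradient of the predominant colors, blended vertically (the public ID is a placeholder):

```
https://res.cloudinary.com/demo/image/upload/b_auto:predominant_gradient:2:vertical,c_pad,h_300,w_300/sample.jpg
```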
Custom color palette
Add a custom palette to limit the selected color to one of the colors in the palette that you provide. Once the predominant color has been calculated, the closest color from the available palette is selected. Append a colon and then the value palette, followed by a list of colors, each separated by an underscore. For example, to automatically add padding and a palette that limits the possible choices to green, red and blue: b_auto:palette_red_green_blue
The palette can be used in combination with any of the various values for b_auto, and the same color in the palette can be selected more than once when requesting multiple predominant colors. For example, padding to a width and height of 300 pixels, with a 4 color gradient fade in the auto colored padding, and limiting the possible colors to red, green, blue, and orange:
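One way to express that example as a URL (a sketch; the public ID is a placeholder):

```
https://res.cloudinary.com/demo/image/upload/b_auto:predominant_gradient:4:palette_red_green_blue_orange,c_pad,h_300,w_300/sample.jpg
```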
Gradient fade into padding
Fade the image into the added padding by adding the gradient_fade effect with a value of symmetric_pad (e_gradient_fade:symmetric_pad in URLs). The padding blends into the edge of the image with a strength indicated by an additional value, separated by a colon (Range: 0 to 100, Default: 20). Values for x and y can also be specified as a percentage (range: 0.0 to 1.0), or in pixels (integer values) to indicate how far into the image to apply the gradient effect. By default, the gradient is applied 30% into the image (x_0.3).
For example, padding the string image to a width and height of 300 pixels, with the background color set to the predominant color, and with a gradient fade effect between the added padding and 50% into the image:
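A URL for this example might look like the following (the parameter ordering is illustrative):

```
https://res.cloudinary.com/demo/image/upload/b_auto,c_pad,e_gradient_fade:symmetric_pad,h_300,w_300,x_0.5/string.jpg
```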
See full syntax: e_gradient_fade in the Transformation Reference.
Try it out: Background in the Transformation Center.
Borders
Add a solid border around images with the border parameter (bo in URLs). The parameter accepts a value with a CSS-like format: width_style_color (e.g., 3px_solid_black).
An opaque color can be set as an RGB hex triplet (e.g., rgb:3e2222), a 3-digit RGB hex (e.g., rgb:777) or a named color (e.g., green).
You can also use a 4-digit or 8-digit RGBA hex quadruplet for the color, where the 4th hex value represents the alpha (opacity) value (e.g., rgb:3e222240 results in 25% opacity).
Cloudinary's client libraries also support a # shortcut for RGB (e.g., setting the color to #3e2222, which is then translated to rgb:3e2222). When using the client libraries, you can optionally set the border values programmatically instead of as a single string (e.g., border: { width: 4, color: 'black' }).
For example, the uploaded JPG image named blue_sweater delivered with a 5 pixel blue border:
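For example (demo is a placeholder cloud name):

```
https://res.cloudinary.com/demo/image/upload/bo_5px_solid_blue/blue_sweater.jpg
```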
Borders are also useful for adding to overlays to clearly define the overlaying image, and they also automatically adapt to any rounded corner transformations. For example, the base image is given rounded corners with a 10 pixel grey border, and an overlay of the sale image, resized to a 100x100 thumbnail, is added to the northeast corner:
You can also set the border color to predominant_contrast. This selects the strongest contrasting color to the predominant color while taking all pixels in the image into account. For example, l_text:Arial_30:foo,bo_3px_solid_predominant_contrast.
See full syntax: bo (border) in the Transformation Reference.
Color blind effects
Cloudinary has a number of features that can help you to choose the best images as well as to transform problematic images to ones that are more accessible to color blind people. You can use Cloudinary to:
- Simulate how an image would look to people with different color blind conditions.
- Assist people with color blind conditions to help differentiate problematic colors.
- Analyze images to provide color blind accessibility scores and information on which colors are the hardest to differentiate.
Simulate color blind conditions
You can simulate a number of different color blind conditions using the simulate_colorblind effect. For full syntax and supported conditions, see the e_simulate_colorblind parameter in the Transformation URL API Reference.
Simulate the way an image would appear to someone with deuteranopia (the most common form of color blindness):
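An illustrative URL (deuteranopia is also the default condition, so e_simulate_colorblind alone behaves the same):

```
https://res.cloudinary.com/demo/image/upload/e_simulate_colorblind:deuteranopia/sample.jpg
```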
See full syntax: e_simulate_colorblind in the Transformation Reference.
Assist people with color blind conditions
Use the assist_colorblind effect (e_assist_colorblind in URLs) to help people with color blind conditions to differentiate between colors.
You can add stripes in different directions and thicknesses to different colors, making them easier to differentiate, for example:
A color blind person would see the stripes like this:
Alternatively, you can use color shifts to make colors easier to distinguish by specifying the xray assist type, for example:
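For example (illustrative URL):

```
https://res.cloudinary.com/demo/image/upload/e_assist_colorblind:xray/sample.jpg
```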
See full syntax: e_assist_colorblind in the Transformation Reference.
Displacement maps
You can displace pixels in a source image based on the intensity of pixels in a displacement map image using the e_displace
effect in conjunction with a displacement map image specified as an overlay. This can be useful to create interesting effects in a select area of an image or to warp the entire image to fit a needed design or texture. For example, to make an image wrap around a coffee cup or appear to be printed on a textured canvas.
The displace effect (e_displace in URLs) algorithm displaces the pixels in an image according to the color channels of the pixels in another specified image (a gradient map specified with the overlay parameter). The displace effect is added in the same component as the layer_apply flag. The red channel controls horizontal displacement, green controls vertical displacement, and the blue channel is ignored.
The final displacement of each pixel in the base image is determined by a combination of the red and green color channels, together with the configured x and/or y parameters:
x | Red Channel | Pixel Displacement |
---|---|---|
Positive | 0 - 127 | Right |
Positive | 128 - 255 | Left |
Negative | 0 - 127 | Left |
Negative | 128 - 255 | Right |

y | Green Channel | Pixel Displacement |
---|---|---|
Positive | 0 - 127 | Down |
Positive | 128 - 255 | Up |
Negative | 0 - 127 | Up |
Negative | 128 - 255 | Down |
The displacement of pixels is proportional to the channel values, with the extreme values giving the most displacement, and values closer to 128 giving the least displacement.
The displacement formulae are:
x displacement = (127-red channel)*(x parameter)/127
y displacement = (127-green channel)*(y parameter)/127
Positive displacement is right and down, and negative displacement is up and left.
For example, specifying an x value of 500, at red channel values of 0 and 255, the base image pixels are displaced by 500 pixels horizontally, whereas at 114 and 141 (127 - 10% and 128 + 10%) the base image pixels are displaced by 50 pixels horizontally.
x | Red Channel | Pixel Displacement |
---|---|---|
500 | 0 | 500 pixels right |
500 | 114 | 50 pixels right |
500 | 141 | 50 pixels left |
500 | 255 | 500 pixels left |
The x and y parameters must be between -999 and 999.

This is a standard displacement map algorithm used by popular image editing tools, so you can upload existing displacement maps found on the internet or created by your graphic artists to your product environment and specify them as the overlay asset, enabling you to dynamically apply the displacement effect on other images in your product environment or those uploaded by your end users.
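A minimal sketch of the URL structure, assuming a displacement map already uploaded with the placeholder public ID radial_map; the x and y values are illustrative:

```
https://res.cloudinary.com/demo/image/upload/l_radial_map/e_displace,fl_layer_apply,x_30,y_30/sample.jpg
```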
Several sample use cases of this layer-based effect are shown in the sections below.
See full syntax: e_displace in the Transformation Reference.
Use case: Warp an image to fit a 3-dimensional product
Use a displacement map to warp the perspective of an overlay image for final placement as an overlay on a mug:
Using this overlay transformation for placement on a mug:
Use case: Create a zoom effect
To displace the sample image by using a displacement map, creating a zoom effect:
You could take this a step further by applying this displacement along with another overlay component that adds a magnifying glass. In this example, the same displacement map as above is used on a different base image and offset to a different location.
Use case: Apply a texture to your image
For more details on displacement mapping with the displace effect, see the article on Displacement Maps for Easy Image Transformations with Cloudinary. The article includes a variety of examples, as well as an interactive demo.
Distort
Using the distort effect, you can change the shape of an image, distorting its dimensions and the image itself. It works in one of two modes: you can either change the positioning of each of the corners, or you can warp the image into an arc.
To change the positioning of each of the corners, it is helpful to have in mind a picture like the one below. The solid rectangle shows the coordinates of the corners of the original image. The intended result of the distortion is represented by the dashed shape. The new corner coordinates are specified in the distort effect as x,y pairs, clockwise from top-left. For example:
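As an illustration, a URL of this kind moves the four corners to new x,y positions, specified clockwise from top-left (the coordinate values here are arbitrary placeholders):

```
https://res.cloudinary.com/demo/image/upload/e_distort:40:25:280:60:260:230:20:260/sample.jpg
```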
For more details on perspective warping with the distort effect, see the article on How to dynamically distort images to fit your graphic design.
To curve an image, you can specify arc and the number of degrees in the distort effect, instead of the corner coordinates. If you specify a positive value for the number of degrees, the image is curved upwards, like a frown. Negative values curve the image downwards, like a smile.
You can distort text in the same way as images, for example, to add curved text to the frisbee image (e_distort:arc:-120):
See full syntax: e_distort in the Transformation Reference.
Text distortion demo
The CLOUDINARY text in the following demo was created using the text method of the Upload API. Try distorting it by entering different values for the corner coordinates.
Generative AI effects
Cloudinary has a number of transformations that make use of generative AI:
- Generative background replace: Generate an alternative background for your images
- Generative fill: Naturally extend your images to fit new dimensions
- Generative recolor: Recolor aspects of your images
- Generative remove: Seamlessly remove parts of your images
- Generative replace: Replace items in your images
- Generative restore: Revitalize degraded images
You can use natural language in most of these transformations as prompts to guide the generation process.
Generative background replace (Beta)
Use AI to generate an alternative background for your images. The new background takes into account the foreground elements, positioning them naturally within the scene.
For images with transparency, the generated background replaces the transparent area. For images without transparency, the effect first determines the foreground elements and leaves those areas intact, while replacing the background.
You can use generative background replace without a prompt, and let the AI decide what to show in the background, based on the foreground elements. For example, replace the background of this image (e_gen_background_replace):
Alternatively, you can use a natural language prompt to guide the AI and describe what you want to see in the background. For example, place the model in front of an old castle (e_gen_background_replace:prompt_an%20old%20castle):
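Expressed as a full delivery URL (the public ID model is a placeholder):

```
https://res.cloudinary.com/demo/image/upload/e_gen_background_replace:prompt_an%20old%20castle/model.jpg
```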
You can regenerate the background with the same prompt (or no prompt) by setting the seed parameter. A different result is generated for each value you set. For example, regenerate the background for the old castle example (e_gen_background_replace:prompt_an%20old%20castle;seed_1):
If you want to reproduce a background, use the same seed value, and make sure to keep any preceding transformation parameters the same. Subsequent parameters can be different, for example, scale down the same image:
In this next example, the transparent background of the original image is replaced to give context to the motorbike (e_gen_background_replace:prompt_a%20deserted%20street):
- The use of generative AI means that results may not be 100% accurate.
- There is a special transformation count for the generative background replace effect.
- If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
- The generative background replace effect is not supported for animated images, fetched images or incoming transformations.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
See full syntax: e_gen_background_replace in the Transformation Reference.
Try it out: Generative background replace in the Transformation Center.
Generative fill
When resizing images using one of the padding crop modes (pad, lpad, mpad or fill_pad), rather than specifying a background color or using content-aware padding, you can seamlessly extend the existing image into the padded area.
Using generative AI, you can automatically add visually realistic pixels to either or both dimensions of the image. Optionally specify a prompt to guide the result of the generation.
To extend the width of an image, specify the aspect ratio such that the width needs padding. For example, change the following portrait image to be landscape by specifying an aspect ratio of 16:9 with a padding crop, then fill in the extended width using the gen_fill background option (b_gen_fill in URLs):
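A URL along these lines would do it (the public ID is a placeholder and the width value is illustrative):

```
https://res.cloudinary.com/demo/image/upload/ar_16:9,b_gen_fill,c_pad,w_1500/portrait_photo.jpg
```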
Similarly, you can change a landscape image into portrait dimensions by specifying the aspect ratio such that the height needs padding:
To extend both the width and the height of an image, you can use the minimum pad mode, ensuring that the dimensions you specify are greater than the original image dimensions. For example, extend this 640 x 480 pixel image to fill a 1100 x 1100 pixel square:
When using padding modes, you can use the gravity parameter to position the original image within the padding. For example, if in the first example you only want to extend the image to the left, you can position the original image to the right by setting gravity to east:
If you want to see something specific in the generated parts of the image, you can specify a prompt using natural language. For example, add a mug of coffee and cookies to the extended regions (b_gen_fill:prompt_mug%20of%20coffee%20and%20cookies):
You can regenerate the filled background with the same prompt (or no prompt) by setting the seed parameter. A different result is generated for each value you set. For example, regenerate the background for the coffee and cookies example (b_gen_fill:prompt_mug%20of%20coffee%20and%20cookies;seed_1,c_pad,h_400,w_1500):
To reproduce a filled background, use the same seed value, and make sure to keep any preceding transformation parameters the same. Subsequent parameters can be different, for example, scale down the same image:
If you want to ensure that the background is extended in a natural fashion, without taking elements of the foreground into account, you can set the ignore-foreground parameter to true. This is in fact the default behavior, unless a foreground object touches the edge of the image. In the following example, the bike wheel touches the edge of the image, so the foreground would not normally be ignored; however, that can result in parts of the bike being generated in the extended area. In this case, it is better to force the foreground to be ignored:
- Generative fill can only be used on non-transparent images.
- There is a special transformation count for generative fill.
- Generative fill is not supported for animated images, fetched images or incoming transformations.
- If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
See full syntax: b_gen_fill in the Transformation Reference.
Try it out: Generative fill in the Transformation Center.
Generative recolor
Recolor elements of your images using generative AI.
Use natural language to describe what you want to recolor in the image. For example, turn the jacket on the right pink (e_gen_recolor:prompt_the%20jacket%20on%20the%20right;to-color_pink):
To recolor all instances of the prompt in the image, specify multiple_true. For example, recolor all the devices in the following image to a particular orange color, with hex code EA672A:
If there are a number of different things that you want to recolor, you can specify more than one prompt. Note that when you specify more than one prompt, multiple instances of each prompt are recolored, regardless of the multiple parameter setting. For example, in this image, all devices and both people's hair are recolored:
- The generative recolor effect can only be used on non-transparent images.
- The use of generative AI means that results may not be 100% accurate.
- The generative recolor effect works best on simple objects that are clearly visible.
- Very small objects and very large objects may not be detected.
- During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
- When you specify more than one prompt, all the objects specified in each of the prompts will be recolored whether or not multiple_true is specified in the URL.
- There is a special transformation count for the generative recolor effect.
- The generative recolor effect is not supported for animated images, fetched images or incoming transformations.
- User-defined variables cannot be used for the prompt when more than one prompt is specified.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
See full syntax: e_gen_recolor in the Transformation Reference.
Try it out: Generative recolor in the Transformation Center.
Generative remove
This effect uses generative AI to remove an object from an image and fill in the space with artificially generated, visually realistic pixels.
Use natural language to describe what you want to remove from the image, for example, remove the stick from this image of a dog with a stick in its mouth (e_gen_remove:prompt_the%20stick):
The natural language lets you be specific about what you want to remove. In the following example, specifying only 'the child' removes the child in the middle, whereas specifying 'the child in green' removes the child wearing the green jacket:
Remove multiple items
If there is more than one of the same item in an image, you can remove them all by setting multiple to true. For example, remove all the geese in this image (e_gen_remove:prompt_goose;multiple_true):
Otherwise, only one is removed:
If there are a number of different things that you want to remove, you can specify more than one prompt. Note that when you specify more than one prompt, multiple instances of each prompt are removed regardless of the multiple parameter setting. For example, in this image, all phones are removed, together with the mouse and keyboard:
Remove items from a region
You can also specify one or more regions if you know the coordinates of the pixels that you want to remove. For each region, specify the x,y coordinates of the top left of the region, plus its width and height in pixels. For example, remove the objects from the top left and bottom right of the image:
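As a sketch, a single region is passed as a group of x, y, w and h values (the coordinates here are placeholders); the two-region example above would pass one such group per region:

```
https://res.cloudinary.com/demo/image/upload/e_gen_remove:region_(x_30;y_30;w_200;h_200)/sample.jpg
```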
Remove shadows and reflections
By default, shadows and reflections cast by objects specified in the prompt are not removed. If you want to remove the shadow or reflection, set the remove-shadow parameter to true:
Remove text
You can remove all the text from an image by setting the prompt to text, e.g., e_gen_remove:prompt_text, or e_gen_remove:prompt_(dog;text).

For example, remove the text and person from this store front (e_gen_remove:prompt_(text;person)):
If you don't want to remove all the text in the image, specify the object you want to remove the text from by using the syntax text:<object> as the prompt (either as the only prompt, or together with other prompts as in the previous example).
For example, in the following image there is text in the main part of the image in addition to text on the mobile screen. You can remove the text on the mobile screen only, as follows (e_gen_remove:prompt_text:the%20mobile%20screen):
- The generative remove effect can only be used on non-transparent images.
- The use of generative AI means that results may not be 100% accurate.
- The generative remove effect works best on simple objects that are clearly visible.
- Very small objects and very large objects may not be detected.
- Do not attempt to remove faces or hands.
- During processing, large images are downscaled to a maximum of 6140 x 6140 pixels, then upscaled back to their original size, which may affect quality.
- When you specify more than one prompt, all the objects specified in each of the prompts will be removed whether or not multiple_true is specified in the URL.
- There is a special transformation count for the generative remove effect.
- The generative remove effect is not supported for animated images, fetched images or incoming transformations.
- User-defined variables cannot be used for the prompt when more than one prompt is specified.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
See full syntax: e_gen_remove in the Transformation Reference.
Try it out: Generative remove in the Transformation Center.
Generative replace
This effect uses generative AI to replace objects in images with other objects.
Use natural language to describe what you want to replace in the image, and what to replace it with.
For example, replace "the picture" with "a mirror with a silver frame" (e_gen_replace:from_the%20picture;to_a%20mirror%20with%20a%20silver%20frame
):
If you want to maintain the shape of the object you're replacing, set the preserve-geometry parameter to true. For example, below, notice the difference between the position of the sleeves and neckline of the sweater, with and without preserving the geometry when the shirt is replaced with a cable knit sweater:
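For the sweater example, the transformation might look like this (the public ID is a placeholder and the prompts mirror the description above):

```
https://res.cloudinary.com/demo/image/upload/e_gen_replace:from_the%20shirt;to_a%20cable%20knit%20sweater;preserve-geometry_true/model.jpg
```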
- The generative replace effect can only be used on non-transparent images.
- The use of generative AI means that results may not be 100% accurate.
- The generative replace effect works best on simple objects that are clearly visible.
- Very small objects and very large objects may not be detected.
- Do not attempt to replace faces, hands or text.
- During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
- There is a special transformation count for the generative replace effect.
- The generative replace effect is not supported for animated images, fetched images or incoming transformations.
- Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.
See full syntax: e_gen_replace in the Transformation Reference.
Try it out: Generative replace in the Transformation Center.
Generative restore
Revitalize degraded and poor quality images using generative AI.
You can use the gen_restore effect (e_gen_restore in URLs) to improve images that have become degraded through repeated processing and compression, in addition to enhancing old images by improving sharpness and reducing noise.
Particularly useful for user generated content (UGC), generative restore can:
- Remove severe compression artifacts
- Reduce noise from grainy images
- Sharpen blurred images
Use the slider in this example to see the difference between the original image on the left and the restored image on the right:
You can use the generative restore effect together with the improve effect for even better results. While generative restore tries to rectify compression artifacts, the improve effect addresses color, contrast and brightness.
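Since each effect is applied as a separate chained transformation component, combining the two might look like this (the public ID is a placeholder):

```
https://res.cloudinary.com/demo/image/upload/e_gen_restore/e_improve/old_photo.jpg
```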