Programmable Media

Transformation URL API reference

Last updated: Nov-13-2024

The Transformation URL API enables you to deliver media assets and apply a large variety of on-the-fly transformations through the use of URL parameters. This reference provides comprehensive coverage of all available URL transformation parameters, including syntax, value details, and examples.

Overview

The default Cloudinary asset delivery URL has the following structure:

https://res.cloudinary.com/<cloud_name>/<asset_type>/<delivery_type>/<transformations>/<version>/<public_id_full_path>.<extension>

This reference covers the parameters and corresponding options and values that can be used in the <transformations> element of the URL. It also covers the <extension> element.

For information on other elements of the URL, see Transformation URL syntax.
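As a rough sketch of how the URL elements above fit together, the following assembles a delivery URL from its parts. This is illustrative only, not an SDK: `demo` and `sample` are placeholder cloud name and public ID values, and the optional `<version>` element is omitted.

```python
# Illustrative sketch: assembling a Cloudinary delivery URL by hand.
# "demo" and "sample" are placeholder values; the <version> element is omitted.
def build_url(cloud_name, public_id, transformations="", asset_type="image",
              delivery_type="upload", extension=""):
    parts = ["https://res.cloudinary.com", cloud_name, asset_type, delivery_type]
    if transformations:
        parts.append(transformations)
    parts.append(public_id + (("." + extension) if extension else ""))
    return "/".join(parts)

url = build_url("demo", "sample", "c_pad,h_300,w_300", extension="jpg")
# https://res.cloudinary.com/demo/image/upload/c_pad,h_300,w_300/sample.jpg
```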

Important

The transformation names and syntax shown in this reference refer to the URL API.

Depending on the Cloudinary SDK you use, the names and syntax for the same transformation may be different. Therefore, all of the transformation examples in this reference also include the code for generating the example delivery URL from your chosen SDK.

The SDKs additionally provide a variety of helper methods to simplify the building of the transformation URL as well as other built-in capabilities. You can find more information about these in the relevant SDK guides.

Parameter types

There are two types of transformation parameters:

  • Action parameters: Parameters that perform a specific transformation on the asset.
  • Qualifier parameters: Parameters that do not perform an action on their own, but rather alter the default behavior or otherwise adjust the outcome of the corresponding action parameter.

See the Transformation Guide for additional guidelines and best practices regarding parameter types.

Tip
Visit the Transformation Center in your Cloudinary Console to explore and experiment with transformations across a variety of images and videos.

.<extension>

 

Although not a transformation parameter belonging to the <transformations> element of the URL, the extension of the URL can transform the format of the delivered asset, in the same way as f_<supported format>.

If f_<supported format> or f_auto is not specified in the URL, the format is determined by the extension. If no format parameter or extension is specified, the asset is delivered in its originally uploaded format.

  • If using an SDK to generate your URL, you can control the extension using the format parameter, or by adding the extension to the public ID.
  • If using a raw transformation, for example to define an eager or named transformation, you can specify the extension at the end of the transformation parameters, following a forward slash. For example, c_pad,h_300,w_300/jpg means that the delivery URL has transformation parameters of c_pad,h_300,w_300 and a .jpg extension. c_pad,h_300,w_300/ represents the same transformation parameters, but with no extension.

Note
As the extension is considered to be part of the transformation, be careful when defining eager transformations and transformations that are allowed when strict transformations are enabled, as the delivery URL must exactly match the transformation, including the extension.
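To illustrate the equivalence described above, the same PNG conversion can be requested either via the extension or via the f_ parameter. These URLs are illustrative, with `demo` and `sample` as placeholder values:

```python
# Illustrative: the same format conversion expressed two ways.
# Both request PNG delivery; "demo" and "sample" are placeholder values.
base = "https://res.cloudinary.com/demo/image/upload"
via_extension = f"{base}/sample.png"    # format taken from the .png extension
via_parameter = f"{base}/f_png/sample"  # format taken from f_png, no extension
```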

a (angle)

 

Rotates or flips an asset by the specified number of degrees or automatically according to its orientation or available metadata. Multiple modes can be applied by concatenating their values with a dot.

Learn more: Rotating images | Rotating videos
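As a small sketch of the dot-concatenation rule described above (placeholder `demo`/`sample` values), a horizontal flip combined with a 45-degree rotation looks like this:

```python
# Illustrative: combining rotation modes by concatenating values with a dot.
# a_hflip.45 mirrors the asset horizontally and rotates it 45 degrees.
transformation = "a_" + ".".join(["hflip", "45"])
url = f"https://res.cloudinary.com/demo/image/upload/{transformation}/sample.jpg"
```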

<degrees>

 a_<degrees>

Rotates an asset by the specified angle.

See also: Arithmetic expressions

<mode>

 a_<mode>

Rotates an image or video based on the specified mode.

Use with: To apply one of the a_auto modes, use it as a qualifier with a cropping action that adjusts the aspect ratio, as per the syntax details and example below.

ac (audio codec)

 ac_<codec value>

Sets the audio codec.

af (audio frequency)

 af_<frequency value>

Controls the audio sampling frequency.

As a qualifier, can be used to preserve the original frequency, overriding the default frequency behavior of vc_auto.

As a qualifier, use with: vc_auto

Learn more: Audio frequency control

ar (aspect ratio)

 ar_<ratio value>

A qualifier that crops or resizes the asset to a new aspect ratio, for use with a crop/resize mode that determines how the asset is adjusted to the new dimensions.

Use with: c (crop/resize)

Learn more: Setting the resize dimensions

See also: h (height) | w (width) | Arithmetic expressions

b (background)

 

Applies a background to empty or transparent areas.

<color value>

 b_<color value>

Applies the specified background color on transparent background areas in an image.

Can also be used as a qualifier to override the default background color for padded cropping of images and videos, text overlays and generated waveform images.

As a qualifier, use with: c_auto_pad - image only | c_fill_pad - image only | c_lpad | c_mpad - image only | c_pad | l_subtitles | l_text | fl_waveform

Learn more: Background color for images | Background color for videos

auto

 b_auto[:<mode>][:<number>][:<direction>][:palette_<color 1>[_..._<color n>]]

A qualifier that automatically selects the background color based on one or more predominant colors in the image, for use with one of the padding crop mode transformations.

Learn more: Content-aware padding

Use with: c_auto_pad - image only | c_pad | c_lpad | c_mpad | c_fill_pad - image only

blurred

 b_blurred[:<intensity>][:<brightness>]

A qualifier that generates a blurred version of the same video to use as the background with the corresponding padded cropping transformation.

Use with: c_pad | c_lpad

Learn more: Pad with blurred video background

gen_fill

 b_gen_fill[:prompt_<prompt>][;ignore-foreground_<ignore foreground>][;seed_<seed>]

A qualifier that automatically fills the padded area using generative AI to extend the image seamlessly. Optionally include a prompt to guide the image generation.

Using different seeds, you can regenerate the image if you're not happy with the result. You can also use seeds to return a previously generated result, as long as any other preceding transformation parameters are the same.

Notes and limitations:
  • Generative fill can only be used on non-transparent images.
  • There is a special transformation count for generative fill.
  • Generative fill is not supported for animated images, fetched images or incoming transformations.
  • If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

Learn more: Generative fill

Use with: c_auto_pad | c_pad | c_lpad | c_mpad | c_fill_pad
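As a minimal sketch of b_gen_fill used as a qualifier (placeholder `demo`/`sample` values), padding a square image out to 16:9 while generative fill paints the padded areas looks like this:

```python
# Illustrative: pad to a 16:9 aspect ratio and fill the padded areas
# generatively. "demo" and "sample" are placeholder values.
url = ("https://res.cloudinary.com/demo/image/upload/"
       "ar_16:9,b_gen_fill,c_pad/sample.jpg")
```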

bl (baseline)

 bl_<named transformation>

Establishes a baseline transformation from a named transformation. The baseline transformation is cached, so when re-used with other transformation parameters, the baseline part of the transformation does not have to be regenerated, saving processing time and cost.

Notes
  • You can combine the baseline transformation with other transformation parameters, but it must be the first component in the chain and the only transformation parameter in that component.
  • You must specify a supported format transformation (f_) in the named transformation.
  • Consider using f_jxl/q_100 in the baseline transformation to prevent images suffering from loss due to double lossy encoding.
  • You cannot use automatic format (f_auto) in the named transformation, although this can be used in a subsequent component.
  • If the named transformation contains variables, the variables must be defined within the named transformation.
  • The baseline transformation is not supported for fetched media or incoming transformations.

bo (border)

 bo_<width>_<style>_<color>

Adds a solid border around an image or video.

As a qualifier, adds a border to an overlay.

Use with: l_<image id> | l_fetch | l_subtitles | l_text | l_video

Learn more: Adding borders

br (bitrate)

 br_<bitrate value>[:constant]

Controls the bitrate for audio or video files in bits per second. By default, a variable bitrate (VBR) is used, with this value indicating the maximum bitrate.

Supported for video codecs: h264, h265 (MPEG-4); vp8, vp9 (WebM)
Supported for audio codecs: aac, mp3, vorbis

Learn more: Bitrate control

c (crop/resize)

 

Changes the size of the delivered asset according to the requested width & height dimensions.

Depending on the selected <crop mode>, parts of the original asset may be cropped out and/or the asset may be resized (scaled up or down).

When using any of the modes that can potentially crop parts of the asset, the selected gravity parameter controls which part of the original asset is kept in the resulting delivered file.

Learn more: Resizing and cropping images | Resizing and cropping videos

auto

 c_auto

Automatically determines the best crop based on the gravity and specified dimensions.

If the requested dimensions are smaller than the best crop, the result is downscaled. If the requested dimensions are larger than the original image, the result is upscaled. Use this mode in conjunction with the g (gravity) parameter.

Required qualifiers

g (gravity)

    And

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

Learn more: Automatic gravity with the automatic cropping mode

auto_pad

 c_auto_pad

Tries to prevent a "bad crop" by first attempting to use the auto cropping mode, but adding some padding if the algorithm determines that more of the original image needs to be included in the final image. Especially useful if the aspect ratio of the delivered asset is considerably different from the original's aspect ratio. Supported only in conjunction with g_auto.

Note
Not supported for animated images.

Required qualifiers

g_auto

    And

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

Optional qualifiers

b (background)

crop

 c_crop

Extracts the specified size from the original image without distorting or scaling the delivered asset.

By default, the center of the image is kept (extracted) and the top/bottom and/or side edges are evenly cropped to achieve the requested dimensions. You can specify the gravity qualifier to control which part of the image to keep, either as a compass direction (such as south or north_east), one of the special gravity positions (such as faces or ocr_text), AI-based automatic region detection or AI-based object detection.

You can also specify a specific region of the original image to keep by using the x and y qualifiers together with the w (width) and h (height) qualifiers to define an exact bounding box. When using this method and no gravity is specified, the x and y coordinates are relative to the top-left (north-west) corner of the original asset. You can also use percentage-based numbers instead of exact coordinates for x, y, w and h (e.g., 0.5 for 50%). Use this method only when you already have the required absolute cropping coordinates. For example, if your application lets users upload content and manually select a region to crop, you can pass those coordinates to build the crop URL.

Required qualifiers

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

Optional qualifiers

g (gravity) | x (x-coordinate) | y (y-coordinate)

fill

 c_fill

Creates an asset with the exact specified width and height without distorting the asset. This option first scales the asset as much as needed to at least fill both of the specified dimensions. If the requested aspect ratio is different from the original, cropping occurs on the dimension that exceeds the requested size after scaling. You can use the gravity parameter (set to center by default) to specify which part of the original asset to keep if cropping occurs.

Required qualifiers

At least one of the following: w (width) | h (height) | ar (aspect ratio)

Optional qualifiers

g (gravity)
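A fill crop with its gravity qualifier can be sketched as a single comma-joined transformation component. The values below (`g_faces`, 300x300) are illustrative; parameter order within a component does not matter:

```python
# Illustrative: a 300x300 fill crop that keeps detected faces in frame.
# Qualifiers join the action parameter with commas inside one component.
params = {"c": "fill", "w": 300, "h": 300, "g": "faces"}
component = ",".join(f"{k}_{v}" for k, v in sorted(params.items()))
url = f"https://res.cloudinary.com/demo/image/upload/{component}/sample.jpg"
```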

fill_pad

 c_fill_pad

Tries to prevent a "bad crop" by first attempting to use the fill mode, but adding some padding if the algorithm determines that more of the original image needs to be included in the final image, or if more content in specific frames in a video should be shown. Especially useful if the aspect ratio of the delivered asset is considerably different from the original's aspect ratio. Supported only in conjunction with g_auto.

Note
Not supported for animated images.

Required qualifiers

g_auto

    And

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

Optional qualifiers

b (background) - image only

fit

 c_fit

Scales the asset up or down so that it takes up as much space as possible within a bounding box defined by the specified dimension parameters without cropping any of it. The original aspect ratio is retained and all of the original image is visible.

Required qualifiers

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

imagga_crop

 c_imagga_crop

Requires the Imagga Crop and Scale add-on.
The Imagga Crop and Scale add-on can be used to smartly crop your images based on areas of interest within each specific photo as automatically calculated by the Imagga algorithm.

Required qualifiers

At least one of the following: w (width) | h (height)

Optional qualifiers

ar (aspect_ratio)

imagga_scale

 c_imagga_scale

Requires the Imagga Crop and Scale add-on.
The Imagga Crop and Scale add-on can be used to smartly scale your images based on automatically calculated areas of interest within each specific photo.

Required qualifiers

At least one of the following: w (width) | h (height) | ar (aspect ratio)

lfill

 c_lfill

The lfill (limit fill) mode is the same as fill but only if the original image is larger than the specified resolution limits, in which case the image is scaled down to fill the specified width and height without distorting the image, and then the dimension that exceeds the request is cropped. If the original dimensions are smaller than the requested size, it is not resized at all. This prevents upscaling. You can specify which part of the original image you want to keep if cropping occurs using the gravity parameter (set to center by default).

Required qualifiers

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

Optional qualifiers

g (gravity)

limit

 c_limit

Same as the fit mode but only if the original asset is larger than the specified limit (width and height), in which case the asset is scaled down so that it takes up as much space as possible within a bounding box defined by the specified width and height parameters. The original aspect ratio is retained (by default) and all of the original asset is visible. This mode doesn't scale up the asset if your requested dimensions are larger than the original image size.

Required qualifiers

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

lpad

 c_lpad

The lpad (limit pad) mode is the same as pad but only if the original asset is larger than the specified limit (width and height), in which case the asset is scaled down to fill the specified width and height while retaining the original aspect ratio (by default) and with all of the original asset visible. This mode doesn't scale up the asset if your requested dimensions are bigger than the original asset size. Instead, if the proportions of the original asset do not match the requested width and height, padding is added to the asset to reach the required size. You can also specify where the original asset is placed by using the gravity parameter (set to center by default). Additionally, you can specify the color of the background in the case that padding is added.

Required qualifiers

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

Optional qualifiers

g_<gravity position> | b (background)

mfit

 c_mfit

The mfit (minimum fit) mode is the same as fit but only if the original image is smaller than the specified minimum (width and height), in which case the image is scaled up so that it takes up as much space as possible within a bounding box defined by the specified width and height parameters. The original aspect ratio is retained (by default) and all of the original image is visible. This mode doesn't scale down the image if your requested dimensions are smaller than the original image's.

Required qualifiers

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

mpad

 c_mpad

The mpad (minimum pad) mode is the same as pad but only if the original image is smaller than the specified minimum (width and height), in which case the image is unchanged but padding is added to fill the specified dimensions. This mode doesn't scale down the image if your requested dimensions are smaller than the original image's. You can also specify where the original image is placed by using the gravity parameter (set to center by default). Additionally, you can specify the color of the background in the case that padding is added.

Required qualifiers

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

Optional qualifiers

g_<gravity position> | b (background)

pad

 c_pad

Resizes the asset to fill the specified width and height while retaining the original aspect ratio (by default) and with all of the original asset visible. If the proportions of the original asset do not match the specified width and height, padding is added to the asset to reach the required size. You can also specify where the original asset is placed using the gravity parameter (set to center by default). Additionally, you can specify the color of the background in the case that padding is added.

Required qualifiers

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

Optional qualifiers

g_<gravity position> | b (background)

scale

 c_scale

Resizes the asset exactly to the specified width and height. All original asset parts are visible, but might be stretched or shrunk if the dimensions you request have a different aspect ratio than the original.

If only width or only height is specified, then the asset is scaled to the new dimension while retaining the original aspect ratio (unless you also include the fl_ignore_aspect_ratio flag).

Required qualifiers

At least one of the following: w (width) | h (height) | ar (aspect ratio)

Optional qualifiers

fl_ignore_aspect_ratio | g_liquid

See also: Liquid rescaling

thumb

 c_thumb

Creates image thumbnails based on a gravity position. Must always be accompanied by the g (gravity) parameter. This cropping mode generates a thumbnail of an image with the exact specified width and height dimensions and with the original proportions retained, but the resulting image might be scaled to fit in the specified dimensions. You can specify the z (zoom) parameter to determine how much to scale the resulting image within the specified width and height.

Required qualifiers

g (gravity)

    And

Two of the following: w (width) | h (height) | ar (aspect ratio)

(In rare cases, you may choose to provide only one sizing qualifier)

Optional qualifiers

z (zoom)

co (color)

 co_<color value>

A qualifier that specifies the color to use with the corresponding transformation.

Use with: e_colorize | e_outline | e_make_transparent | e_shadow | l_text | l_subtitles | fl_waveform

cs (color space)

 cs_<color space mode>

Controls the color space used for the delivered image or video. If you don't include this parameter in your transformation, the color space of the original asset is generally retained. In some cases for videos, the color space is normalized for web delivery, unless cs_copy is specified.

d (default image)

 d_<image asset>

Specifies a backup placeholder image to be delivered in the case that the actual requested delivery image or social media picture does not exist. Any requested transformations are applied on the placeholder image as well.

Learn more: Using a default image placeholder

dl (delay)

 dl_<time value>

Controls the time delay between the frames of a delivered animated image. (The source asset can be an image or a video.)

Related flag: fl_animated

dn (density)

 dn_<dots per inch>

Controls the density to use when delivering an image or when converting a vector file such as a PDF or EPS document to a web image delivery format.

  • For web image formats: By default, if an image does not contain resolution information in its embedded metadata, Cloudinary normalizes any derived images for web optimization purposes and delivers them at 150 dpi. Controlling the dpi can be useful when generating a derived image intended for printing.

    Tip
    You can take advantage of the idn (initial density) value to automatically set the density of your image to the (pre-normalized) initial density of the original image (for example, dn_idn). This value is taken from the original image's metadata.
  • For vector files (PDF, EPS, etc.): When you deliver a vector file in a web image format, it is delivered by default at 150 dpi.

See also: Arithmetic expressions

Learn more: Deliver a PDF page as an image

dpr (DPR)

 

Sets the device pixel ratio (DPR) for the delivered image or video using a specified value or automatically based on the requesting device.

<pixel ratio>

 dpr_<pixel ratio>

Delivers the image or video in the specified device pixel ratio.

Note
When setting a DPR value, you must also include a crop/resize transformation specifying a certain width or height.

Important
When delivering at a DPR value larger than 1, ensure that you also set the desired final display dimensions in your image or video tag. For example, if you set c_scale,h_300/dpr_2.0 in your delivery URL, you should also set height=300 in your image tag. Otherwise, the image will be delivered at 2.0 x the requested dimensions (a height of 600px in this example).

Learn more: Set Device Pixel Ratio (DPR)

See also: Arithmetic expressions
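The arithmetic behind the Important note above can be sketched as follows: the delivered pixel dimensions are the requested dimensions multiplied by the DPR, while the image tag should keep the intended display size (`demo`/`sample` are placeholder values):

```python
# Illustrative: with dpr_2.0 the delivered image has double the requested
# pixel dimensions, so the tag keeps the intended 300px display height.
requested_height, dpr = 300, 2.0
delivered_height = int(requested_height * dpr)  # 600 physical pixels
url = (f"https://res.cloudinary.com/demo/image/upload/"
       f"c_scale,h_{requested_height}/dpr_{dpr}/sample.jpg")
img_tag = f'<img src="{url}" height="{requested_height}">'
```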

auto

 dpr_auto

Delivers the image in a resolution that automatically matches the DPR (Device Pixel Ratio) setting of the requesting device, rounded up to the nearest integer. This only works in certain browsers and when Client Hints are enabled.

Learn more: Automatic DPR

du (duration)

 du_<time value>

Sets the duration (in seconds) of a video or audio clip.

  • Can be used independently to trim a video or audio clip to the specified length. This parameter is often used in conjunction with the so (start offset) and/or eo (end offset) parameters.
  • Can be used as a qualifier to control the length of time for a corresponding transformation.

As a qualifier, use with: e_boomerang | l_audio | l_<image id> | l_video
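Combining du with a start offset can be sketched as follows (placeholder `demo`/`dog.mp4` values): trimming a video to a 5-second clip that begins at second 2.

```python
# Illustrative: trim a video to 5 seconds starting at second 2,
# using so_ (start offset) together with du_ (duration).
so, du = 2, 5
url = f"https://res.cloudinary.com/demo/video/upload/so_{so},du_{du}/dog.mp4"
```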

e (effect)

 

Applies the specified effect to an asset.

Note

If you specify more than one effect in a transformation component (separated by commas), only the last effect in that component is applied.

To combine effects, use separate components (separated by forward slashes) following best practice guidelines, which recommend including only one action parameter per component.
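The note above can be sketched concretely (placeholder `demo`/`sample` values): joining two effects with a comma would apply only the second, whereas slash-joined components apply both.

```python
# Illustrative: chaining two effects. "e_grayscale,e_blur:300" in one
# component would apply only the blur; separate components apply both.
components = ["e_grayscale", "e_blur:300"]
chained = "/".join(components)
url = f"https://res.cloudinary.com/demo/image/upload/{chained}/sample.jpg"
```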

accelerate

 e_accelerate[:<acceleration percentage>]

Speeds up the video playback by the specified percentage.

adv_redeye

 e_adv_redeye

Requires the Advanced Facial Attribute Detection add-on.
Automatically removes red eyes from an image.

anti_removal

 e_anti_removal[:<distortion level>]

A qualifier that slightly distorts the corresponding image overlay to prevent easy removal.

Use with: l_<image id> | l_fetch | l_text | u (underlay)

Learn more: Smart anti-removal

art

 e_art:<filter>

Applies the selected artistic filter.

Learn more: Artistic filter effects

auto_brightness

 e_auto_brightness[:<blend percentage>]

Automatically adjusts the image brightness and blends the result with the original image.

auto_color

 e_auto_color[:<blend percentage>]

Automatically adjusts the image color balance and blends the result with the original image.

auto_contrast

 e_auto_contrast[:<blend percentage>]

Automatically adjusts the image contrast and blends the result with the original image.

assist_colorblind

 e_assist_colorblind[:<assist type>]

Applies stripes or color adjustments to help people with common color blindness conditions differentiate between colors that appear similar to them.

Learn more: Blog post

background_removal

 e_background_removal[:fineedges_<enable fine edges>]

Uses the Cloudinary AI Background Removal add-on to make the background of an image transparent.

Notes and tips

Notes:

  • You can combine the background_removal effect with other transformation parameters, but the background_removal effect must be the first component in the chain.
  • With no other action parameters in the same component (as per our best practice guidelines), the background-removed version is saved so that when used for other derived versions of the background-removed asset, the add-on is not called again for that asset.
  • The first time the add-on is called for an asset, a 423 error response is returned until the processing has completed.
  • The add-on imposes a limit of 4,194,304 (2048 x 2048) total pixels on its input images. If an image exceeds this limit, the add-on first scales down the image to fit the limit, and then processes it. The scaling does not affect the aspect ratio of the image, but it does alter its output dimensions.
  • Background removal on the fly cannot currently be used for image overlays. Instead, apply the base image as an underlay.
  • Background removal on the fly is not supported for fetched images.
  • Background removal on the fly is not supported for incoming transformations. If you need this use case, you can remove the background on upload.

Tips:

  • This transformation generally gives better results than the e_bgremoval and e_make_transparent effects.
  • It works well for foreground objects with fine edges and lets you specify certain items that you expect to see as foreground objects.
  • You can also try the Pixelz Remove the Background add-on for professional manual background removal.

Learn more: On-the-fly background removal

bgremoval

 e_bgremoval[:screen][:<color to remove>]

Makes the background of an image transparent (or solid white for JPGs). Use when the background is a uniform color.

blackwhite

 e_blackwhite[:<threshold>]

Converts an image to black and white.

Note
You can also convert an image to grayscale.

blue

 e_blue:<level>

Adjusts an image's blue channel.

blur

 e_blur[:<strength>]

Applies a blurring filter to an asset.

blur_faces

 e_blur_faces[:<strength>]

Blurs all detected faces in an image.

blur_region

 e_blur_region[:<strength>]

Applies a blurring filter to the region of an image specified by x, y, width and height, or an area of text. If no region is specified, the whole image is blurred.

Optional qualifiers

x, y (x & y coordinates) | w (width) | h (height) | g_ocr_text
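A regional blur with its coordinate qualifiers can be sketched like this (illustrative strength and coordinates; placeholder `demo`/`sample` values):

```python
# Illustrative: blur a 200x100 region whose top-left corner is at (50, 80),
# with a blur strength of 800.
url = ("https://res.cloudinary.com/demo/image/upload/"
       "e_blur_region:800,h_100,w_200,x_50,y_80/sample.jpg")
```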

boomerang

 e_boomerang

Causes a video clip to play forwards and then backwards.

Use in conjunction with the trimming parameters (duration, start_offset, or end_offset) and the loop effect to deliver a classic (short, repeating) boomerang clip.

Learn more: Create a boomerang video clip
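Putting the pieces above together, a classic boomerang clip can be sketched as chained components (placeholder `demo`/`dog.mp4` values; 2-second trim and loop count are illustrative):

```python
# Illustrative: trim to 2 seconds, play forward then backward, and loop twice.
url = ("https://res.cloudinary.com/demo/video/upload/"
       "du_2.0/e_boomerang/e_loop:2/dog.mp4")
```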

brightness

 e_brightness:<level>

Adjusts the image or video brightness.

brightness_hsb

 e_brightness_hsb[:<level>]

Adjusts image brightness modulation in HSB to prevent artifacts in some images.

camera

 e_camera[[:up_<vertical position>][;right_<horizontal position>][;zoom_<zoom amount>][;env_<environment>][;exposure_<exposure amount>][;frames_<number of frames>]]

A qualifier that lets you customize a 2D image captured from a 3D model, as if a photo is being taken by a camera.

The camera always points towards the center of the 3D model and can be rotated around it. Specify the position of the camera, the exposure, zoom and lighting to capture your perfect shot.

Use with fl_animated to create a 360 spinning animation.

Use with: f (format)

Learn more: Generating an image from a 3D model

See also: e_light

cartoonify

 e_cartoonify[:<line strength>][:<color reduction>]

Applies a cartoon effect to an image.

colorize

 e_colorize[:<level>]

Colorizes an image. By default, gray is used for colorization. You can specify a different color using the color qualifier.

Optional qualifier

color

contrast

 e_contrast[:level_<level>][;type_<function type>]

Adjusts an image or video contrast.

Note
This transformation also supports non-verbose, ordered syntax.

cut_out

 e_cut_out

Trims pixels according to the transparency levels of a specified overlay image. Wherever an overlay image is transparent, the original is shown, and wherever an overlay is opaque, the resulting image is transparent.

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

Required qualifiers

l (layer)

Learn more: Shape cutouts: remove a shape

deshake

 e_deshake[:<pixels>]

Removes small motion shifts from a video. Useful for non-professional (user-generated content) videos.

displace

 e_displace

Displaces the pixels in an image according to the color channels of the pixels in another specified image (a gradient map specified with the overlay parameter).

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

Required qualifiers

At least one of the following: x, y (x & y coordinates)

Note
Values of x and y must be between -999 and 999.

Learn more: Displacement maps

distort

Distorts an image to a new shape by either adjusting its corners or by warping it into an arc.

 e_distort:<x1>:<y1>:<x2>:<y2>:<x3>:<y3>:<x4>:<y4>

Distorts an image, or text overlay, to a new shape by adjusting its corners to achieve perception warping.

Learn more: Image shape changes and distortion effects

 e_distort:arc:<degrees>

Distorts an image, or text overlay, to an arc shape.

Learn more: Image shape changes and distortion effects
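The corner form of e_distort takes the four corner coordinates in order, colon-separated. A sketch of building such a value from corner pairs (illustrative coordinates):

```python
# Illustrative: warp an image by repositioning its four corners,
# listed clockwise starting from the top-left.
corners = [(5, 5), (295, 25), (280, 280), (20, 260)]
distort = "e_distort:" + ":".join(str(v) for pt in corners for v in pt)
url = f"https://res.cloudinary.com/demo/image/upload/{distort}/sample.jpg"
```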

dropshadow

 e_dropshadow[:azimuth_<azimuth>][;elevation_<elevation>][;spread_<spread>]

Adds a shadow to the object(s) in an image. Specify the angle and spread of the light source causing the shadow.

Notes
  • Either:
    • the original image must include transparency, for example where the background has already been removed and it has been stored in a format that supports transparency, such as PNG, or
    • the dropshadow effect must be chained after the background_removal effect, for example:

Learn more: Dropshadow effect

See also: e_shadow

enhance

 e_enhance

Uses AI to analyze an image and make adjustments to enhance the appeal of the image, such as:

  • Exposure reduction: Correcting overexposed images, smartly reducing excessive brightness and reclaiming details in bright areas, bringing back a balanced exposure.
  • Exposure enhancement: Adjusting underexposed images by enhancing dim areas, thus improving overall exposure without compromising the image's natural quality.
  • Color intensification: Enriching color vividness, making hues more vibrant and lively, thus bringing a more dynamic color range to the image.
  • Color temperature correction: Adjusting the white balance, correcting color casts and ensuring that the colors in the image accurately reflect their real-world appearance.

Consider also using generative restore to revitalize poor quality images, or the improve effect to automatically adjust color, contrast and brightness. See this comparison of image enhancement options.

Notes and limitations:
  • During processing, large images are downscaled to a maximum of 4096 x 4096 pixels, then upscaled back to their original size, which may affect quality.
  • There is a special transformation count for the enhance effect.
  • The enhance effect is not supported for fetched images or incoming transformations.

See also: e_improve | e_gen_restore

extract

 e_extract:prompt_(<prompt 1>[;...;<prompt n>])[;multiple_<detect multiple>][;mode_<mode>][;invert_<invert>]

Extracts an area or multiple areas of an image, described in natural language. You can choose to keep the content of the extracted area(s) and make the rest of the image transparent (like background removal), or make the extracted area(s) transparent, keeping the content of the rest of the image. Alternatively, you can make a grayscale mask of the extracted area(s) or everything excluding the extracted area(s), which you can use with other transformations such as e_mask, e_multiply, e_overlay and e_screen.

Notes and limitations:
  • During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
  • When you specify more than one prompt, all the objects specified in each of the prompts will be extracted whether or not multiple_true is specified in the URL.
  • There is a special transformation count for the extract effect.
  • The extract effect is not supported for animated images, fetched images or incoming transformations.
  • User-defined variables cannot be used for the prompt when more than one prompt is specified.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

See also: e_background_removal

Learn more: Shape cutouts: use AI to determine what to remove or keep in an image
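A minimal sketch of the multi-prompt syntax, grouping two prompts in parentheses. The cloud name `demo`, public ID `sample`, the prompts, and the `mode_mask` value (assumed here to request the grayscale-mask behavior described above) are all illustrative assumptions.

```python
# Sketch only: "demo", "sample", the prompts and mode_mask are assumptions.
# Multiple prompts are grouped in parentheses and separated by ";".
base = "https://res.cloudinary.com/demo/image/upload"
transformation = "e_extract:prompt_(cat;dog);mode_mask"
url = f"{base}/{transformation}/sample.png"
print(url)
```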

fade

 e_fade[:<duration>]

Fades into, or out of, an animated GIF or video. You can chain fade effects to both fade into and out of the media.

Learn more: Fade in and out

fill_light

 e_fill_light[:<blend>][:<bias>]

Adjusts the fill light and optionally blends the result with the original image.

gamma

 e_gamma[:<level>]

Adjusts the image or video gamma level.

gen_background_replace

 e_gen_background_replace[:prompt_<prompt>][;seed_<seed>]

Replaces the background of an image with an AI-generated background. If no prompt is specified, the background is based on the contents of the image. Otherwise, the background is based on the natural language prompt specified.

For images with transparency, the generated background replaces the transparent area. For images without transparency, the effect first determines the foreground elements and leaves those areas intact, while replacing the background.

Using different seeds, you can regenerate a background if you're not happy with the result. You can also use seeds to return a previously generated result, as long as any other preceding transformation parameters are the same.

Notes and limitations:
  • The use of generative AI means that results may not be 100% accurate.
  • There is a special transformation count for the generative background replace effect.
  • If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
  • The generative background replace effect is not supported for animated images, fetched images or incoming transformations.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

Learn more: Generative background replace
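A minimal sketch combining a prompt with a seed, so the same background can be regenerated later. The cloud name `demo`, public ID `sample`, prompt, and seed value are placeholder assumptions; a single-word prompt avoids the URL encoding a multi-word prompt would need.

```python
# Sketch only: "demo", "sample", the prompt and the seed are assumptions.
base = "https://res.cloudinary.com/demo/image/upload"
transformation = "e_gen_background_replace:prompt_forest;seed_42"
url = f"{base}/{transformation}/sample.jpg"
print(url)
```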

gen_recolor

 e_gen_recolor:prompt_(<prompt 1>[;...;<prompt n>]);to-color_<color>[;multiple_<detect multiple>]

Uses generative AI to recolor parts of your image, maintaining the relative shading. Specify one or more prompts and the color to change them to. Use the multiple parameter to replace the color of all instances of the prompt when one prompt is given.

Notes and limitations:
  • The generative recolor effect can only be used on non-transparent images.
  • The use of generative AI means that results may not be 100% accurate.
  • The generative recolor effect works best on simple objects that are clearly visible.
  • Very small objects and very large objects may not be detected.
  • During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
  • When you specify more than one prompt, all the objects specified in each of the prompts will be recolored whether or not multiple_true is specified in the URL.
  • There is a special transformation count for the generative recolor effect.
  • The generative recolor effect is not supported for animated images, fetched images or incoming transformations.
  • User-defined variables cannot be used for the prompt when more than one prompt is specified.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

Tip
Consider using e_replace_color if you want to recolor everything of a particular color in your image, rather than specific elements.

Learn more: Generative recolor

See also: e_replace_color
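A minimal sketch of the recolor syntax with one prompt, a target color, and `multiple_true` to catch every instance of the prompt. The cloud name `demo`, public ID `sample`, prompt, and color are placeholder assumptions.

```python
# Sketch only: "demo", "sample", the prompt and the color are assumptions.
base = "https://res.cloudinary.com/demo/image/upload"
transformation = "e_gen_recolor:prompt_sweater;to-color_blue;multiple_true"
url = f"{base}/{transformation}/sample.jpg"
print(url)
```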

gen_remove

 e_gen_remove[:prompt_(<prompt 1>[;...;<prompt n>])][;multiple_<detect multiple>][;remove-shadow_<remove shadow>][:region_((x_<x coordinate 1>;y_<y coordinate 1>;w_<width 1>;h_<height 1>)[;...;(x_<x coordinate n>;y_<y coordinate n>;w_<width n>;h_<height n>)])]

Uses generative AI to remove unwanted parts of your image, replacing the area with realistic pixels. Specify either one or more prompts or one or more regions. Use the multiple parameter to remove all instances of the prompt when one prompt is given.

By default, shadows cast by removed objects are not removed. If you want to remove the shadow, when specifying a prompt you can set the remove-shadow parameter to true.

Notes and limitations:
  • The generative remove effect can only be used on non-transparent images.
  • The use of generative AI means that results may not be 100% accurate.
  • The generative remove effect works best on simple objects that are clearly visible.
  • Very small objects and very large objects may not be detected.
  • Do not attempt to remove faces or hands.
  • During processing, large images are downscaled to a maximum of 6140 x 6140 pixels, then upscaled back to their original size, which may affect quality.
  • When you specify more than one prompt, all the objects specified in each of the prompts will be removed whether or not multiple_true is specified in the URL.
  • There is a special transformation count for the generative remove effect.
  • The generative remove effect is not supported for animated images, fetched images or incoming transformations.
  • User-defined variables cannot be used for the prompt when more than one prompt is specified.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

Learn more: Generative remove
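To contrast the two ways of targeting the removal described above, here is a minimal sketch of a prompt-based URL and a region-based URL. The cloud name `demo`, public ID `sample`, prompt, and region coordinates are placeholder assumptions.

```python
# Sketch only: "demo", "sample", the prompt and the region values are assumptions.
base = "https://res.cloudinary.com/demo/image/upload"
by_prompt = "e_gen_remove:prompt_dog;remove-shadow_true"     # remove by description
by_region = "e_gen_remove:region_((x_50;y_60;w_200;h_150))"  # remove by coordinates
url_prompt = f"{base}/{by_prompt}/sample.jpg"
url_region = f"{base}/{by_region}/sample.jpg"
print(url_prompt)
print(url_region)
```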

gen_replace

 e_gen_replace:from_<from prompt>;to_<to prompt>[;preserve-geometry_<preserve geometry>][;multiple_<detect multiple>]

Uses generative AI to replace parts of your image with something else. Use the preserve-geometry parameter to fill exactly the same shape with the replacement.

Notes and limitations:
  • The generative replace effect can only be used on non-transparent images.
  • The use of generative AI means that results may not be 100% accurate.
  • The generative replace effect works best on simple objects that are clearly visible.
  • Very small objects and very large objects may not be detected.
  • Do not attempt to replace faces, hands or text.
  • During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
  • There is a special transformation count for the generative replace effect.
  • The generative replace effect is not supported for animated images, fetched images or incoming transformations.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

Learn more: Generative replace

gen_restore

 e_gen_restore

Uses generative AI to restore details in poor quality images or images that may have become degraded through repeated processing and compression.

Consider also using the improve effect to automatically adjust color, contrast and brightness, or the enhance effect to improve the appeal of an image based on AI analysis. See this comparison of image enhancement options.

Notes and limitations:
  • The generative restore effect can only be used on non-transparent images.
  • The use of generative AI means that results may not be 100% accurate.
  • There is a special transformation count for the generative restore effect.
  • The generative restore effect is not supported for animated images, fetched images or incoming transformations.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

See also: e_enhance | e_improve

Learn more: Generative restore

gradient_fade

 e_gradient_fade[:<type>][:<strength>]

Applies a gradient fade effect from the edge of an image. Use x or y to indicate from which edge to fade and how much of the image should be faded. Values of x and y can be specified as a percentage (range: 0.0 to 1.0), or in pixels (integer values). Positive values fade from the top (y) or left (x). Negative values fade from the bottom (y) or right (x). By default, the gradient is applied to the top 50% of the image (y_0.5).

Optional qualifiers

x, y (x & y coordinates)
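A minimal sketch fading the top 40% of an image, with the y qualifier placed in the same comma-separated component as the effect (the usual placement for qualifiers of an action parameter). The cloud name `demo` and public ID `sample` are placeholder assumptions.

```python
# Sketch only: "demo" and "sample" are assumptions. y_0.4 is a percentage
# value (range 0.0-1.0); a positive y fades from the top edge.
base = "https://res.cloudinary.com/demo/image/upload"
transformation = "e_gradient_fade,y_0.4"
url = f"{base}/{transformation}/sample.jpg"
print(url)
```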

grayscale

 e_grayscale

Converts an image to grayscale (multiple shades of gray).

green

 e_green[:<level>]

Adjusts an image's green channel.

hue

 e_hue[:<level>]

Adjusts an image's hue.

improve

 e_improve[:<mode>][:<blend>]

Adjusts an image's colors, contrast and brightness to improve its appearance.

Consider also using generative restore to revitalize poor quality images, or the enhance effect to improve the appeal of an image based on AI analysis. See this comparison of image enhancement options.

See also: e_enhance | e_gen_restore

light

 e_light[:shadowintensity_<intensity>]

When generating a 2D image from a 3D model, this effect introduces a light source to cast a shadow. You can control the intensity of the shadow that's cast.

Note
You must specify a 2D image file format that supports transparency, such as PNG or AVIF.

Use with: f (format) | e_camera

Learn more: Generating an image from a 3D model

loop

 e_loop[:<additional iterations>]

Loops a video or animated image the specified number of times.

make_transparent

 e_make_transparent[:<tolerance>]

Makes the background of an image or video transparent (or solid white for formats that do not support transparency). The background is determined as all pixels that resemble the pixels on the edges of an image or video, or the color specified by the color qualifier.

Optional qualifier

color

Learn more: Apply video transparency

mask

 e_mask

A qualifier that uses an image layer as a mask to hide, reveal, or change the opacity of the layer below it.

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

Use with: l_<image id> | l_fetch | l_text | u (underlay)

morphology

 e_morphology[:method_<method>][;iterations_<iterations>][;kernel_<kernel>][;radius_<radius>]

Applies kernels of various sizes and shapes to an image using different methods to achieve effects such as image blurring and sharpening.

Note
This transformation also supports non-verbose, ordered syntax.

multiply

 e_multiply

A qualifier that blends image layers using the multiply blend mode, whereby the RGB channel numbers for each pixel from the top layer are multiplied by the values for the corresponding pixel from the bottom layer. The result is always a darker picture; since each value is less than 1, their product will be less than either of the initial values.

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

Use with: l_<image id> | l_fetch | l_text | u (underlay)

See also: Other blend modes: e_mask | e_overlay | e_screen
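As a qualifier, e_multiply rides in the same comma-separated component as the layer it modifies. A minimal sketch, assuming a cloud name `demo`, base public ID `sample`, and an overlay with public ID `texture` (all hypothetical):

```python
# Sketch only: "demo", "sample" and the overlay ID "texture" are assumptions.
# The e_multiply qualifier shares a component with the l_ layer it blends.
base = "https://res.cloudinary.com/demo/image/upload"
transformation = "l_texture,e_multiply"
url = f"{base}/{transformation}/sample.jpg"
print(url)
```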

negate

 e_negate

Creates a negative of an image.

noise

 e_noise[:<level>]

Adds visual noise to the video, visible as a random flicker of "dots" or "snow".

Learn more: Add visual noise

oil_paint

 e_oil_paint[:<strength>]

Applies an oil painting effect.

opacity_threshold

 e_opacity_threshold[:<level>]

Causes all semi-transparent pixels in an image to be either fully transparent or fully opaque. Specifically, each pixel with an opacity lower than the specified threshold level is set to an opacity of 0% (transparent). Each pixel with an opacity greater than or equal to the specified level is set to an opacity of 100% (opaque).

Note
This effect can be a useful solution when Photoshop PSD files are delivered in a format supporting partial transparency, such as PNG, and the results without this effect are not as expected.

ordered_dither

 e_ordered_dither[:<type>]

Applies an ordered dither filter to an image.

outline

 e_outline[:<mode>][:<width>][:<blur>]

Adds an outline effect to an image. Specify the color of the outline using the co (color) qualifier. If no color is specified, the default outline is black.

Optional qualifier

co (color)

Learn more: Outline effects

overlay

 e_overlay

A qualifier that blends image layers using the overlay blend mode, which combines the multiply and screen blend modes. The parts of the top layer where the base layer is light become lighter, and the parts where the base layer is dark become darker. Areas where the top layer is mid-gray are unaffected.

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

Use with: l_<image id> | l_fetch | l_text | u (underlay)

See also: Other blend modes: e_mask | e_multiply | e_screen

pixelate

 e_pixelate[:<square size>]

Applies a pixelation effect.

pixelate_faces

 e_pixelate_faces[:<square size>]

Pixelates all detected faces in an image.

pixelate_region

 e_pixelate_region[:<square size>]

Pixelates the region of an image specified by x, y, width and height, or an area of text. If no region is specified, the whole image is pixelated.

Optional qualifiers

x, y (x & y coordinates) | w (width) | h (height) | g_ocr_text
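A minimal sketch pixelating a rectangular region, with the square size as the positional value and the region qualifiers comma-separated in the same component. The cloud name `demo`, public ID `sample`, and coordinates are placeholder assumptions.

```python
# Sketch only: "demo", "sample" and the region values are assumptions.
# 20 is the pixelation square size; x/y/w/h define the region.
base = "https://res.cloudinary.com/demo/image/upload"
transformation = "e_pixelate_region:20,x_150,y_100,w_200,h_120"
url = f"{base}/{transformation}/sample.jpg"
print(url)
```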

preview

 e_preview[:duration_<duration>][:max_seg_<max segments>][:min_seg_dur_<min segment duration>]

Generates a summary of a video based on Cloudinary's AI-powered preview algorithm, which identifies the most interesting video segments in a video and uses these to generate a video preview.

Note
There are special transformation counts for videos using e_preview.

Optional qualifier

fl_getinfo

Learn more: Generate an AI-based video preview

progressbar

 e_progressbar[:type_<bar type>][:color_<bar color>][:width_<width>]

Adds a progress indicator overlay to a video.

Note
This transformation also supports non-verbose, ordered syntax.

Learn More: Add a video progress indicator

recolor

 e_recolor:<value1>:<value2>:...:<value9>[:<value10>:<value11>:...:<value16>]

Converts the colors of every pixel in an image based on a supplied color matrix, in which the value of each color channel is calculated based on the values from all other channels (e.g. a 3x3 matrix for RGB, a 4x4 matrix for RGBA or CMYK, etc).

red

 e_red[:<level>]

Adjusts an image's red channel.

redeye

 e_redeye

Automatically removes red eyes in an image.

replace_color

 e_replace_color:<to color>[:<tolerance>][:<from color>]

Maps an input color and those similar to the input color to corresponding shades of a specified output color, taking luminosity and chroma into account, in order to recolor an object in a natural way. More highly saturated input colors usually give the best results. It is recommended to avoid input colors approaching white, black, or gray.

Notes
  • This transformation only supports non-verbose, ordered syntax, so remember to include the tolerance parameter if specifying from color, even if you intend to use the default tolerance.
  • Consider using e_gen_recolor if you want to specify particular elements in your image to recolor, rather than everything with the same color.

Learn more: Replace color

See also: e_gen_recolor
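Because this transformation uses only ordered syntax, the tolerance must appear whenever a from color is given, as the note above explains. A minimal sketch, assuming cloud name `demo`, public ID `sample`, and illustrative colors:

```python
# Sketch only: "demo", "sample" and the colors are assumptions.
# Ordered values: <to color>:<tolerance>:<from color> - the tolerance (30)
# must be present because a from color follows it.
base = "https://res.cloudinary.com/demo/image/upload"
transformation = "e_replace_color:maroon:30:silver"
url = f"{base}/{transformation}/sample.jpg"
print(url)
```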

reverse

 e_reverse

Plays a video or audio file in reverse.

saturation

 e_saturation[:<level>]

Adjusts an image or video saturation level.

screen

 e_screen

A qualifier that blends image layers using the screen blend mode, whereby the RGB channel numbers of the pixels in the two layers are inverted, multiplied, and then inverted again. This yields the opposite effect to multiply, and results in a brighter picture.

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

Use with: l_<image id> | l_fetch | l_text | u (underlay)

See also: Other blend modes: e_mask | e_multiply | e_overlay

sepia

 e_sepia[:<level>]

Changes the color scheme of an image to sepia.

shadow

 e_shadow[:<strength>]

Adds a gray shadow to the bottom right of an image. You can change the color and location of the shadow using the color and x,y qualifiers.

Optional qualifiers

x, y (x & y coordinates) | co (color)

Learn more: Shadow effect

See also: e_dropshadow

sharpen

 e_sharpen[:<strength>]

Applies a sharpening filter.

shear

 e_shear:<x-skew>:<y-skew>

Skews an image according to the two specified values in degrees. Negative values skew an image in the opposite direction.

simulate_colorblind

 e_simulate_colorblind[:<condition>]

Simulates the way an image would appear to someone with the specified color blind condition.

Learn more: Blog post

swap_image

 e_swap_image:image_<image ref>[;index_<index>]

Replaces an image/texture in a 3D model.

Note
This transformation also supports non-verbose, ordered syntax.

Learn more: Changing the texture of a 3D model

theme

 e_theme:color_<bgcolor>[:photosensitivity_<level>]

Changes the main background color to the one specified, as if a 'theme change' was applied (e.g. dark mode vs light mode).

Note
This transformation also supports non-verbose, ordered syntax.

Learn more: Theme effect

tint

 e_tint[:equalize][:<amount>][:<color1>][:<color1 position>][:<color2>][:<color2 position>][:...][:<color10>][:<color10 position>]

Blends an image with one or more tint colors at a specified intensity. You can optionally equalize colors before tinting and specify gradient blend positioning per color.

Learn more: Tint effects

transition

 e_transition

A qualifier that applies a custom transition between two concatenated videos.

Important
This effect has been deprecated. To concatenate videos with transitions use cross fade transitions.

Use with: l_video

Learn more: Concatenate videos with custom transitions

trim

 e_trim[:<tolerance>][:<color override>]

Detects and removes image edges whose color is similar to the corner pixels or transparent.

unsharp_mask

 e_unsharp_mask[:<strength>]

Applies an unsharp mask filter to an image.

upscale

 e_upscale

Uses AI-based prediction to add fine detail while upscaling small images.

This 'super-resolution' feature scales each dimension by four, multiplying the total number of pixels by 16.

Tip
Also consider using other image enhancement options.

Notes
  • To use the upscale effect, the input image must be smaller than 4.2 megapixels (the equivalent of 2048 x 2048 pixels).
  • There is a special transformation count for the upscale effect.
  • The upscale effect is not supported for animated images, fetched images or incoming transformations.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

Learn more: Upscaling with super resolution

vectorize

 e_vectorize[:<colors>][:<detail>][:<despeckle>][:<paths>][:<corners>]

Vectorizes an image. The values can be specified either in an ordered manner according to the above syntax, or by name as shown in the examples below.

Notes
  • To deliver an image as a vector image, make sure to change the format (or URL extension) to a vector format, such as SVG. However, you can also deliver in a raster format if you just want to get the 'vectorized' graphic effect.
  • Large images are scaled down to 1000 pixels in the largest dimension before vectorization.

vibrance

 e_vibrance[:<strength>]

Applies a vibrance filter to an image.

viesus_correct

 e_viesus_correct[:no_redeye][:skin_saturation[_<saturation level>]]

Requires the Viesus Automatic Image Enhancement add-on.
Enhances an image to its best visual quality.

vignette

 e_vignette[:<level>]

Applies a vignette effect to an image.

volume

 e_volume[:<volume>]

Increases or decreases the volume on a video or audio file.

zoompan

 e_zoompan[:mode_<mode>][;maxzoom_<max zoom>][;du_<duration>][;fps_<frame rate>][;from_([g_<gravity>][;zoom_<zoom>][;x_<x position>][;y_<y position>])][;to_([g_<gravity>][;zoom_<zoom>][;x_<x position>][;y_<y position>])]

Also known as the Ken Burns effect, this transformation applies zooming and/or panning to an image, resulting in a video or animated GIF (depending on the format you specify by either changing the extension or using the format parameter).

You can either specify a mode, which is a predefined type of zoom/pan, or you can provide custom start and end positions for the zoom and pan. You can also use the gravity parameter to specify different start and end areas, such as objects, faces, and automatically determined areas of interest.

Notes
  • The resulting video or animated GIF does not go outside the bounds of the original image. So, if you specify an x,y position of (0,0), for example, the center of the frame will be as close to the top left as possible, but will not be centered on that position.
  • The resolution of your image needs to be sufficient for the zoom level that you choose to maintain good quality.
  • To achieve the best visual quality, the output resolution of the resulting video or animated image should be less than or equal to the input image resolution divided by the maximum zoom level. For example, if your original image has a width of 1920 pixels, and your maximum zoom is 3.2, you should display the resulting video at a width of 600 pixels or less (e.g. chain c_scale,w_600 onto the end of the transformation).
  • If you apply the zoompan effect to an animated image, the first frame of the animated image is taken as the input.
  • To achieve a smoother zoom, you can increase the frame rate, extend the length of the time over which the zoom occurs, and reduce the difference between zoom levels at the start and end of the transformation.
  • The zoompan effect won't work if the resulting video exceeds the limits set for your account. As a general rule, use images that don't exceed 5000 x 5000 pixels.
  • Currently, you can't use automatic gravity (g_auto) in other transformation components that are chained with the zoompan effect.

Learn more: The zoompan effect | Using objects with the zoompan effect
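Tying together the notes above, here is a minimal sketch that applies a zoom with a maximum level of 3.2 and then chains c_scale,w_600 to keep the output within the recommended resolution, delivering as an animated GIF via the extension. The cloud name `demo`, public ID `sample`, and timing values are placeholder assumptions.

```python
# Sketch only: "demo", "sample" and the duration/fps values are assumptions.
# The .gif extension selects animated GIF output; c_scale,w_600 follows the
# guideline of output width <= input width / max zoom.
base = "https://res.cloudinary.com/demo/image/upload"
chain = ["e_zoompan:maxzoom_3.2;du_6;fps_25", "c_scale,w_600"]
url = f"{base}/{'/'.join(chain)}/sample.gif"
print(url)
```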

eo (end offset)

 eo_<time value>

Specifies the last second to include in a video (or audio clip). This parameter is often used in conjunction with the so (start offset) and/or du (duration) parameters.

  • Can be used independently to trim a video (or audio clip) by specifying the last second of the video to include. Everything after that second is trimmed off.
  • Can be used as a qualifier to control the timing of a corresponding transformation.

As a qualifier, use with: e_boomerang | l_audio | l_<image id> | l_video

Learn more: Trimming videos | Adding image overlays to videos | Adding video overlays
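A minimal sketch trimming a video to the segment between seconds 2 and 8 by pairing eo with so in one component. The cloud name `demo` and public ID `sample` are placeholder assumptions.

```python
# Sketch only: "demo" and "sample" are assumptions.
# so_2 starts the clip at second 2; eo_8 ends it at second 8.
base = "https://res.cloudinary.com/demo/video/upload"
transformation = "so_2,eo_8"
url = f"{base}/{transformation}/sample.mp4"
print(url)
```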

f (format)

 

Converts (if necessary) and delivers an asset in the specified format regardless of the file extension used in the delivery URL.

Must be used for automatic format selection (f_auto) as well as when fetching remote assets, while the file extension for the delivery URL remains the original file extension.

In most other cases, you can optionally use this transformation to change the format as an alternative to changing the file extension of the public ID in the URL to a supported format. Both will give the same result.

Note
In SDK major versions with initial release earlier than 2020, the name of this parameter is fetch_format. These SDKs also have a format parameter, which is not a transformation parameter, but is used to change the file extension, as shown in the file extension examples - #2.

The later SDKs have a single format parameter (which parallels the behavior of the fetch_format parameter of older SDKs). You can use this to change the actual delivered format of any asset, but if you prefer to convert the asset to a different format by changing the extension of the public ID in the generated URL, you can do that in these later SDKs by specifying the desired extension as part of the public ID value, as shown in file extension examples - #1.

<supported format>

 f_<supported format>

Converts and delivers an asset in the specified format.

Optional qualifier

e_camera

Learn more: Fetch format | Transcoding videos to other formats

auto

 f_auto[:media type]

Automatically generates (if needed) and delivers an asset in the optimal format for the requesting browser in order to minimize the file size.

Optionally, include a media type to ensure the asset is delivered with the desired media type when no file extension is included. For example, when delivering a video using f_auto and no file extension is provided, the media type will default to an image unless f_auto:video is used.

Note
When used in conjunction with automatic quality (q_auto), sometimes the selected format is not the one that minimizes file size, but rather the format that yields the optimal balance between smaller file size and good visual quality.

Learn more: Image automatic format selection | Video automatic format selection

Optional qualifiers

fl_animated | fl_attachment | fl_preserve_transparency

fl (flag)

 

Alters the regular behavior of another transformation or the overall delivery behavior.

Tip
You can set multiple flags by separating the individual flags with a dot (.).
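The dot-separated multiple-flag syntax can be sketched with a plausible pairing: fl_animated and fl_awebp together, to deliver a video as an animated WebP (see the awebp entry below). The cloud name `demo` and public ID `sample` are placeholder assumptions.

```python
# Sketch only: "demo" and "sample" are assumptions. Two flags are combined
# in one fl_ component, separated by a dot; .webp selects the WebP format.
base = "https://res.cloudinary.com/demo/video/upload"
transformation = "fl_animated.awebp"
url = f"{base}/{transformation}/sample.webp"
print(url)
```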

alternate

  fl_alternate:lang_<language>[;name_<name>]

Defines an audio layer to be used as an alternate audio track for videos delivered using automatic streaming profile selection. Used to provide multiple audio tracks, for example when you want to provide audio in multiple languages.

Use with: l_audio

Learn more: Defining alternate audio tracks

Note
This flag is not currently supported by our SDKs.

animated

  fl_animated

Alters the regular video delivery behavior by delivering a video file as an animated image instead of a single frame image, when specifying an image format that supports both still and animated images, such as webp or avif.

Use with: fl_apng | fl_awebp | f_auto

Note
When delivering a video and specifying the GIF format (either f_gif or specifying a GIF extension) it's automatically delivered as an animated GIF and this flag is not necessary. To force Cloudinary to deliver a single frame of a video in GIF format, use the page parameter.

Learn more: Converting videos to animated images

any_format

  fl_any_format

Alters the regular behavior of the q_auto parameter, allowing it to switch to PNG8 encoding if the automatic quality algorithm decides that's more efficient.

Use with: q_auto

apng

  fl_apng

The apng (animated PNG) flag alters the regular PNG delivery behavior by delivering an animated image asset in animated PNG format rather than a still PNG image. Keep in mind that animated PNGs are not supported in all browsers and versions.

Use with: fl_animated | f_png (or when specifying png as the delivery URL file extension).

attachment

  fl_attachment[:<filename>]

Alters the regular delivery URL behavior, causing the URL link to download the (transformed) file as an attachment rather than embedding it in your Web page or application.

Note
You can also use this flag with raw files to specify a custom filename for the download. The generated file's extension will match the raw file's original extension.

Use with: f_auto

See also: fl_streaming_attachment

awebp

  fl_awebp

The awebp (animated WebP) flag alters the regular WebP delivery behavior by delivering an animated image or video asset in animated WebP format rather than as a still WebP image. Keep in mind that animated WebPs are not supported in all browsers and versions.

Use with: fl_animated | f_webp (or when specifying webp as the delivery URL file extension).

c2pa

  fl_c2pa

Use the c2pa flag when delivering images that you want to be signed by Cloudinary for the purposes of C2PA (Coalition for Content Provenance and Authenticity).

Learn more: Content provenance and authenticity

clip

  fl_clip

For images with a clipping path saved with the originally uploaded image (e.g. manually created using Photoshop), makes everything outside the clipping path transparent.

If there are multiple paths stored in the file, you can indicate which clipping path to use by specifying either the path number or name as the value of the page parameter (pg in URLs).

Use with: pg (page or file layer)

clip_evenodd

  fl_clip_evenodd

For images with a clipping path saved with the originally uploaded image, makes pixels transparent based on the clipping path using the 'evenodd' clipping rule to determine whether points are inside or outside of the path.

cutter

  fl_cutter

Trims the pixels on the base image according to the transparency levels of a specified overlay image. Where the overlay image is opaque, the original is kept and displayed, and wherever the overlay is transparent, the base image becomes transparent as well. This results in a delivered image displaying the base image content trimmed to the exact shape of the overlay image.

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

Learn more: Shape cutouts: keep a shape

draco

  fl_draco

Optimizes the mesh buffer in glTF 3D models using Draco compression.

Learn more: Applying Draco compression to glTF files

force_icc

  fl_force_icc

Adds ICC color space metadata to an image, even when the original image doesn't contain any ICC data.

force_strip

  fl_force_strip

Instructs Cloudinary to clear all image metadata (IPTC, Exif and XMP) while applying an incoming transformation.

getinfo

  fl_getinfo

For images: returns information about both the input asset and the transformed output asset in JSON instead of delivering a transformed image.

For videos: returns an empty JSON file unless one of the qualifiers below is used.

Not applicable to files delivered in certain formats, such as animated GIF, PDF and 3D formats.

As a qualifier, returns additional data as detailed below.

Use with:

  • g_auto:
    • For images, the returned JSON includes the cropping coordinates recommended by the g_auto algorithm.
    • For videos, the returned JSON includes the cropping confidence score for the whole video and per second in addition to the horizontal center point of each frame (on a scale of 0 to 1) recommended by the g_auto algorithm.
  • g_<face-specific-gravity>: For images, the returned JSON includes the coordinates of facial landmarks relative to the top-left corner of the original image.
  • e_preview: For videos, the returned JSON includes an importance histogram for the video.

group4

  fl_group4

Applies Group 4 compression to the image. Currently applicable to TIFF files only. If the original image is in color, it is transformed to black and white before the compression is applied.

Use with: f_tiff (or when specifying tiff as the delivery URL file extension)

hlsv3

  fl_hlsv3

A qualifier that delivers an HLS adaptive bitrate streaming file as HLS v3 instead of the default version (HLS v4).

This flag is supported only for product environments with a private CDN configuration.

Use with: sp (streaming profile)

Learn more: Adaptive bitrate streaming

ignore_aspect_ratio

  fl_ignore_aspect_ratio

A qualifier that adjusts the behavior of scale cropping. By default, when only one dimension (width or height) is supplied, the other dimension is automatically calculated to maintain the aspect ratio. When this flag is supplied together with a single dimension, the other dimension keeps its original value, thus distorting an image by scaling in only one direction.

Use with: c_scale
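For example (placeholder `demo` cloud name and `sample` public ID), scaling only the height while keeping the original width, and thereby distorting the image:

```python
# Scale the height to 250px while the width keeps its original value,
# distorting the image. "demo" and "sample" are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
transformation = "c_scale,fl_ignore_aspect_ratio,h_250"

url = f"{base}/{transformation}/sample.jpg"
```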

ignore_mask_channels

  fl_ignore_mask_channels

A qualifier that ensures that an alpha channel is not applied to a TIFF image if it is a mask channel.

Use with: f_tiff (or when specifying tiff as the delivery URL file extension)

immutable_cache

  fl_immutable_cache

Sets the cache-control for an image to be immutable, which instructs the browser that an image does not have to be revalidated with the server when the page is refreshed, and can be loaded directly from the cache. Currently supported only in Firefox.

keep_attribution

  fl_keep_attribution

Cloudinary's default behavior is to strip almost all metadata from a delivered image when generating new image transformations. Applying this flag alters this default behavior, and keeps all the copyright-related fields while still stripping the rest of the metadata.

Learn more: Default optimizations

See also: fl_keep_iptc

Note
This flag works well when delivering images in JPG format. It may not always work as expected for other image formats.

keep_dar

  fl_keep_dar

Keeps the Display Aspect Ratio (DAR) metadata of an originally uploaded video (if it's different from the delivered video dimensions).

keep_iptc

  fl_keep_iptc

Cloudinary's default behavior is to strip almost all embedded metadata from a delivered image when generating new image transformations. Applying this flag alters this default behavior, and keeps all of an image's embedded metadata in the transformed image.

Note
This flag cannot be used in conjunction with q_auto.

Learn more: Default optimizations

See also: fl_keep_attribution

layer_apply

  fl_layer_apply

A qualifier that enables you to apply chained transformations to an overlaid image or video. The first component of the overlay (l_<image_id>) acts as the opening parenthesis of the overlay transformation and the fl_layer_apply component acts as the closing parenthesis. Any transformation components between these two are applied as chained transformations to the overlay and not to the base asset.

This flag is also required when concatenating images to videos or concatenating videos with custom transitions.

Use with: l_<image id> | l_audio | l_video
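A minimal sketch of the bracketing behavior (the `demo` cloud name and the `logo` and `sample` public IDs are placeholders): the scaling component between the opening and closing components applies to the overlay only.

```python
# Chained overlay: every component between l_logo and fl_layer_apply
# transforms the overlay, not the base image. All IDs are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
components = [
    "l_logo",                       # open the overlay transformation
    "c_scale,w_100",                # resize the overlay, not the base
    "fl_layer_apply,g_south_east",  # close the overlay and place it
]
url = f"{base}/{'/'.join(components)}/sample.jpg"
```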

lossy

  fl_lossy

When used with an animated GIF file, instructs Cloudinary to use lossy compression when delivering an animated GIF. By default a quality of 80 is applied when delivering with lossy compression. You can use this flag in conjunction with a specified q_<quality_level> to deliver a higher or lower quality level of lossy compression.

When used while delivering a PNG format, instructs Cloudinary to deliver an image in PNG format (as requested) unless there is no transparency channel, in which case, deliver in JPEG format instead.

Use with: f_gif with or without q_<quality level> | f_png
(or when specifying gif or png as the delivery URL file extension)

Learn more: Applying lossy GIF compression
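For example (placeholder `demo` cloud name and `kitten` public ID), delivering an animated GIF with lossy compression at a quality level of 60:

```python
# Lossy animated GIF at quality 60; the .gif extension stands in for f_gif.
# "demo" and "kitten" are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/fl_lossy,q_60/kitten.gif"
```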

mono

  fl_mono

Converts the audio channel in a video or audio file to mono. This can help to optimize your video files if stereo sound is not essential.

no_overflow

  fl_no_overflow

A qualifier that prevents an image or text overlay from extending a delivered image canvas beyond the dimensions of the base image.

Use with: l_<image id> | l_text

See also: fl_text_disallow_overflow

no_stream

  fl_no_stream

Prevents a video that is currently being generated on the fly from beginning to stream until the video is fully generated.

png8 / png24 / png32

  fl_png8 fl_png24 fl_png32

By default, Cloudinary delivers PNGs in PNG-24 format, or if f_auto and q_auto are used, these determine the PNG format that minimizes file size while maximizing quality. In some cases, the algorithm will select PNG-8. By specifying one of these flags when delivering a PNG file, you can override the default Cloudinary behavior and force the requested PNG format.

See also: fl_any_format

preserve_transparency

  fl_preserve_transparency

A qualifier that ensures that the f_auto parameter will always deliver in a transparent format if the image has a transparency channel.

Use with: f_auto

progressive

  fl_progressive[:<mode>]

Generates a JPG or PNG image using the progressive (interlaced) format. This format allows the browser to quickly show a low-quality rendering of the image until the full quality image is loaded.

rasterize

  fl_rasterize

Reduces a vector image to one flat pixelated layer, enabling transformations like PDF resizing and overlays.

region_relative

  fl_region_relative

A qualifier that instructs Cloudinary to interpret percentage-based (e.g. 0.8) width and height values for an image layer (overlay or underlay), as a percentage that is relative to the size of the defined or automatically detected region(s). For example, the region may be the coordinates of an automatically detected face or piece of text, or a custom-defined region. If an image has multiple regions, then the specified overlay image will be overlaid over each identified region at a size relative to the region it overlays.

Use with: l_<image id> | u (underlay)
                        AND
one of the following special gravities: adv_eyes, adv_faces, custom, face, faces, ocr_text

relative

  fl_relative

A qualifier that instructs Cloudinary to interpret percentage-based (e.g. 0.8) width and height values for an image layer (overlay or underlay), as a percentage that is relative to the size of the base image, rather than relative to the original size of the specified overlay image. This flag enables you to use the same transformation to add an overlay to images that will always resize to a relative size of whatever image it overlays.

Use with: l_<image id> | u (underlay)

Learn more: Transforming overlays

replace_image

  fl_replace_image

A qualifier that takes the image specified as an overlay and uses it to replace the first image embedded in a PDF.

Transformation parameters that modify the appearance of the overlay (such as effects) can be applied. However, when this flag is used, the overlay image is always scaled exactly to the dimensions of the image it replaces. Therefore, resize transformations applied to the overlay are ignored. For this reason, it is important that the image specified in the overlay matches the aspect ratio of the image in the PDF that it will replace.

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

Use with: l_<image_id>

sanitize

  fl_sanitize

Relevant only for SVG images. Runs a sanitizer on the image.

splice

  fl_splice[:transition[_([name_<transition name>][;du_<transition duration>])]]

A qualifier that concatenates (splices) the image, video or audio file specified as an overlay to a base video (instead of placing it as an overlay). By default, the overlay image, video or audio file is spliced to the end of the base video. You can use the start offset parameter set to 0 (so_0) to splice the overlay asset to the beginning of the base video by specifying it alongside fl_layer_apply. You can optionally provide a cross fade transition between assets.

Note
Make sure you read the important notes regarding concatenating media.

Use with: l_audio | l_video

Learn more: Concatenating media

See also: so (start offset)
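A minimal sketch of concatenation (the `demo` cloud name and the `outro` and `base_video` public IDs are placeholders), with fl_splice paired with fl_layer_apply in the closing component:

```python
# Splice one video onto the end of another. All IDs are placeholders.
base = "https://res.cloudinary.com/demo/video/upload"
components = [
    "l_video:outro",              # the video to concatenate
    "fl_splice,fl_layer_apply",   # splice instead of overlaying
]
url = f"{base}/{'/'.join(components)}/base_video.mp4"
```

Per the description above, adding so_0 to the closing component would splice the overlay at the beginning of the base video instead.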

streaming_attachment

  fl_streaming_attachment[:<filename>]

Like fl_attachment, this flag alters the regular video delivery URL behavior, causing the URL link to download the (transformed) video as an attachment rather than embedding it in your Web page or application. Additionally, if the video transformation is being requested and generated for the first time, this flag causes the video download to begin immediately, streaming it as a fragmented video file. (Most standard video players successfully play fragmented video files without issue.)

(In contrast, if the regular fl_attachment flag is used, then when a user requests the video transformation for the first time, the download will begin only after the complete transformed video has been generated.)

Note
HLS (.m3u8) and MPEG-DASH (.mpd) files are by nature non-streamable. If this flag is used with a video in one of those formats, it behaves identically to the regular fl_attachment flag.

See also: fl_attachment

strip_profile

  fl_strip_profile

Removes all ICC color profile data from the delivered image.

text_disallow_overflow

  fl_text_disallow_overflow

A qualifier used with text overlays that fails the transformation and returns a 400 (bad request) error if the text (in the requested size and font) exceeds the base image boundaries. This can be useful if the expected text of the overlay and/or the size of the base image isn't known in advance, for example with user-generated content. You can check for this error and if it occurs, let the user who supplied the text know that they should change the font, font size, or number of characters (or alternatively that they should provide a larger base image).

Use with: l_text

See also: fl_no_overflow

text_no_trim

  fl_text_no_trim

A qualifier used with text overlays that adds a small amount of padding around the text overlay string. Without this flag, text overlays are trimmed tightly to the text with no excess padding.

Use with: l_text

tiff8_lzw

  fl_tiff8_lzw

A qualifier that generates TIFF images in the TIFF8 format using LZW compression.

Use with: f_tiff (or when specifying tiff as the delivery URL file extension)

tiled

  fl_tiled

A qualifier that tiles the specified image overlay over the entire image. This can be useful for adding a watermark effect.

Use with: l_<image id>

Learn more: Automatic tiling

truncate_ts

  fl_truncate_ts

Truncates (trims) a video file based on the times defined in the video file's metadata (relevant only where the file metadata includes a directive to play only a section of the video).

waveform

  fl_waveform

Instead of delivering the audio or video file, generates and delivers a waveform image in the requested image format, based on the audio from the audio or video file. By default, the waveform color is white and the background is black. You can customize these using the co_<color> and b_<color value> parameters.

Optional qualifiers

b_<color value> | co_<color>

Learn more: Auto-generated waveform images

fn (custom function)

 fn_<function type>:<source>

Injects a custom function into the image transformation pipeline. You can use a remote function/lambda as your source, run WebAssembly functions from a compiled .wasm file stored in your Cloudinary product environment, deliver assets based on filters using tags and structured metadata, or filter assets returned when generating a client-side list.

Learn more: Custom functions

fps (FPS)

 fps_<frames per second>[-<maximum frames per second>]

Controls the FPS (Frames Per Second) of a video or animated image to ensure that the asset (even when optimized) is delivered with an expected FPS level (for video, this helps with sync to audio). Can also be specified as a range.
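For example (placeholder `demo` cloud name and `dog` public ID), constraining delivery to a frame rate between 20 and 30 frames per second using the range form:

```python
# Deliver the video with an FPS in the 20-30 range.
# "demo" and "dog" are placeholders.
base = "https://res.cloudinary.com/demo/video/upload"
url = f"{base}/fps_20-30/dog.mp4"
```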

g (gravity)

 

A qualifier that determines which part of an asset to focus on, and thus which part of the asset to keep, when any part of the asset is cropped. For overlays, this setting determines where to place the overlay.

Learn more: Control image gravity | Control video gravity

<compass position>

 g_<compass position>

A qualifier that defines a fixed location within an asset to focus on.

Use with: c_auto - image only | c_crop | c_fill | c_lfill | c_lpad | c_mpad | c_pad | c_thumb | l_<image id> | l_fetch | l_text | l_subtitles | l_video | u (underlay) | x, y (x & y coordinates)

<special position>

 g_<special position>

A qualifier that defines a special position within the asset to focus on.

Note
The only special position that is supported for animated images is custom. If other positions are specified in an animated image transformation, center gravity is applied.

Use with: c_auto | c_crop | c_fill | c_lfill | c_scale (for g_liquid only) | c_thumb | e_pixelate_region | l_<image id> | l_fetch | l_text | u (underlay) | x, y (x & y coordinates)

See also: fl_getinfo

<object>

 g_<object>

Requires the Cloudinary AI Content Analysis add-on.
A qualifier for cropping an image to automatically crop around objects without needing to specify dimensions or an aspect ratio.

Note
Object gravity is not supported for animated images. If g_<object> is used in an animated image transformation, center gravity is applied.

Use with: c_auto | c_crop | c_fill | c_lfill | c_thumb

Learn more: Cloudinary AI Content Analysis add-on

auto

 g_auto[:<algorithm>][:<focal gravity>][:<thumb aggressiveness>][:thirds_0]

A qualifier to automatically identify the most interesting regions in the asset, and include in the crop.

Notes
  • Automatic gravity is not supported for animated images. If g_auto is used in an animated image transformation, center gravity is applied, except when c_fill_pad is also specified, in which case an error is returned.
  • Any custom coordinates defined for a specific image will override the automatic cropping algorithm and only the custom coordinates will be used 'as is' for the gravity, unless you specify 'custom_no_override' or 'none' as the focal_gravity.

Use with: c_auto - image only | c_auto_pad - image only | c_crop - image only | c_fill | c_fill_pad | c_lfill - image only | c_thumb - image only

Learn more: Automatic image cropping | Automatic video cropping

See also: fl_getinfo
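A typical use (placeholder `demo` cloud name and `sample` public ID) is filling a square thumbnail while letting g_auto choose the most interesting region to keep:

```python
# Fill a 300x300 square, with g_auto selecting the crop region.
# "demo" and "sample" are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/c_fill,g_auto,h_300,w_300/sample.jpg"
```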

clipping_path

 g_clipping_path_!<clipping path name>!

A qualifier to specify a named clipping path in the image to focus on when cropping an image. Works on file formats that can contain clipping paths such as TIFF.

Note
Clipping paths work when the original image is 64 megapixels or less. Above that limit, the clipping paths are ignored.

Use with: c_auto | c_crop | c_fill | c_lfill | c_lpad | c_mpad | c_pad | c_thumb

region

 g_region_!<region name>!

A qualifier to specify a named custom region in the image to focus on.

Notes
  • You can set named custom regions using the regions parameter of the upload, explicit or update methods.
  • You can see the coordinates of named regions in images.

Use with: c_auto | c_crop | c_fill | c_lfill | c_lpad | c_mpad | c_pad | c_thumb

Learn more: Custom regions

track_person

 g_track_person[:obj_<object>][;position_<position>][;adaptivesize_<size>]

A qualifier to add an image or text layer that tracks the position of a person throughout a video. Can be used with fashion object detection to conditionally add the layer based on the presence of a specified object.

Notes
  • Only one tracked layer can be applied at a time.
  • The maximum video duration that tracked layers can be applied to is 3 minutes.
  • When requesting your video on the fly, you will receive a 423 response until the video has been processed. Once processed, subsequent transformations will be applied synchronously.
  • You can apply transformations to the layer, such as controlling duration, by adding those into the layer definition component (e.g. l_price_tag,du_3)

Use with: l_<image id> | l_fetch | l_text | u (underlay)

h (height)

 h_<height value>

A qualifier that determines the height of a transformed asset or an overlay.

Use with: c (crop/resize) | l (layer) | e_blur_region | e_pixelate_region | u (underlay)

Learn more: Resizing and cropping images | Placing layers on images | Placing layers on videos

See also: w (width) | ar (aspect ratio) | Arithmetic expressions

if (if condition)

 if_[<directive>][_<asset characteristic>_<operator>_<asset characteristic value>]

Applies a transformation only if a specified condition is met.

Learn more: Conditional image transformations | Conditional video transformations

See also: Arithmetic expressions
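A minimal sketch of an inline condition (placeholder `demo` cloud name and `sample` public ID): only downscale when the original image is wider than 1000 pixels.

```python
# Conditional transformation: scale to 500px wide only if the original
# width is greater than 1000px. "demo" and "sample" are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/if_w_gt_1000,c_scale,w_500/sample.jpg"
```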

ki (keyframe interval)

 ki_<interval value>

Explicitly sets the keyframe interval of the delivered video.

l (layer)

 

Applies a layer over the base asset, also known as an overlay. This can be an image or video overlay, a text overlay, subtitles for a video or a 3D lookup table for images or videos.

You will often want to adjust the dimension and position of the overlay. You do this by using the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.

In addition to these common overlay transformations, you can apply nearly any supported image or video transformation to an image or video overlay, including applying chained transformations, by using the fl_layer_apply flag to indicate the end of the layer transformations.

See also: u (underlay)

<image id>

 l_<image id>

Overlays an image on the base image or video.

Adjust the dimensions and position of the overlay with the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.

Optional qualifiers

| bo (border) | du (duration) | e_anti_removal | e_multiply | e_overlay | e_screen | eo (end offset) | fl_no_overflow | fl_region_relative | g_<compass position> | g_<special position> | h (height) | so (start offset) | w (width) | x, y (x & y coordinates)

Learn more: Adding image overlays to images | Adding image overlays to videos

audio

 l_audio:<audio id>

Overlays the specified audio track on a base video or another audio track. If you specify a video to overlay, only the audio track will be applied. You can use this to mix multiple audio tracks together or add additional audio tracks when using automatic streaming profile selection.

Optional qualifiers

du (duration) | eo (end offset) | fl_alternate| fl_layer_apply | fl_splice | so (start offset)

Learn more: Adding audio overlays | Mixing audio tracks | Defining alternate audio tracks

fetch

 l_fetch:<base64 encoded URL>

Overlays a remote image onto an image or video.

Adjust the dimensions and position of the overlay with the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.

Optional qualifiers

bo (border) | du (duration) | e_anti_removal | e_multiply | e_overlay | e_screen | eo (end offset) | fl_no_overflow | fl_region_relative | g_<compass position> | g_<special position> | h (height) | so (start offset) | w (width) | x, y (x & y coordinates)

Learn more: Adding image overlays

lut

 l_lut:<lut public id>

Applies a 3D lookup table (3D LUT) to an image or video. LUTs are used to map one color space to another. The LUT file must first be uploaded to Cloudinary as a raw file.

Learn more: Applying 3D LUTs to images | Applying 3D LUTs to videos

subtitles

 l_subtitles:<subtitle id>

Embed subtitle texts from an SRT or WebVTT file into a video. The subtitle file must first be uploaded as a raw file.

You can optionally set the font and font-size (as optional values of your l_subtitles parameter) as well as subtitle text color and either subtitle background color or subtitle outline color (using the co and b/bo optional qualifiers). By default, the texts are added in Arial, size 15, with white text and black border.

Note
Subtitles can also be added to videos delivered via the HLS adaptive bitrate streaming protocol using the streaming profile. This method enables support for multiple tracks as part of the manifest file. For more information, see adding subtitles to HLS videos.

Optional qualifiers

b_<color value> | bo (border) | co (color) | g_<compass position>

Learn more: Adding subtitles

text

 l_text:<text style>:<text string>

Adds a text overlay to an image or video.

Adjust the dimensions and position of the overlay with the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.

Optional qualifiers

b_<color value> | bo (border) | c_fit | co (color) | e_anti_removal | e_multiply | e_overlay | e_screen | fl_no_overflow | fl_text_disallow_overflow | fl_text_no_trim | g_<compass position> | g_<special position> | h (height) | w (width) | x, y (x & y coordinates)

Learn more: Adding text overlays to images | Adding text overlays to videos | Adding auto-line breaks
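For example (placeholder `demo` cloud name and `sample` public ID), adding a white 40px Arial caption near the bottom edge; note that the text string itself must be URL-encoded:

```python
# White Arial 40px text overlay placed at the bottom (south) of the image.
# "demo" and "sample" are placeholders; the space is URL-encoded as %20.
base = "https://res.cloudinary.com/demo/image/upload"
text_layer = "l_text:Arial_40:Hello%20World,co_white,g_south,y_20"
url = f"{base}/{text_layer}/sample.jpg"
```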

video

 l_video:<video id>

Overlays the specified video on a base video.

Adjust the dimensions and position of the overlay with the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the overlay transformation.

Optional qualifiers

bo (border) | du (duration) | e_transition | eo (end offset) | fl_layer_apply | fl_splice | g_<compass position> | h (height) | so (start offset) | w (width) | x, y (x & y coordinates)

Learn more: Adding video overlays

o (opacity)

 o_<opacity level>

Adjusts the opacity of an asset and makes it semi-transparent.

Note
If the image format does not support transparency, the background color is used instead as a base (white by default). The color can be changed with the background parameter.

See also: Arithmetic expressions

p (prefix)

 p_<prefix value>

Adds a prefix to all style class names in the CSS that is created for a sprite.

Learn more: Applying transformations to sprites

pg (page or file layer)

 

Delivers specified pages, frames, or layers of a multi-page/frame/layer file, such as a PDF, animated image, TIFF, or PSD.

Learn more: Paged and layered media | Animated images

Note
When using an SDK that uses action-based syntax, the action that exposes this method is extract.

<number>

  pg_<number>

Delivers a page or layer of a multi-page or multi-layer file (PDF, TIFF, PSD), or a specified frame of an animated image.

Optional qualifier

fl_clip

<range>

  pg_<range>

Delivers the specified range of pages or layers from a multi-page or multi-layer file (PDF, TIFF, PSD).

embedded

  pg_embedded:<index>

Extracts and delivers an object embedded in a PSD file, by index.

  pg_embedded:name:<layer name>

Extracts and delivers an object embedded in a PSD file, by layer name.

name

  pg_name:<layer name(s)>

Delivers one or more named layers from a PSD file.

q (quality)

 q_<quality value>

Controls the quality of the delivered asset. Reducing the quality is a trade-off between visual quality and file size.

<quality level>

 q_<quality level>[:qmax_<quant value>][:<chroma>]

Sets the quality to the specified level.

Caution
A quality level of 100 can increase the file size significantly, particularly for video, as it is delivered lossless and uncompressed. As a result, a video with a quality level of 100 isn't playable on every browser.

See also: Arithmetic expressions

auto

 q_auto[:<quality type>][:sensitive]

Delivers an asset with an automatically determined level of quality.

Learn more: Automatic quality and encoding settings

Related flag: fl_any_format

r (round corners)

 

Rounds the corners of an image or video.

Learn More: Rounding image corners | Rounding video corners

<radius>

 r_<pixel value>

Rounds all four corners of an asset by the same pixel radius.

See also: Arithmetic expressions

<selected corners>

 r_<value1>[:<value2>][:<value3>][:<value4>]

Rounds selected corners of an image, based on the number of values specified, similar to the border-radius CSS property.
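A minimal sketch (placeholder `demo` cloud name and `sample` public ID), assuming the values follow the CSS border-radius ordering convention (top-left first, then clockwise):

```python
# Round three corners with different radii, leaving the top-right corner
# square. Value ordering is assumed to mirror CSS border-radius.
# "demo" and "sample" are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/r_20:0:40:40/sample.png"
```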

max

 r_max

Delivers the asset as a rounded circle or oval shape.

  • If the input asset has a 1:1 aspect ratio, it is delivered as a circle.
  • If it is rectangular, it is delivered as an oval.

so (start offset)

 so_<time value>

Specifies the first second to include in the video (or audio clip). This parameter is often used in conjunction with the eo (end offset) and/or du (duration) parameters.

  • Can be used independently to trim the video (or audio clip) by specifying the first second of the video to include. Everything prior to that second is trimmed off.
  • Can be used as a qualifier to control the timing of a corresponding transformation.
  • Can be used to indicate the frame of the video to use for generating video thumbnails.

As a qualifier, use with: e_boomerang | l_audio | l_<image id> | l_video

Learn more: Trimming videos | Adding video overlays | Adding audio overlays to videos | Adding image overlays to videos | Video thumbnails

See also: fl_splice
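For example (placeholder `demo` cloud name and `dog` public ID), trimming a video by combining a start offset with an end offset to keep only seconds 2 through 8:

```python
# Trim the video to the segment between seconds 2 and 8.
# "demo" and "dog" are placeholders.
base = "https://res.cloudinary.com/demo/video/upload"
url = f"{base}/so_2,eo_8/dog.mp4"
```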

sp (streaming profile)

 

Determines the streaming profile to apply when delivering a video using adaptive bitrate streaming.

auto

 sp_auto[:maxres_<maximum resolution>][;subtitles_<subtitles config>]

Lets Cloudinary choose the best streaming profile on the fly for both HLS and DASH. You can limit the resolution at which to stream the video by specifying the maximum resolution.

Learn more: Automatic streaming profile selection

<profile name>

 sp_<profile name>[:subtitles_<subtitles config>]

Specifies the streaming profile to apply when delivering a video using HLS or MPEG-DASH adaptive bitrate streaming. Optionally allows for defining subtitles tracks for HLS, which will be defined as part of the manifest file.

Optional qualifier

fl_hlsv3

Learn more: Adaptive bitrate streaming | Pre-defined streaming profiles | Create new custom streaming profiles

t (named transformation)

 t_<transformation name>

Applies a pre-defined named transformation to an image or video.

Learn more: Named transformations | Create a named transformation

u (underlay)

 u_<image id>

Applies an image layer under the base image or video.

You can adjust the dimensions and position of the underlay using the w (width), h (height), x, y (x & y coordinates) and g (gravity) parameters with the underlay transformation.

In addition to these common underlay transformations, you can apply nearly any supported image transformation to an image underlay, including applying chained transformations, by using the fl_layer_apply flag to indicate the end of the layer transformations.

Optional qualifiers

e_anti_removal | e_multiply | e_overlay | e_screen | e_mask | fl_region_relative | fl_relative | g_<compass position> | g_<special position> | h (height) | w (width) | x, y (x & y coordinates)

Learn more: Adding image underlays | Using the fl_layer_apply flag | Blending and masking layers

See also: l (layer)

vc (video codec)

 

Sets the video codec to use when encoding a video.

Learn more: Video codec settings

<codec value>

 vc_<codec value>[:<profile>[:<level>][:bframes_<bframes>]]

Sets a specific video codec to use to encode a video. For h264, optionally include the desired profile and level.

auto

 vc_auto

Normalizes and optimizes a video by automatically selecting the most appropriate codec based on the output format.

The settings for each format are:

Format | Video codec | Profile  | Quality   | Audio codec | Audio frequency
MP4    | h264        | high [1] | auto:good | aac         | 22050
WebM   | vp9 [2]     | N/A      | auto:good | vorbis      | 22050
OGV    | theora      | N/A      | auto:good | vorbis      | 22050

Footnotes
  [1] For older Cloudinary accounts the default is baseline. Submit a support request to change this default.
  [2] For older Cloudinary accounts the default is vp8. Submit a support request to change this default.

Optional qualifiers

af_iaf

none

 vc_none

Removes the video codec to leave just the audio, useful when you want to extract the audio from a video.

vs (video sampling)

 vs_<sampling rate>

Sets the sampling rate to use when converting videos or animated images to animated GIF or WebP format. If not specified, the resulting GIF or WebP samples the whole video/animated image (up to 400 frames, at up to 10 frames per second). By default, the duration of the resulting animated image is the same as the duration of the input, no matter how many frames are sampled from the original video/animated image (use the dl (delay) parameter to adjust the amount of time between frames).

Related flag: fl_animated

Learn more: Converting videos to animated images

w (width)

 

A qualifier that sets the desired width of an asset using a specified value, or automatically based on the available width.

<width value>

 w_<width value>

A qualifier that determines the width of a transformed asset or an overlay.

Use with: c (crop/resize) | l (layer) | e_blur_region | e_pixelate_region | u (underlay)

Learn more: Resizing and cropping images | Placing layers on images | Placing layers on videos

See also: h (height) | ar (aspect ratio) | Arithmetic expressions

auto

A qualifier that determines how to automatically resize an image to match the width available for the image in a responsive layout. The parameter can be further customized by overriding the default rounding step or by using automatic breakpoints.

 w_auto[:<rounding step>][:<fallback width>]

The width is rounded up to the nearest rounding step (every 100 pixels by default) in order to avoid creating extra derived images and consuming too many extra transformations. Only works for certain browsers and when Client-Hints are enabled.

Use with: c_limit

Learn more: Automatic image width

 w_auto:breakpoints[_<breakpoint settings>][:<fallback width>][:json]

The width is rounded up to the nearest breakpoint, where the optimal breakpoints are calculated using either the default breakpoint request settings or using the given settings.

Use with: c_limit

Learn more: Responsive breakpoint request settings

x, y (x & y coordinates)

 x/y_<coordinate value>

A qualifier that adjusts the starting location or offset of the corresponding transformation action.

The effect of the x & y coordinates depends on the corresponding action:

  • c_crop: The top-left coordinates of the crop (positive x = right, positive y = down).
  • e_blur_region: The top-left coordinates of the blurred region (positive x = right, positive y = down).
  • e_displace: See Displacement maps.
  • e_gradient_fade: Positive values fade from the top (y) or left (x). Negative values fade from the bottom (y) or right (x). Values between 0.0 and 1.0 indicate a percentage. Integer values indicate pixels.
  • e_pixelate_region: The top-left coordinates of the pixelated region (positive x = right, positive y = down).
  • e_shadow: The offset of the shadow relative to the image in pixels. Positive values offset the shadow right (x) or down (y). Negative values offset the shadow left (x) or up (y).
  • g_<compass position>: Offsets the compass position, e.g. when positioning overlays:
    • center, north_west, north, west: positive x = right, positive y = down; negative x = left, negative y = up
    • north_east, east: positive x = left, positive y = down; negative x = right, negative y = up
    • south_east: positive x = left, positive y = up; negative x = right, negative y = down
    • south, south_west: positive x = right, positive y = up; negative x = left, negative y = down
    Values between 0.0 and 1.0 indicate a percentage. Integer values indicate pixels.
  • g_xy_center: The coordinates of the center of gravity (positive x = right, positive y = down).
  • l_layer / u (underlay): The offset of the layer according to the compass position (see above). If no compass position is specified, center is assumed.

Use with: c_crop | e_blur_region | e_displace | e_gradient_fade | e_pixelate_region | e_shadow | g_<compass position> | g_<special position> | l_layer | u (underlay)

Learn more: Controlling gravity | Placing overlays

See also: Arithmetic expressions
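For example (placeholder `demo` cloud name and `sample` public ID), extracting a 300x200 region whose top-left corner sits at (50, 80) in the original image:

```python
# Crop a 300x200 region starting 50px from the left and 80px from the top.
# "demo" and "sample" are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
url = f"{base}/c_crop,w_300,h_200,x_50,y_80/sample.jpg"
```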

z (zoom)

 z_<zoom amount>

A qualifier that controls how close to crop to the detected coordinates when using face-detection, custom-coordinate, or object-specific gravity (when using the Cloudinary AI Content Analysis add-on).

Use with: c_auto | c_crop | c_thumb

Note
  • When used with thumb resize mode, the detected coordinates are scaled to completely fill the requested dimensions and then cropped as needed.
  • When used with the crop resize mode, the zoom qualifier has an impact only if resize dimensions (height and/or width) are not specified. In this case, the crop dimensions are determined by the detected coordinates and then adjusted based on the requested zoom.

Learn more: Creating image thumbnails

See also: Arithmetic expressions

$ (variable)

 $<variable name>[_<variable value>]

Defines and assigns values to user defined variables, so you can use the variables as values for other parameters.

Learn more: User-defined variables in image transformations | User-defined variables in video transformations

See also: Arithmetic expressions
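A minimal sketch (placeholder `demo` cloud name and `sample` public ID): a variable is assigned in its own URL component, then referenced as the value of a later parameter.

```python
# Define $w once, then reference it in a later transformation component.
# "demo" and "sample" are placeholders.
base = "https://res.cloudinary.com/demo/image/upload"
components = ["$w_200", "c_scale,w_$w"]
url = f"{base}/{'/'.join(components)}/sample.jpg"
```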
