> ## Documentation Index
> Fetch the complete documentation index at: https://cloudinary.com/documentation/llms.txt
> Use this file to discover all available pages before exploring further.

# Cloudinary AI Content Analysis


[linkAssets]: dam_add_ons#cloudinary_ai_content_analysis

[Cloudinary](https://cloudinary.com) is a cloud-based service that provides solutions for image and video management. These include server or client-side upload, on-the-fly image and video transformations, fast CDN delivery, and a variety of asset management options.

The Cloudinary **AI Content Analysis add-on** uses AI-based object detection and content-aware algorithms to provide the following functionality:

* [Object-aware cropping](#object_aware_cropping): Ensures that your image crops keep the specific objects that matter to you, even when you significantly modify the aspect ratio.
  ![Crop to the sink](https://res.cloudinary.com/demo/image/upload/w_600,ar_1,c_thumb,g_auto:sink/docs/kitchen1.jpg "with_image:false")

```nodejs
cloudinary.image("docs/kitchen1.jpg", {width: 600, aspect_ratio: "1", gravity: "auto:sink", crop: "thumb"})
```

```react
new CloudinaryImage("docs/kitchen1.jpg").resize(
  thumbnail()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

```vue
new CloudinaryImage("docs/kitchen1.jpg").resize(
  thumbnail()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

```angular
new CloudinaryImage("docs/kitchen1.jpg").resize(
  thumbnail()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

```js
new CloudinaryImage("docs/kitchen1.jpg").resize(
  thumbnail()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

```python
CloudinaryImage("docs/kitchen1.jpg").image(width=600, aspect_ratio="1", gravity="auto:sink", crop="thumb")
```

```php
(new ImageTag('docs/kitchen1.jpg'))
	->resize(Resize::thumbnail()->width(600)
->aspectRatio(1.0)
	->gravity(
	Gravity::autoGravity()
	->autoFocus(
	AutoFocus::focusOn(
	FocusOn::sink()))
	)
	);
```

```java
cloudinary.url().transformation(new Transformation().width(600).aspectRatio("1").gravity("auto:sink").crop("thumb")).imageTag("docs/kitchen1.jpg");
```

```ruby
cl_image_tag("docs/kitchen1.jpg", width: 600, aspect_ratio: "1", gravity: "auto:sink", crop: "thumb")
```

```csharp
cloudinary.Api.UrlImgUp.Transform(new Transformation().Width(600).AspectRatio("1").Gravity("auto:sink").Crop("thumb")).BuildImageTag("docs/kitchen1.jpg")
```

```dart
cloudinary.image('docs/kitchen1.jpg').transformation(Transformation()
	.resize(Resize.thumbnail().width(600)
.aspectRatio('1.0')
	.gravity(
	Gravity.autoGravity()
	.autoFocus(
	AutoFocus.focusOn(
	FocusOn.sink()))
	)
	));
```

```swift
imageView.cldSetImage(cloudinary.createUrl().setTransformation(CLDTransformation().setWidth(600).setAspectRatio("1").setGravity("auto:sink").setCrop("thumb")).generate("docs/kitchen1.jpg")!, cloudinary: cloudinary)
```

```android
MediaManager.get().url().transformation(new Transformation().width(600).aspectRatio("1").gravity("auto:sink").crop("thumb")).generate("docs/kitchen1.jpg");
```

```flutter
cloudinary.image('docs/kitchen1.jpg').transformation(Transformation()
	.resize(Resize.thumbnail().width(600)
.aspectRatio('1.0')
	.gravity(
	Gravity.autoGravity()
	.autoFocus(
	AutoFocus.focusOn(
	FocusOn.sink()))
	)
	));
```

```kotlin
cloudinary.image {
	publicId("docs/kitchen1.jpg")
	 resize(Resize.thumbnail() { width(600)
 aspectRatio(1.0F)
	 gravity(
	Gravity.autoGravity() {
	 autoFocus(
	AutoFocus.focusOn(
	FocusOn.sink()))
	 })
	 }) 
}.generate()
```

```jquery
$.cloudinary.image("docs/kitchen1.jpg", {width: 600, aspect_ratio: "1", gravity: "auto:sink", crop: "thumb"})
```

```react_native
new CloudinaryImage("docs/kitchen1.jpg").resize(
  thumbnail()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

    
    
    

    
    

    > **INFO**:
>
> By default, delivery URLs that use this add-on either need to be [signed](#signed_urls) or [eagerly generated](eager_and_incoming_transformations#eager_transformations). You can optionally remove this requirement by selecting this add-on in the **Allow unsigned add-on transformations** section of the **Security** page in the Console Settings.
> (For simplicity, most of the examples on this page show eagerly generated URLs without signatures.)
* [Automatic image tagging](#automatic_image_tagging): Adds tags to your images based on objects or abstract concepts detected by the content-aware detection models specified on upload, or when invoked on images already stored in your product environment. 

    ```multi
    |ruby
    Cloudinary::Uploader.upload("winter_fashion.jpg", 
      detection: "cld-fashion", auto_tagging: 0.6)

    |php_2
    $cloudinary->uploadApi()->upload("winter_fashion.jpg", 
      ["detection" => "cld-fashion", "auto_tagging" => 0.6]);

    |python
    cloudinary.uploader.upload("winter_fashion.jpg",
      detection = "cld-fashion", auto_tagging = 0.6)

    |nodejs
    cloudinary.v2.uploader
    .upload("winter_fashion.jpg", 
      { detection: "cld-fashion", 
        auto_tagging: 0.6 })
    .then(result=>console.log(result)); 

    |java
    cloudinary.uploader().upload("winter_fashion.jpg", ObjectUtils.asMap(
      "detection", "cld-fashion", "auto_tagging", "0.6"));

    |csharp
    var uploadParams = new ImageUploadParams() 
    {
      File = new FileDescription(@"winter_fashion.jpg"),
      Detection = "cld-fashion",
      AutoTagging = 0.6
    };
    var uploadResult = cloudinary.Upload(uploadParams);  

    |go
    resp, err := cld.Upload.Upload(ctx, "winter_fashion.jpg", uploader.UploadParams{
        Detection:   "cld-fashion",
        AutoTagging: 0.6})

    |android
    MediaManager.get().upload("winter_fashion.jpg")
      .option("detection", "cld-fashion")
      .option("auto_tagging", "0.6").dispatch();

    |swift
    let params = CLDUploadRequestParams()
      .setDetection("cld-fashion")
      .setAutoTagging(0.6)
    var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
    params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
    let request = cloudinary.createUploader().signedUpload(
      url: "winter_fashion.jpg", params: params) 

    |cli
    cld uploader upload "winter_fashion.jpg" detection="cld-fashion" auto_tagging=0.6

    |curl
    curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/winter_fashion.jpg' -F 'detection=cld-fashion' -F 'auto_tagging=0.6' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
    ```

![Woman in winter with bounding boxes](https://res.cloudinary.com/demo/image/upload/docs/winter-fashion-with-tags.jpg "with_code: false, with_url: false, thumb: c_scale,w_300")
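
As a sketch of what the `auto_tagging` threshold does (this is illustrative JavaScript, not Cloudinary SDK code, and the sample data is hypothetical): a detected label becomes a tag only when its confidence meets the threshold you pass, e.g. `0.6` in the examples above. The helper below mirrors that filtering client-side against the detection structure the add-on returns:

```js
// Illustrative only: keep tags whose detection confidence meets the
// auto_tagging threshold, mirroring what auto_tagging: 0.6 does server-side.
function tagsAboveThreshold(detectionData, threshold) {
  const tags = [];
  for (const model of Object.values(detectionData)) {
    for (const [tag, instances] of Object.entries(model.tags || {})) {
      if (instances.some((i) => i.confidence >= threshold)) {
        tags.push(tag);
      }
    }
  }
  return tags;
}

// Hypothetical sample detection data for a fashion image:
const data = {
  "cld-fashion": {
    tags: {
      coat: [{ confidence: 0.91 }],
      scarf: [{ confidence: 0.45 }],
    },
  },
};
console.log(tagsAboveThreshold(data, 0.6)); // [ 'coat' ]
```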

* [Image quality analysis](#image_quality_analysis): Analyzes the quality of an image.

    ```multi
    |ruby
    Cloudinary::Uploader.upload("winter_fashion.jpg", 
      detection: "iqa")

    |php_2
    $cloudinary->uploadApi()->upload("winter_fashion.jpg", 
      ["detection" => "iqa"]);

    |python
    cloudinary.uploader.upload("winter_fashion.jpg",
      detection = "iqa")

    |nodejs
    cloudinary.v2.uploader
    .upload("winter_fashion.jpg", 
      { detection: "iqa" })
    .then(result=>console.log(result)); 

    |java
    cloudinary.uploader().upload("winter_fashion.jpg", ObjectUtils.asMap(
      "detection", "iqa"));

    |csharp
    var uploadParams = new ImageUploadParams() 
    {
      File = new FileDescription(@"winter_fashion.jpg"),
      Detection = "iqa"
    };
    var uploadResult = cloudinary.Upload(uploadParams); 

    |go
    resp, err := cld.Upload.Upload(ctx, "winter_fashion.jpg", uploader.UploadParams{
        Detection: "iqa"})

    |android
    MediaManager.get().upload("winter_fashion.jpg")
      .option("detection", "iqa").dispatch();

    |swift
    let params = CLDUploadRequestParams().setDetection("iqa")
    var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
    params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
    let request = cloudinary.createUploader().signedUpload(
      url: "winter_fashion.jpg", params: params) 

    |cli
    cld uploader upload "winter_fashion.jpg" detection="iqa"

    |curl
    curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/winter_fashion.jpg' -F 'detection=iqa' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
    ```

    The response includes the `iqa-analysis` field:

    ```json
      "info": {
        "detection": {
          "object_detection": {
            "status": "complete",
            "data": {
              "iqa": {
                "model_name": "iqa",
                "model_version": 1,
                "schema_version": 1,
                "tags": {
                  "iqa-analysis": [
                    {
                      "attributes": {
                        "quality": "medium",
                        "score": 0.4521875
                      },
                      "categories": [],
                      "confidence": 1.0
                    }
                  ]
                }
              }
            }
          }
        }
      },
    ```
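
    If you need the quality verdict programmatically, you can read it straight off the upload response. This small helper (an illustrative sketch, not part of any Cloudinary SDK) extracts the label and score from the structure shown above:

    ```js
    // Illustrative only: pull the IQA quality label and score out of the
    // upload response, following the JSON structure in the example above.
    function getIqaResult(uploadResponse) {
      const entry =
        uploadResponse.info.detection.object_detection.data.iqa
          .tags["iqa-analysis"][0];
      return { quality: entry.attributes.quality, score: entry.attributes.score };
    }

    // The response fragment from the example above:
    const response = {
      info: {
        detection: {
          object_detection: {
            status: "complete",
            data: {
              iqa: {
                tags: {
                  "iqa-analysis": [
                    {
                      attributes: { quality: "medium", score: 0.4521875 },
                      categories: [],
                      confidence: 1.0,
                    },
                  ],
                },
              },
            },
          },
        },
      },
    };
    console.log(getIqaResult(response)); // { quality: 'medium', score: 0.4521875 }
    ```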
* [Watermark detection](#watermark_detection): Detects banners and watermarks in images.

    ```multi
    |ruby
    Cloudinary::Uploader.upload("roses-ban.jpg", 
      detection: "watermark-detection")

    |php_2
    $cloudinary->uploadApi()->upload("roses-ban.jpg", 
      ["detection" => "watermark-detection"]);

    |python
    cloudinary.uploader.upload("roses-ban.jpg",
      detection = "watermark-detection")

    |nodejs
    cloudinary.v2.uploader
    .upload("roses-ban.jpg", 
      { detection: "watermark-detection" })
    .then(result=>console.log(result)); 

    |java
    cloudinary.uploader().upload("roses-ban.jpg", ObjectUtils.asMap(
      "detection", "watermark-detection"));

    |csharp
    var uploadParams = new ImageUploadParams() 
    {
      File = new FileDescription(@"roses-ban.jpg"),
      Detection = "watermark-detection"
    };
    var uploadResult = cloudinary.Upload(uploadParams); 

    |go
    resp, err := cld.Upload.Upload(ctx, "roses-ban.jpg", uploader.UploadParams{
        Detection: "watermark-detection"})

    |android
    MediaManager.get().upload("roses-ban.jpg")
      .option("detection", "watermark-detection").dispatch();

    |swift
    let params = CLDUploadRequestParams().setDetection("watermark-detection")
    var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
    params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
    let request = cloudinary.createUploader().signedUpload(
      url: "roses-ban.jpg", params: params) 

    |cli
    cld uploader upload "roses-ban.jpg" detection="watermark-detection"

    |curl
    curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/roses-ban.jpg' -F 'detection=watermark-detection' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
    ```

    
    
    

    
    
    *Example results: banner detected; watermark detected.*

* [AI-based image captioning](#ai_based_image_captioning): Analyzes an image and suggests a caption appropriate to its contents.

    ```multi
    |ruby
    Cloudinary::Uploader.upload("kids_sport.jpg", 
      detection: "captioning")

    |php_2
    $cloudinary->uploadApi()->upload("kids_sport.jpg", 
      ["detection" => "captioning"]);

    |python
    cloudinary.uploader.upload("kids_sport.jpg",
      detection = "captioning")

    |nodejs
    cloudinary.v2.uploader
    .upload("kids_sport.jpg", 
      { detection: "captioning" })
    .then(result=>console.log(result)); 

    |java
    cloudinary.uploader().upload("kids_sport.jpg", ObjectUtils.asMap(
      "detection", "captioning"));

    |csharp
    var uploadParams = new ImageUploadParams() 
    {
      File = new FileDescription(@"kids_sport.jpg"),
      Detection = "captioning"
    };
    var uploadResult = cloudinary.Upload(uploadParams); 

    |go
    resp, err := cld.Upload.Upload(ctx, "kids_sport.jpg", uploader.UploadParams{
        Detection: "captioning"})

    |android
    MediaManager.get().upload("kids_sport.jpg")
      .option("detection", "captioning").dispatch();

    |swift
    let params = CLDUploadRequestParams().setDetection("captioning")
    var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
    params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
    let request = cloudinary.createUploader().signedUpload(
      url: "kids_sport.jpg", params: params) 

    |cli
    cld uploader upload "kids_sport.jpg" detection="captioning"

    |curl
    curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/kids_sport.jpg' -F 'detection=captioning' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
    ```

  ![A group of young children playing soccer on a soccer field with a goal post in the foreground and a goal post in the background](https://res.cloudinary.com/demo/image/upload/cap_sport.jpg "with_url:false, with_code:false, thumb: c_scale,w_300, caption: A group of young children playing soccer on a soccer field with a goal post in the foreground and a goal post in the background")

> **TIP**: This page describes how to use the Cloudinary AI Content Analysis add-on programmatically, but you can also use the add-on for DAM use cases in Assets. For more information, see [Cloudinary AI Content Analysis][linkAssets] in the Assets user guide.

## Getting started

Before you can use the Cloudinary AI Content Analysis add-on:

* You must have a Cloudinary account. If you don't already have one, you can [sign up](https://cloudinary.com/users/register_free) for a free account. 

* Register for the add-on: make sure you're logged in to your account and then go to the [Add-ons](https://console.cloudinary.com/app/settings/addons) page. For more information about add-on registrations, see [Registering for add-ons](cloudinary_add_ons#registering_for_add_ons).

* Keep in mind that many of the examples on this page use our SDKs. For SDK installation and configuration details, see the relevant [SDK](cloudinary_sdks) guide.
  
* If you're new to Cloudinary, you may want to take a look at the [Developer Kickstart](dev_kickstart) for a hands-on, step-by-step introduction to a variety of features.

> **TIP**: You can also request analysis using the [Analyze API](analyze_api_guide) (Beta), which also accepts external assets for analysis.

## Supported content-aware detection models

The Cloudinary AI Content Analysis add-on supports a number of built-in content-aware detection models, each supporting a specific set of categories and objects. You can specify which version of each model to invoke for each use of the add-on.  

Cloudinary currently supports the following models:

Model | Description
--|--
coco | The [Common Objects in Context](https://cocodataset.org/) model contains just 80 common objects. 
cld-fashion | Cloudinary's fashion model is specifically dedicated to items of clothing. Used with automatic image tagging, the response includes attributes of the clothing identified, for example whether the garment contains pockets, its material and the fastenings used.
lvis | The [Large Vocabulary Instance Segmentation](https://www.lvisdataset.org/) model contains thousands of general objects. 
unidet | The [UniDet](https://github.com/xingyizhou/UniDet) model is a unified model, combining a number of object models, including [Objects365](https://www.objects365.org/overview.html), which focuses on diverse objects in the wild.
human-anatomy | Cloudinary's human anatomy model identifies parts of the human body in an image. It works best when the majority of a human body is detected in the image.
cld-text | Cloudinary's text model tells you if your image includes text, and where it's located. Used with automatic image tagging, you can then search for images that contain blocks of text. Used with object-aware cropping, you can choose to keep only the text part, or specify a crop that avoids the text.
shop-classifier | Cloudinary's shop classifier model detects if the image is a product image taken in a studio, or if it's a natural image.
image-type | Cloudinary's image type model detects generic properties about a photographic image, for example, photographic style, setting and time of the photo.
captioning | Cloudinary's captioning model is used to describe the contents of an image. See [AI-based image captioning](#ai_based_image_captioning).
watermark-detection | Cloudinary's watermark detection model identifies if the image contains different types of watermark.  See [Watermark detection](#watermark_detection).
iqa | The Image Quality Analysis (IQA) model can predict the quality of a given image on a scale from 0 to 1 and provides a general quality estimation, categorized as 'low', 'medium', or 'high'. See [Image quality analysis](#image_quality_analysis).

### Model capabilities

This table shows the capabilities of each supported version of each model:

* **Default version** is the version of the model that is invoked if left unspecified.
* **Version** indicates support for a particular version of the model; different versions have different accuracies.
* **Default confidence** shows the confidence level used when [auto_tagging](#adding_tags_to_images) is set to `default`.
* **Tag** indicates support for returning tags. This is a required capability for [automatic image tagging](#automatic_image_tagging).
* **Confidence** indicates support for returning confidence levels.
* **Bounding Box** indicates support for returning bounding boxes. This is a required capability for [object-aware cropping](#object_aware_cropping).
* **Attributes** indicates support for returning attributes for each tag in a (key,value) list.

> **NOTES**:
>
> * If you are using our [Asia Pacific data center](admin_api#alternative_data_centers_and_endpoints_premium_feature), currently you can use only the COCO and Open Images models.
> * If you have difficulty accessing any of the models, please [contact support](https://support.cloudinary.com/hc/en-us/requests/new).

### Supported objects and categories

Start typing the name of an object or category to see if it's supported by one of the built-in models.


* For [object-aware cropping](#object_aware_cropping):
  * The **Full URL Syntax** column shows the syntax to use to detect a specific object or category in a particular version of a model (e.g. `coco_v2_tie`). You can also omit the version (e.g. `coco_tie`), or both the model and version (e.g. `tie`).

* For [automatic image tagging](#automatic_image_tagging): 
  * You can specify the model and version (e.g. `coco_v2`), or only the model (e.g. `coco`).

* For [dynamic video overlays](video_layers#dynamic_video_overlays):
  * Specify the object from the cld-fashion model (e.g. `g_track_person:obj_hat`).

### Private models

If you have your own content-aware detection models that you would like to use, these can be integrated as private models that work only on your product environment. This service is provided for customers on [Enterprise](https://cloudinary.com/pricing) plans through [Professional Services](https://cloudinary.com/pricing/customer-success#professional-services). [Contact our Enterprise support and sales team](https://cloudinary.com/contact?plan=enterprise) or your CSM to find out more. 

## Object detection demo

This demo lets you choose one of the content-aware detection models, and shows up to twenty objects that are detected by that model in an image of your choice. 

> **TIP**: To see a full list of all the detected objects and other information returned by the model, expand the JSON that appears under the image after upload.

[Automatic image tagging](cloudinary_ai_content_analysis_addon#automatic_image_tagging) is requested on upload, and the response provides the necessary information to overlay bounding boxes around the detected objects, together with the confidence level.

1. Select a model: lvis, coco, unidet, cld-fashion, cld-text, or human-anatomy.

2. Upload a new image, or use the current image.

3. See the detected objects. Click the image to open it full size in a new tab.

> **Learn more**:
>
> * [Read this blog](https://cloudinary.com/blog/unboxing_images_to_discover_their_content) to discover all the Cloudinary features in this demo.

## Object-aware cropping

When object-aware cropping is invoked, Cloudinary applies advanced AI-based object detection algorithms on the fly during the crop process. You can either use it in conjunction with auto-gravity to give higher priority to the objects you care about, or directly specify that the crop should be exactly based on the detected coordinates of the specified objects.

Watch this demo to see how the same image is cropped according to the parameters specified in the URL:

### Applying object-aware cropping

After registering for the Cloudinary AI Content Analysis add-on, you can apply it in one of two ways:

* **Automatic gravity with a high weighting towards a specified object**
This variant of [auto-gravity cropping](resizing_and_cropping#automatic_cropping_g_auto) enables you to indicate specific objects or object categories that should be given priority when parts of a photo are cropped out.  This is done by specifying an object or an object category as the `focal_gravity` attribute for the `auto` gravity parameter (for example, `g_auto:cat` in URLs) together with a cropping option. If the specified content is not found in the image, the gravity is determined by the standard auto-gravity algorithm.

* **Object-specific gravity**
By specifying an object or object category as the gravity parameter (for example, `g_cat` in URLs) together with a cropping option, you can accurately crop around objects without needing to specify dimensions or aspect ratio. If the specified content is not found in the image, the gravity remains at the center of the image.

When specifying an object or category, you can optionally include a specific model (that [supports bounding boxes](#model_capabilities)) and version. For example, you can specify:

* Only the object/category, e.g.: `g_auto:cat` or `g_cat`
* The model with the object/category, e.g.: `g_auto:coco_cat` or `g_coco_cat`
* The model and version with object/category, e.g.: `g_auto:coco_v2_cat` or `g_coco_v2_cat`

If you choose not to specify a model, each model that [supports bounding boxes](#model_capabilities) is invoked in turn until the specified content is detected.  The order in which they are invoked is: **coco > cld-fashion > lvis > unidet > human-anatomy > cld-text**. 
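
To make the three syntax variants concrete, here's a small helper (illustrative only, not part of any Cloudinary SDK) that composes the gravity qualifier from an object, an optional model, and an optional version:

```js
// Illustrative only: compose the gravity qualifier used in delivery URLs
// for object-aware cropping. Model and version are optional; when omitted,
// Cloudinary tries each bounding-box model in its documented order.
function objectGravity(object, { model, version, auto = false } = {}) {
  const parts = [model, version && `v${version}`, object].filter(Boolean);
  const qualifier = parts.join("_");
  return auto ? `g_auto:${qualifier}` : `g_${qualifier}`;
}

console.log(objectGravity("cat"));                                // g_cat
console.log(objectGravity("cat", { model: "coco", auto: true })); // g_auto:coco_cat
console.log(objectGravity("cat", { model: "coco", version: 2 })); // g_coco_v2_cat
```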

> **NOTE**: If you have any private models set up, these are invoked first, in the order that was predefined for your product environment.

Consider the original image of a kitchen below:

 

Using auto-gravity, you can deliver a square thumbnail crop that prioritizes the detected coordinates of the sink, microwave, or refrigerator. To do this, specify the relevant object option for the `g_auto` gravity definition in conjunction with the `thumb` or `auto` cropping option:

![Crop a kitchen photo to a square thumbnail, using the detected sink as the thumb coordinates](https://res.cloudinary.com/demo/image/upload/ar_1.0,c_auto,g_auto:sink,w_600/docs/kitchen1.jpg "with_image:false")

```nodejs
cloudinary.image("docs/kitchen1.jpg", {aspect_ratio: "1.0", gravity: "auto:sink", width: 600, crop: "auto"})
```

```react
new CloudinaryImage("docs/kitchen1.jpg").resize(
  auto()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

```vue
new CloudinaryImage("docs/kitchen1.jpg").resize(
  auto()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

```angular
new CloudinaryImage("docs/kitchen1.jpg").resize(
  auto()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

```js
new CloudinaryImage("docs/kitchen1.jpg").resize(
  auto()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

```python
CloudinaryImage("docs/kitchen1.jpg").image(aspect_ratio="1.0", gravity="auto:sink", width=600, crop="auto")
```

```php
(new ImageTag('docs/kitchen1.jpg'))
	->resize(Resize::auto()->width(600)
->aspectRatio(1.0)
	->gravity(
	Gravity::autoGravity()
	->autoFocus(
	AutoFocus::focusOn(
	FocusOn::sink()))
	)
	);
```

```java
cloudinary.url().transformation(new Transformation().aspectRatio("1.0").gravity("auto:sink").width(600).crop("auto")).imageTag("docs/kitchen1.jpg");
```

```ruby
cl_image_tag("docs/kitchen1.jpg", aspect_ratio: "1.0", gravity: "auto:sink", width: 600, crop: "auto")
```

```csharp
cloudinary.Api.UrlImgUp.Transform(new Transformation().AspectRatio("1.0").Gravity("auto:sink").Width(600).Crop("auto")).BuildImageTag("docs/kitchen1.jpg")
```

```dart
cloudinary.image('docs/kitchen1.jpg').transformation(Transformation()
	.resize(Resize.auto().width(600)
.aspectRatio('1.0')
	.gravity(
	Gravity.autoGravity()
	.autoFocus(
	AutoFocus.focusOn(
	FocusOn.sink()))
	)
	));
```

```swift
imageView.cldSetImage(cloudinary.createUrl().setTransformation(CLDTransformation().setAspectRatio("1.0").setGravity("auto:sink").setWidth(600).setCrop("auto")).generate("docs/kitchen1.jpg")!, cloudinary: cloudinary)
```

```android
MediaManager.get().url().transformation(new Transformation().aspectRatio("1.0").gravity("auto:sink").width(600).crop("auto")).generate("docs/kitchen1.jpg");
```

```flutter
cloudinary.image('docs/kitchen1.jpg').transformation(Transformation()
	.resize(Resize.auto().width(600)
.aspectRatio('1.0')
	.gravity(
	Gravity.autoGravity()
	.autoFocus(
	AutoFocus.focusOn(
	FocusOn.sink()))
	)
	));
```

```kotlin
cloudinary.image {
	publicId("docs/kitchen1.jpg")
	 resize(Resize.auto() { width(600)
 aspectRatio(1.0F)
	 gravity(
	Gravity.autoGravity() {
	 autoFocus(
	AutoFocus.focusOn(
	FocusOn.sink()))
	 })
	 }) 
}.generate()
```

```jquery
$.cloudinary.image("docs/kitchen1.jpg", {aspect_ratio: "1.0", gravity: "auto:sink", width: 600, crop: "auto"})
```

```react_native
new CloudinaryImage("docs/kitchen1.jpg").resize(
  auto()
    .width(600)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(sink())))
);
```

Variants: `g_auto:sink` | `g_auto:microwave` | `g_auto:refrigerator`

Using object-specific gravity, you can choose not to give dimensions or aspect ratio, and deliver an image that is tightly cropped to the object.  To do this, specify the relevant object option for the gravity definition in conjunction with the `crop` cropping option:

![Crop a kitchen photo to the detected sink](https://res.cloudinary.com/demo/image/upload/c_crop,g_sink/docs/kitchen1.jpg "with_image:false")

```nodejs
cloudinary.image("docs/kitchen1.jpg", {gravity: "sink", crop: "crop"})
```

```react
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop().gravity(focusOn(sink()))
);
```

```vue
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop().gravity(focusOn(sink()))
);
```

```angular
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop().gravity(focusOn(sink()))
);
```

```js
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop().gravity(focusOn(sink()))
);
```

```python
CloudinaryImage("docs/kitchen1.jpg").image(gravity="sink", crop="crop")
```

```php
(new ImageTag('docs/kitchen1.jpg'))
	->resize(Resize::crop()
	->gravity(
	Gravity::focusOn(
	FocusOn::sink()))
	);
```

```java
cloudinary.url().transformation(new Transformation().gravity("sink").crop("crop")).imageTag("docs/kitchen1.jpg");
```

```ruby
cl_image_tag("docs/kitchen1.jpg", gravity: "sink", crop: "crop")
```

```csharp
cloudinary.Api.UrlImgUp.Transform(new Transformation().Gravity("sink").Crop("crop")).BuildImageTag("docs/kitchen1.jpg")
```

```dart
cloudinary.image('docs/kitchen1.jpg').transformation(Transformation()
	.resize(Resize.crop()
	.gravity(
	Gravity.focusOn(
	FocusOn.sink()))
	));
```

```swift
imageView.cldSetImage(cloudinary.createUrl().setTransformation(CLDTransformation().setGravity("sink").setCrop("crop")).generate("docs/kitchen1.jpg")!, cloudinary: cloudinary)
```

```android
MediaManager.get().url().transformation(new Transformation().gravity("sink").crop("crop")).generate("docs/kitchen1.jpg");
```

```flutter
cloudinary.image('docs/kitchen1.jpg').transformation(Transformation()
	.resize(Resize.crop()
	.gravity(
	Gravity.focusOn(
	FocusOn.sink()))
	));
```

```kotlin
cloudinary.image {
	publicId("docs/kitchen1.jpg")
	 resize(Resize.crop() {
	 gravity(
	Gravity.focusOn(
	FocusOn.sink()))
	 }) 
}.generate()
```

```jquery
$.cloudinary.image("docs/kitchen1.jpg", {gravity: "sink", crop: "crop"})
```

```react_native
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop().gravity(focusOn(sink()))
);
```

Variants: `g_sink` | `g_microwave` | `g_refrigerator`

You can also specify an aspect ratio together with the `crop` cropping option, without including specific dimensions. This keeps the object but may show more of the image to fit the aspect ratio.

![Crop a kitchen photo to the detected sink with an aspect ratio of 1](https://res.cloudinary.com/demo/image/upload/c_crop,g_sink,ar_1/docs/kitchen1.jpg "with_image:false")

```nodejs
cloudinary.image("docs/kitchen1.jpg", {gravity: "sink", aspect_ratio: "1", crop: "crop"})
```

```react
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop()
    .aspectRatio("1.0")
    .gravity(focusOn(sink()))
);
```

```vue
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop()
    .aspectRatio("1.0")
    .gravity(focusOn(sink()))
);
```

```angular
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop()
    .aspectRatio("1.0")
    .gravity(focusOn(sink()))
);
```

```js
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop()
    .aspectRatio("1.0")
    .gravity(focusOn(sink()))
);
```

```python
CloudinaryImage("docs/kitchen1.jpg").image(gravity="sink", aspect_ratio="1", crop="crop")
```

```php
(new ImageTag('docs/kitchen1.jpg'))
	->resize(Resize::crop()->aspectRatio(1.0)
	->gravity(
	Gravity::focusOn(
	FocusOn::sink()))
	);
```

```java
cloudinary.url().transformation(new Transformation().gravity("sink").aspectRatio("1").crop("crop")).imageTag("docs/kitchen1.jpg");
```

```ruby
cl_image_tag("docs/kitchen1.jpg", gravity: "sink", aspect_ratio: "1", crop: "crop")
```

```csharp
cloudinary.Api.UrlImgUp.Transform(new Transformation().Gravity("sink").AspectRatio("1").Crop("crop")).BuildImageTag("docs/kitchen1.jpg")
```

```dart
cloudinary.image('docs/kitchen1.jpg').transformation(Transformation()
	.resize(Resize.crop().aspectRatio('1.0')
	.gravity(
	Gravity.focusOn(
	FocusOn.sink()))
	));
```

```swift
imageView.cldSetImage(cloudinary.createUrl().setTransformation(CLDTransformation().setGravity("sink").setAspectRatio("1").setCrop("crop")).generate("docs/kitchen1.jpg")!, cloudinary: cloudinary)
```

```android
MediaManager.get().url().transformation(new Transformation().gravity("sink").aspectRatio("1").crop("crop")).generate("docs/kitchen1.jpg");
```

```flutter
cloudinary.image('docs/kitchen1.jpg').transformation(Transformation()
	.resize(Resize.crop().aspectRatio('1.0')
	.gravity(
	Gravity.focusOn(
	FocusOn.sink()))
	));
```

```kotlin
cloudinary.image {
	publicId("docs/kitchen1.jpg")
	 resize(Resize.crop() { aspectRatio(1.0F)
	 gravity(
	Gravity.focusOn(
	FocusOn.sink()))
	 }) 
}.generate()
```

```jquery
$.cloudinary.image("docs/kitchen1.jpg", {gravity: "sink", aspect_ratio: "1", crop: "crop"})
```

```react_native
new CloudinaryImage("docs/kitchen1.jpg").resize(
  crop()
    .aspectRatio("1.0")
    .gravity(focusOn(sink()))
);
```

The same object-aware crop can be compared using `g_sink`, `g_microwave`, and `g_refrigerator`.

In addition to the [crop](resizing_and_cropping#crop), [thumb](resizing_and_cropping#thumb) and [auto](resizing_and_cropping#c_auto) cropping modes, object-aware cropping can also be used with the [fill](resizing_and_cropping#fill) and [lfill](resizing_and_cropping#lfill_limit_fill) (limit fill) cropping modes. The [fill_pad](resizing_and_cropping#fill_pad) and [auto_pad](resizing_and_cropping#auto_pad) cropping modes work with the auto-gravity variant of object-aware cropping, but not with object-specific gravity.
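The compatibility rules above can be expressed as a small lookup table. The helper below is a hypothetical illustration (it is not part of any Cloudinary SDK); the mode and gravity-style names mirror the URL parameters:

```python
# Which object-aware gravity styles each cropping mode supports,
# per the rules above (hypothetical helper, not part of any SDK).
SUPPORTED_GRAVITY = {
    "crop":     {"object", "auto"},   # g_<object> and g_auto:<object>
    "thumb":    {"object", "auto"},
    "auto":     {"object", "auto"},
    "fill":     {"object", "auto"},
    "lfill":    {"object", "auto"},
    "fill_pad": {"auto"},             # auto-gravity variant only
    "auto_pad": {"auto"},             # auto-gravity variant only
}

def supports(crop_mode: str, gravity_style: str) -> bool:
    """Return True if the cropping mode works with the given gravity style."""
    return gravity_style in SUPPORTED_GRAVITY.get(crop_mode, set())
```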

### Notes on specifying categories and objects

When applying object-aware cropping, you can specify either individual objects or more general object categories.

* When you specify a category, the algorithm gives priority to any objects that are detected from that category.
* The regular auto-gravity behavior also impacts the cropping decision. But if requested objects are detected, they get significantly higher priority than the subjects or salient areas that the regular auto-gravity algorithm selects.
* If you specify the generic `object` category with auto-gravity (`g_auto:object`), then any detected objects from any category get priority.
* If there are multiple objects of the same type in the image, object-specific gravity selects the most prominent of the objects, and bases its crop around only that object, whereas auto-gravity may choose to keep more than one of the objects in the crop.
* The categories and objects also work in their plural forms when using object-specific gravity.  So, for example, `c_crop,g_birds` keeps all birds in the crop, whereas `c_crop,g_bird` keeps only the most prominent bird. 
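As a sketch of the singular/plural distinction above, the two forms differ only in the object name placed in the `g_` component of the URL (plain string assembly, not an SDK call):

```python
BASE = "https://res.cloudinary.com/demo/image/upload"

def object_crop_url(public_id: str, obj: str) -> str:
    """Build a c_crop URL with object-specific gravity for `obj`."""
    return f"{BASE}/c_crop,g_{obj}/{public_id}"

# g_bird keeps only the most prominent bird; g_birds keeps all birds.
single = object_crop_url("docs/cat_and_birds.jpg", "bird")
plural = object_crop_url("docs/cat_and_birds.jpg", "birds")
```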

### Combining focal gravity options using auto-gravity

When using `auto` gravity to determine the area to keep in a crop, you can specify multiple `focal_gravity` options.

This means that in a single auto-gravity parameter, you can optionally specify:

* One or multiple objects (from the same or different categories and/or models)
* Built-in focal gravity options such as `face`/`faces` or `custom_no_override`
* Other add-on based focal gravity options, such as the `adv_face`, `adv_eyes` options from the [Advanced Facial Attributes Detection add-on](advanced_facial_attributes_detection_addon)
* Only the `classic` or only the `subject` [auto-gravity algorithm](resizing_and_cropping#selecting_a_single_automatic_gravity_algorithm), which in some cases may have some impact on the exact coordinates of the crop, even if other specified objects or focal gravity options are detected.  Note that the default algorithm, which combines both of these algorithms, is recommended in the majority of cases.

For example, your auto-gravity URL parameter might be: `g_auto:cat:sofa:faces:adv_eyes`

This would instruct the cropping mechanism to give top priority to any cats, sofas, faces, or eyes detected in the photo.

For a complete list of all `focal_gravity` options, see the [g_\<special_position\>](transformation_reference#g_special_position) section of the _Transformation URL API Reference_.
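A combined auto-gravity parameter like the one above is simply the focal options joined with colons after `auto`; a minimal sketch:

```python
def auto_gravity(*focal_options: str) -> str:
    """Compose a g_auto value from multiple focal gravity options.
    For auto-gravity, the order of the options does not affect the result."""
    return "auto:" + ":".join(focal_options) if focal_options else "auto"

param = "g_" + auto_gravity("cat", "sofa", "faces", "adv_eyes")
# param == "g_auto:cat:sofa:faces:adv_eyes"
```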

> **INFO**:
>
> * The focal gravity options can be specified in any order. The order does not impact the result.
> * When multiple items are detected that match the requested focal options, larger, more central, and more in-focus (less blurry) objects will get higher priority. In special cases, it's possible to fine-tune this default prioritization further. For details, contact [support](https://support.cloudinary.com/hc/en-us/requests/new).
> * If a particular image has **custom** coordinates defined, those coordinates always override all other focal gravity options, unless you use the `custom_no_override` option in conjunction with the other options, in which case the custom coordinates are taken into account when determining the gravity (see [Custom coordinates with auto gravity](custom_focus_areas#custom_coordinates_with_auto_gravity)).

### Combining focal gravity options using object-specific gravity

When using object-specific gravity to determine the area to keep in a crop, you can specify multiple `focal_gravity` options, but unlike auto-gravity, the order in which they are specified has an impact on the delivered image.

For example, consider this photo of a cat and dog:

 

By setting the `gravity` parameter to `cat:dog` the cat gets precedence:

![Cat gets precedence over dog](https://res.cloudinary.com/demo/image/upload/c_crop,g_cat:dog/docs/one_cat_one_dog.jpg "with_image:false")

```nodejs
cloudinary.image("docs/one_cat_one_dog.jpg", {gravity: "cat:dog", crop: "crop"})
```

```react
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(cat(), dog()))
);
```

```vue
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(cat(), dog()))
);
```

```angular
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(cat(), dog()))
);
```

```js
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(cat(), dog()))
);
```

```python
CloudinaryImage("docs/one_cat_one_dog.jpg").image(gravity="cat:dog", crop="crop")
```

```php
(new ImageTag('docs/one_cat_one_dog.jpg'))
	->resize(Resize::crop()
	->gravity(
	Gravity::focusOn(
	FocusOn::cat(),
	FocusOn::dog()))
	);
```

```java
cloudinary.url().transformation(new Transformation().gravity("cat:dog").crop("crop")).imageTag("docs/one_cat_one_dog.jpg");
```

```ruby
cl_image_tag("docs/one_cat_one_dog.jpg", gravity: "cat:dog", crop: "crop")
```

```csharp
cloudinary.Api.UrlImgUp.Transform(new Transformation().Gravity("cat:dog").Crop("crop")).BuildImageTag("docs/one_cat_one_dog.jpg")
```

```dart
cloudinary.image('docs/one_cat_one_dog.jpg').transformation(Transformation()
	.resize(Resize.crop()
	.gravity(
	Gravity.focusOn(
	FocusOn.cat(),
	FocusOn.dog()))
	));
```

```swift
imageView.cldSetImage(cloudinary.createUrl().setTransformation(CLDTransformation().setGravity("cat:dog").setCrop("crop")).generate("docs/one_cat_one_dog.jpg")!, cloudinary: cloudinary)
```

```android
MediaManager.get().url().transformation(new Transformation().gravity("cat:dog").crop("crop")).generate("docs/one_cat_one_dog.jpg");
```

```flutter
cloudinary.image('docs/one_cat_one_dog.jpg').transformation(Transformation()
	.resize(Resize.crop()
	.gravity(
	Gravity.focusOn(
	FocusOn.cat(),
	FocusOn.dog()))
	));
```

```kotlin
cloudinary.image {
	publicId("docs/one_cat_one_dog.jpg")
	 resize(Resize.crop() {
	 gravity(
	Gravity.focusOn(
	FocusOn.cat(),
	FocusOn.dog()))
	 }) 
}.generate()
```

```jquery
$.cloudinary.image("docs/one_cat_one_dog.jpg", {gravity: "cat:dog", crop: "crop"})
```

```react_native
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(cat(), dog()))
);
```

Whereas, if you switch the order to `dog:cat` the dog gets precedence:

![Dog gets precedence over cat](https://res.cloudinary.com/demo/image/upload/c_crop,g_dog:cat/docs/one_cat_one_dog.jpg "with_image:false")

```nodejs
cloudinary.image("docs/one_cat_one_dog.jpg", {gravity: "dog:cat", crop: "crop"})
```

```react
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(dog(), cat()))
);
```

```vue
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(dog(), cat()))
);
```

```angular
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(dog(), cat()))
);
```

```js
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(dog(), cat()))
);
```

```python
CloudinaryImage("docs/one_cat_one_dog.jpg").image(gravity="dog:cat", crop="crop")
```

```php
(new ImageTag('docs/one_cat_one_dog.jpg'))
	->resize(Resize::crop()
	->gravity(
	Gravity::focusOn(
	FocusOn::dog(),
	FocusOn::cat()))
	);
```

```java
cloudinary.url().transformation(new Transformation().gravity("dog:cat").crop("crop")).imageTag("docs/one_cat_one_dog.jpg");
```

```ruby
cl_image_tag("docs/one_cat_one_dog.jpg", gravity: "dog:cat", crop: "crop")
```

```csharp
cloudinary.Api.UrlImgUp.Transform(new Transformation().Gravity("dog:cat").Crop("crop")).BuildImageTag("docs/one_cat_one_dog.jpg")
```

```dart
cloudinary.image('docs/one_cat_one_dog.jpg').transformation(Transformation()
	.resize(Resize.crop()
	.gravity(
	Gravity.focusOn(
	FocusOn.dog(),
	FocusOn.cat()))
	));
```

```swift
imageView.cldSetImage(cloudinary.createUrl().setTransformation(CLDTransformation().setGravity("dog:cat").setCrop("crop")).generate("docs/one_cat_one_dog.jpg")!, cloudinary: cloudinary)
```

```android
MediaManager.get().url().transformation(new Transformation().gravity("dog:cat").crop("crop")).generate("docs/one_cat_one_dog.jpg");
```

```flutter
cloudinary.image('docs/one_cat_one_dog.jpg').transformation(Transformation()
	.resize(Resize.crop()
	.gravity(
	Gravity.focusOn(
	FocusOn.dog(),
	FocusOn.cat()))
	));
```

```kotlin
cloudinary.image {
	publicId("docs/one_cat_one_dog.jpg")
	 resize(Resize.crop() {
	 gravity(
	Gravity.focusOn(
	FocusOn.dog(),
	FocusOn.cat()))
	 }) 
}.generate()
```

```jquery
$.cloudinary.image("docs/one_cat_one_dog.jpg", {gravity: "dog:cat", crop: "crop"})
```

```react_native
new CloudinaryImage("docs/one_cat_one_dog.jpg").resize(
  crop().gravity(focusOn(dog(), cat()))
);
```

You can also combine the `auto` option to invoke the auto-gravity algorithm if none of the specified objects are found.  For example:

* `g_dog:cat:auto` - auto-gravity is invoked only if no dogs or cats are detected. 
* `g_dog:auto:cat` - auto-gravity weighted by cat (`g_auto:cat`) is invoked if no dogs are detected.

> **INFO**: If you use the `auto` option then you also need to specify at least one dimension parameter (width or height).
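The two orderings above differ only in where `auto` appears in the colon-separated list; a sketch of the assembly (plain string joining, not an SDK call):

```python
def object_gravity(*options: str) -> str:
    """Join object-specific gravity options. Order sets precedence, and
    `auto` switches to the auto-gravity algorithm for the options after it."""
    return ":".join(options)

# Auto-gravity only if no dogs or cats are detected:
fallback_last = object_gravity("dog", "cat", "auto")   # "dog:cat:auto"
# Auto-gravity weighted by cat if no dogs are detected:
weighted = object_gravity("dog", "auto", "cat")        # "dog:auto:cat"
```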

For example, consider this photo of a cat and three birds:

 

As there is no dog in the photo, auto-gravity weighted by bird is invoked when using `dog:auto:bird`.  In this case, two birds are kept in the crop:

![Original](https://res.cloudinary.com/demo/image/upload/c_crop,g_dog:auto:bird,w_600,h_800/docs/cat_and_birds.jpg "with_image:false")

```nodejs
cloudinary.image("docs/cat_and_birds.jpg", {gravity: "dog:auto:bird", width: 600, height: 800, crop: "crop"})
```

```react
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(
      focusOn(dog()).fallbackGravity(autoGravity().autoFocus(focusOn(bird())))
    )
);
```

```vue
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(
      focusOn(dog()).fallbackGravity(autoGravity().autoFocus(focusOn(bird())))
    )
);
```

```angular
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(
      focusOn(dog()).fallbackGravity(autoGravity().autoFocus(focusOn(bird())))
    )
);
```

```js
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(
      focusOn(dog()).fallbackGravity(autoGravity().autoFocus(focusOn(bird())))
    )
);
```

```python
CloudinaryImage("docs/cat_and_birds.jpg").image(gravity="dog:auto:bird", width=600, height=800, crop="crop")
```

```php
(new ImageTag('docs/cat_and_birds.jpg'))
	->resize(Resize::crop()->width(600)
->height(800)
	->gravity(
	Gravity::focusOn(
	FocusOn::dog())
	->fallbackGravity(
	Gravity::autoGravity()
	->autoFocus(
	AutoFocus::focusOn(
	FocusOn::bird()))
	)
	)
	);
```

```java
cloudinary.url().transformation(new Transformation().gravity("dog:auto:bird").width(600).height(800).crop("crop")).imageTag("docs/cat_and_birds.jpg");
```

```ruby
cl_image_tag("docs/cat_and_birds.jpg", gravity: "dog:auto:bird", width: 600, height: 800, crop: "crop")
```

```csharp
cloudinary.Api.UrlImgUp.Transform(new Transformation().Gravity("dog:auto:bird").Width(600).Height(800).Crop("crop")).BuildImageTag("docs/cat_and_birds.jpg")
```

```dart
cloudinary.image('docs/cat_and_birds.jpg').transformation(Transformation()
	.resize(Resize.crop().width(600)
.height(800)
	.gravity(
	Gravity.focusOn(
	FocusOn.dog())
	.fallbackGravity(
	Gravity.autoGravity()
	.autoFocus(
	AutoFocus.focusOn(
	FocusOn.bird()))
	)
	)
	));
```

```swift
imageView.cldSetImage(cloudinary.createUrl().setTransformation(CLDTransformation().setGravity("dog:auto:bird").setWidth(600).setHeight(800).setCrop("crop")).generate("docs/cat_and_birds.jpg")!, cloudinary: cloudinary)
```

```android
MediaManager.get().url().transformation(new Transformation().gravity("dog:auto:bird").width(600).height(800).crop("crop")).generate("docs/cat_and_birds.jpg");
```

```flutter
cloudinary.image('docs/cat_and_birds.jpg').transformation(Transformation()
	.resize(Resize.crop().width(600)
.height(800)
	.gravity(
	Gravity.focusOn(
	FocusOn.dog())
	.fallbackGravity(
	Gravity.autoGravity()
	.autoFocus(
	AutoFocus.focusOn(
	FocusOn.bird()))
	)
	)
	));
```

```kotlin
cloudinary.image {
	publicId("docs/cat_and_birds.jpg")
	 resize(Resize.crop() { width(600)
 height(800)
	 gravity(
	Gravity.focusOn(
	FocusOn.dog()) {
	 fallbackGravity(
	Gravity.autoGravity() {
	 autoFocus(
	AutoFocus.focusOn(
	FocusOn.bird()))
	 })
	 })
	 }) 
}.generate()
```

```jquery
$.cloudinary.image("docs/cat_and_birds.jpg", {gravity: "dog:auto:bird", width: 600, height: 800, crop: "crop"})
```

```react_native
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(
      focusOn(dog()).fallbackGravity(autoGravity().autoFocus(focusOn(bird())))
    )
);
```

Notice that if auto-gravity is not specified, the object-specific algorithm chooses the most prominent bird out of the three and only keeps this bird in the crop:

![Original](https://res.cloudinary.com/demo/image/upload/c_crop,g_dog:bird,w_600,h_800/docs/cat_and_birds.jpg "with_image:false")

```nodejs
cloudinary.image("docs/cat_and_birds.jpg", {gravity: "dog:bird", width: 600, height: 800, crop: "crop"})
```

```react
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(focusOn(dog(), bird()))
);
```

```vue
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(focusOn(dog(), bird()))
);
```

```angular
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(focusOn(dog(), bird()))
);
```

```js
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(focusOn(dog(), bird()))
);
```

```python
CloudinaryImage("docs/cat_and_birds.jpg").image(gravity="dog:bird", width=600, height=800, crop="crop")
```

```php
(new ImageTag('docs/cat_and_birds.jpg'))
	->resize(Resize::crop()->width(600)
->height(800)
	->gravity(
	Gravity::focusOn(
	FocusOn::dog(),
	FocusOn::bird()))
	);
```

```java
cloudinary.url().transformation(new Transformation().gravity("dog:bird").width(600).height(800).crop("crop")).imageTag("docs/cat_and_birds.jpg");
```

```ruby
cl_image_tag("docs/cat_and_birds.jpg", gravity: "dog:bird", width: 600, height: 800, crop: "crop")
```

```csharp
cloudinary.Api.UrlImgUp.Transform(new Transformation().Gravity("dog:bird").Width(600).Height(800).Crop("crop")).BuildImageTag("docs/cat_and_birds.jpg")
```

```dart
cloudinary.image('docs/cat_and_birds.jpg').transformation(Transformation()
	.resize(Resize.crop().width(600)
.height(800)
	.gravity(
	Gravity.focusOn(
	FocusOn.dog(),
	FocusOn.bird()))
	));
```

```swift
imageView.cldSetImage(cloudinary.createUrl().setTransformation(CLDTransformation().setGravity("dog:bird").setWidth(600).setHeight(800).setCrop("crop")).generate("docs/cat_and_birds.jpg")!, cloudinary: cloudinary)
```

```android
MediaManager.get().url().transformation(new Transformation().gravity("dog:bird").width(600).height(800).crop("crop")).generate("docs/cat_and_birds.jpg");
```

```flutter
cloudinary.image('docs/cat_and_birds.jpg').transformation(Transformation()
	.resize(Resize.crop().width(600)
.height(800)
	.gravity(
	Gravity.focusOn(
	FocusOn.dog(),
	FocusOn.bird()))
	));
```

```kotlin
cloudinary.image {
	publicId("docs/cat_and_birds.jpg")
	 resize(Resize.crop() { width(600)
 height(800)
	 gravity(
	Gravity.focusOn(
	FocusOn.dog(),
	FocusOn.bird()))
	 }) 
}.generate()
```

```jquery
$.cloudinary.image("docs/cat_and_birds.jpg", {gravity: "dog:bird", width: 600, height: 800, crop: "crop"})
```

```react_native
new CloudinaryImage("docs/cat_and_birds.jpg").resize(
  crop()
    .width(600)
    .height(800)
    .gravity(focusOn(dog(), bird()))
);
```

### Specifying objects to avoid using auto-gravity

In addition to specifying objects to keep in an image, you can specify objects that you would rather not see.  To minimize the likelihood of including a particular object in the cropped image, use auto-gravity with the `avoid` option for the relevant object or category.

For example, in photos like the one below, you may prefer not to include people because the purpose of the photo is to show an interesting storefront, and the people are a distraction.

 

Using `g_auto` by itself makes the people the focal point, but if we use `g_auto:person_avoid`, the other side of the photo is shown, without the people.

![Crop a picture avoiding people](https://res.cloudinary.com/demo/image/upload/w_500,ar_1.0,c_fill,g_auto:person_avoid/docs/store_front.jpg "with_image:false")

```nodejs
cloudinary.image("docs/store_front.jpg", {width: 500, aspect_ratio: "1.0", gravity: "auto:person_avoid", crop: "fill"})
```

```react
new CloudinaryImage("docs/store_front.jpg").resize(
  fill()
    .width(500)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(person()).avoid()))
);
```

```vue
new CloudinaryImage("docs/store_front.jpg").resize(
  fill()
    .width(500)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(person()).avoid()))
);
```

```angular
new CloudinaryImage("docs/store_front.jpg").resize(
  fill()
    .width(500)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(person()).avoid()))
);
```

```js
new CloudinaryImage("docs/store_front.jpg").resize(
  fill()
    .width(500)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(person()).avoid()))
);
```

```python
CloudinaryImage("docs/store_front.jpg").image(width=500, aspect_ratio="1.0", gravity="auto:person_avoid", crop="fill")
```

```php
(new ImageTag('docs/store_front.jpg'))
	->resize(Resize::fill()->width(500)
->aspectRatio(1.0)
	->gravity(
	Gravity::autoGravity()
	->autoFocus(
	AutoFocus::focusOn(
	FocusOn::person())->avoid())
	)
	);
```

```java
cloudinary.url().transformation(new Transformation().width(500).aspectRatio("1.0").gravity("auto:person_avoid").crop("fill")).imageTag("docs/store_front.jpg");
```

```ruby
cl_image_tag("docs/store_front.jpg", width: 500, aspect_ratio: "1.0", gravity: "auto:person_avoid", crop: "fill")
```

```csharp
cloudinary.Api.UrlImgUp.Transform(new Transformation().Width(500).AspectRatio("1.0").Gravity("auto:person_avoid").Crop("fill")).BuildImageTag("docs/store_front.jpg")
```

```dart
cloudinary.image('docs/store_front.jpg').transformation(Transformation()
	.resize(Resize.fill().width(500)
.aspectRatio('1.0')
	.gravity(
	Gravity.autoGravity()
	.autoFocus(
	AutoFocus.focusOn(
	FocusOn.person()).avoid())
	)
	));
```

```swift
imageView.cldSetImage(cloudinary.createUrl().setTransformation(CLDTransformation().setWidth(500).setAspectRatio("1.0").setGravity("auto:person_avoid").setCrop("fill")).generate("docs/store_front.jpg")!, cloudinary: cloudinary)
```

```android
MediaManager.get().url().transformation(new Transformation().width(500).aspectRatio("1.0").gravity("auto:person_avoid").crop("fill")).generate("docs/store_front.jpg");
```

```flutter
cloudinary.image('docs/store_front.jpg').transformation(Transformation()
	.resize(Resize.fill().width(500)
.aspectRatio('1.0')
	.gravity(
	Gravity.autoGravity()
	.autoFocus(
	AutoFocus.focusOn(
	FocusOn.person()).avoid())
	)
	));
```

```kotlin
cloudinary.image {
	publicId("docs/store_front.jpg")
	 resize(Resize.fill() { width(500)
 aspectRatio(1.0F)
	 gravity(
	Gravity.autoGravity() {
	 autoFocus(
	AutoFocus.focusOn(
	FocusOn.person()) { avoid() })
	 })
	 }) 
}.generate()
```

```jquery
$.cloudinary.image("docs/store_front.jpg", {width: 500, aspect_ratio: "1.0", gravity: "auto:person_avoid", crop: "fill"})
```

```react_native
new CloudinaryImage("docs/store_front.jpg").resize(
  fill()
    .width(500)
    .aspectRatio("1.0")
    .gravity(autoGravity().autoFocus(focusOn(person()).avoid()))
);
```

Compare the crop produced by `g_auto` with the one produced by `g_auto:person_avoid`.
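The `avoid` variant is just an `_avoid` suffix on the object name inside the auto-gravity list; a minimal sketch (the helper name is hypothetical):

```python
def auto_gravity_param(keep=(), avoid=()):
    """Compose g_auto with objects to prioritize and objects to avoid.
    An object is avoided by appending the `_avoid` suffix to its name."""
    parts = list(keep) + [f"{obj}_avoid" for obj in avoid]
    return "g_auto" + (":" + ":".join(parts) if parts else "")

param = auto_gravity_param(avoid=["person"])  # "g_auto:person_avoid"
```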

### Choosing the cropping mode

When you specify an object, either specifically or in your auto-gravity parameter, the Object-Aware Cropping AI algorithm detects the coordinates of the object and those coordinates are used by the cropping mode. 

* When using **thumb** cropping (`c_thumb`), the image is cropped as closely as possible to the detected coordinates of the object, given the requested aspect ratio, and then scaled to the requested pixel size. Note that if the requested pixel size is larger than the crop, the image is not scaled up, but padded with additional surrounding pixels from the original image.

* When using **crop** mode (`c_crop`), the detected coordinates are prioritized as the area to keep when determining how much to cut from each edge of the photo to achieve the requested pixel size. If you use auto-gravity and the requested pixel size is larger than the detected object's coordinates, other elements that `g_auto` prioritizes may affect what else is included and where the detected object is positioned, so the object won't necessarily be at the center of the result.

* When using any of the **fill**-based modes (`c_fill`, `c_lfill`, `c_fill_pad`), the coordinates of the detected object should be retained if any cropping is required after scaling. If you use auto-gravity, other elements that `g_auto` prioritizes may likewise affect what else is included and where the detected object is positioned, so the object won't necessarily be at the center of the result.

* When using the **auto** cropping mode (`c_auto`), the crop focuses on the object but also takes in more of the whole picture, giving a more 'zoomed out' result than **thumb** and **crop**, but more 'zoomed in' than **fill**. If the requested dimensions are smaller than the best crop, the result is downscaled. If the requested dimensions are larger than the original image, the result is upscaled.

The following examples show how much your cropping results can vary for the same requested object, depending on the cropping mode. In this case, we take the original photo below and apply `g_auto:camera` and `g_camera` with the `fill`, `crop`, `thumb` and `auto` cropping modes. In all cases, the same width and aspect ratio are requested (`ar_1,w_200`).

Starting from the original photo, `g_auto:camera` and `g_camera` are each combined with the `c_fill`, `c_crop`, `c_thumb` and `c_auto` cropping modes for comparison.
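The comparison grid can be reproduced by iterating over the two gravity values and the four cropping modes; a sketch (the public ID `docs/camera.jpg` is a hypothetical placeholder):

```python
BASE = "https://res.cloudinary.com/demo/image/upload"

def comparison_urls(public_id: str) -> dict:
    """Build the ar_1,w_200 comparison URLs for each gravity/crop pair."""
    urls = {}
    for gravity in ("auto:camera", "camera"):
        for crop in ("fill", "crop", "thumb", "auto"):
            urls[(gravity, crop)] = (
                f"{BASE}/ar_1,w_200,c_{crop},g_{gravity}/{public_id}"
            )
    return urls

urls = comparison_urls("docs/camera.jpg")  # hypothetical public ID
```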

#### Using object-aware cropping for responsive delivery 

You can take advantage of object-aware cropping with various cropping modes to assist in **responsive art direction**.  This means that when you deliver different sized images to different devices, you don't just scale the same image, but rather crop images differently for different sizes, so that the important objects are always highly visible.

For example, you may: 

* deliver a full-size image to large HD screens 
* use `g_auto:[your_important_object]`, or `g_[your_important_object]` with `fill` cropping for medium sized screens 
* use `g_auto:[your_important_object]`, or `g_[your_important_object]` with `thumb` or `auto` cropping for very small screens.

For more details on delivering responsive images, see the [Responsive images](responsive_images) guide.
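The art-direction guidelines above can be sketched as a breakpoint lookup. The breakpoint widths and crop sizes below are illustrative assumptions, not recommendations:

```python
BASE = "https://res.cloudinary.com/demo/image/upload"

def responsive_url(public_id: str, viewport_width: int, obj: str) -> str:
    """Pick a crop strategy by viewport width, per the guidelines above.
    Breakpoints (1200/600) and sizes are illustrative assumptions."""
    if viewport_width >= 1200:                 # large HD screens: full size
        return f"{BASE}/{public_id}"
    if viewport_width >= 600:                  # medium: fill crop on object
        return f"{BASE}/c_fill,ar_1,w_600,g_auto:{obj}/{public_id}"
    return f"{BASE}/c_thumb,ar_1,w_300,g_auto:{obj}/{public_id}"  # small
```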

### Using objects with the zoompan effect

In addition to cropping, the Cloudinary AI Content Analysis add-on allows you to use objects for start and end points of a [zoompan](transformation_reference#e_zoompan) transformation.  

The `zoompan` effect lets you create a video or animated GIF from an image by zooming and panning from one area of the image to another. Use the `from` and/or `to` options with objects as gravity and specify a video or animated image format.

The example below is a seven-second MP4 video (`.mp4`) of a model wearing fashionable items, starting zoomed in on the hat (`from_(g_hat;zoom_4.5)`), then zooming out and panning to the pants (`to_(g_pants;zoom_1.6)`).
![Zoom and pan from hat to pants](https://res.cloudinary.com/demo/image/upload/e_zoompan:du_7;from_(g_hat;zoom_4.5);to_(g_pants;zoom_1.6)/c_scale,h_250/q_auto/docs/clothing.mp4 "with_image:false")

```html
https://res.cloudinary.com/demo/image/upload/e_zoompan:du_7;from_(g_hat;zoom_4.5);to_(g_pants;zoom_1.6)/c_scale,h_250/q_auto/docs/clothing.mp4
```
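The `zoompan` parameter uses `;`-separated options, with the `from` and `to` sub-options grouped in parentheses; a sketch assembling the parameter string (the helper is hypothetical, not an SDK function):

```python
def zoompan(duration: int, from_opts: dict, to_opts: dict) -> str:
    """Assemble an e_zoompan parameter. Sub-options are ';'-separated
    and grouped in parentheses for the from/to endpoints."""
    def group(opts):
        return "(" + ";".join(f"{k}_{v}" for k, v in opts.items()) + ")"
    return (f"e_zoompan:du_{duration}"
            f";from_{group(from_opts)};to_{group(to_opts)}")

param = zoompan(7, {"g": "hat", "zoom": 4.5}, {"g": "pants", "zoom": 1.6})
# param == "e_zoompan:du_7;from_(g_hat;zoom_4.5);to_(g_pants;zoom_1.6)"
```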


### Signed URLs

Cloudinary's dynamic image transformation URLs are powerful tools. However, due to the potential costs of your customers experimenting with dynamic URLs that apply the object-aware cropping algorithm, image transformation add-on URLs are required (by default) to be signed using Cloudinary's authenticated API. Alternatively, you can [eagerly generate](eager_and_incoming_transformations#eager_transformations) the requested derived images using Cloudinary's authenticated API. 

To create a signed delivery URL using SDKs, set the `sign_url` parameter to `true` when building a URL or creating an image tag. 

The following code example applies object-aware cropping to the `skater` image, including a signed Cloudinary URL:

![Code for using object-aware cropping with a signed URL](https://res.cloudinary.com/my_cloud/image/upload/s--acvfjq2y--/w_400,ar_1,c_thumb,g_auto:skateboard/Skater.jpg "with_url:false, with_code:true, with_image:false")

The generated Cloudinary URL shown below includes a signature component (`/s--acvfjq2y--/`). Only URLs with a valid signature that matches the requested image transformation will be approved for on-the-fly image transformation and delivery.

```html
https://res.cloudinary.com/my_cloud/image/upload/s--acvfjq2y--/w_400,ar_1,c_thumb,g_auto:skateboard/Skater.jpg
```

For more details on signed URLs, see [Signed delivery URLs](control_access_to_media#enforcement_mechanism_signed_delivery_urls).

> **NOTE**:
>
> You can optionally remove the signed URL default requirement for a particular add-on by selecting that add-on in the **Allow unsigned add-on transformations** section of the **Security** page in the Cloudinary Console Settings.

## Automatic image tagging

The automatic image tagging behavior of the Cloudinary AI Content Analysis add-on can be invoked when uploading an image, or when updating an image that's already stored in your product environment. Using the specified model, it analyzes the image, identifies categories and objects, and suggests tags that could be applied to the image. 

### Object and category detection

Take a look at the following photo of a woman dressed fashionably for winter:
![Woman dressed fashionably for winter](https://res.cloudinary.com/demo/image/upload/f_auto,q_auto/docs/winter_fashion.jpg "thumb: w_550, with_code:false, with_url:false")

By setting the `detection` parameter to the name of the model (and optionally the version, e.g. `cld-fashion_v3`) you want to invoke when calling Cloudinary's [upload](image_upload_api_reference#upload) or [update](admin_api#update_details_of_an_existing_resource) methods, the add-on automatically analyzes the content of the uploaded or specified existing image. For example, invoking the `cld-fashion` detection model while uploading `winter_fashion.jpg`:

```multi
|ruby
Cloudinary::Uploader.upload("winter_fashion.jpg", 
  detection: "cld-fashion")

|php_2
$cloudinary->uploadApi()->upload("winter_fashion.jpg", 
  ["detection" => "cld-fashion"]);

|python
cloudinary.uploader.upload("winter_fashion.jpg",
  detection = "cld-fashion")

|nodejs
cloudinary.v2.uploader
.upload("winter_fashion.jpg", 
  { detection: "cld-fashion" })
.then(result=>console.log(result)); 

|java
cloudinary.uploader().upload("winter_fashion.jpg", ObjectUtils.asMap(
  "detection", "cld-fashion"));

|csharp
var uploadParams = new ImageUploadParams() 
{
  File = new FileDescription(@"winter_fashion.jpg"),
  Detection = "cld-fashion"
};
var uploadResult = cloudinary.Upload(uploadParams); 

|go
resp, err := cld.Upload.Upload(ctx, "winter_fashion.jpg", uploader.UploadParams{
		Detection: "cld-fashion"})

|android
MediaManager.get().upload("winter_fashion.jpg")
  .option("detection", "cld-fashion").dispatch();

|swift
let params = CLDUploadRequestParams().setDetection("cld-fashion")
var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
let request = cloudinary.createUploader().signedUpload(
  url: "winter_fashion.jpg", params: params) 

|cli
cld uploader upload "winter_fashion.jpg" detection="cld-fashion"

|curl
curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/winter_fashion.jpg' -F 'detection=cld-fashion' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
```

> **TIP**:
>
> You can use **upload presets** to centrally define a set of upload options including add-on operations to apply, instead of specifying them in each upload call. You can define multiple upload presets, and apply different presets in different upload scenarios. You can create new upload presets in the **Upload Presets** page of the [Console Settings](https://console.cloudinary.com/app/settings/upload/presets) or using the [upload_presets](admin_api#upload_presets) Admin API method. From the **Upload** page of the Console Settings, you can also select default upload presets to use for image, video, and raw API uploads (respectively) as well as default presets for image, video, and raw uploads performed via the Media Library UI. 
> **Learn more**: [Upload presets](upload_presets)

The upload API response includes the categories and objects automatically identified by the model you requested. As can be seen in the response snippet below, a hat and a specific type of outerwear are automatically detected in the uploaded photo. Depending on the [capabilities](#supported_content_aware_detection_models) of each model, different information is returned. In the example below, a confidence score, a bounding box, and, in some cases, attributes are returned for each detected object. The confidence score is a numerical value representing the certainty of a correct detection, where 1.0 means 100% confidence. The `bounding-box` parameter shows the location of the object in the image, as an array: [`x-coordinate of top left corner`, `y-coordinate of top left corner`, `width of box`, `height of box`]. Bounding-box information is used in the [object detection demo](#object_detection_demo).

```json
{
...
  "info": {
    "detection": {
      "object_detection": {
        "status": "complete",
        "data": {
          "cld-fashion": {
            "model_name": "cld-fashion",
            "model_version": 3,
            "schema_version": 1,
            "tags": {
              "hat": [
                {
                  "bounding-box": [
                    1203.4641054329725,
                    85.94320068359376,
                    419.43426051207325,
                    351.2288696289063
                  ],
                  "categories": [
                    "fashion"
                  ],
                  "confidence": 0.9853795170783997
                }
              ],
              "outerwear": [
                {
                  "attributes": {
                    "jackets_coats": [
                      [
                        "blanket (coat)",
                        0.8722688555717468
                      ]
                    ],
                    "length": [
                      [
                        "knee (length)",
                        0.5171285271644592
                      ]
                    ],
                    "neckline_type": [
                      [
                        "cowl (neck)",
                        0.6793023943901062
                      ]
                    ],
                    "pattern": [
                      [
                        "herringbone (pattern)",
                        0.7162534594535828
                      ]
                    ],
                    "silhouette_fit": [
                      [
                        "loose (fit)",
                        0.636439859867096
                      ]
                    ],
                    "special_features": [
                      [
                        "lining",
                        0.7706462740898132
                      ]
                    ]
                  },
                  "bounding-box": [
                    1134.8407251769358,
                    329.68173828125003,
                    553.6939594608659,
                    951.91826171875
                  ],
                  "categories": [
                    "fashion"
                  ],
                  "confidence": 0.9895038604736328
                }
              ]
            }
          }
        }
      }
    }
  },
```
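As a sketch of how you might consume a response like this in Python, the helper below (an illustration, not part of any SDK; it assumes the response has already been parsed into a dictionary, e.g. the return value of an SDK upload call) walks the detection data and converts each bounding box from `[x, y, width, height]` to corner coordinates:

```python
def extract_detections(upload_response, model="cld-fashion"):
    """Collect detected objects, confidences, and bounding-box corners
    from an upload/update response dictionary."""
    tags = (upload_response.get("info", {})
            .get("detection", {})
            .get("object_detection", {})
            .get("data", {})
            .get(model, {})
            .get("tags", {}))
    detections = []
    for name, instances in tags.items():
        for inst in instances:
            entry = {"object": name, "confidence": inst["confidence"]}
            box = inst.get("bounding-box")
            if box:
                x, y, w, h = box
                # Convert [x, y, width, height] to corner coordinates.
                entry["top_left"] = (x, y)
                entry["bottom_right"] = (x + w, y + h)
            detections.append(entry)
    return detections
```

For example, calling `extract_detections(result)` on the response above would yield one entry for the hat and one for the outerwear, each with its confidence and box corners.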

### Adding tags to images

By providing the `auto_tagging` parameter to an `upload` or `update` request, images are automatically assigned tags based on the detected content. The value of the `auto_tagging` parameter is the minimum confidence score that a detected category or object must reach for it to be automatically assigned as a tag. You can also set `auto_tagging` to `default`, which uses the model's [default confidence](#supported_content_aware_detection_models). 

The following code example automatically tags an uploaded image with all detected categories that have a confidence score higher than 0.6. 

```multi
|ruby
Cloudinary::Uploader.upload("winter_fashion.jpg", 
  detection: "cld-fashion", auto_tagging: 0.6)

|php_2
$cloudinary->uploadApi()->upload("winter_fashion.jpg", 
  ["detection" => "cld-fashion", "auto_tagging" => 0.6]);

|python
cloudinary.uploader.upload("winter_fashion.jpg",
  detection = "cld-fashion", auto_tagging = 0.6)

|nodejs
cloudinary.v2.uploader
.upload("winter_fashion.jpg", 
  { detection: "cld-fashion", 
    auto_tagging: 0.6 })
.then(result=>console.log(result)); 

|java
cloudinary.uploader().upload("winter_fashion.jpg", ObjectUtils.asMap(
  "detection", "cld-fashion", "auto_tagging", "0.6"));

|csharp
var uploadParams = new ImageUploadParams() 
{
  File = new FileDescription(@"winter_fashion.jpg"),
  Detection = "cld-fashion",
  AutoTagging = 0.6
};
var uploadResult = cloudinary.Upload(uploadParams);  

|go
resp, err := cld.Upload.Upload(ctx, "winter_fashion.jpg", uploader.UploadParams{
		Detection:   "cld-fashion",
		AutoTagging: 0.6})

|android
MediaManager.get().upload("winter_fashion.jpg")
  .option("detection", "cld-fashion")
  .option("auto_tagging", "0.6").dispatch();

|swift
let params = CLDUploadRequestParams()
  .setDetection("cld-fashion")
  .setAutoTagging(0.6)
var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
let request = cloudinary.createUploader().signedUpload(
  url: "winter_fashion.jpg", params: params) 

|cli
cld uploader upload "winter_fashion.jpg" detection="cld-fashion" auto_tagging=0.6

|curl
curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/winter_fashion.jpg' -F 'detection=cld-fashion' -F 'auto_tagging=0.6' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
```

The response to the upload request returns the detected categories as well as the assigned tags for categories meeting the minimum confidence score of 0.6:

```json
{ 
...    
  "tags": [
    "hat",
    "outerwear"
  ],
...
}
```
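The selection logic can be mirrored locally: a tag qualifies when at least one detected instance of it meets the threshold. The following sketch is for illustration only (the `detection_tags` dictionary is assumed to be the `tags` object from a detection response):

```python
def tags_above_threshold(detection_tags, threshold=0.6):
    """Return tag names whose best detection confidence meets the
    threshold, mirroring how auto_tagging selects tags server-side."""
    return sorted(
        name for name, instances in detection_tags.items()
        if any(inst["confidence"] >= threshold for inst in instances)
    )
```

With the confidences from the earlier response (hat at ~0.985, outerwear at ~0.99), both tags pass a 0.6 threshold, matching the `tags` array returned by the upload.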

You can also use the `update` method to apply auto-tagging to images already stored in your product environment.

The following example uses Cloudinary's `update` method on the `puppy` image in the product environment to detect objects and categories using the LVIS model. Tags are automatically assigned based on the objects and categories detected with a confidence level above 90%.

```multi
|ruby
Cloudinary::Api.update("puppy", 
  detection: "lvis", 
  auto_tagging: 0.9)

|php_2
$cloudinary->api()->update("puppy", [
  "detection" => "lvis", 
  "auto_tagging" => 0.9]);

|python
cloudinary.api.update("puppy",
  detection = "lvis", 
  auto_tagging = 0.9)

|nodejs
cloudinary.v2.api
.update("puppy", 
  { detection: "lvis", 
    auto_tagging: 0.9 })
.then(result=>console.log(result));

|java
cloudinary.api().update("puppy", ObjectUtils.asMap(
  "detection", "lvis", 
  "auto_tagging", 0.9));

|csharp
var updateParams = new UpdateParams("puppy") 
{
  Detection = "lvis",
  AutoTagging = 0.9
};
var updateResult = cloudinary.UpdateResource(updateParams); 

|go
resp, err := cld.Admin.UpdateAsset(ctx, admin.UpdateAssetParams{
		PublicID:    "puppy",
		Detection:   "lvis",
		AutoTagging: 0.9})

|cli
cld admin update "puppy" detection="lvis" auto_tagging=0.9
```

You can use the Admin API's [resources_by_tag](admin_api#get_resources_by_tag) method to return all resources with a certain tag, for example `hat`:

```multi
|curl
curl https://<API_KEY>:<API_SECRET>@api.cloudinary.com/v1_1/<cloud_name>/resources/image/tags/hat
       
|ruby
Cloudinary::Api.resources_by_tag('hat')

|php_2
$api->assetsByTag("hat");

|python
cloudinary.api.resources_by_tag("hat")

|nodejs
cloudinary.v2.api
.resources_by_tag("hat")
.then(result=>console.log(result));

|java
api.resourcesByTag("hat", ObjectUtils.emptyMap());

|csharp
cloudinary.ListResourcesByTag("hat");  

|go
resp, err := cld.Admin.AssetsByTag(ctx, admin.AssetsByTagParams{Tag: "hat"})

|cli
cld admin resources_by_tag hat
```

You can also use the [search method](search_method) or the [Media Library advanced search](https://console.cloudinary.com/console/media_library/search) to find images with certain tags.

### Asynchronous handling

As automatic image tagging may not be immediate, it is good practice to use asynchronous handling for these calls.

To make the call asynchronous, set the `async` parameter of the `upload` method to `true`. To be notified when the processing is complete, you can either set the `notification_url` parameter of the `upload` method (as in the example below) or the global webhook **Notification URL** in the **Upload** page of your Cloudinary Console Settings.

```multi
|ruby
Cloudinary::Uploader.upload("winter_fashion.jpg", 
  detection: "cld-fashion", auto_tagging: 0.6, 
  async: true, 
  notification_url: "https://mysite.example.com/upload_endpoint")

|php_2
$cloudinary->uploadApi()->upload("winter_fashion.jpg", 
  ["detection" => "cld-fashion", "auto_tagging" => 0.6, 
  "async" => true, 
  "notification_url" => "https://mysite.example.com/upload_endpoint"]);

|python
cloudinary.uploader.upload("winter_fashion.jpg",
  detection = "cld-fashion", auto_tagging = 0.6, 
  async = True, 
  notification_url = "https://mysite.example.com/upload_endpoint")

|nodejs
cloudinary.v2.uploader
.upload("winter_fashion.jpg", 
  { detection: "cld-fashion", 
    auto_tagging: 0.6, 
    async: true,
    notification_url: "https://mysite.example.com/upload_endpoint" })
.then(result=>console.log(result)); 

|java
cloudinary.uploader().upload("winter_fashion.jpg", ObjectUtils.asMap(
  "detection", "cld-fashion", 
  "auto_tagging", "0.6",
  "async", true,
  "notification_url", "https://mysite.example.com/upload_endpoint"));

|csharp
var uploadParams = new ImageUploadParams() 
{
  File = new FileDescription(@"winter_fashion.jpg"),
  Detection = "cld-fashion",
  AutoTagging = 0.6,
  Async = true,
  NotificationUrl = "https://mysite.example.com/upload_endpoint"
};
var uploadResult = cloudinary.Upload(uploadParams);  

|go
resp, err := cld.Upload.Upload(ctx, "winter_fashion.jpg", uploader.UploadParams{
		Detection:   "cld-fashion",
		AutoTagging: 0.6,
    Async: true,
    NotificationURL: "https://mysite.example.com/upload_endpoint"})

|android
MediaManager.get().upload("winter_fashion.jpg")
  .option("detection", "cld-fashion")
  .option("auto_tagging", "0.6")
  .option("async", true)
  .option("notification_url", "https://mysite.example.com/upload_endpoint").dispatch();

|swift
let params = CLDUploadRequestParams()
  .setDetection("cld-fashion")
  .setAutoTagging(0.6)
  .setAsync(true)
  .setNotificationUrl("https://mysite.example.com/upload_endpoint")
var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
let request = cloudinary.createUploader().signedUpload(
  url: "winter_fashion.jpg", params: params) 

|cli
cld uploader upload "winter_fashion.jpg" detection="cld-fashion" auto_tagging=0.6 async=true notification_url="https://mysite.example.com/upload_endpoint"

|curl
curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/winter_fashion.jpg' -F 'detection=cld-fashion' -F 'auto_tagging=0.6' -F 'async=true' -F 'notification_url=https://mysite.example.com/upload_endpoint' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
```

The response to an asynchronous upload call looks similar to this:

```json
{
  "status": "pending",
  "type": "upload",
  "batch_id": "a7877927ae1af0d1115485018ce92a6792c97938bb3edb9b0777d4663d6abbee"
}
```

When the processing is finished, the complete upload response is sent to the notification URL that you specified.

```json
{
  "notification_type": "upload",
  "timestamp": "2022-06-15T14:49:17+00:00",
  "request_id": "3fcdc820276ace1fa51d4b345d28286b",
  "asset_id": "8580b2beaba30dc93a72f05db9f3d47d",
  "public_id": "ow1pnmgfxkdp1mqkkoac",
  ...
  "access_mode": "public",
  "info": {
    "detection": {
      "object_detection": {
        "status": "complete",
        "data": {
          "cld-fashion": {
            "model_name": "cld-fashion",
            "model_version": 4,
            "schema_version": 1,
            "tags": {
              ...
  },
  "original_filename": "winter_fashion"
}
```
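On the receiving side, your endpoint needs to distinguish a still-pending acknowledgment from the final notification payload and pull out the assigned tags. A minimal, framework-agnostic sketch (the field names come from the responses shown above; `handle_notification` is a hypothetical helper, not part of any SDK):

```python
def handle_notification(payload):
    """Return (public_id, status, tags) from an upload notification
    payload that has already been parsed from JSON."""
    status = (payload.get("info", {})
              .get("detection", {})
              .get("object_detection", {})
              .get("status", "pending"))
    tags = []
    if status == "complete":
        data = payload["info"]["detection"]["object_detection"]["data"]
        # Each key under "tags" is a detected object/category name.
        for model in data.values():
            tags.extend(model.get("tags", {}).keys())
    return payload.get("public_id"), status, tags
```

You would typically call this from whatever route handler serves your `notification_url`, after verifying the notification signature.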

## Image quality analysis

You can analyze the quality of an image using the Image Quality Analysis (IQA) model by setting the `detection` parameter to `iqa` when calling Cloudinary's [upload](image_upload_api_reference#upload) or [update](admin_api#update_details_of_an_existing_resource) methods.

A quality score from 0 to 1 is returned in the `score` attribute of the response, and a general quality estimation of `low`, `medium`, or `high` is returned in the `quality` attribute.

For example, invoking the `iqa` model while uploading `winter_fashion.jpg`:

```multi
|ruby
Cloudinary::Uploader.upload("winter_fashion.jpg", 
  detection: "iqa")

|php_2
$cloudinary->uploadApi()->upload("winter_fashion.jpg", 
  ["detection" => "iqa"]);

|python
cloudinary.uploader.upload("winter_fashion.jpg",
  detection = "iqa")

|nodejs
cloudinary.v2.uploader
.upload("winter_fashion.jpg", 
  { detection: "iqa" })
.then(result=>console.log(result)); 

|java
cloudinary.uploader().upload("winter_fashion.jpg", ObjectUtils.asMap(
  "detection", "iqa"));

|csharp
var uploadParams = new ImageUploadParams() 
{
  File = new FileDescription(@"winter_fashion.jpg"),
  Detection = "iqa"
};
var uploadResult = cloudinary.Upload(uploadParams); 

|go
resp, err := cld.Upload.Upload(ctx, "winter_fashion.jpg", uploader.UploadParams{
		Detection: "iqa"})

|android
MediaManager.get().upload("winter_fashion.jpg")
  .option("detection", "iqa").dispatch();

|swift
let params = CLDUploadRequestParams().setDetection("iqa")
var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
let request = cloudinary.createUploader().signedUpload(
  url: "winter_fashion.jpg", params: params) 

|cli
cld uploader upload "winter_fashion.jpg" detection="iqa"

|curl
curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/winter_fashion.jpg' -F 'detection=iqa' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
```

The response includes the `iqa-analysis` field:

```json
  "info": {
    "detection": {
      "object_detection": {
        "status": "complete",
        "data": {
          "iqa": {
            "model_name": "iqa",
            "model_version": 1,
            "schema_version": 1,
            "tags": {
              "iqa-analysis": [
                {
                  "attributes": {
                    "quality": "medium",
                    "score": 0.4521875
                  },
                  "categories": [],
                  "confidence": 1.0
                }
              ]
            }
          }
        }
      }
    }
  },
```
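Reading the score and quality label out of this response is a matter of walking the same structure. The helper below is a sketch (assuming the response is already a parsed dictionary) that you could use, for example, to gate whether an image is accepted:

```python
def image_quality(upload_response):
    """Return (quality_label, score) from an IQA detection response."""
    analysis = (upload_response["info"]["detection"]["object_detection"]
                ["data"]["iqa"]["tags"]["iqa-analysis"][0]["attributes"])
    return analysis["quality"], analysis["score"]
```

For the response above, this returns `("medium", 0.4521875)`, which your application could compare against its own acceptance threshold.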

> **NOTES**:
>
> * You can also use [asynchronous handling](#asynchronous_handling) as described for automatic image tagging.
> * Learn about other ways to perform [image quality analysis](image_quality_analysis).

## Watermark detection

You can detect watermarks in images by setting the `detection` parameter to `watermark-detection` when calling Cloudinary's [upload](image_upload_api_reference#upload) or [update](admin_api#update_details_of_an_existing_resource) methods. The response can be one of the following: `banner` (see [banners](#banners)), `watermark` (see [watermarks](#watermarks)), or, if neither of these is detected, `clean`.

> **NOTE**: You can add the tags `banner`, `watermark`, or `clean` by setting the `auto_tagging` parameter, as described in [Adding tags to images](#adding_tags_to_images), and you can also use [asynchronous handling](#asynchronous_handling).

### Banners

If the image contains an opaque text/logo layer with a semi-transparent background, it is likely that the image will be flagged as containing a **banner**.

For example, when uploading the following image and requesting `watermark-detection`, the response shows 99% confidence that it contains a banner: 

![Example of a banner watermark](https://res.cloudinary.com/demo/image/upload/f_auto/q_auto/docs/roses-ban.jpg "thumb:c_scale,w_550/dpr_2.0, width: 500, with_url:false, with_code:false")

Upload request:

```multi
|ruby
Cloudinary::Uploader.upload("roses-ban.jpg", 
  detection: "watermark-detection")

|php_2
$cloudinary->uploadApi()->upload("roses-ban.jpg", 
  ["detection" => "watermark-detection"]);

|python
cloudinary.uploader.upload("roses-ban.jpg",
  detection = "watermark-detection")

|nodejs
cloudinary.v2.uploader
.upload("roses-ban.jpg", 
  { detection: "watermark-detection" })
.then(result=>console.log(result)); 

|java
cloudinary.uploader().upload("roses-ban.jpg", ObjectUtils.asMap(
  "detection", "watermark-detection"));

|csharp
var uploadParams = new ImageUploadParams() 
{
  File = new FileDescription(@"roses-ban.jpg"),
  Detection = "watermark-detection"
};
var uploadResult = cloudinary.Upload(uploadParams); 

|go
resp, err := cld.Upload.Upload(ctx, "roses-ban.jpg", uploader.UploadParams{
		Detection: "watermark-detection"})

|android
MediaManager.get().upload("roses-ban.jpg")
  .option("detection", "watermark-detection").dispatch();

|swift
let params = CLDUploadRequestParams().setDetection("watermark-detection")
var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
let request = cloudinary.createUploader().signedUpload(
  url: "roses-ban.jpg", params: params) 

|cli
cld uploader upload "roses-ban.jpg" detection="watermark-detection"

|curl
curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/roses-ban.jpg' -F 'detection=watermark-detection' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
```

The response includes:

```json
  "info": {
    "detection": {
      "object_detection": {
        "status": "complete",
        "data": {
          "watermark-detection": {
            "model_name": "watermark-detection",
            "model_version": 1,
            "schema_version": 1,
            "tags": {
              "banner": [
                {
                  "categories": [],
                  "confidence": 0.99
                }
              ]
            }
          }
        }
      }
    }
  },
```
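Because the model reports exactly one of `banner`, `watermark`, or `clean` under `tags`, a response like this can be reduced to a single verdict. The helper below is a hypothetical sketch operating on the parsed response dictionary (the `clean` fallback for an empty `tags` object is an assumption for robustness):

```python
def classify_watermark(upload_response):
    """Return (verdict, confidence) where verdict is 'banner',
    'watermark', or 'clean'."""
    tags = (upload_response["info"]["detection"]["object_detection"]
            ["data"]["watermark-detection"]["tags"])
    for verdict in ("banner", "watermark", "clean"):
        if verdict in tags:
            return verdict, tags[verdict][0]["confidence"]
    return "clean", 1.0
```

For the banner response above, this returns `("banner", 0.99)`.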

### Watermarks

If the image contains a semi-transparent layer, it is likely that the image will be flagged as containing a **watermark**.

For example, when uploading the following image and requesting `watermark-detection`, the response shows 99% confidence that it contains a watermark: 

![Example of a watermark](https://res.cloudinary.com/demo/image/upload/f_auto/q_auto/docs/roses-wm.jpg "thumb:c_scale,w_550/dpr_2.0, width: 500, with_url:false, with_code:false")

Upload request:

```multi
|ruby
Cloudinary::Uploader.upload("roses-wm.jpg", 
  detection: "watermark-detection")

|php_2
$cloudinary->uploadApi()->upload("roses-wm.jpg", 
  ["detection" => "watermark-detection"]);

|python
cloudinary.uploader.upload("roses-wm.jpg",
  detection = "watermark-detection")

|nodejs
cloudinary.v2.uploader
.upload("roses-wm.jpg", 
  { detection: "watermark-detection" })
.then(result=>console.log(result)); 

|java
cloudinary.uploader().upload("roses-wm.jpg", ObjectUtils.asMap(
  "detection", "watermark-detection"));

|csharp
var uploadParams = new ImageUploadParams() 
{
  File = new FileDescription(@"roses-wm.jpg"),
  Detection = "watermark-detection"
};
var uploadResult = cloudinary.Upload(uploadParams); 

|go
resp, err := cld.Upload.Upload(ctx, "roses-wm.jpg", uploader.UploadParams{
		Detection: "watermark-detection"})

|android
MediaManager.get().upload("roses-wm.jpg")
  .option("detection", "watermark-detection").dispatch();

|swift
let params = CLDUploadRequestParams().setDetection("watermark-detection")
var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
let request = cloudinary.createUploader().signedUpload(
  url: "roses-wm.jpg", params: params) 

|cli
cld uploader upload "roses-wm.jpg" detection="watermark-detection"

|curl
curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/roses-wm.jpg' -F 'detection=watermark-detection' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
```

The response includes:

```json
  "info": {
    "detection": {
      "object_detection": {
        "status": "complete",
        "data": {
          "watermark-detection": {
            "model_name": "watermark-detection",
            "model_version": 1,
            "schema_version": 1,
            "tags": {
              "watermark": [
                {
                  "categories": [],
                  "confidence": 0.99
                }
              ]
            }
          }
        }
      }
    }
  },
```

## AI-based image captioning

The Cloudinary AI Content Analysis add-on can be used to analyze an image and suggest a caption based on the image's contents.

Some example captions suggested by the AI:


1. **A brown dog standing on top of a street next to a sidewalk with a building in the background**
2. **A group of young children playing soccer on a soccer field with a goal post in the foreground and a goal post in the background**
3. **A hand reaching for a donut with chocolate and sprinkles on it on a dark surface**

By setting the `detection` parameter to `captioning` when calling Cloudinary's [upload](image_upload_api_reference#upload) or [update](admin_api#update_details_of_an_existing_resource) methods, the add-on automatically analyzes the content of the image. For example, invoking the `captioning` detection model while uploading `toy_room.jpg`:

```multi
|ruby
Cloudinary::Uploader.upload("toy_room.jpg", 
  detection: "captioning")

|php_2
$cloudinary->uploadApi()->upload("toy_room.jpg", 
  ["detection" => "captioning"]);

|python
cloudinary.uploader.upload("toy_room.jpg",
  detection = "captioning")

|nodejs
cloudinary.v2.uploader
.upload("toy_room.jpg", 
  { detection: "captioning" })
.then(result=>console.log(result)); 

|java
cloudinary.uploader().upload("toy_room.jpg", ObjectUtils.asMap(
  "detection", "captioning"));

|csharp
var uploadParams = new ImageUploadParams() 
{
  File = new FileDescription(@"toy_room.jpg"),
  Detection = "captioning"
};
var uploadResult = cloudinary.Upload(uploadParams); 

|go
resp, err := cld.Upload.Upload(ctx, "toy_room.jpg", uploader.UploadParams{
		Detection: "captioning"})

|android
MediaManager.get().upload("toy_room.jpg")
  .option("detection", "captioning").dispatch();

|swift
let params = CLDUploadRequestParams().setDetection("captioning")
var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
let request = cloudinary.createUploader().signedUpload(
  url: "toy_room.jpg", params: params) 

|cli
cld uploader upload "toy_room.jpg" detection="captioning"

|curl
curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/toy_room.jpg' -F 'detection=captioning' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
```

![Toy room](https://res.cloudinary.com/demo/image/upload/w_500/toy_room.jpg "with_code: false, with_url: false")


The upload API response includes the captioning information: 

```json
{
  "asset_id": "a30dc93a8580b272f05db9f3d47dbeab",
  "public_id": "1mqow1pnmgfxkkoackdp",
  ...
  ...
  "info": {
    "detection": {
      "captioning": {
        "status": "complete",
        "data": {
           "caption": "A little girl playing with a toy tablet in a room with other children’s toys and toys on the floor"
        },
        "model_version": 1.0,
        "schema_version": 1.0
      }
    }
  },
  "original_filename": "toy_room"
  ...
  ...  
}
```
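To act on the caption programmatically, for example to store it as contextual metadata or alt text, you first need to pull it out of the response. A minimal sketch (assuming the response is a parsed dictionary; the `default` fallback is an assumption for the case where captioning has not yet completed):

```python
def get_caption(upload_response, default=None):
    """Return the suggested caption from an upload/update response,
    or `default` if captioning has not completed."""
    captioning = (upload_response.get("info", {})
                  .get("detection", {})
                  .get("captioning", {}))
    if captioning.get("status") == "complete":
        return captioning.get("data", {}).get("caption", default)
    return default
```

The returned string can then be passed to the Admin API's `update` method, e.g. as a `context` value.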

> **TIP**:
>
> :title=Tips
> * You can retrieve the caption text value from the response and then use the `update` method of the Admin API to add the caption text to the metadata of images stored in your product environment, such as the contextual metadata (`context`) or a structured metadata field (`metadata`).
> * After you've requested a caption using the `upload` or `update` method, you can use the Admin API [get details of a single resource](admin_api#get_details_of_a_single_resource_by_public_id) method to return details of the image, including the stored caption value.
> * You can also request analysis using the [Analyze API](analyze_api_guide) (Beta), which also accepts external assets to analyze.
> * [Watch a video tutorial](use_ai_to_generate_image_captions_tutorial) showing how to automatically set alt text for images in a Next.js application.

### Asynchronous handling

As the response may not be immediate, it is good practice to use asynchronous handling for these calls.

To make the call asynchronous, set the `async` parameter of the `upload` method to `true`. To be notified when the processing is complete, you can either set the `notification_url` parameter of the `upload` method (as in the example below) or the global webhook **Notification URL** in the **Upload** page of your Cloudinary Console Settings.

```multi
|ruby
Cloudinary::Uploader.upload("toy_room.jpg", 
  detection: "captioning",
  async: true, 
  notification_url: "https://mysite.example.com/upload_endpoint")

|php_2
$cloudinary->uploadApi()->upload("toy_room.jpg", 
  ["detection" => "captioning",
  "async" => true, 
  "notification_url" => "https://mysite.example.com/upload_endpoint"]);

|python
cloudinary.uploader.upload("toy_room.jpg",
  detection = "captioning", 
  async = True, 
  notification_url = "https://mysite.example.com/upload_endpoint")

|nodejs
cloudinary.v2.uploader
.upload("toy_room.jpg", 
  { detection: "captioning", 
    async: true,
    notification_url: "https://mysite.example.com/upload_endpoint" })
.then(result=>console.log(result)); 

|java
cloudinary.uploader().upload("toy_room.jpg", ObjectUtils.asMap(
  "detection", "captioning", 
  "async", true,
  "notification_url", "https://mysite.example.com/upload_endpoint"));

|csharp
var uploadParams = new ImageUploadParams() 
{
  File = new FileDescription(@"toy_room.jpg"),
  Detection = "captioning",
  Async = true,
  NotificationUrl = "https://mysite.example.com/upload_endpoint"
};
var uploadResult = cloudinary.Upload(uploadParams);  

|go
resp, err := cld.Upload.Upload(ctx, "toy_room.jpg", uploader.UploadParams{
		Detection:   "captioning",
    Async: true,
    NotificationURL: "https://mysite.example.com/upload_endpoint"})

|android
MediaManager.get().upload("toy_room.jpg")
  .option("detection", "captioning")
  .option("async", true)
  .option("notification_url", "https://mysite.example.com/upload_endpoint").dispatch();

|swift
let params = CLDUploadRequestParams()
  .setDetection("captioning")
  .setAsync(true)
  .setNotificationUrl("https://mysite.example.com/upload_endpoint")
var mySig = MyFunction(params)  // your own function that returns a signature generated on your backend
params.setSignature(CLDSignature(signature: mySig.signature, timestamp: mySig.timestamp))
let request = cloudinary.createUploader().signedUpload(
  url: "toy_room.jpg", params: params) 

|cli
cld uploader upload "toy_room.jpg" detection="captioning" async=true notification_url="https://mysite.example.com/upload_endpoint"

|curl
curl https://api.cloudinary.com/v1_1/demo/image/upload -X POST -F 'file=@/path/to/toy_room.jpg' -F 'detection=captioning' -F 'async=true' -F 'notification_url=https://mysite.example.com/upload_endpoint' -F 'timestamp=173719931' -F 'api_key=436464676' -F 'signature=a781d61f86a6f818af'
```

The response to an asynchronous upload call looks similar to this:

```json
{
  "status": "pending",
  "type": "upload",
  "batch_id": "a7877927ae1af0d1115485018ce92a6792c97938bb3edb9b0777d4663d6abbee"
}
```

When the processing is finished, the complete upload response is sent to the notification URL that you specified.

```json
{
  "notification_type": "upload",
  "timestamp": "2023-03-03T14:49:17+00:00",
  "request_id": "ce1fa51d4b3fcdc820276a345d28286b",
  "asset_id": "beaba30dc93a8580b272f05db9f3d47d",
  "public_id": "kdp1mqow1pnmgfxkkoac",
  ...
  ...
  "access_mode": "public",
    "info": {
      "detection": {
        "captioning": {
          "status": "complete",
          "data": {
             "caption": "A little girl playing with a toy tablet in a room with other children’s toys and toys on the floor"
          },
          "model_version": 1.0,
          "schema_version": 1.0
        }
    }
  },
  "original_filename": "toy_room"
  ...
  ...
}
```

.select-wrapper {
    position: relative;
    width: 200px;
    margin-bottom: 20px;
}
.custom-select {
    position: relative;
    width: 100%;
}
.select-selected {
    background-color: var(--dropdown-menu-bg-color);
    padding: 10px 35px 10px 10px;
    font-size: 16px;
    border: 1px solid #ccc;
    border-radius: 4px;
    cursor: pointer;
}
.select-selected::after {
    content: '\25BC';
    position: absolute;
    top: 50%;
    right: 10px;
    transform: translateY(-50%);
    pointer-events: none;
}
.select-items {
    position: absolute;
    background-color: var(--dropdown-menu-bg-color);
    top: 100%;
    left: 0;
    right: 0;
    z-index: 99;
    border: 1px solid #ccc;
    border-top: none;
    border-radius: 0 0 4px 4px;
    max-height: 300px;
    overflow-y: auto;
    display: none;
}
.select-item {
    padding: 10px;
    cursor: pointer;
}
.select-item:hover {
    background-color: var(--dropdown-background-active-color);
}
.language-icon {
    width: 20px;
    height: 20px;
    margin-right: 8px;
    vertical-align: middle;
}
select {
    appearance: none;
    -webkit-appearance: none;
    width: 100%;
    padding: 10px 35px 10px 10px;
    font-size: 16px;
    border: 1px solid #ccc;
    border-radius: 4px;
    background-color: var(--dropdown-menu-bg-color);
    cursor: pointer;
}
.select-wrapper::after {
    content: '\25BC';
    position: absolute;
    top: 50%;
    right: 10px;
    transform: translateY(-50%);
    pointer-events: none;
}

.select-css-pocs {
	display: block;
	font-size: 16px;
	font-family: sans-serif;
	font-weight: 700;
	color: #3448c5;
	line-height: 1.3;
	padding: .6em 1.4em .5em .8em;
	width: 100%;
	max-width: 100%; /* useful when width is set to anything other than 100% */
	box-sizing: border-box;
	margin: 0;
	border: 1px solid #aaa;
	box-shadow: 0 1px 0 1px rgba(0,0,0,.04);
	border-radius: .5em;
	-moz-appearance: none;
	-webkit-appearance: none;
	appearance: none;
	background-color: #fff;
	/* note: the bg image is an svg data uri for the arrow icon.
	   To change the icon color, use `%23` instead of `#`, since it's a url.
	   You can also swap in a different svg icon or an external image reference. */
	background-image: url('data:image/svg+xml;charset=US-ASCII,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%22292.4%22%20height%3D%22292.4%22%3E%3Cpath%20fill%3D%22%233448c5%22%20d%3D%22M287%2069.4a17.6%2017.6%200%200%200-13-5.4H18.4c-5%200-9.3%201.8-12.9%205.4A17.6%2017.6%200%200%200%200%2082.2c0%205%201.8%209.3%205.4%2012.9l128%20127.9c3.6%203.6%207.8%205.4%2012.8%205.4s9.2-1.8%2012.8-5.4L287%2095c3.5-3.5%205.4-7.8%205.4-12.8%200-5-1.9-9.2-5.5-12.8z%22%2F%3E%3C%2Fsvg%3E');
	background-repeat: no-repeat;
	/* arrow icon position (.7em from the right, 50% vertical) */
	background-position: right .7em top 50%;
	/* icon size */
	background-size: .65em auto;
}

/* CSS for demos */

.select-css {
	display: block;
	font-size: 16px;
	font-family: sans-serif;
	font-weight: 700;
	color: #FF5050;
	line-height: 1.3;
	padding: .6em 1.4em .5em .8em;
	width: 100%;
	max-width: 100%; /* useful when width is set to anything other than 100% */
	box-sizing: border-box;
	margin: 0;
	border: 1px solid #aaa;
	box-shadow: 0 1px 0 1px rgba(0,0,0,.04);
	border-radius: .5em;
	-moz-appearance: none;
	-webkit-appearance: none;
	appearance: none;
	background-color: #fff;
	/* note: the bg image is an svg data uri for the arrow icon.
	   To change the icon color, use `%23` instead of `#`, since it's a url.
	   You can also swap in a different svg icon or an external image reference. */
	background-image: url('data:image/svg+xml;charset=US-ASCII,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%22292.4%22%20height%3D%22292.4%22%3E%3Cpath%20fill%3D%22%23FF5050%22%20d%3D%22M287%2069.4a17.6%2017.6%200%200%200-13-5.4H18.4c-5%200-9.3%201.8-12.9%205.4A17.6%2017.6%200%200%200%200%2082.2c0%205%201.8%209.3%205.4%2012.9l128%20127.9c3.6%203.6%207.8%205.4%2012.8%205.4s9.2-1.8%2012.8-5.4L287%2095c3.5-3.5%205.4-7.8%205.4-12.8%200-5-1.9-9.2-5.5-12.8z%22%2F%3E%3C%2Fsvg%3E');
	background-repeat: no-repeat;
	/* arrow icon position (.7em from the right, 50% vertical) */
	background-position: right .7em top 50%;
	/* icon size */
	background-size: .65em auto;
}
/* Hide arrow icon in IE browsers */
.select-css::-ms-expand {
	display: none;
}
/* Hover style */
.select-css:hover {
	border-color: #888;
}
/* Focus style */
.select-css:focus {
	border-color: #FF5050;
	/* It'd be nice to use -webkit-focus-ring-color here but it doesn't work on box-shadow */
	box-shadow: 0 0 1px 3px rgba(255, 80, 80, .7);
	box-shadow: 0 0 0 3px -moz-mac-focusring;
	color: #FF5050; 
	outline: none;
}

/* Set options to normal weight */
.select-css option {
	font-weight:normal;
}

/* Support for rtl text, explicit support for Arabic and Hebrew */
*[dir="rtl"] .select-css, :root:lang(ar) .select-css, :root:lang(iw) .select-css {
	background-position: left .7em top 50%, 0 0;
	padding: .6em .8em .5em 1.4em;
}

/* Disabled styles */
.select-css:disabled, .select-css[aria-disabled=true] {
	color: graytext;
	background-image: url('data:image/svg+xml;charset=US-ASCII,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%22292.4%22%20height%3D%22292.4%22%3E%3Cpath%20fill%3D%22graytext%22%20d%3D%22M287%2069.4a17.6%2017.6%200%200%200-13-5.4H18.4c-5%200-9.3%201.8-12.9%205.4A17.6%2017.6%200%200%200%200%2082.2c0%205%201.8%209.3%205.4%2012.9l128%20127.9c3.6%203.6%207.8%205.4%2012.8%205.4s9.2-1.8%2012.8-5.4L287%2095c3.5-3.5%205.4-7.8%205.4-12.8%200-5-1.9-9.2-5.5-12.8z%22%2F%3E%3C%2Fsvg%3E'),
	  linear-gradient(to bottom, #ffffff 0%,#e5e5e5 100%);
}

.select-css:disabled:hover, .select-css[aria-disabled=true] {
	border-color: #aaa;
}

table {
    table-layout: fixed;
}

.time_warn{
  color: #FF0000;
  font-size:12px;
}

.instructions {
  font-family: Tahoma;
  text-align: center;
  padding-left: 10%;
  padding-right: 10%;
  color: #0c163b;
}

.instructions-large {
  font-family: Tahoma;
  text-align: center;
  font-size: 20px;
  padding-left: 10%;
  padding-right: 10%;
  color: #0c163b;
}

.selectcontainer {
   color: #FF5050;
   font-weight: bold;
   font-size:90%;
}

.selectcontainer-padleft {
  color: #FF5050;
  font-weight: bold;
  font-size:90%;
  padding-left: 15%;
}

.size_value{
  color: #FF5050;
  font-weight: bold;
}

.thumb-img {
  border: solid 6px #aaa;
  border-radius: 6px;
  opacity: 0.5;
}

.thumb-img:hover {
  border: solid 6px #FF8383;
  border-radius: 6px;
  cursor: pointer;
  opacity: 1;
}

.thumb-img.active {
  border: solid 6px #FF5050;
  border-radius: 6px;
  opacity: 1;
}

.art-img, .photo-img  {
  border: solid 6px #aaa;
  border-radius: 6px;
  opacity: 0.5;
}

.art-img:hover,  .photo-img:hover{
  border: solid 6px #f5956c;
  border-radius: 6px;
  cursor: pointer;
  opacity: 1;
}

.art-img.active, .photo-img.active {
  border: solid 6px #FF5050;
  border-radius: 6px;
  opacity: 1;
}

.select_label{
   color: #3448C5;
   font-weight: bold;
}

.select_label1{
  color: #0c163b;
  font-weight: bold;
  font-size: 12px;
}

.select_label2{
  color: #0c163b;
  font-weight: normal;
}

.env_select_label{
   color: #3448C5;
   font-weight: bold;
   padding-left: 15%;
}

.sliders{
  display: inline;
}

.slider_value{
   color: #FF5050;
   font-weight: bold;
}

.slider_label{
   color: #3448C5;
   font-weight: bold;
  padding-left: 15%;
}

.step_number {
  background: black;
  color: white;
  width: 24px;
  height: 24px;
  display: inline-block;
  text-align: center;
  line-height: 24px;
  border-radius: 100px;
}

.slidecontainer {
  width: 85%; /* Width of the outside container */
  text-align: center;
  float: right;
  padding-right: 15%;
}

/* The slider itself */
.slider {
  -webkit-appearance: none;
  width: 100%;
  height: 15px;
  border-radius: 5px;
  background: #3448C5;
  outline: none;
  opacity: 0.7;
  -webkit-transition: .2s;
  transition: opacity .2s;
}

/* Mouse-over effects */
.slider:hover {
  opacity: 1; /* Fully shown on mouse-over */
}

.slider::-webkit-slider-thumb {
  -webkit-appearance: none;
  appearance: none;
  width: 25px;
  height: 25px;
  border-radius: 50%; 
  background: #FF5050;
  cursor: pointer;
}

.slider::-moz-range-thumb {
  width: 25px;
  height: 25px;
  border-radius: 50%;
  background: #FF5050;
  cursor: pointer;
}

.cloudinary-button {
  display: inline-block;
  padding: 15px 25px;
  font-size: 18px;
  cursor: pointer;
  text-align: center;
  text-decoration: none;
  outline: none;
  color: #fff;
  background-color: #FF5050;
  border: none;
  border-radius: 15px;
  box-shadow: 0 9px #999;
}

.cloudinary-button:hover {
  background-color: #ff0303;
  cursor: pointer;
}

.cloudinary-button:active {
  background-color: #ff0303;
  box-shadow: 0 5px #666;
  transform: translateY(4px);
}

.fix {
 display: block;
}

.loader {
  position: static;
  margin: auto;
  border: 16px solid #5A616A; 
  border-top: 16px solid #3448C5; 
  border-radius: 50%;
  width: 120px;
  height: 120px;
  animation: spin 2s linear infinite;
} 

@keyframes spin {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(360deg); }
}

.current-img {
	padding:8px 16px;
}

.demo-btn {
	border: 0;
	background-color:#FF5050;
	border-radius:30px;
	display:inline-block;
	cursor:pointer;
	color:#ffffff;
	font-size:14px;
	font-weight:600;
	padding:10px 16px;
	text-decoration:none;
}
.demo-btn:hover {
	background-color:#ff0303;
}

a.demo-btn:link, a.demo-btn:visited, a.demo-btn:hover, a.demo-btn:active {
	color: #ffffff;
	text-decoration:none;
}

.demo-btn:active {
	position:relative;
	top:1px;
}

span.mystep {
  background: #FF5050;
  border-radius: 0.8em;
  color: #ffffff;
  display: inline-block;
  font-weight: bold;
  line-height: 1.6em;
  margin-right: 5px;
  text-align: center;
  width: 1.6em;
}

.coordinates {
  text-align: center;
  color: #0c163b;
}

.tr_all, .br_all {
  display: inline-block;
  vertical-align: top;
  margin-left: 3em;
  margin-bottom: 1em;
  text-align: left;
}

.tl_all, .bl_all {
  display: inline-block;
  vertical-align: top;
  margin-right: 3em;
  margin-bottom: 1em;
  text-align: right;
}

.options {
  font-size: 18px;
  font-weight: bold;
  color: #0c163b;
}

.coordinate-value {
  color: #FF5050;
  font-weight: bold; 
  text-align: right;  
}

/* Accessible Media Demo Styles */

/* Dark theme support for audio description demo */
[data-theme="dark"] .audio-description-demo {
  border-color: var(--dark-border) !important;
  background: var(--dark-bg) !important;
  color: var(--dark-text) !important;
}

[data-theme="dark"] .audio-description-demo h4 {
  color: var(--dark-border) !important;
}

/* Video player demo styles */
#wordHighlight {
  height: 400px;
  padding-top: unset;
}
#wordHighlight > div.vjs-poster > picture > img {
  object-fit: contain;
}

#wordHighlight > div.vjs-poster > picture {
  background: var(--main-content-color);
}
#wordHighlight.video-js {
  background-color: var(--main-content-color);
}

/* Dark theme support for colorblind demo */
[data-theme="dark"] .colorblind-demo {
  background-color: #2d3748 !important;
  border-color: #4a5568 !important;
  color: #e2e8f0 !important;
}

[data-theme="dark"] .colorblind-demo label {
  color: #e2e8f0 !important;
}

[data-theme="dark"] .colorblind-demo select {
  background-color: #4a5568 !important;
  color: #e2e8f0 !important;
  border: 1px solid #718096 !important;
}

[data-theme="dark"] .url-display {
  background-color: #2c5282 !important;
  border: 1px solid #3182ce !important;
}

[data-theme="dark"] .url-display h4 {
  color: #63b3ed !important;
}

[data-theme="dark"] .url-display code {
  color: #e2e8f0 !important;
}

[data-theme="dark"] .tips-section {
  background-color: #744210 !important;
  border: 1px solid #975a16 !important;
}

[data-theme="dark"] .tips-section h4 {
  color: #fbb041 !important;
}

[data-theme="dark"] .tips-section,
[data-theme="dark"] .tips-section ul,
[data-theme="dark"] .tips-section li {
  color: #faf089 !important;
}

/* Dark theme support for text overlay demo */
[data-theme="dark"] .text-overlay-demo label,
[data-theme="dark"] .text-overlay-demo input,
[data-theme="dark"] .text-overlay-demo select {
  --text-color: #e2e8f0;
  --input-bg: #2d3748;
}

/* Dark theme support for OCR text content */
[data-theme="dark"] .ocr-text-content {
  background: var(--dark-bg) !important;
  color: var(--dark-text) !important;
}

/* Dark theme support for audio mixing demo */
[data-theme="dark"] .db-status-container {
  background: var(--dark-bg) !important;
  color: var(--dark-text) !important;
}

/* Dark theme support for motion demo */
[data-theme="dark"] .motion-demo-container {
  border-color: var(--dark-border) !important;
  background: var(--dark-bg) !important;
  color: var(--dark-text) !important;
}

/* Dark theme support for gallery demo container */
[data-theme="dark"] #accessible-gallery-demo {
  background: #2d3748 !important;
  border-color: #4a90e2 !important;
}

[data-theme="dark"] #accessible-gallery-demo h4 {
  color: #4a90e2 !important;
}

[data-theme="dark"] #accessible-gallery-demo p {
  color: #e2e8f0 !important;
}

/* Dark theme support for keyboard controls */
[data-theme="dark"] .keyboard-controls-container {
  border-color: var(--dark-border) !important;
  background: var(--dark-bg) !important;
  color: var(--dark-text) !important;
}

[data-theme="dark"] .keyboard-controls-container h4 {
  color: var(--dark-border) !important;
}

[data-theme="dark"] .keyboard-key {
  background: var(--dark-kbd-bg) !important;
  color: var(--dark-kbd-text) !important;
  border-color: #718096 !important;
}

/* Dark theme support for video player demo */
[data-theme="dark"] .video-player-demo {
  border-color: var(--dark-border) !important;
  background: var(--dark-bg) !important;
  color: var(--dark-text) !important;
}

[data-theme="dark"] .video-player-demo h4 {
  color: var(--dark-border) !important;
}

[data-theme="dark"] .video-demo-features {
  color: var(--dark-subtext) !important;
}

/* Dark theme support for upload widget demo */
[data-theme="dark"] .upload-widget-demo {
  border-color: var(--dark-border) !important;
  background: var(--dark-bg) !important;
  color: var(--dark-text) !important;
}

[data-theme="dark"] .upload-widget-demo h4 {
  color: var(--dark-border) !important;
}

[data-theme="dark"] .upload-widget-demo > div:last-child {
  color: var(--dark-subtext) !important;
}

/* X-Cld-Error Inspector Tool Styles */
.x-cld-error-inspector {
  max-width: 800px;
  margin: 20px 0;
  padding: 20px;
  border: 1px solid var(--inspector-border, #ddd);
  border-radius: 8px;
  background-color: var(--inspector-bg, #f9f9f9);
  color: var(--inspector-text, #333);
}

.x-cld-error-inspector .input-wrapper {
  margin-bottom: 15px;
}

.x-cld-error-inspector label {
  display: block;
  margin-bottom: 8px;
  font-weight: bold;
  color: var(--inspector-text, #333);
}

.x-cld-error-inspector input[type="text"] {
  width: 100%;
  padding: 10px;
  border: 1px solid var(--inspector-input-border, #ccc);
  border-radius: 4px;
  font-family: monospace;
  font-size: 14px;
  background-color: var(--inspector-input-bg, #fff);
  color: var(--inspector-text, #333);
  box-sizing: border-box;
}

.x-cld-error-inspector button.x-cld-inspect-btn {
  padding: 10px 25px;
  line-height: 1.4;
  background-color: var(--button-background-color);
  font-family: "Inter", Helvetica, Arial, sans-serif;
  color: var(--sign-up-button-color);
  font-weight: 600;
  font-size: 14px;
  text-transform: uppercase;
  border: none;
  border-radius: 20px;
  cursor: pointer;
  transition: filter 0.2s ease;
}

.x-cld-error-inspector button.x-cld-inspect-btn:hover {
  filter: brightness(85%);
}

.x-cld-error-inspector #result-container {
  margin-top: 20px;
}

.x-cld-error-inspector #loading {
  color: var(--inspector-loading, #666);
}

.x-cld-error-inspector .result-success {
  padding: 15px;
  background-color: var(--result-warning-bg, #fff3cd);
  border: 1px solid var(--result-warning-border, #ffc107);
  border-radius: 4px;
  margin-top: 10px;
  color: var(--inspector-text, #333);
}

.x-cld-error-inspector .result-error {
  padding: 15px;
  background-color: var(--result-error-bg, #f8d7da);
  border: 1px solid var(--result-error-border, #dc3545);
  border-radius: 4px;
  margin-top: 10px;
  color: var(--inspector-text, #333);
}

.x-cld-error-inspector .result-ok {
  padding: 15px;
  background-color: var(--result-success-bg, #d4edda);
  border: 1px solid var(--result-success-border, #28a745);
  border-radius: 4px;
  margin-top: 10px;
  color: var(--inspector-text, #333);
}

.x-cld-error-inspector .header-info {
  margin-top: 10px;
  padding: 10px;
  background-color: var(--header-info-bg, #e9ecef);
  border-radius: 4px;
  font-family: monospace;
  font-size: 13px;
  color: var(--inspector-text, #333);
}

.x-cld-error-inspector .header-label {
  font-weight: bold;
  color: var(--inspector-label, #495057);
}

/* Support for explicit dark theme class */
[data-theme="dark"] .x-cld-error-inspector {
  --inspector-border: #4a5568;
  --inspector-bg: #2d3748;
  --inspector-text: #e2e8f0;
  --inspector-input-border: #4a5568;
  --inspector-input-bg: #1a202c;
  --inspector-loading: #a0aec0;
  --inspector-label: #cbd5e0;
  --header-info-bg: #1a202c;
  --result-warning-bg: #744210;
  --result-warning-border: #d69e2e;
  --result-error-bg: #742a2a;
  --result-error-border: #fc8181;
  --result-success-bg: #22543d;
  --result-success-border: #48bb78;
}

/* Image Enhancement Demo Styles */

#image-enhancement-demo {
  max-width: 1200px;
  margin: 20px auto;
  padding: 20px;
  border: 1px solid var(--inspector-border, #ddd);
  border-radius: 8px;
  background-color: var(--inspector-bg, #f9f9f9);
}

#image-enhancement-demo h4 {
  color: var(--inspector-text, #333);
  margin-top: 0;
}

#image-thumbs {
  display: flex;
  flex-wrap: wrap;
  justify-content: center;
  margin-bottom: 30px;
  gap: 10px;
}

.thumb-container {
  text-align: center;
  margin: 10px;
}

.thumb-img {
  cursor: pointer;
  max-width: 150px;
  height: 100px;
  object-fit: cover;
}

.thumb-label {
  font-size: 12px;
  margin-top: 5px;
  color: var(--inspector-text, #333);
}

.demo-grid {
  display: grid;
  grid-template-columns: 1fr 1fr;
  gap: 20px;
  margin-bottom: 30px;
}

.image-section {
  margin-bottom: 20px;
}

.image-label-wrapper {
  margin-bottom: 10px;
}

.comparison-label {
  color: var(--inspector-text, #333);
}

.comparison-image {
  max-width: 100%;
  height: auto;
  border-radius: 4px;
  display: block;
  cursor: pointer;
}

.comparison-image.original {
  border: 2px solid var(--inspector-border, #ddd);
}

.comparison-image.enhanced {
  border: 2px solid #3448c5;
}

.enhancement-option-wrapper {
  margin-bottom: 15px;
}

.enhancement-option-label {
  display: flex;
  align-items: flex-start;
  cursor: pointer;
  padding: 12px;
  border-radius: 6px;
  border: 2px solid transparent;
  transition: all 0.2s ease;
  background: var(--inspector-input-bg, #fff);
  color: var(--inspector-text, #333);
}

.enhancement-option-label:hover {
  background: var(--dropdown-background-active-color, #f0f0f0);
}

.enhancement-option-label.selected {
  background: var(--inspector-input-bg, #fff);
  border-color: #3448c5;
}

.enhancement-option-label input[type="radio"] {
  margin-right: 10px;
  margin-top: 4px;
}

.enhancement-option-name {
  font-weight: bold;
  margin-bottom: 4px;
}

.enhancement-option-description {
  font-size: 13px;
  opacity: 0.8;
}

#transformation-url {
  margin-top: 20px;
  padding: 20px;
  background-color: #f8fafc;
  border-radius: 8px;
  border: 1px solid #e2e8f0;
}

.url-link {
  text-decoration: none;
}

.url-code {
  background: #f1f5f9;
  padding: 8px 12px;
  border-radius: 6px;
  font-size: 13px;
  font-family: 'Monaco', 'Menlo', 'Courier New', monospace;
  color: #0f172a;
  word-break: break-all;
  cursor: pointer;
  display: block;
  border: 1px solid #e2e8f0;
  transition: all 0.2s ease;
}

.url-code:hover {
  background: #e2e8f0;
  border-color: #cbd5e1;
}

#transformation-url > div {
  margin-bottom: 12px;
}

#transformation-url > div:last-child {
  margin-bottom: 0;
}

#transformation-url strong {
  color: #64748b;
  display: block;
  margin-bottom: 6px;
  font-size: 13px;
  text-transform: uppercase;
  letter-spacing: 0.5px;
  font-weight: 600;
}

/* Dark theme support for Image Enhancement Demo */
[data-theme="dark"] #image-enhancement-demo {
  background-color: #2d3748 !important;
  border-color: #4a5568 !important;
  color: #e2e8f0 !important;
}

[data-theme="dark"] #image-enhancement-demo h4 {
  color: #e2e8f0 !important;
}

[data-theme="dark"] .comparison-image.original {
  border-color: #4a5568;
}

[data-theme="dark"] #transformation-url {
  background-color: #1e293b !important;
  border: 1px solid #334155 !important;
}

[data-theme="dark"] #transformation-url strong {
  color: #94a3b8 !important;
}

[data-theme="dark"] .url-code {
  background: #1e293b;
  color: #94a3b8;
  border-color: #334155;
}

[data-theme="dark"] .url-code:hover {
  background: #334155;
  border-color: #475569;
}

[data-theme="dark"] .enhancement-option-label {
  background: #1a202c;
  color: #e2e8f0;
}

[data-theme="dark"] .enhancement-option-label:hover {
  background: #2d3748;
}

[data-theme="dark"] .enhancement-option-label.selected {
  background: #2d3748;
  border-color: #4a90e2 !important;
}

[data-theme="dark"] .thumb-label {
  color: #e2e8f0 !important;
}

[data-theme="dark"] .comparison-label {
  color: #e2e8f0 !important;
}

