Last updated: Dec-29-2022
Cloudinary is a cloud-based service that provides an end-to-end image and video management solution including uploads, storage, transformations, optimizations and delivery. It offers a rich set of image transformation capabilities, including cropping, overlays, graphic improvements, and a large variety of special effects.
The OCR Text Detection and Extraction add-on, powered by the Google Vision API, integrates seamlessly with Cloudinary's upload and transformation functionality. It extracts all detected text from images, including multi-page documents like TIFFs and PDFs.
You can use the extracted text directly for a variety of purposes, such as organizing or tagging images. Additionally, you can take advantage of special OCR-based transformations, such as blurring, pixelating, or overlaying other images on all detected text with simple transformation parameters. You can also use the add-on to ensure that important text isn't cut off when you crop your images.
You can use the add-on in normal mode for capturing text elements within a photograph or other graphical image, or in document mode for capturing dense text such as a scan of a document. If you expect images to include non-Latin characters, you can instruct the add-on to analyze the image for a specific language.
The following example uses the normal mode of the OCR add-on to pixelate the license plate text in this car photograph:
Before you can use the OCR Text Detection and Extraction add-on:
You must have a Cloudinary account. If you don't already have one, you can sign up for a free account.
Keep in mind that many of the examples on this page use our SDKs. For SDK installation and configuration details, see the relevant SDK guide.
If you are new to Cloudinary, you may want to take a look at How to integrate Cloudinary in your app for a walk through on the basics of creating and setting up your account, working with SDKs, and then uploading, transforming and delivering assets.
You can return all text detected in an image file in the JSON response of any upload or update call.
The returned content includes a summary of all returned text and the bounding box coordinates of the entire captured text, plus a breakdown of each text element (an individual word or other set of characters without a space) captured and the bounding box of each such text element.
To request inclusion of detected text in the response of your upload or update method call, set the ocr parameter to adv_ocr (for photos or images containing text elements) or adv_ocr:document (for best results on text-heavy images such as scanned documents).
For example, when using the upload method:
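As a minimal Python sketch of the request (the cloud name and file name are placeholders; with the Python SDK the equivalent call is cloudinary.uploader.upload("menu.jpg", ocr="adv_ocr")):

```python
# Parameters for an upload call that requests OCR; "demo" and the file
# name are placeholders for your own cloud name and asset.
cloud_name = "demo"

upload_endpoint = f"https://api.cloudinary.com/v1_1/{cloud_name}/image/upload"
upload_params = {
    "file": "menu.jpg",
    "ocr": "adv_ocr",  # use "adv_ocr:document" for dense, scanned text
}
```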
Learn more: Upload presets
Or when using the update method:
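Similarly, a sketch of requesting OCR for an already-uploaded asset (the public ID is a placeholder; with the Python SDK the equivalent call is cloudinary.api.update("menu", ocr="adv_ocr")):

```python
# Parameters for an update (Admin API) call that requests OCR on an
# existing asset; "demo" and "menu" are placeholders.
public_id = "menu"
update_endpoint = (
    f"https://api.cloudinary.com/v1_1/demo/resources/image/upload/{public_id}"
)
update_params = {"ocr": "adv_ocr"}
```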
When you upload an image (or perform an update operation) with the ocr parameter set to adv_ocr or adv_ocr:document, the JSON response includes an ocr node. The ocr node of the response includes the following:
- The name of the OCR engine used by the add-on
- The status of the OCR operation
- The detected locale of the text
- The outer bounding rectangle containing all of the detected text
- A description listing the entirety of the detected text content, with a newline character (\n) separating groups of text
- For multi-page files (e.g. PDFs), a node indicating the containing page
- The bounding rectangle of each individual detected text element and the description (text content) of that individual element
For example, an excerpt from the
ocr section of the JSON response from a scanned restaurant receipt image may look something like this:
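As an illustration of that shape (field names follow the Google Vision textAnnotations format; all values below are invented for the example), the data can be navigated like this in Python:

```python
# Illustrative shape of the ocr section of a response for a scanned
# receipt. The first textAnnotations entry summarizes ALL detected text;
# each later entry covers one individual text element.
ocr = {
    "adv_ocr": {
        "status": "complete",
        "data": [
            {
                "textAnnotations": [
                    {
                        "locale": "en",
                        "description": "Joe's Diner\nTOTAL 23.10\n",
                        "boundingPoly": {"vertices": [
                            {"x": 41, "y": 32}, {"x": 512, "y": 32},
                            {"x": 512, "y": 780}, {"x": 41, "y": 780},
                        ]},
                    },
                    {
                        "description": "Joe's",
                        "boundingPoly": {"vertices": [
                            {"x": 41, "y": 32}, {"x": 120, "y": 32},
                            {"x": 120, "y": 60}, {"x": 41, "y": 60},
                        ]},
                    },
                ],
            }
        ],
    }
}

annotations = ocr["adv_ocr"]["data"][0]["textAnnotations"]
full_text = annotations[0]["description"]            # all detected text
words = [a["description"] for a in annotations[1:]]  # individual elements
```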
Once you have extracted text in your response, you can access it based on the response structure.
Below are a few examples of ways to use the text extracted from an image:
In the example below, the text extracted from the image is saved in the file system in an
image_texts subfolder using the filename result_<public_id>.txt.
In the example below, the rename method is used to update the public IDs of images without text to sit under a no_text path, and to move the public IDs of images with text under a separate path.
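A sketch of the routing logic (the has_text path name is an assumption for illustration; the actual rename would use your SDK's rename method, e.g. cloudinary.uploader.rename in Python, which isn't called here):

```python
def new_public_id(public_id: str, detected_text: str) -> str:
    """Route assets by OCR result: images with no detected text move
    under no_text/; the has_text/ prefix is an assumed counterpart."""
    if detected_text.strip():
        return f"has_text/{public_id}"
    return f"no_text/{public_id}"
```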
For example, for each resume scanned into a career site, check whether the words "Cloudinary", "MBA", or "algorithm" appear. If so, tag the resume file with the relevant keywords.
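That keyword check can be sketched as follows (applying the tags would use your SDK's tagging method, e.g. cloudinary.uploader.add_tag in Python, not shown here):

```python
# Keywords from the scenario above; matching is case-insensitive.
KEYWORDS = ("Cloudinary", "MBA", "algorithm")

def resume_tags(extracted_text: str) -> list[str]:
    """Return the keywords that appear in the extracted resume text."""
    lowered = extracted_text.lower()
    return [kw for kw in KEYWORDS if kw.lower() in lowered]
```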
Many images may have text, such as phone numbers, website addresses, license plates, or other personal or commercial data, that you don't want visible in your delivered images. To blur or pixelate all detected text in an image, you can use Cloudinary's built-in
blur_region effect with the gravity parameter set to
ocr_text. For example, we've blurred out the brand and model names on this smartphone:
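As a sketch, the corresponding delivery URL combines the effect and gravity parameters (the cloud name, public ID, and blur strength of 800 are placeholders; substitute e_pixelate_region to pixelate instead):

```python
# Delivery URL applying blur_region to all detected text; g_ocr_text
# targets every detected text element.
cloud_name = "demo"
public_id = "smartphone.jpg"
transformation = "e_blur_region:800,g_ocr_text"
url = f"https://res.cloudinary.com/{cloud_name}/image/upload/{transformation}/{public_id}"
```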
Overlaying an image based on OCR text detection is similar to the process for overlaying images in other scenarios: you specify the image to overlay, the width of the overlay, and the gravity (location) for the overlay. When you specify
ocr_text as the gravity, each detected text element is automatically covered with the specified image.
In most cases, it works best to specify a relative width instead of an absolute width for the overlay. The relative width adjusts the size of the overlay image relative to the size of the detected text element. To do this, just add the
fl_region_relative flag to your transformation, and specify the width of the overlay image as a percentage (1.0 = 100%) of the text element.
For example, suppose you run a real estate website where individuals or companies can list homes for sale. For revenue recognition purposes, it's important that the listings do not display private phone numbers or those of other real estate organizations. So instead, you overlay an image with your site's contact information that covers any detected text in the uploaded images.
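A sketch of such an overlay URL (contact_us is an assumed public ID for your overlay image; w_1.1 with fl_region_relative sizes it to 110% of each detected text element):

```python
# Overlay the contact_us image on every detected text element, sized
# relative to each element; cloud name and public IDs are placeholders.
transformation = "l_contact_us,w_1.1,fl_region_relative,g_ocr_text"
url = f"https://res.cloudinary.com/demo/image/upload/{transformation}/listing.jpg"
```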
When you want to be sure that text in an image is retained during a crop transformation, you can specify ocr_text as the gravity parameter (g_ocr_text in URLs).
For example, the following demonstrates what happens to the itsSnacktime.com text in the picture below if you crop it to a square with default (center gravity) cropping,
auto gravity cropping, or
ocr_text gravity cropping:
- default gravity (centered)
- auto gravity (focus on most prominent elements)
- ocr_text gravity (focus on text regions)
The transformation code for the last image looks like this:
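A plausible form of that transformation, built as a delivery URL in Python (the 400-pixel square and the public ID are assumed values):

```python
# Square crop that keeps the detected text in frame; g_ocr_text centers
# the crop on the text regions. Dimensions are assumptions.
transformation = "c_crop,g_ocr_text,h_400,w_400"
url = f"https://res.cloudinary.com/demo/image/upload/{transformation}/snacks.jpg"
```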
Alternatively, in cases where text is only one consideration of cropping priority, you can set the
gravity parameter to
auto with the
ocr_text option (
g_auto:ocr_text in URLs), which gives a higher priority to detected text, but also gives priority to faces and other very prominent elements of an image.
To minimize the likelihood of having text in a cropped image, set the
gravity parameter to
auto with the
ocr_text_avoid option (
g_auto:ocr_text_avoid in URLs).
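The two auto-gravity variants differ only in the gravity component of the URL; a sketch (cloud name, public ID, and dimensions are placeholders):

```python
# g_auto:ocr_text gives detected text extra cropping priority;
# g_auto:ocr_text_avoid steers the crop away from it.
prefer_text = "c_fill,g_auto:ocr_text,h_300,w_300"
avoid_text = "c_fill,g_auto:ocr_text_avoid,h_300,w_300"
base = "https://res.cloudinary.com/demo/image/upload"
urls = [f"{base}/{t}/flower_shop.jpg" for t in (prefer_text, avoid_text)]
```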
For example, in the photo below, you may not want to show the name of the flower shop.
g_auto by itself makes the shop front the focal point, but if we use
g_auto:ocr_text_avoid, the side of the photo without the text is shown.
Cloudinary's dynamic image transformation URLs are powerful tools for agile web and mobile development. However, because customers could request unplanned dynamic URLs that apply the OCR text detection or extraction functionality, and those requests incur costs, image transformation add-on URLs are required (by default) to be signed using Cloudinary's authenticated API. Alternatively, you can eagerly generate the requested derived images using Cloudinary's authenticated API.
To create a signed Cloudinary URL, set the
sign_url parameter to true when building a URL or creating an image tag.
For example, to generate a signed URL when applying a blur effect on the text of an image:
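In practice you simply set sign_url (e.g. sign_url=True in the Python SDK's URL helpers). As a sketch of what that does under the hood, the signature is the first 8 characters of the base64url-encoded SHA-1 of everything after /upload/ plus your API secret; this mirrors the SDKs' algorithm, but treat it as illustrative:

```python
import base64
import hashlib

def signed_url(cloud_name: str, transformation: str, public_id: str,
               api_secret: str) -> str:
    """Build a signed delivery URL: SHA-1 the part after /upload/ plus
    the API secret, base64url-encode it, and keep the first 8 chars."""
    to_sign = f"{transformation}/{public_id}"
    digest = hashlib.sha1((to_sign + api_secret).encode()).digest()
    signature = base64.urlsafe_b64encode(digest)[:8].decode()
    return (f"https://res.cloudinary.com/{cloud_name}/image/upload/"
            f"s--{signature}--/{to_sign}")

# Placeholder values; never embed a real API secret in client-side code.
url = signed_url("demo", "e_blur_region:800,g_ocr_text",
                 "smartphone.jpg", "my_api_secret")
```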
The generated Cloudinary URL shown below includes a signature component (
/s--BDoTEjNU--/). Only URLs with a valid signature that matches the requested image transformation will be approved for on-the-fly image transformation and delivery.
For more details on signed URLs, see Signed delivery URLs.
- You can optionally remove the signed URL default requirement for a particular add-on by selecting it in the Allow unsigned add-on transformations section of the Security page in the Cloudinary Settings.
- No OCR mechanism can identify 100% of the text in all images. The results may be affected by things like font, color, contrast between text and background, text angle, and more.
- The OCR engine requires images with a minimum resolution of 1024 x 768 pixels.
By default, the add-on supports Latin languages. You can instruct the add-on to perform the text detection in a non-Latin language by adding the two-letter language code to the
adv_ocr value, separated by a colon. For example, if you expect your image to include Russian characters, set the value to
adv_ocr:ru. Note that when you include a language code, the structure and breakdown of the response differs from the default. The full list of supported languages and their language codes can be found here.