Manually extracting and transforming text content from images can be a time-consuming and tedious task. Common use cases include auto-tagging or categorizing an image based on its text content, blurring or adding an overlay to the text on an image, and more.
Cloudinary provides an OCR (optical character recognition) text detection and extraction add-on, powered by Google’s Vision API, that integrates smoothly with its image upload and transformation capabilities. The add-on makes it easy to capture text elements in an image and apply Cloudinary transformation effects to them.
This post will demonstrate how to use the Cloudinary OCR text detection and extraction add-on. We will create a simple application to demonstrate the process of extracting detected text from an image, blurring or pixelating detected text, and adding an image over text in an image.
Here is a link to the demo CodeSandbox.
Create a Next.js app using the following command:
npx create-next-app ocr-demo
Next, run this command to change into the newly created directory:
cd ocr-demo
Now, add the project dependencies using the following command:
npm install cloudinary axios
The Node Cloudinary SDK will provide easy-to-use methods to interact with the Cloudinary APIs, while axios will serve as the HTTP client for communicating with our serverless functions.
Run this command to preview the running application:
npm run dev
To use Cloudinary’s provisioned services, you need to first sign up for a free Cloudinary account if you don’t have one already. Displayed on your account’s Management Console (aka Dashboard) are important details: your cloud name, API key, etc.
Next, let’s create environment variables to hold the details of our Cloudinary account. Create a new file called .env at the root of your project and add the following to it:
CLOUD_NAME = YOUR_CLOUD_NAME
API_KEY = YOUR_API_KEY
API_SECRET = YOUR_API_SECRET
This file will serve as a template when the project is set up on another system. To configure your local environment, create a copy of it using the following command:
cp .env .env.local
By default, this local file is listed in the .gitignore file, mitigating the security risk of inadvertently exposing secret credentials to the public. Update the .env.local file with your Cloudinary credentials.
When we create an account, access to the Cloudinary add-ons is not provided out of the box. You need to register for the OCR Text Detection and Extraction add-on to access this feature. Each add-on provides us with several plans and their associated prices. Thankfully, most of them also offer free plans, and since this is a demo application, we’ll go with the free plan here, which gives us access to 50 monthly OCR detections.
Using the Cloudinary OCR text detection and extraction add-on, we can extract all detected text from an image by setting the ocr parameter in an image upload or update method call to adv_ocr, or to adv_ocr:document for text-heavy images.
cloudinary.v2.uploader.upload(
  "your-image.jpg",
  { ocr: "adv_ocr" },
  function (error, result) {
    // some other code
  }
);
Cloudinary attaches an ocr node nested in the info section of the JSON response when the ocr parameter is set to adv_ocr or adv_ocr:document. The ocr node contains detailed information about the detected text as a whole, plus a breakdown of each individual text element, such as the default language, the bounding box coordinates, etc.
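For reference, here is a sketch of the relevant part of that response. The field values below are made up for illustration; only the nesting mirrors the path our client-side code reads (info.ocr.adv_ocr.data[0]):

```javascript
// Illustrative (abridged) shape of the OCR section of the upload response.
// The values are invented; only the nesting reflects the real structure.
const uploadResult = {
  public_id: "ocr-demo/sample",
  info: {
    ocr: {
      adv_ocr: {
        status: "complete",
        data: [
          {
            // Full text of the image as one string
            fullTextAnnotation: { text: "HELLO WORLD" },
            // One entry for the whole image, then one per detected word,
            // each with bounding-box coordinates
            textAnnotations: [
              {
                locale: "en",
                description: "HELLO WORLD",
                boundingPoly: {
                  vertices: [
                    { x: 10, y: 12 },
                    { x: 210, y: 12 },
                    { x: 210, y: 48 },
                    { x: 10, y: 48 },
                  ],
                },
              },
            ],
          },
        ],
      },
    },
  },
};

// The path our client-side helper destructures:
const { textAnnotations, fullTextAnnotation } =
  uploadResult.info.ocr.adv_ocr.data[0];
```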
Now let’s create an API route in our Next.js application that accepts an image, extracts the image’s text content, and sends the response back to the client. To achieve that, create an extractText.js file in the pages/api folder of the project and add the following to it:
const cloudinary = require("cloudinary").v2;

cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
  secure: true,
});

export default async function handler(req, res) {
  const { baseImage } = req.body;
  try {
    await cloudinary.uploader.upload(
      baseImage,
      {
        ocr: "adv_ocr",
        folder: "ocr-demo",
      },
      async function (error, result) {
        if (error) return res.status(500).json(error);
        res.status(200).json(result);
      }
    );
  } catch (error) {
    res.status(500).json(error);
  }
}

export const config = {
  api: {
    bodyParser: {
      sizeLimit: "4mb",
    },
  },
};
In the code above, we import Cloudinary and configure it with an object containing our Cloudinary credentials. We then define a route handler function that expects a base64 image attached to the request’s body. The image is uploaded to a folder called ocr-demo in your Cloudinary account, and its text content is extracted.
We also export a Next.js config object to raise the API route’s payload size limit to 4MB.
Let’s create a file to hold helper functions for making Axios requests to our API routes. At the root level of your application, create a folder called util, and inside it, create a file called axiosReq.js. Add the following to the axiosReq.js file:
import axios from "axios";

export const extractText = async (baseImage, setStatus, setOutputData) => {
  setStatus("loading");
  try {
    const extractedImage = await axios.post("/api/extractText", { baseImage });
    const { textAnnotations, fullTextAnnotation } =
      extractedImage.data.info.ocr.adv_ocr.data[0];
    setOutputData({
      type: "text",
      data: fullTextAnnotation.text || textAnnotations[0].description,
    });
    setStatus("");
  } catch (error) {
    setStatus("error");
  }
};
The file exports a function called extractText that takes in an image, a function to set the loading state, and a function to set the response. It makes an Axios call to the extractText API route and processes the response to extract the text annotations, which are then set to state.
Next, let’s use this function. Replace the content of your pages/index.js file with the following:
import { useState, useRef } from "react";
import { extractText } from "../util/axiosReq";
import styles from "../styles/Home.module.css";

export default function Home() {
  const [baseImage, setBaseImage] = useState();
  const [outputData, setOutputData] = useState();
  const [status, setStatus] = useState();
  const baseFileRef = useRef();

  const handleSelectImage = (e, setStateFn) => {
    const reader = new FileReader();
    reader.readAsDataURL(e.target.files[0]);
    reader.onload = function (e) {
      setStateFn(e.target.result);
    };
  };

  const handleExtractText = async () => {
    extractText(baseImage, setStatus, setOutputData);
  };

  const isBtnDisabled = !baseImage || status === "loading";

  return (
    <main className={styles.app}>
      <h1>Cloudinary OCR demo App</h1>
      <div>
        <div className={styles.input}>
          <div
            className={`${styles.image} ${styles.flex}`}
            onClick={() => baseFileRef.current.click()}
          >
            <input
              type="file"
              ref={baseFileRef}
              style={{ display: "none" }}
              onChange={(e) => handleSelectImage(e, setBaseImage)}
            />
            {baseImage ? (
              <img src={baseImage} alt="selected image" />
            ) : (
              <h2>Click to select image</h2>
            )}
            <div>
              <h2>Click to select image</h2>
            </div>
          </div>
          <div className={styles.actions}>
            <button onClick={handleExtractText} disabled={isBtnDisabled}>
              Extract text
            </button>
          </div>
        </div>
        <div className={styles.output}>
          {status ? (
            <h4>{status}</h4>
          ) : (
            outputData &&
            (outputData.type === "text" ? (
              <div>
                <span>{outputData.data}</span>
              </div>
            ) : (
              ""
            ))
          )}
        </div>
      </div>
    </main>
  );
}
In the code above, we defined three state variables to hold the image to be processed, the request status, and the output data. We also created a baseFileRef ref to work around dynamically opening the image file picker. The handleSelectImage function handles file selection changes and converts the selected file to its base64 equivalent.
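FileReader.readAsDataURL is a browser API, but the data URL it produces can be reproduced in Node with Buffer, which is handy if you want to exercise the API routes from a script. A minimal sketch (the helper name toDataUrl is ours, not part of any library):

```javascript
// In the browser, FileReader.readAsDataURL yields the selected file as a
// base64 data URL. Outside the browser, the same string can be built from
// a Buffer, e.g. for testing the API routes with a Node script:
const toDataUrl = (buffer, mimeType) =>
  `data:${mimeType};base64,${buffer.toString("base64")}`;

// Turning raw bytes into the payload shape our routes expect
const fakeImageBytes = Buffer.from([0xff, 0xd8, 0xff]); // JPEG magic bytes
const baseImage = toDataUrl(fakeImageBytes, "image/jpeg");
```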
Next, we rendered a div element that can be clicked to select and preview a file, and a button that triggers the handleExtractText function when clicked. We used the status state to show loading feedback and to disable the button when required.
Before previewing the application in the browser, update your styles/Home.module.css file with the styles from this CodeSandbox link to give the application a decent look.
Now, save the changes, and you should be able to select an image and extract its text contents to the screen.
Leveraging the add-on’s text detection capability, Cloudinary allows us to combine it with its built-in blur_region and pixelate_region image transformation effects. To blur all detected text in an image, set the effect parameter on the image transformation method to blur_region:<blur-value> and the gravity parameter to ocr_text.
The blur-value can be any value between 0 and 2000; the higher the value, the blurrier the effect.
cloudinary.image("your-image-public_id.jpg", {
  effect: "blur_region:800",
  gravity: "ocr_text",
});
To add this feature to our demo application, create a blurImage.js file in the pages/api folder and add the following to it:
const cloudinary = require("cloudinary").v2;

cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
  secure: true,
});

export default async function handler(req, res) {
  const { baseImage } = req.body;
  try {
    await cloudinary.uploader.upload(
      baseImage,
      { folder: "ocr-demo" },
      async function (error, result) {
        if (error) return res.status(500).json(error);
        const response = cloudinary.image(`${result.public_id}.jpg`, {
          effect: "blur_region:800",
          gravity: "ocr_text",
          sign_url: true,
        });
        res.status(200).json(response);
      }
    );
  } catch (error) {
    res.status(500).json(error);
  }
}

export const config = {
  api: {
    bodyParser: {
      sizeLimit: "4mb",
    },
  },
};
The code is similar to what we did in the extractText.js file, except that we now first upload the image passed from the client to Cloudinary and extract its public ID, which is then passed as a parameter to Cloudinary’s image transformation method. We also set the gravity parameter to ocr_text and the effect parameter to blur_region with a value of 800.
We signed the generated URL by setting the sign_url parameter to true, which Cloudinary requires for dynamically generated URLs that apply the OCR text detection or extraction functionality, due to their potential cost.
Now, open the util/axiosReq.js file and add the function below to the bottom of the file. It will be used to make a request to our api/blurImage route and handle the response.
export const blurImage = async (baseImage, setStatus, setOutputData) => {
  setStatus("loading");
  try {
    const blurredImage = await axios.post("/api/blurImage", { baseImage });
    const url = /'(.+)'/.exec(blurredImage.data);
    setOutputData({
      type: "imgUrl",
      data: url[1],
    });
    setStatus("");
  } catch (error) {
    setStatus("error");
  }
};
The function expects the image and functions needed to set the request status and response data state. It then makes a request to the API route, extracts the blurred image URL, and sets the states accordingly.
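The regex is needed because cloudinary.image() returns an HTML img tag as a string rather than a bare URL, so the helper pulls the URL out of the single-quoted src attribute. A standalone sketch, using an illustrative tag string of that shape:

```javascript
// cloudinary.image() returns an <img> tag string; the helper extracts the
// URL between the single quotes of its src attribute.
// The tag below is an illustrative example, not a real generated URL.
const imgTag =
  "<img src='https://res.cloudinary.com/demo/image/upload/e_blur_region:800,g_ocr_text/sample.jpg' />";

const match = /'(.+)'/.exec(imgTag);
const url = match[1]; // the captured group is the bare URL
```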
Next, update your pages/index.js file with the following:
import { useState, useRef } from "react";
// import blurImage
import { extractText, blurImage } from "../util/axiosReq";
import styles from "../styles/Home.module.css";

export default function Home() {
  //...

  const handleSelectImage = (e, setStateFn) => {
    //...
  };

  const handleExtractText = async () => {
    //...
  };

  // Add this
  const handleBlurImage = async () => {
    blurImage(baseImage, setStatus, setOutputData);
  };

  const isBtnDisabled = !baseImage || status === "loading";

  return (
    <main className={styles.app}>
      <h1>Cloudinary OCR demo App</h1>
      <div>
        <div className={styles.input}>
          {/* ... */}
          <div className={styles.actions}>
            <button onClick={handleExtractText} disabled={isBtnDisabled}>
              Extract text
            </button>
            {/* Add this */}
            <button onClick={handleBlurImage} disabled={isBtnDisabled}>
              Blur text content
            </button>
          </div>
        </div>
        <div className={styles.output}>
          {status ? (
            <h4>{status}</h4>
          ) : (
            outputData &&
            (outputData.type === "text" ? (
              <div>
                <span>{outputData.data}</span>
              </div>
            ) : (
              <img src={outputData.data} alt="" />
            ))
          )}
        </div>
      </div>
    </main>
  );
}
We updated the code by adding a new button that triggers the handleBlurImage function when clicked. The function calls blurImage and passes it the expected arguments. We also updated the output div to render an image when the output data is a URL rather than text.
Save the changes and test the application in your browser.
Instead of blurring detected text in an image, we can add an image overlay on top of it. Achieving this with the add-on is similar to the default way of adding an overlay on images; the only difference is that we set the gravity parameter to ocr_text, as seen below.
cloudinary.image("your-image-public_id.jpg", {
  transformation: [
    { overlay: "overlay-public_id" },
    { flags: "region_relative", width: "1.1", crop: "scale" },
    { flags: "layer_apply", gravity: "ocr_text" },
  ],
});
Notice how we didn’t set a fixed value for the width in the code above. Cloudinary allows us to set values relative to the width of the detected text in the image.
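Concretely, with the region_relative flag the width acts as a multiplier on the size of the detected text region rather than as a pixel count. For a hypothetical detected region 200px wide:

```javascript
// With flags: "region_relative", width is relative to the detected region.
// The region width here is a made-up example value; in practice it comes
// from the OCR bounding box.
const regionWidth = 200; // px, hypothetical detected text region
const relativeWidth = 1.1; // the width value in the transformation above
const overlayWidth = Math.round(regionWidth * relativeWidth);
```

So a width of "1.1" scales the overlay to 110% of the detected text region, keeping the overlay slightly larger than the text it covers.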
Create a file called addOverlay.js in your pages/api folder and add the following to it:
const cloudinary = require("cloudinary").v2;

cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET,
  secure: true,
});

export default async function handler(req, res) {
  const { baseImage, overlay } = req.body;
  try {
    await cloudinary.uploader.upload(
      baseImage,
      { folder: "ocr-demo" },
      async function (error, baseImageCld) {
        if (error) return res.status(500).json(error);
        await cloudinary.uploader.upload(
          overlay,
          { folder: "ocr-demo" },
          async function (error, overlayImageCld) {
            if (error) return res.status(500).json(error);
            const overlayedImage = cloudinary.image(
              `${baseImageCld.public_id}.jpg`,
              {
                transformation: [
                  {
                    overlay: `${overlayImageCld.public_id}`.replace(/\//g, ":"),
                  },
                  { flags: "region_relative", width: "1.1", crop: "scale" },
                  { flags: "layer_apply", gravity: "ocr_text" },
                ],
                sign_url: true,
              }
            );
            res.status(200).json(overlayedImage);
          }
        );
      }
    );
  } catch (error) {
    res.status(500).json(error);
  }
}

export const config = {
  api: {
    bodyParser: {
      sizeLimit: "4mb",
    },
  },
};
In the code above, we configured Cloudinary and defined the API route handler, which extracts the base image and the overlay from the request body and uploads both to Cloudinary. We then use the public IDs extracted from the responses to apply the overlay transformation to the base image.
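One subtle detail worth calling out: in transformation parameters, Cloudinary references overlay public IDs with : as the folder separator, which is why the code replaces the slashes returned by the upload response:

```javascript
// Upload responses return public IDs with "/" as the folder separator,
// but overlay transformation parameters expect ":" instead.
const overlayPublicId = "ocr-demo/my-overlay"; // illustrative public ID
const overlayParam = overlayPublicId.replace(/\//g, ":");
```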
Let’s create a function we can use to make a request to this newly created route. Open the util/axiosReq.js file and add the following to the bottom of the file:
export const addOverlay = async (
  baseImage,
  overlay,
  setStatus,
  setOutputData
) => {
  setStatus("loading");
  try {
    const overlayedImage = await axios.post("/api/addOverlay", {
      baseImage,
      overlay,
    });
    const url = /'(.+)'/.exec(overlayedImage.data);
    setOutputData({ type: "imgUrl", data: url[1] });
    setStatus("");
  } catch (error) {
    setStatus("error");
  }
};
The function accepts an image overlay in addition to the base image and the state functions. It manages the request status state and makes the request to our API route.
To conclude this section, update your pages/index.js file with the following:
import { useState, useRef } from "react";
// import addOverlay
import { extractText, blurImage, addOverlay } from "../util/axiosReq";
import styles from "../styles/Home.module.css";

export default function Home() {
  const [baseImage, setBaseImage] = useState();
  const [outputData, setOutputData] = useState();
  const [status, setStatus] = useState();
  // Add this
  const [overlay, setOverlay] = useState();
  const baseFileRef = useRef();
  // Add this
  const overlayFileRef = useRef();

  const handleSelectImage = (e, setStateFn) => {
    //...
  };

  const handleExtractText = async () => {
    //...
  };

  const handleBlurImage = async () => {
    //...
  };

  // Add this
  const handleAddOverlay = async () => {
    addOverlay(baseImage, overlay, setStatus, setOutputData);
  };

  const isBtnDisabled = !baseImage || status === "loading";

  return (
    <main className={styles.app}>
      <h1>Cloudinary OCR demo App</h1>
      <div>
        <div className={styles.input}>
          <div
            className={`${styles.image} ${styles.flex}`}
            onClick={() => baseFileRef.current.click()}
          >
            <input
              type="file"
              ref={baseFileRef}
              style={{ display: "none" }}
              onChange={(e) => handleSelectImage(e, setBaseImage)}
            />
            {baseImage ? (
              <img src={baseImage} alt="selected image" />
            ) : (
              <h2>Click to select image</h2>
            )}
            <div>
              <h2>Click to select image</h2>
            </div>
          </div>
          <div className={styles.actions}>
            <button onClick={handleExtractText} disabled={isBtnDisabled}>
              Extract text
            </button>
            <button onClick={handleBlurImage} disabled={isBtnDisabled}>
              Blur text content
            </button>
            {/* Add this */}
            <button
              onClick={handleAddOverlay}
              disabled={!overlay || isBtnDisabled}
            >
              Add overlay
            </button>
            <div
              className={`${styles.overlay} ${styles.flex}`}
              onClick={() => overlayFileRef.current.click()}
            >
              <input
                type="file"
                ref={overlayFileRef}
                onChange={(e) => handleSelectImage(e, setOverlay)}
                style={{ display: "none" }}
              />
              {overlay ? (
                <img src={overlay} alt="overlay" />
              ) : (
                <p>Click to select overlay</p>
              )}
              <div>
                <p>Click to select overlay</p>
              </div>
            </div>
          </div>
        </div>
        <div className={styles.output}>{/* ... */}</div>
      </div>
    </main>
  );
}
In the updated code, we added an overlay state to hold the image selected by the user as an overlay. We worked around opening the image file picker by adding a ref to a hidden input element and calling its click method dynamically. We also previewed the selected overlay image and added a button that calls our request function.
After that, you can finally overlay any text detected in a selected image with an overlay image.
Find the complete project here on GitHub.
So far, we’ve covered how we can use Cloudinary’s OCR text detection and extraction add-on to extract and transform text from images. However, the add-on is not limited to just the three functionalities explained in this article. A lot more can be achieved with it. For example, it can be used for text-based image cropping, ensuring that text in an image is preserved during a crop transformation.
Resources you may find helpful: