{"id":39315,"date":"2025-11-26T07:00:00","date_gmt":"2025-11-26T15:00:00","guid":{"rendered":"https:\/\/cloudinary.com\/blog\/?p=39315"},"modified":"2025-11-26T10:22:41","modified_gmt":"2025-11-26T18:22:41","slug":"ai-vision-smart-image-labeling","status":"publish","type":"post","link":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling","title":{"rendered":"Automate Metadata With Cloudinary AI Vision Add-on for Smart Image Labeling"},"content":{"rendered":"<div class=\"wp-block-cloudinary-markdown \"><p>Image metadata contains useful details but are often left blank, making images impossible to search for. Manually tagging important metadata for each image just isn\u2019t feasible as the volume of your visual collection grows. It\u2019s also inconsistent because people can label similar clusters of images differently, even if they should fall into the same category.<\/p>\n<p>Cloudinary\u2019s AI Vision add-on solves this by reading an image and returning tags that match your own list. This improves the searchability of your assets, ensures your metadata is clean, reduces manual work, and speeds up your review cycles for large image sets.<\/p>\n<p>In this demo, you\u2019ll upload an image using Cloudinary, send it through AI Vision Tagging mode, read the tags, store them, and show them in a small gallery. The whole flow sits inside a simple Next.js app.<\/p>\n<ul>\n<li>\n<p>Test the live version here: <a href=\"https:\/\/cloudinary-ai-vision-demo.vercel.app\/gallery\">https:\/\/cloudinary-ai-vision-demo.vercel.app\/gallery<\/a><\/p>\n<\/li>\n<li>\n<p>View the full source code here: <a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\">https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo<\/a><\/p>\n<\/li>\n<\/ul>\n<h2>Cloudinary Setup, AI Vision, and Upload<\/h2>\n<ol>\n<li>\n<strong>Create a free Cloudinary account.<\/strong> The <a href=\"https:\/\/cloudinary.com\/users\/login\">setup is simple<\/a> and only takes a few steps. 
Then open your dashboard. You\u2019ll find your cloud name, API key, and API secret. You\u2019ll use these in your <code>.env.local<\/code> file.<\/li>\n<li>\n<strong>Enable the AI Vision add-on.<\/strong> Open the <strong><a href=\"https:\/\/cloudinary.com\/documentation\/cloudinary_ai_vision_addon\">Add-ons<\/a><\/strong> page in the Cloudinary console. Search for <strong>AI Vision<\/strong> and enable it. This gives your account access to the Analyze API for tagging and queries. AI Vision supports tagging, moderation, and general questions. In this demo, we\u2019ll use <strong>Tagging mode<\/strong> so we can generate metadata.<\/li>\n<li>\n<strong>Create an Upload Preset.<\/strong> Go to <strong>Settings<\/strong>, then <strong>Upload<\/strong>. Create a new preset. You can allow unsigned uploads for demos. Name the preset something simple, like <code>demo_unsigned<\/code>. You\u2019ll pass this preset to the Upload Widget in your client component.<\/li>\n<li>\n<strong>Add Cloudinary variables to <code>.env.local<\/code>.<\/strong> Your Next.js app needs the Cloudinary values. Add these to <code>.env.local<\/code>:<\/li>\n<\/ol>\n<pre class=\"js-syntax-highlighted\"><code>NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME=your_cloud_name\nCLOUDINARY_API_KEY=your_api_key\nCLOUDINARY_API_SECRET=your_api_secret\nNEXT_PUBLIC_CLOUDINARY_UPLOAD_PRESET=demo_unsigned\n<\/code><\/pre>\n<p>Restart the dev server after adding them. Values prefixed with <code>NEXT_PUBLIC_<\/code> are exposed to the browser, which the Upload Widget needs; the API key and secret stay on the server.<\/p>\n<p>The Upload Preset controls how images enter Cloudinary. The AI Vision add-on processes the images through the Analyze API. These two parts create the entire metadata flow used later in the demo.<\/p>\n<h2>Bootstrapping the Next.js App<\/h2>\n<p>You set up the project with the Next.js App Router and a few key packages. 
This gives you a clean base for uploads, API routes, and Cloudinary integration.<\/p>\n<h3>Create the Next.js Project<\/h3>\n<p>Run:<\/p>\n<pre class=\"js-syntax-highlighted\"><code>npx create-next-app@latest cloudinary-ai-vision-demo\n<\/code><\/pre>\n<p>Choose TypeScript and App Router.<\/p>\n<h3>Enable Cache Components<\/h3>\n<p>Turn on <code>cacheComponents<\/code> in <code>next.config.ts<\/code>:<\/p>\n<pre class=\"js-syntax-highlighted\" aria-describedby=\"shcb-language-1\" data-shcb-language-name=\"JavaScript\" data-shcb-language-slug=\"javascript\"><span><code class=\"hljs language-javascript shcb-wrap-lines\"><span class=\"hljs-keyword\">import<\/span> type { NextConfig } <span class=\"hljs-keyword\">from<\/span> <span class=\"hljs-string\">'next'<\/span>\n\n<span class=\"hljs-keyword\">const<\/span> nextConfig: NextConfig = {\n  <span class=\"hljs-attr\">cacheComponents<\/span>: <span class=\"hljs-literal\">true<\/span>,\n}\n\n<span class=\"hljs-keyword\">export<\/span> <span class=\"hljs-keyword\">default<\/span> nextConfig\n<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-1\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">JavaScript<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">javascript<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n<p>This keeps pages dynamic by default, and you can still cache specific components with <code>use cache<\/code> when needed.<\/p>\n<h3>Install Packages<\/h3>\n<p>Add Cloudinary and UI tools:<\/p>\n<pre class=\"js-syntax-highlighted\"><code>npm install next-cloudinary\nnpm install cloudinary\n<\/code><\/pre>\n<p>Then install <strong>shadcn\/ui<\/strong>:<\/p>\n<pre class=\"js-syntax-highlighted\"><code>npx shadcn@latest init\n<\/code><\/pre>\n<p>Add the UI components used in the demo:<\/p>\n<pre class=\"js-syntax-highlighted\"><code>npx shadcn@latest add card button badge 
skeleton\n<\/code><\/pre>\n<p>These cover the upload panel, gallery layout, and image cards.<\/p>\n<h3>Environment Variables<\/h3>\n<p>Confirm the Cloudinary keys from the setup section are in <code>.env.local<\/code>:<\/p>\n<pre class=\"js-syntax-highlighted\"><code>NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME=xxxx\nNEXT_PUBLIC_CLOUDINARY_UPLOAD_PRESET=xxxx\nCLOUDINARY_API_KEY=xxxx\nCLOUDINARY_API_SECRET=xxxx\n<\/code><\/pre>\n<h2>Connecting Cloudinary Uploads to the UI<\/h2>\n<p>With Cloudinary ready, you now integrate uploads into the Next.js interface. This step adds two small client components: the Cloudinary Upload Widget and the Upload Panel that previews the uploaded image and triggers AI Vision.<\/p>\n<h3>Cloudinary Upload Widget<\/h3>\n<p>This component opens the Cloudinary upload dialog in the browser.<\/p>\n<p>Once an image is uploaded, it returns <code>public_id<\/code>, <code>asset_id<\/code>, size, format, and more.<\/p>\n<p><strong><code>components\/upload-widget-client.tsx<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/components\/upload-widget-client.tsx\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<p>Core idea:<\/p>\n<pre class=\"js-syntax-highlighted\" aria-describedby=\"shcb-language-2\" data-shcb-language-name=\"HTML, XML\" data-shcb-language-slug=\"xml\"><span><code class=\"hljs language-xml shcb-wrap-lines\"><span class=\"hljs-tag\">&lt;<span class=\"hljs-name\">CldUploadWidget<\/span>\n  <span class=\"hljs-attr\">uploadPreset<\/span>=<span class=\"hljs-string\">{process.env.NEXT_PUBLIC_CLOUDINARY_UPLOAD_PRESET}<\/span>\n  <span class=\"hljs-attr\">onUpload<\/span>=<span class=\"hljs-string\">{(result)<\/span> =&gt;<\/span> {\n    if (result?.info) onUploaded(result.info);\n  }}\n&gt;\n  {({ open }) =&gt; (\n    <span class=\"hljs-tag\">&lt;<span class=\"hljs-name\">button<\/span> <span class=\"hljs-attr\">type<\/span>=<span class=\"hljs-string\">\"button\"<\/span> <span class=\"hljs-attr\">onClick<\/span>=<span class=\"hljs-string\">{()<\/span> =&gt;<\/span> open()}&gt;\n      Upload Image\n    <span class=\"hljs-tag\">&lt;\/<span class=\"hljs-name\">button<\/span>&gt;<\/span>\n  )}\n<span class=\"hljs-tag\">&lt;\/<span class=\"hljs-name\">CldUploadWidget<\/span>&gt;<\/span>;\n<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-2\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">HTML, XML<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">xml<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n<p>This gives you a clean uploader that works inside any client component.<\/p>\n<h3>Upload Panel: Preview + Analyze Button<\/h3>\n<p>The Upload Panel wraps the widget.<\/p>\n<p>It stores the upload, shows a preview, and gives the user an <strong>Analyze<\/strong> button.<\/p>\n<p><strong><code>components\/upload-panel-client.tsx<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/components\/upload-panel-client.tsx\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<p>Key pattern:<\/p>\n<pre class=\"js-syntax-highlighted\" aria-describedby=\"shcb-language-3\" data-shcb-language-name=\"JavaScript\" data-shcb-language-slug=\"javascript\"><span><code class=\"hljs language-javascript shcb-wrap-lines\"><span class=\"hljs-keyword\">const<\/span> &#91;asset, setAsset] = useState&lt;UploadedResource | <span class=\"hljs-literal\">null<\/span>&gt;(<span class=\"hljs-literal\">null<\/span>);\n\n<span class=\"hljs-function\"><span class=\"hljs-keyword\">function<\/span> <span class=\"hljs-title\">handleUploaded<\/span>(<span class=\"hljs-params\">resource: UploadedResource<\/span>) <\/span>{\n  setAsset(resource);\n}\n<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-3\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">JavaScript<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">javascript<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n<p>Preview from Cloudinary:<\/p>\n<pre class=\"js-syntax-highlighted\" aria-describedby=\"shcb-language-4\" data-shcb-language-name=\"HTML, XML\" data-shcb-language-slug=\"xml\"><span><code class=\"hljs language-xml shcb-wrap-lines\"><span class=\"hljs-tag\">&lt;<span class=\"hljs-name\">CldImage<\/span>\n  <span class=\"hljs-attr\">src<\/span>=<span class=\"hljs-string\">{asset.public_id}<\/span>\n  <span class=\"hljs-attr\">alt<\/span>=<span class=\"hljs-string\">\"Uploaded image\"<\/span>\n  <span class=\"hljs-attr\">fill<\/span>\n  <span class=\"hljs-attr\">sizes<\/span>=<span class=\"hljs-string\">\"(max-width: 768px) 100vw, 40vw\"<\/span>\n  <span class=\"hljs-attr\">className<\/span>=<span class=\"hljs-string\">\"object-cover\"<\/span>\n\/&gt;<\/span>;\n<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-4\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">HTML, XML<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">xml<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n<p>At this point:<\/p>\n<ul>\n<li>Uploads go straight to Cloudinary.<\/li>\n<li>You have the <code>public_id<\/code>.<\/li>\n<li>You\u2019re ready to send the image to AI Vision Tagging.<\/li>\n<\/ul>\n<h2>Analyzing Images With the Cloudinary AI Vision Add-on<\/h2>\n<p>Now that uploads work, the next step is sending those images to Cloudinary\u2019s <strong>AI Vision Tagging<\/strong> service. 
This happens in two parts:<\/p>\n<ol>\n<li>A <strong>client action<\/strong> that sends the upload info to an API endpoint.<\/li>\n<li>A <strong>Next.js API Route<\/strong> that talks to Cloudinary\u2019s Analyze API.<\/li>\n<\/ol>\n<p>This keeps your Cloudinary API secret safe on the server.<\/p>\n<h3>The Analyze Button in the Upload Panel<\/h3>\n<p>When a user uploads an image, they can click <strong>Analyze image with AI Vision<\/strong>. This button sends the <code>asset_id<\/code>, <code>public_id<\/code>, and URL to a Next.js API endpoint.<\/p>\n<p>Inside the Upload Panel, the key part is:<\/p>\n<pre class=\"js-syntax-highlighted\" aria-describedby=\"shcb-language-5\" data-shcb-language-name=\"JavaScript\" data-shcb-language-slug=\"javascript\"><span><code class=\"hljs language-javascript shcb-wrap-lines\"><span class=\"hljs-keyword\">const<\/span> res = <span class=\"hljs-keyword\">await<\/span> fetch(<span class=\"hljs-string\">\"\/api\/analyze\"<\/span>, {\n  <span class=\"hljs-attr\">method<\/span>: <span class=\"hljs-string\">\"POST\"<\/span>,\n  <span class=\"hljs-attr\">body<\/span>: <span class=\"hljs-built_in\">JSON<\/span>.stringify({\n    <span class=\"hljs-attr\">asset_id<\/span>: asset.asset_id,\n    <span class=\"hljs-attr\">public_id<\/span>: asset.public_id,\n    <span class=\"hljs-attr\">secure_url<\/span>: asset.secure_url,\n  }),\n});\n<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-5\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">JavaScript<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">javascript<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n<p>This keeps the client simple. 
It just sends the image info and waits.<\/p>\n<p>File: <strong><code>components\/upload-panel-client.tsx<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/components\/upload-panel-client.tsx\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<h3>AI Vision API Route<\/h3>\n<p>The real work happens inside the API route.<\/p>\n<p>This endpoint receives the uploaded image and calls Cloudinary\u2019s <strong>Tagging Mode<\/strong>. After you provide the definitions for each tag, AI Vision checks the image and returns only the tags that match.<\/p>\n<p>File: <strong><code>app\/api\/analyze\/route.ts<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/app\/api\/analyze\/route.ts\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<p>The important part looks like this:<\/p>\n<pre class=\"js-syntax-highlighted\" aria-describedby=\"shcb-language-6\" data-shcb-language-name=\"JavaScript\" data-shcb-language-slug=\"javascript\"><span><code class=\"hljs language-javascript shcb-wrap-lines\"><span class=\"hljs-keyword\">await<\/span> cloudinary.analyze.ai_vision_tagging({\n  <span class=\"hljs-attr\">source<\/span>: { asset_id },\n  <span class=\"hljs-attr\">tag_definitions<\/span>: &#91;\n    { <span class=\"hljs-attr\">name<\/span>: <span class=\"hljs-string\">\"person\"<\/span>, <span class=\"hljs-attr\">description<\/span>: <span class=\"hljs-string\">\"Does the image contain a person?\"<\/span> },\n    { <span class=\"hljs-attr\">name<\/span>: <span class=\"hljs-string\">\"food\"<\/span>, <span class=\"hljs-attr\">description<\/span>: <span class=\"hljs-string\">\"Does the image contain food?\"<\/span> },\n    { <span class=\"hljs-attr\">name<\/span>: <span class=\"hljs-string\">\"text\"<\/span>, <span class=\"hljs-attr\">description<\/span>: <span class=\"hljs-string\">\"Does the image contain text?\"<\/span> },\n  ],\n});\n<\/code><\/span><small 
class=\"shcb-language\" id=\"shcb-language-6\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">JavaScript<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">javascript<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n<p>You can define up to <strong>10 tags<\/strong>.<\/p>\n<p>AI Vision checks the image and returns the tags that fit.<\/p>\n<h3>Saving Tags to a Simple JSON <code>Database<\/code><\/h3>\n<p>For the demo, each analyzed image is saved to a small JSON file stored locally.<\/p>\n<p>This helps us render a gallery later without using a real database.<\/p>\n<p>The helper lives here: <strong><code>lib\/db.ts<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/lib\/db.ts\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<p>Each saved item includes:<\/p>\n<ul>\n<li>public_id<\/li>\n<li>created date<\/li>\n<li>AI Vision tags<\/li>\n<li>secure URL<\/li>\n<\/ul>\n<p>This makes it easy to build a basic gallery.<\/p>\n<h3>Your Flow So Far<\/h3>\n<p>Let\u2019s recap our process so far:<\/p>\n<ol>\n<li>\n<strong>Upload an image<\/strong> with the Cloudinary widget.<\/li>\n<li>The app stores the upload info.<\/li>\n<li>You click <strong>Analyze<\/strong>.<\/li>\n<li>Next.js sends the image to Cloudinary AI Vision.<\/li>\n<li>AI Vision returns the detected tags.<\/li>\n<li>The tags and image info are saved to <code>db.json<\/code>.<\/li>\n<\/ol>\n<p>You now have smart metadata for each uploaded image.<\/p>\n<h2>Saving Images and AI Tags in a Simple JSON Store<\/h2>\n<p>Once Cloudinary AI Vision returns the tags, you\u2019ll need a place to keep them. In this demo, you\u2019ll use a very simple approach: a JSON file on disk. This is great for learning and local testing. 
In a real app you\u2019d swap this for a real database.<\/p>\n<h3>The \u2018Database\u2019 File<\/h3>\n<p>All analyzed images are saved to a single JSON file: <strong><code>data\/assets.json<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/data\/assets.json\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<p>Each entry contains:<\/p>\n<ul>\n<li>\n<code>asset_id<\/code>\n<\/li>\n<li>\n<code>public_id<\/code>\n<\/li>\n<li>\n<code>secure_url<\/code>\n<\/li>\n<li>\n<code>format<\/code>\n<\/li>\n<li>\n<code>bytes<\/code>\n<\/li>\n<li>\n<code>tags<\/code> (from AI Vision)<\/li>\n<li>\n<code>createdAt<\/code>\n<\/li>\n<li>\n<code>updatedAt<\/code>\n<\/li>\n<\/ul>\n<p>Think of it as a small table of all your processed images.<\/p>\n<h3>DB Helper Functions<\/h3>\n<p>You won\u2019t work with the JSON file directly. Instead, you\u2019ll use helper functions in a small module: <strong><code>lib\/db.ts<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/lib\/db.ts\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<p>The important functions are:<\/p>\n<ul>\n<li>\n<strong><code>upsertAsset(record)<\/code>.<\/strong> Adds a new asset or updates an existing one.<\/li>\n<li>\n<strong><code>listAssets()<\/code>.<\/strong> Returns all stored images.<\/li>\n<li>\n<strong><code>findAssetByAssetId(asset_id)<\/code>.<\/strong> Looks up a single image.<\/li>\n<\/ul>\n<p>A simplified shape of the stored object looks like this:<\/p>\n<pre class=\"js-syntax-highlighted\"><span><code class=\"hljs shcb-wrap-lines\">type StoredAsset = {\n  asset_id: string\n  public_id: string\n  secure_url: string\n  bytes: number\n  format: string\n  tags: string&#91;]\n  createdAt: string\n  updatedAt: string\n}\n<\/code><\/span><\/pre>\n<p>This type keeps the metadata tidy and predictable.<\/p>\n<h3>Saving Data Inside the Analyze Route<\/h3>\n<p>The AI Vision API route takes care of 
saving and updating entries. After it gets tags from Cloudinary, it builds a <code>StoredAsset<\/code> object and calls <code>upsertAsset<\/code>. The route also sets <code>createdAt<\/code> and <code>updatedAt<\/code> timestamps. If an image is analyzed again, tags can be merged or updated.<\/p>\n<p>File for the route: <strong><code>app\/api\/analyze\/route.ts<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/app\/api\/analyze\/route.ts\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<p>So the flow is:<\/p>\n<ol>\n<li>Analyze image with AI Vision.<\/li>\n<li>Get tags from Cloudinary.<\/li>\n<li>Build a <code>StoredAsset<\/code> object.<\/li>\n<li>Save it into <code>data\/assets.json<\/code> using <code>upsertAsset<\/code>.<\/li>\n<\/ol>\n<p>Later, the gallery page reads from <code>listAssets()<\/code> to render everything.<\/p>\n<h2>Showing Tagged Images in a Clean Gallery View<\/h2>\n<p>With uploads stored and tagged, you can now display them in a simple gallery. This part of the app helps you see how AI Vision improves metadata, because every image appears alongside the tags Cloudinary generated.<\/p>\n<p>The gallery is a Server Component page. 
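Because the page runs on the server, it can read the store with ordinary Node APIs. A minimal reader might look like this (illustrative; the helper in the repo may be implemented differently):

```typescript
import { readFileSync } from "node:fs";

// Reads the JSON store; an absent or unreadable file just means an
// empty gallery rather than an error page.
function listAssetsSync(path = "data/assets.json"): unknown[] {
  try {
    return JSON.parse(readFileSync(path, "utf8"));
  } catch {
    return [];
  }
}
```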
It loads all saved assets from the JSON store and then renders small cards for each one.<\/p>\n<h3>Reading Saved Images<\/h3>\n<p>The gallery page calls a helper that returns all stored assets: <strong><code>lib\/db.ts<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/lib\/db.ts\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<pre class=\"js-syntax-highlighted\" aria-describedby=\"shcb-language-7\" data-shcb-language-name=\"JavaScript\" data-shcb-language-slug=\"javascript\"><span><code class=\"hljs language-javascript shcb-wrap-lines\"><span class=\"hljs-keyword\">const<\/span> assets = <span class=\"hljs-keyword\">await<\/span> listAssets();\n<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-7\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">JavaScript<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">javascript<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n<p>Because it runs on the server, it can read the JSON file directly without exposing anything to the client.<\/p>\n<h3>Gallery Page<\/h3>\n<p>The page lives here: <strong><code>app\/gallery\/page.tsx<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/app\/gallery\/page.tsx\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<p>Its job is straightforward. 
It fetches all the images, wraps the display in a small layout, and then passes each asset to a card component, so the file stays clear and easy to follow.<\/p>\n<h3>Asset Card Component<\/h3>\n<p>Each image is displayed in a dedicated card that shows:<\/p>\n<ul>\n<li>The Cloudinary image.<\/li>\n<li>File format and size.<\/li>\n<li>AI Vision tags.<\/li>\n<li>Created time.<\/li>\n<\/ul>\n<p>File: <strong><code>components\/gallery\/asset-card.tsx<\/code><\/strong><\/p>\n<blockquote>\n<p><a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\/blob\/main\/components\/gallery\/asset-card.tsx\">View on GitHub<\/a><\/p>\n<\/blockquote>\n<p>Inside each card, the image is rendered using <code>CldImage<\/code>:<\/p>\n<pre class=\"js-syntax-highlighted\" aria-describedby=\"shcb-language-8\" data-shcb-language-name=\"HTML, XML\" data-shcb-language-slug=\"xml\"><span><code class=\"hljs language-xml shcb-wrap-lines\"><span class=\"hljs-tag\">&lt;<span class=\"hljs-name\">CldImage<\/span>\n  <span class=\"hljs-attr\">src<\/span>=<span class=\"hljs-string\">{asset.public_id}<\/span>\n  <span class=\"hljs-attr\">alt<\/span>=<span class=\"hljs-string\">\"Uploaded image\"<\/span>\n  <span class=\"hljs-attr\">width<\/span>=<span class=\"hljs-string\">{400}<\/span>\n  <span class=\"hljs-attr\">height<\/span>=<span class=\"hljs-string\">{300}<\/span>\n\/&gt;<\/span>;\n<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-8\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">HTML, XML<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">xml<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n<p>The rest of the card displays clean text snapshots of the metadata. 
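One of those snapshots is the file size, which arrives from Cloudinary as a raw byte count. A small formatter (hypothetical; the card in the repo may render this differently) makes it readable:

```typescript
// Turns Cloudinary's `bytes` field into a human-readable size string.
function formatBytes(bytes: number): string {
  if (bytes < 1024) return `${bytes} B`;
  const kb = bytes / 1024;
  if (kb < 1024) return `${kb.toFixed(1)} KB`;
  return `${(kb / 1024).toFixed(1)} MB`;
}
```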
Tags appear as small UI badges so they stand out clearly.<\/p>\n<h2>Conclusion<\/h2>\n<p>You now have a full workflow that uploads an image to Cloudinary, runs it through the AI Vision add-on, stores the returned metadata, and displays everything in a small gallery.<\/p>\n<p>This demo keeps the code readable and easy to extend. You can replace the JSON store with a real database, add more AI Vision modes, or build a full asset dashboard.<\/p>\n<p>You can explore the full code here:<\/p>\n<ul>\n<li>\n<p><strong>GitHub Repo:<\/strong> <a href=\"https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo\">https:\/\/github.com\/musebe\/cloudinary-ai-vision-demo<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Live Demo:<\/strong> <a href=\"https:\/\/cloudinary-ai-vision-demo.vercel.app\/gallery\">https:\/\/cloudinary-ai-vision-demo.vercel.app\/gallery<\/a><\/p>\n<\/li>\n<\/ul>\n<\/div>","protected":false},"excerpt":{"rendered":"","protected":false},"author":87,"featured_media":39371,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_cloudinary_featured_overwrite":false,"footnotes":""},"categories":[1],"tags":[336,212,286],"class_list":["post-39315","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-ai","tag-next-js","tag-tagging"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.6 (Yoast SEO v26.9) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Automate Metadata With Cloudinary AI Vision Add-on for Smart Image Labeling<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" 
content=\"Automate Metadata With Cloudinary AI Vision Add-on for Smart Image Labeling\" \/>\n<meta property=\"og:url\" content=\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling\" \/>\n<meta property=\"og:site_name\" content=\"Cloudinary Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-26T15:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-11-26T18:22:41+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA\" \/>\n\t<meta property=\"og:image:width\" content=\"2000\" \/>\n\t<meta property=\"og:image:height\" content=\"1100\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"melindapham\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#article\",\"isPartOf\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling\"},\"author\":{\"name\":\"melindapham\",\"@id\":\"https:\/\/cloudinary.com\/blog\/#\/schema\/person\/0d5ad601e4c3b5be89245dfb14be42d9\"},\"headline\":\"Automate Metadata With Cloudinary AI Vision Add-on for Smart Image 
Labeling\",\"datePublished\":\"2025-11-26T15:00:00+00:00\",\"dateModified\":\"2025-11-26T18:22:41+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling\"},\"wordCount\":11,\"publisher\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#primaryimage\"},\"thumbnailUrl\":\"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA\",\"keywords\":[\"AI\",\"Next.js\",\"Tagging\"],\"inLanguage\":\"en-US\",\"copyrightYear\":\"2025\",\"copyrightHolder\":{\"@id\":\"https:\/\/cloudinary.com\/#organization\"}},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling\",\"url\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling\",\"name\":\"Automate Metadata With Cloudinary AI Vision Add-on for Smart Image 
Labeling\",\"isPartOf\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#primaryimage\"},\"image\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#primaryimage\"},\"thumbnailUrl\":\"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA\",\"datePublished\":\"2025-11-26T15:00:00+00:00\",\"dateModified\":\"2025-11-26T18:22:41+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#primaryimage\",\"url\":\"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA\",\"contentUrl\":\"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA\",\"width\":2000,\"height\":1100},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/cloudinary.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Automate Metadata With 
Cloudinary AI Vision Add-on for Smart Image Labeling\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/cloudinary.com\/blog\/#website\",\"url\":\"https:\/\/cloudinary.com\/blog\/\",\"name\":\"Cloudinary Blog\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/cloudinary.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/cloudinary.com\/blog\/#organization\",\"name\":\"Cloudinary Blog\",\"url\":\"https:\/\/cloudinary.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/cloudinary.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1649718331\/Web_Assets\/blog\/cloudinary_logo_for_white_bg_1937437aa7_19374666c7_193742f877\/cloudinary_logo_for_white_bg_1937437aa7_19374666c7_193742f877.png?_i=AA\",\"contentUrl\":\"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1649718331\/Web_Assets\/blog\/cloudinary_logo_for_white_bg_1937437aa7_19374666c7_193742f877\/cloudinary_logo_for_white_bg_1937437aa7_19374666c7_193742f877.png?_i=AA\",\"width\":312,\"height\":60,\"caption\":\"Cloudinary 
Blog\"},\"image\":{\"@id\":\"https:\/\/cloudinary.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/cloudinary.com\/blog\/#\/schema\/person\/0d5ad601e4c3b5be89245dfb14be42d9\",\"name\":\"melindapham\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/cloudinary.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e6f989fa97fe94be61596259d8629c3df65aec4c7da5c0000f90d810f313d4f4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e6f989fa97fe94be61596259d8629c3df65aec4c7da5c0000f90d810f313d4f4?s=96&d=mm&r=g\",\"caption\":\"melindapham\"}}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Automate Metadata With Cloudinary AI Vision Add-on for Smart Image Labeling","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling","og_locale":"en_US","og_type":"article","og_title":"Automate Metadata With Cloudinary AI Vision Add-on for Smart Image Labeling","og_url":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling","og_site_name":"Cloudinary 
Blog","article_published_time":"2025-11-26T15:00:00+00:00","article_modified_time":"2025-11-26T18:22:41+00:00","og_image":[{"width":2000,"height":1100,"url":"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA","type":"image\/jpeg"}],"author":"melindapham","twitter_card":"summary_large_image","schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#article","isPartOf":{"@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling"},"author":{"name":"melindapham","@id":"https:\/\/cloudinary.com\/blog\/#\/schema\/person\/0d5ad601e4c3b5be89245dfb14be42d9"},"headline":"Automate Metadata With Cloudinary AI Vision Add-on for Smart Image Labeling","datePublished":"2025-11-26T15:00:00+00:00","dateModified":"2025-11-26T18:22:41+00:00","mainEntityOfPage":{"@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling"},"wordCount":11,"publisher":{"@id":"https:\/\/cloudinary.com\/blog\/#organization"},"image":{"@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#primaryimage"},"thumbnailUrl":"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA","keywords":["AI","Next.js","Tagging"],"inLanguage":"en-US","copyrightYear":"2025","copyrightHolder":{"@id":"https:\/\/cloudinary.com\/#organization"}},{"@type":"WebPage","@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling","url":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling","name":"Automate Metadata With Cloudinary AI Vision Add-on for Smart Image 
Labeling","isPartOf":{"@id":"https:\/\/cloudinary.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#primaryimage"},"image":{"@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#primaryimage"},"thumbnailUrl":"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA","datePublished":"2025-11-26T15:00:00+00:00","dateModified":"2025-11-26T18:22:41+00:00","breadcrumb":{"@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#primaryimage","url":"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA","contentUrl":"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA","width":2000,"height":1100},{"@type":"BreadcrumbList","@id":"https:\/\/cloudinary.com\/blog\/ai-vision-smart-image-labeling#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/cloudinary.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Automate Metadata With Cloudinary AI Vision Add-on for Smart Image 
Labeling"}]},{"@type":"WebSite","@id":"https:\/\/cloudinary.com\/blog\/#website","url":"https:\/\/cloudinary.com\/blog\/","name":"Cloudinary Blog","description":"","publisher":{"@id":"https:\/\/cloudinary.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/cloudinary.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/cloudinary.com\/blog\/#organization","name":"Cloudinary Blog","url":"https:\/\/cloudinary.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/cloudinary.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1649718331\/Web_Assets\/blog\/cloudinary_logo_for_white_bg_1937437aa7_19374666c7_193742f877\/cloudinary_logo_for_white_bg_1937437aa7_19374666c7_193742f877.png?_i=AA","contentUrl":"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1649718331\/Web_Assets\/blog\/cloudinary_logo_for_white_bg_1937437aa7_19374666c7_193742f877\/cloudinary_logo_for_white_bg_1937437aa7_19374666c7_193742f877.png?_i=AA","width":312,"height":60,"caption":"Cloudinary 
Blog"},"image":{"@id":"https:\/\/cloudinary.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/cloudinary.com\/blog\/#\/schema\/person\/0d5ad601e4c3b5be89245dfb14be42d9","name":"melindapham","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/cloudinary.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e6f989fa97fe94be61596259d8629c3df65aec4c7da5c0000f90d810f313d4f4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e6f989fa97fe94be61596259d8629c3df65aec4c7da5c0000f90d810f313d4f4?s=96&d=mm&r=g","caption":"melindapham"}}]}},"jetpack_featured_media_url":"https:\/\/res.cloudinary.com\/cloudinary-marketing\/images\/f_auto,q_auto\/v1763684063\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling\/Blog_Automate_Cloudinary_Metadata_with_Google_Vision_Add-On_for_Smart_Image_Labelling.jpg?_i=AA","_links":{"self":[{"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/posts\/39315","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/users\/87"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/comments?post=39315"}],"version-history":[{"count":3,"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/posts\/39315\/revisions"}],"predecessor-version":[{"id":39372,"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/posts\/39315\/revisions\/39372"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/media\/39371"}],"wp:attachment":[{"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/media?parent=39315"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/categories?post=39315"}
,{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloudinary.com\/blog\/wp-json\/wp\/v2\/tags?post=39315"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}