
Personalize E-commerce and Marketing Visuals in Next.js With Generative Replace and Overlays

GitHub Repository | Live Demo

E-commerce thrives on visual storytelling: a quick, personalized banner or badge can turn a casual browser into a buyer. With Cloudinary’s AI-driven Generative Replace and Overlays, you can dynamically swap out product elements (e.g., “50% OFF” → “75% OFF”) or superimpose seasonal callouts (“🎉 Holiday Sale!”), all without manual Photoshop work.

In this tutorial, we’ll take you from zero to a working Next.js 15 app that:

  • Generates AI-powered image edits (e_gen_replace).
  • Applies text or image overlays with custom fonts, colors, and positioning.
  • Persists your exact preview via a cache-busted query string (?v=).
  • Displays a drag-and-drop uploader, live preview, and recent transform gallery.

Our starter-template branch already includes:

  • Next.js 15 App Router scaffold.
  • Tailwind CSS, shadcn/ui.
  • Motion.dev for animations.
  • Placeholder components for Cloudinary integration.
Clone the repository, check out the starter-template branch, and install the dependencies:

git clone https://github.com/musebe/cloudinary-personalized-visuals.git
cd cloudinary-personalized-visuals
git checkout starter-template
npm install

The key packages in package.json include:

  • next@15.x, react@19.x, next-cloudinary
  • cloudinary (Node SDK)
  • lucide-react, motion, sonner
  • tailwindcss, shadcn/ui, clsx

Create a .env.local at your project root:

NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME=your-cloud-name

NEXT_PUBLIC_CLOUDINARY_UPLOAD_PRESET=your-upload-preset

NEXT_PUBLIC_CLOUDINARY_API_KEY=your-api-key

CLOUDINARY_API_SECRET=your-api-secret

Tip: Cloudinary’s Upload Widget reads the upload_preset and cloud_name client-side, while your server action uses the secret key to generate secure URLs.
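
For illustration, here’s a minimal, hypothetical helper showing how the client side might consume these variables through the Upload Widget’s global API (the repo’s Uploader may wire this up differently, for example via next-cloudinary; treat the function name and callback shape here as a sketch):

// Hypothetical helper; call it from a client component (e.g., a button's onClick)
export function openUploadWidget(onUploaded: (publicId: string) => void) {
  // The widget attaches itself to window via the global script loaded in the next step
  const cloudinary = (window as any).cloudinary;
  if (!cloudinary) return; // script not loaded yet

  cloudinary
    .createUploadWidget(
      {
        cloudName: process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME,
        uploadPreset: process.env.NEXT_PUBLIC_CLOUDINARY_UPLOAD_PRESET,
      },
      (error: unknown, result: any) => {
        if (!error && result?.event === "success") {
          onUploaded(result.info.public_id); // hand the uploaded asset's public ID to the app
        }
      }
    )
    .open();
}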

Before we can optimize and serve Cloudinary images (and use the Upload Widget), we need to tell Next.js to trust those external domains and load the widget script early.

Add your Cloudinary cloud to the images.remotePatterns array so Next.js can fetch and optimize those assets:

// next.config.js
export default {
  images: {
    remotePatterns: [
      {
        protocol: "https",
        hostname: "res.cloudinary.com",
        pathname: `/${process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME}/**`,
      },
    ],
  },
};

See Next.js docs on configuring external images: https://nextjs.org/docs/app/api-reference/config/next-config-js/images

Make sure the Upload Widget’s global script is loaded before any client-side code runs. In the App Router, a beforeInteractive Script belongs in the root layout:

// app/layout.tsx
import Script from "next/script";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>{children}</body>
      <Script
        src="https://upload-widget.cloudinary.com/global/all.js"
        strategy="beforeInteractive"
      />
    </html>
  );
}

Learn more in the Cloudinary Upload Widget docs: https://cloudinary.com/documentation/upload_widget_reference

These three minimal modules under src/lib power uploads, persistence, and URL segment building:

  1. cloudinary.ts. Server-side Cloudinary SDK init.
  2. db.ts. Simple JSON store for transform records.
  3. transform.ts. Assemble your Generative Replace + Overlay path segment.

Configure the Cloudinary Node SDK with your account credentials (server-side only).

// src/lib/cloudinary.ts
import { v2 as cloudinary } from "cloudinary";

cloudinary.config({
  cloud_name:  process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME!,
  api_key:     process.env.NEXT_PUBLIC_CLOUDINARY_API_KEY!,
  api_secret:  process.env.CLOUDINARY_API_SECRET!,
  secure:      true,
});

export default cloudinary;

db.ts manages a local transforms.json file, letting us read all saved transform records (newest first) and prepend new ones.

// src/lib/db.ts
import fs from "fs/promises";
import path from "path";

const DB_PATH = path.join(process.cwd(), "transforms.json");

// Read all saved records, returning an empty array if the file doesn't exist
export async function readAll() {
  try {
    return JSON.parse(await fs.readFile(DB_PATH, "utf-8"));
  } catch {
    return [];
  }
}

// Prepend a new record to the array and write back to disk
export async function write(record: any) {
  const data = await readAll();
  data.unshift(record);
  await fs.writeFile(DB_PATH, JSON.stringify(data, null, 2));
}


Full source: db.ts

Turn your options (generative replace + text/image overlay) into a Cloudinary URL path segment:

  1. Generative Replace
if (from && to) {
  chain.push(
    `e_gen_replace:from_${encodeURIComponent(from)};to_${encodeURIComponent(to)}`
  );
}
  2. Text Overlay
chain.push(
  [
    `l_text:${encodeURIComponent(`${fontFamily}_${fontSize}_${fontWeight}`)}:${encodeURIComponent(overlay)}`,
    `co_rgb:${textColor}`,
    bgColor ? `b_rgb:${bgColor}` : "",
    `g_${gravity}`, `x_${x}`, `y_${y}`,
  ]
    .filter(Boolean)
    .join(",")
);
  3. Image Overlay
chain.push(
  [`l_${overlay}`, `g_${gravity}`, `x_${x}`, `y_${y}`].join(",")
);
  4. Final Assembly
return chain.length ? `${chain.join("/")}/` : "";

Full source: transform.ts
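
Putting the fragments together, here is a minimal sketch of what buildTransform might look like. The option names and defaults are inferred from the snippets above and may differ slightly from the repo’s full source:

// src/lib/transform.ts (sketch; see the full source for the exact implementation)
type Gravity =
  | "north_west" | "north" | "north_east"
  | "west" | "center" | "east"
  | "south_west" | "south" | "south_east";

interface TransformOptions {
  from?: string;                    // AI-replace prompt: what to remove
  to?: string;                      // AI-replace prompt: what to put in its place
  overlay?: string;                 // badge text, or an overlay image public ID
  overlayMode?: "text" | "image";
  gravity?: Gravity;
  x?: number;
  y?: number;
  textColor?: string;               // hex without '#', e.g. "FF0000"
  bgColor?: string;
  fontFamily?: string;
  fontSize?: number;
  fontWeight?: "normal" | "bold";
}

export function buildTransform({
  from, to,
  overlay, overlayMode = "text",
  gravity = "north_west", x = 0, y = 0,
  textColor = "000000", bgColor,
  fontFamily = "Arial", fontSize = 40, fontWeight = "bold",
}: TransformOptions): string {
  const chain: string[] = [];

  // 1. Generative Replace
  if (from && to) {
    chain.push(
      `e_gen_replace:from_${encodeURIComponent(from)};to_${encodeURIComponent(to)}`
    );
  }

  // 2 & 3. Text or image overlay
  if (overlay) {
    if (overlayMode === "text") {
      chain.push(
        [
          `l_text:${encodeURIComponent(`${fontFamily}_${fontSize}_${fontWeight}`)}:${encodeURIComponent(overlay)}`,
          `co_rgb:${textColor}`,
          bgColor ? `b_rgb:${bgColor}` : "",
          `g_${gravity}`, `x_${x}`, `y_${y}`,
        ].filter(Boolean).join(",")
      );
    } else {
      chain.push([`l_${overlay}`, `g_${gravity}`, `x_${x}`, `y_${y}`].join(","));
    }
  }

  // 4. Final assembly: one slash-separated segment with a trailing slash
  return chain.length ? `${chain.join("/")}/` : "";
}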

With these helpers ready, we can now build a LivePreview component that debounces inputs, shows a loader during AI replace, and mirrors exactly what will be saved.

We’ll break down how the preview “stage” is set up, how overlays get their “stage directions” (gravity + offsets), and finally how we render text badges vs. image badges—step by step.

<div className="relative w-full h-[400px] rounded-lg border bg-gray-50 overflow-hidden">
  {baseUrl && (
    <Image
      src={baseUrl}
      alt="preview"
      fill
      className="object-contain"
      unoptimized
    />
  )}
</div>
  1. Stage (relative, overflow-hidden):
  • Think of this as a theater stage with borders. Anything that moves beyond its edges gets clipped, so overlays never “bleed” outside.
  2. Backdrop (Image with object-contain):
  • The product image or AI‐replaced render is the backdrop. object-contain ensures the full image fits within the stage, like shrink‐to‐fit scenery.
const GRAVITY_TO_CLASS: Record<Gravity, string> = {
  north_west: "items-start justify-start",
  north:      "items-start justify-center",
  north_east: "items-start justify-end",
  west:       "items-center justify-start",
  center:     "items-center justify-center",
  east:       "items-center justify-end",
  south_west: "items-end justify-start",
  south:      "items-end justify-center",
  south_east: "items-end justify-end",
};
  • Analogy: Gravity is like telling an actor where to stand on stage: front‐left, center, back‐right, etc.
  • We map each of the 9 positions to Flexbox alignments (items-… + justify-…).
{overlay && (
  <div
    className={`absolute inset-0 flex ${
      GRAVITY_TO_CLASS[overlay.gravity]
    }`}
    style={{ pointerEvents: "none" }}
  >
    <div style={{ transform: `translate(${overlay.x}px, ${overlay.y}px)` }}>
      {/* overlay badge goes here */}
    </div>
  </div>
)}
  1. Overlay Wrapper (absolute inset-0 flex …):
  • Covers the whole stage, then uses our gravity mapping to anchor the overlay container.
  2. Offset Nudge (translate(x, y)):
  • After anchoring, we nudge the overlay by a few pixels horizontally/vertically for fine‐tuning, like stage marks on the floor.
<motion.span
  className="inline-block rounded-lg shadow-md"
  style={{
    color:           `#${overlay.textColor}`,
    backgroundColor: `#${overlay.bgColor}`,
    fontFamily:      overlay.fontFamily,
    fontSize:        overlay.fontSize,
    fontWeight:      overlay.fontWeight,
    padding:         "4px 12px",
  }}
  initial={{ opacity: 0, scale: 0.9 }}
  animate={{ opacity: 1, scale: 1 }}
  transition={{ duration: 0.15 }}
>
  {overlay.text}
</motion.span>
  • Entrance cue: We fade+scale the text into view (0.9 → 1.0) for a subtle “actor entrance.”

Costume and props:

  • Rounded corners + drop‐shadow act like stage lighting and set design.
  • Font, colours, and padding style the badge.
<motion.div
  className="relative w-32 h-32 overflow-hidden rounded-lg shadow-md"
  initial={{ opacity: 0, scale: 0.9 }}
  animate={{ opacity: 1, scale: 1 }}
  transition={{ duration: 0.15 }}
>
  <Image
    src={`https://res.cloudinary.com/${process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME}/image/upload/${overlay.imageId}.png`}
    alt="overlay"
    fill
    className="object-contain"
    unoptimized
  />
</motion.div>

Actor is an image:

  • Container keeps the aspect ratio, rounds the edges, and adds a drop‐shadow “footlight.”
  • Animates in just like the text badge.

With the stage, stage directions, and actor entrances defined, LivePreview mirrors exactly what the final Cloudinary‐transformed URL will show—right down to gravity, offsets, colours, and sizing.

Full component for reference: LivePreview.tsx

Building on LivePreview (which simply renders any baseUrl + overlays), Uploader is where we:

  1. Turn your “from → to” and overlay options into a Cloudinary URL.
  2. Debounce typing so we don’t flood the UI with reloads.
  3. Show a “Generating…” spinner while the AI does its work.
  4. Let you “Try a new variation” by bumping a cache‐buster.
  5. Save the exact preview you saw into our gallery.

Inputs + debounce → transform segment → baseUrl

Think of your raw inputs repFrom and repTo as paint colors. Before we mix them, we wait (debounce) for the painter to finish each stroke:

// 350 ms after you stop typing, lock in the values
const fromDeb = useDebounced(repFrom);
const toDeb   = useDebounced(repTo);

// Build the Cloudinary path once both words are set
const segment = buildTransform({
  from: repEnabled && fromDeb && toDeb ? fromDeb : undefined,
  to:   repEnabled && fromDeb && toDeb ? toDeb : undefined,
  // overlay props omitted…
});

// Append ?v=version so each regenerate yields a fresh AI-render
const baseUrl = publicId
  ? `https://res.cloudinary.com/${process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME}` +
    `/image/upload/${segment}${publicId}.png?v=${version}`
  : "";

Flow: typing stops → fromDeb/toDeb update → segment rebuilds → baseUrl changes → LivePreview rerenders.
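
The useDebounced hook isn’t shown in the snippet above; a minimal sketch (assuming a 350 ms default, as in the comment) could look like this:

// Returns the value only after it has stopped changing for `delay` milliseconds
import { useEffect, useState } from "react";

export function useDebounced<T>(value: T, delay = 350): T {
  const [debounced, setDebounced] = useState(value);

  useEffect(() => {
    const id = setTimeout(() => setDebounced(value), delay);
    return () => clearTimeout(id); // reset the timer on every keystroke
  }, [value, delay]);

  return debounced;
}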

New URL → preload → hide spinner

Just like dimming the stage lights while scenery changes, we hide the preview until it’s fully ready:

useEffect(() => {
  if (!publicId) return;
  setLoading(true);
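  // Note: if this file also imports Image from next/image, use
  // `new window.Image()` here so the DOM constructor is used.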
  const img = new Image();
  img.src = baseUrl;       // start loading AI‐rendered image
  img.onload = () => setLoading(false);
  img.onerror = () => setLoading(false);
}, [baseUrl, publicId]);
  • Before: loading = true → spinner overlays the LivePreview
  • After: image loads → loading = false → spinner disappears → new image appears

Version bump → fresh AI output

If you’re not happy with the first take and ask the director for another shot:

const regenerate = () => {
  if (!repEnabled || !publicId || !fromDeb || !toDeb) return;
  setVersion(v => v + 1);
};
  • Each time version increments, ?v= changes → browser treats it as a new request → Cloudinary runs AI‐replace again.

Preview freeze‐frame → gallery entry

Once you’ve got your perfect shot, you “Save” it:

const handleSave = async () => {
  if (!publicId) return;
  const form = new FormData();
  form.set("publicId", publicId);
  form.set("version", String(version));    // lock in this exact ?v=
  form.set("from",    repEnabled ? repFrom : "");
  form.set("to",      repEnabled ? repTo   : "");
  // …overlay, colors, gravity, x/y…
  await addTransform(null, form);
  router.refresh(); // show your saved transform in the gallery
};
  • The server reads version and appends ?v= to the saved URL.
  • TransformGallery loads that same cache‐busted URL, so what you previewed is exactly what’s persisted.

With Uploader in place, you now have a seamless in-browser workflow: type your replace prompts, fine-tune overlays, watch a live AI-powered preview with graceful loading, and save exactly what you see. This sets the stage for our final piece—the gallery that showcases every transformation you’ve created.

Full component for reference: Uploader.tsx › GitHub

Continuing from Uploader (where we captured your perfect AI render with ?v=), our server action in src/app/actions/transforms.ts faithfully recreates — and freezes — that exact preview before adding it to our gallery. Here’s how it all fits together:

  1. Unpack the preview. The client “Save” step sends every parameter—especially the version that locks in your cache-buster.
  2. Rebuild the transform. We call the exact same buildTransform helper you used in the browser, ensuring no drift between what you saw and what we persist.
  3. Append ?v=. By tacking on the same query string, the saved URL hits Cloudinary’s CDN with that unique AI-generated asset.
  4. Write to disk. We prepend the new record into transforms.json, so your gallery always shows the newest edits first.
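
Before walking through each step, here is a hedged skeleton of the action. The signature is assumed from the client call addTransform(null, form), and the import paths are assumptions; the step numbers match the list above:

// src/app/actions/transforms.ts (skeleton sketch; see the full action in the repo)
"use server";

import { buildTransform } from "@/lib/transform"; // step 2 helper
import { write } from "@/lib/db";                 // step 4 helper

export async function addTransform(_prevState: unknown, data: FormData) {
  // 1. Unpack the preview: read publicId, version, prompts, and overlay fields from `data`
  // 2. Rebuild the transform: const segment = buildTransform({ ... });
  // 3. Append ?v=: compose the delivery URL with the saved version
  // 4. Write to disk: await write(record); return record;
}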

Like a film director calling “action,” the server grabs every detail you define in the UI.

const publicId = data.get("publicId") as string;
const version  = Number(data.get("version") ?? 0);

// The “from” & “to” prompts for AI replace:
const from = (data.get("from") as string) || undefined;
const to   = (data.get("to")   as string) || undefined;

// Overlay parameters for text or image badges:
const overlay     = (data.get("overlay")     as string) || undefined;
const overlayMode = (data.get("overlayMode") as string) === "image" ? "image" : "text";

// Placement & styling:
const gravity    = (data.get("gravity")    as string) ?? "north_west";
const x          = Number(data.get("x")    ?? 0);
const y          = Number(data.get("y")    ?? 0);
const textColor  = (data.get("overlayColor") as string) ?? "000000";
const bgColor    = (data.get("overlayBg")    as string) || undefined;
const fontFamily = (data.get("fontFamily")  as string) ?? "Arial";
const fontSize   = Number(data.get("fontSize") ?? 40);
const fontWeight = (data.get("fontWeight") as string) === "normal" ? "normal" : "bold";

Analogy: Think of these as your call sheet: every actor (parameter) must be exactly where you directed.

We’ll reuse your client’s recipe, so the saved URL matches the live preview exactly.

const segment = buildTransform({
  from, to,
  overlay, overlayMode,
  gravity, x, y,
  textColor, bgColor,
  fontFamily, fontSize, fontWeight,
});
  • Why reuse? It guarantees parity: what you mixed in the browser is what we’ll serve from the server.
  • Result: A string like

e_gen_replace:from_shoe;to_boot/l_text:Arial_40_bold:SALE,co_rgb:FF0000,.../

Attach the ?v= so Cloudinary returns that unique AI-rendered asset.

const transformedUrl =
  `https://res.cloudinary.com/${process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME}` +
  `/image/upload/${segment}${publicId}.png` +
  (version ? `?v=${version}` : "");
  • If version = 3, your URL ends in .../image.png?v=3.
  • This prevents any caching mismatch: your gallery image is the identical frame you approved in LivePreview.

We’ll timestamp and store every transform, newest-first, for our gallery to consume.

const record = {
  id:             crypto.randomUUID(),
  publicId,
  transformedUrl,   // includes ?v=
  from, to,
  overlay, overlayMode,
  pos: { x, y },
  createdAt:      Date.now(),
};

await write(record);  // prepends to transforms.json
return record;
  • createdAt lets us sort transforms chronologically.
  • write(record) uses our db.ts helper to insert it at the top of transforms.json.
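
For reference, a saved record’s shape (a hedged TypeScript type inferred from the fields above; the repo’s db.ts currently types records as any) looks roughly like this:

// Hypothetical shape of an entry in transforms.json
interface TransformRecord {
  id: string;                       // crypto.randomUUID()
  publicId: string;
  transformedUrl: string;           // includes the ?v= cache-buster
  from?: string;                    // AI-replace prompts, when enabled
  to?: string;
  overlay?: string;                 // badge text or overlay public ID
  overlayMode: "text" | "image";
  pos: { x: number; y: number };
  createdAt: number;                // Date.now(), used for newest-first ordering
}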

Flow Recap: Uploader sends FormData (with version) → server parses fields → rebuilds the same transform segment → crafts cache-busted URL → writes record → client refreshes gallery.

With this robust persistence layer, your TransformGallery will always display the exact AI-powered image you previewed and saved: no surprises, just seamless consistency.

Full action for reference: transforms.ts › GitHub

Now that our server action has frozen in your exact AI-powered preview URLs, TransformGallery brings everything together by:

  1. Out-painting each original and transformed image to a consistent 800×400 “stage” using AI fill.
  2. Injecting that same out-paint segment into your saved transform URLs—so the ?v=… you approved remains intact.
  3. Pairing the padded original with its matching padded transform.
  4. Sliding between them in a responsive, polished grid.

Turn any image into a full-bleed 800×400 card, no white gutters.

const BASE    = `https://res.cloudinary.com/${process.env.NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME}/image/upload`;
const genFill = 'c_pad,b_gen_fill,w_800,h_400';
  • c_pad adds transparent padding around the original.
  • b_gen_fill asks Cloudinary’s AI to “paint” those padded areas — seamlessly out-painting left/right and top/bottom.
  • Every card now shares a uniform 800×400 aspect, edge-to-edge.

Reuse the exact preview version you saved, then apply out-paint.

initial.slice(0, 3).map((t) => {
  // BEFORE: pad the raw asset
  const beforeUrl = `${BASE}/${genFill}/${t.publicId}.png`;

  // AFTER: inject genFill into the saved transform path,
  // keeping your original ?v= version intact
  const [path, qs] = t.transformedUrl.split("?");
  const injected   = path.replace("/upload/", `/upload/${genFill}/`);
  const afterUrl   = qs ? `${injected}?${qs}` : injected;

  return { beforeUrl, afterUrl };
});
  • Split on ? so we preserve the exact ?v=… query that matches your LivePreview.
  • replace('/upload/', '/upload/'+genFill+'/') layers AI out-paint before any generative replace or overlays.

Show your before/after side by side with a sleek slider UI.

<section className="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-3 gap-8">
  {pairs.map(({ beforeUrl, afterUrl }, i) => (
    <div key={i} className="rounded-lg shadow-md overflow-hidden bg-white">
      <Comparison before={beforeUrl} after={afterUrl} />
    </div>
  ))}
</section>
  • Responsive layout: 1 column on mobile, up to 3 on desktop.
  • Visual polish: cards with rounded corners, drop-shadow, and no overflow bleed—just like a finished gallery.
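
The Comparison component referenced above isn’t shown here; a minimal, hypothetical before/after slider (a simple range-input clip approach, which may differ from the repo’s implementation) could look like this:

// src/components/Comparison.tsx (sketch; plain <img> keeps it simple, the repo may use next/image)
"use client";

import { useState } from "react";

export function Comparison({ before, after }: { before: string; after: string }) {
  // Percentage of the card width revealed for the "after" image
  const [pos, setPos] = useState(50);

  return (
    <div className="relative aspect-[2/1] w-full select-none">
      {/* "Before" fills the card */}
      <img src={before} alt="before" className="absolute inset-0 h-full w-full object-cover" />

      {/* "After" sits on top, clipped to the slider position */}
      <img
        src={after}
        alt="after"
        className="absolute inset-0 h-full w-full object-cover"
        style={{ clipPath: `inset(0 ${100 - pos}% 0 0)` }}
      />

      {/* Range input drives the reveal */}
      <input
        type="range"
        min={0}
        max={100}
        value={pos}
        onChange={(e) => setPos(Number(e.target.value))}
        className="absolute bottom-2 left-1/2 w-3/4 -translate-x-1/2"
        aria-label="Compare before and after"
      />
    </div>
  );
}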

With TransformGallery, your end-to-end flow is complete:

  • Upload → LivePreview: AI replace and overlay.
  • Save exact ?v= URL → Server Action.
  • Showcase before/after out-painted cards in a slidable grid.

Full component for reference: TransformGallery.tsx › GitHub

You’ve built a fully integrated, AI-powered imaging pipeline in Next.js:

  • Dynamic Generative Replace. Swap out any element (e.g., “Cup” → “Plate”) on the fly with Cloudinary’s e_gen_replace.
  • Flexible overlays. Add styled text or image badges, complete with font, colour, background, gravity and pixel-perfect offsets.
  • Live in-browser preview. Debounce inputs, cache-busted URLs, and a graceful loading spinner ensure you see exactly what Cloudinary will deliver.
  • Exact persistence. Lock in the preview you love via ?v= versioning, so saved gallery items match your live preview 1:1.
  • Polished transform gallery. Uniform 800×400 AI out-painted canvases, before/after sliders, and a responsive grid to showcase your creations.

With these building blocks, you can now personalize e-commerce and marketing visuals at scale: dynamically, programmatically, and beautifully.

Start Using Cloudinary

Sign up for our free plan and start creating stunning visual experiences in minutes.

Sign Up for Free