Dora here. If you're bouncing between inpainting and Img2Img inside Z-Image and losing time, I've been there. This quick, real-world guide explains exactly when I pick each tool, the settings I use, and how I keep text crisp and readable. If you care about AI images with accurate text for real campaigns, keep reading. I'll weave in what works for me when producing realistic AI images for marketing and why it matters.

Quick Decision Rule: Inpainting vs Img2Img in Z-Image

[Image: Inpainting vs Img2Img decision overview]

If I need to change a small area (logos, product labels, headlines, faces, hands) while keeping everything else untouched, I use Inpainting. If I want to restyle the entire image (lighting, color grade, background mood) or gently reshape composition without repainting tiny details, I use Img2Img.

My 30-second rule:

  • Text fix, logo swap, packaging correction, sharp edges β†’ Inpainting.
  • Global look, cohesive style, gentle transformations β†’ Img2Img.
  • Mixed scenario? Start with Img2Img to land the vibe, then Inpainting to lock text precision.

I test both on the same base image with a fixed seed, so changes are controlled. That's how I get closer to the best AI image generator for text outcomes without wasting a morning.

If you want to apply this rule without wrestling with complex setups, Z-Image is where I test these decisions first. Img2Img for vibe, Inpainting for text β€” both are right there, fast enough to validate an idea before I commit to heavier workflows. For an official overview of Z-Image's speed and core capabilities, see the Z-Image blog, which explains how the tool delivers high-quality outputs from simple prompts.

When to Use Img2Img in Z-Image

I reach for Img2Img when the base composition is fine, but the image needs a new look that's consistent across frames.

Use cases I see often:

  • Brand style alignment: Match a campaign's warm film tone across a set.
  • Lighting/mood shift: Turn flat studio light into moody backlight.
  • Background coherence: Replace messy environments with clean, on-brand scenes.
  • Gentle cleanup: Reduce artifacts globally without losing the layout.

Settings that work for me in Z-Image:

  • Strength/Denoise: 0.35–0.55 for subtle restyle: 0.6–0.75 if I'm okay with bigger changes. Above ~0.75, composition drifts.
  • CFG/Prompt weight: Moderate (6–8) to honor the prompt without nuking the source.
  • Resolution: Keep it close to the original to avoid warped type or stretched labels.
  • Seed: Lock it when iterating so changes are attributable to settings, not randomness.

Text note: If a poster headline already reads correctly, I keep strength under 0.5. That's how I keep AI images with accurate text intact while still improving overall aesthetics. This is especially useful for AI tools for designers who need consistency across multiple deliverables.

When to Use Inpainting in Z-Image

I use Inpainting when precision matters: correcting misspelled packaging, swapping product SKUs, fixing crooked kerning, or replacing a logo without touching skin tones or shadows.

Best scenarios:

  • Text correction on packaging, menus, posters, UI mockups.
  • Logo replacement while keeping material reflections.
  • Micro-detail fixes: hands, edges, stitching, wiring.

My practical settings:

  • Mask: Paint slightly beyond the problem area (2–4 px bleed) so edges blend.
  • Feather/Blur: Low-to-medium (5–15) to avoid hard seams: go lower on sharp labels.
  • Denoise: 0.2–0.45 to preserve context but allow clean redraw of the masked area.
  • Prompt weight: Lower than global edits. I describe the exact target: "clean sans-serif text, β€˜SUMMER SALE 50%', straight baseline, high contrast."
  • Reference tokens: If Z-Image supports text conditioning or reference hints, I include them sparingly.

Honesty moment: Inpainting won't magically print perfect typography every time. If the base resolution is too low, letters will mush. I upscale first, then inpaint. It's slower, but it's how I hit production-ready, realistic AI images for marketing.
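My "upscale first" call is really a resolution check on the smallest letters. This is my personal heuristic (the ~24 px cap-height floor is my assumption, not a documented Z-Image limit):

```python
def upscale_factor_for_text(cap_height_px: float, min_cap_height_px: float = 24.0) -> float:
    """Return the upscale factor needed before inpainting text.

    Rule of thumb from my own runs: below ~24 px cap height,
    inpainted letters mush. 1.0 means no upscale needed.
    """
    if cap_height_px >= min_cap_height_px:
        return 1.0
    return min_cap_height_px / cap_height_px
```

Measure the shortest capital letter in the target text, run this, upscale by the result, then inpaint.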

Side-by-Side Examples

[Image: Inpainting vs Img2Img side-by-side results]

Scenario A: Poster headline is wrong by one word.

  • Img2Img: Even at 0.5, the headline style drifts slightly and spacing shifts. Looks nicer, but the text isn't guaranteed.
  • Inpainting: I mask the headline, prompt the exact phrase, keep denoise at ~0.35, feather 8. Result: readable, aligned, matches the paper texture. Winner: Inpainting.

Scenario B: Product photo looks flat; colors don't match the brand kit.

  • Img2Img: Strength 0.45, prompt a warm studio look, slight film grain. Text on the label stays mostly intact. Winner: Img2Img.

Scenario C: Packaging has the wrong SKU and a scuffed edge.

  • Start with Img2Img for overall polish (0.35). Then inpaint the SKU with denoise 0.3 and feather 6, plus a tiny inpaint on the edge. Winner: Combo.

Scenario D: Social ad needs the same layout in three styles.

  • Img2Img with locked seed for layout consistency, vary the style prompt per brand. If any headline breaks, I inpaint it per variant. This combo is my reliable route toward the best AI image generator for text workflows.

Z-Image Settings Cheat Sheet for Inpainting and Img2Img

Here's the snapshot I keep taped to my monitor.

Img2Img (global restyle):

  • Strength/Denoise: 0.35–0.55 subtle, 0.6–0.75 bold
  • CFG/Prompt weight: 6–8
  • Keep resolution near source: lock seed for iterations
  • Use when composition is fine but the vibe isn't

Inpainting (precision fixes):

  • Mask beyond edges: feather 5–15
  • Denoise: 0.2–0.45: go lower for typography
  • Prompt exact strings for text: describe material: "matte paper, crisp print"
  • Upscale first if letters blur

Extra:

  • Save prompts and seeds for auditability.
  • Keep a reference board so style prompts stay consistent across campaigns.
  • If licensing matters (it does), confirm rights for any reference assets before production.

Common Mistakes

  • Over-denoising text in Img2Img: Anything above ~0.6 can wobble letterforms.
  • Tiny masks: If you don't include a safety margin, edges ghost. Add 2–4 px around the target.
  • Hard edges on glossy materials: Feather too low creates visible seams in reflections.
  • Prompting vibes for inpainting text: Be literal. Put the exact words in quotes.
  • Ignoring scale: Small canvases kill legibility. Upscale, then inpaint.
  • Seed hopping during troubleshooting: You can't compare results if the seed keeps changing.
  • Forgetting brand color checks: After Img2Img, sample hex values to confirm you're still on-brand.

If you're using AI tools for designers on a deadline, these small slips cost the most time.

Try It Now

[Image: Inpainting vs Img2Img practice drill]

Here's a quick 10-minute drill:

1. Pick a product shot with a messy background and a slightly wrong label.

2. Run Img2Img at 0.45 strength to set a clean studio mood.

3. Lock the seed, then inpaint the label text at 0.3 denoise with feather 8. Type the exact phrase.

4. Zoom to 200% and check baselines, kerning, and haloing around letters. If off, expand the mask 2–3 px and redo.

Do that twice and you'll feel the difference between Inpainting and Img2Img in Z-Image in your hands, not just in theory. If you need AI images with accurate text today, this combo is the fastest path I know. And if you want to try the workflow without setting up a full SD stack, Z-Image is an easy place to start: run one Img2Img pass, lock the seed, then inpaint the text. You'll know in minutes whether the image is campaign-ready.

And if something breaks, send me the settings that failed; I've probably tested that edge case already.