From Individual Renders to Production — One Evening, Three Tools

Vibe coding · By Peter · 3 min read

The transition from individual AI renders to a consistent production pipeline.

This evening began with an email. Not complicated, just that familiar problem: AI images look impressive, but they don't form a series. Lighting varies, backgrounds differ, each run feels distinct. Individually they work, but together they don't. Simply not production-ready.

So I went back down the logical path: bring Zappa in, tighten the prompts, leave less room for error. A lot was already there, 310 lines of prompt discipline, five modules, everything locked. You think that's where the solution lies. But at some point in the evening it starts to irritate. You keep tweaking and correcting, and each run is still slightly different. And then you realise: this won't be solved here.

That's where the shift occurred.

No longer trying to perfect it before generation, but accepting that Higgsfield starts anew each time. So that part keeps shifting, no matter what you do. From there, it became practical. Continuously generating images and observing where it goes wrong. Not focusing on aesthetics but on behaviour. Where does the light shift, where does the background change, where does it break?

At the same time I brought in Claude, not to write prompts but to think technically: what can you do after generation, without regenerating? That's where Replicate came into play. New to me, discovered this evening: a platform where open-source AI models run on other people's GPUs, callable via API. Not as a replacement for Higgsfield, but for one thing: masking. Separating the model from the background with rembg, an AI model that generates an alpha mask, deciding pixel by pixel what is model and what is background.
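
To make that step concrete, here is a minimal sketch assuming Replicate's official Node client and a rembg model hosted there; the model slug is a placeholder and the cutout function is just an illustrative name, not the actual pipeline code.

```ts
import Replicate from "replicate";

// Assumes REPLICATE_API_TOKEN is set in the environment.
const replicate = new Replicate();

async function cutout(imageUrl: string) {
  // Placeholder slug: substitute the actual rembg model (owner/name:version) on Replicate.
  const output = await replicate.run("owner/rembg:version", {
    input: { image: imageUrl },
  });

  // Depending on the client version, `output` is a URL string or a file-like object
  // pointing at the cutout PNG: subject opaque, background transparent.
  console.log(output);
}

cutout("https://example.com/higgsfield-render.jpg").catch(console.error);
```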

And that's when it clicked.

Because that mask gives you control. You're no longer dependent on how Higgsfield sets up its studio. You can intervene yourself. Make the background uniform, stabilise the lighting. Not to beautify, but to make it consistent.

Technically, it works like this: the mask separates foreground from background, but not in a binary way; hair pixels sit in between, semi-transparent. A hard cut leaves colour fringes in the hair. The solution is soft alpha-blending: each pixel is weighted and blended between the original and the tinted background. Hair at alpha 0.4 gets a subtle mix instead of a hard colour boundary. That was the difference between "looks edited" and "looks studio".
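
A minimal sketch of that blend, assuming the rembg cutout is an RGBA image with straight (non-premultiplied) alpha; the function name and tint parameter are illustrative, not the actual module.

```ts
import sharp from "sharp";

// out = alpha * original + (1 - alpha) * tint, per pixel.
// Hair at alpha ≈ 0.4 gets a soft mix instead of a hard colour boundary.
async function tintBackground(
  cutoutPath: string,                         // rembg output: subject opaque, background transparent
  tint: { r: number; g: number; b: number },  // uniform studio backdrop colour
  outPath: string
) {
  const { data, info } = await sharp(cutoutPath)
    .ensureAlpha()
    .raw()
    .toBuffer({ resolveWithObject: true });

  for (let i = 0; i < data.length; i += 4) {
    const a = data[i + 3] / 255;                                  // mask weight for this pixel
    data[i]     = Math.round(a * data[i]     + (1 - a) * tint.r);
    data[i + 1] = Math.round(a * data[i + 1] + (1 - a) * tint.g);
    data[i + 2] = Math.round(a * data[i + 2] + (1 - a) * tint.b);
    data[i + 3] = 255;                                            // flatten to an opaque backdrop
  }

  await sharp(data, { raw: { width: info.width, height: info.height, channels: 4 } })
    .png()
    .toFile(outPath);
}
```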

On top of that: texture enhancement for fabric detail, sharpening for edges, and two automatic crops based on where the model stands in the image. Full body at a 1060×1280 ratio, knee cut at 65% of the model's height. All server-side via Sharp, all in one API call.
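
A rough sketch of that finishing step with Sharp, assuming the subject's top edge and height have already been measured from the alpha mask; the function name, parameters and sharpening settings are illustrative, not the actual module.

```ts
import sharp from "sharp";

async function finish(tintedPath: string, subjectTop: number, subjectHeight: number) {
  // Mild sharpening for edge and fabric detail; the real settings may differ.
  const base = sharp(tintedPath).sharpen();
  const { width, height } = await base.metadata();
  if (!width || !height) throw new Error("unreadable image");

  // Full-body crop at a 1060×1280 aspect ratio.
  await base.clone()
    .resize({ width: 1060, height: 1280, fit: "cover", position: "attention" })
    .toFile("full-body.jpg");

  // Knee cut: keep the top 65% of the subject's height, measured from its top edge.
  const cutHeight = Math.min(height - subjectTop, Math.round(subjectHeight * 0.65));
  await base.clone()
    .extract({ left: 0, top: subjectTop, width, height: cutHeight })
    .toFile("knee-cut.jpg");
}
```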

I've also pinned down the reference logic for Higgsfield. Strict separation: the first image is the face (identity lock), then the clothing (isolated, product only), and the last image is solely for pose and expression. No blending between roles. If you don't explicitly enforce this in your prompt, Higgsfield mixes everything together. That prompt is now copyable on the page.
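
Purely as an illustration of that separation (the real, copyable prompt lives on the page), the role assignment looks roughly like this; the structure and wording are hypothetical.

```ts
// Hypothetical sketch of the reference roles: each image gets exactly one job,
// and the prompt states it explicitly so identity, garment and pose never blend.
const references = [
  { image: "face.jpg",    role: "identity lock: face only, ignore clothing and pose" },
  { image: "garment.jpg", role: "product only: garment colour, cut and fabric" },
  { image: "pose.jpg",    role: "pose and expression only, ignore identity and clothing" },
] as const;
```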

And that's where it changed this evening. Not one image, but the series. It starts to hold together. It no longer feels like individual renders, but like one setting.

The chain is now: Zappa generates the prompt from a reference image. Higgsfield creates the image with separated references. Post-production aligns the whole — masking, tint, texture, crops. Three tools, one pipeline, built and integrated in one evening.

The entire post-production module now runs as a demo in Zappa. Upload your Higgsfield image, set your background colour, and you have studio-ready cutouts.

This is just the beginning. The first prototype. But this is also far from where we started, because this morning I didn't have this. And that's precisely what makes it interesting — now it truly begins.

Peter
Creative Directors
Founders of Studio PB.NL with 20 years of experience in fashion, e-commerce and AI-driven innovation. Together we are building the future of creative technology.