Integrating ComfyUI AI-generated concepts into Houdini

Hi everyone,

I’m currently building a pipeline where I generate concept images (vehicles, terrain, etc.) using ComfyUI (Stable Diffusion) and then feed them into Houdini for further processing – either as reference or for semi-automated geometry creation (through ML-enhanced TOP networks, depth maps, segmentation, etc.).

The idea is to have a semi-automatic flow:

Input prompt in ComfyUI → image generated

Houdini picks the latest image from the directory (using COP2 or TOPs – see the sketch after this list)

(Later) Use ML to extract depth / structure → create procedural model or layout

Final output goes to Unreal Engine as asset/environment
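
For the directory pickup, here’s a minimal sketch of what I’m testing (the watch folder, node path, and parm name are placeholders – check the exact File COP parm name in your build):

from pathlib import Path
from typing import Optional

import hou

WATCH_DIR = Path(r"C:/ComfyUI/output")  # placeholder: your ComfyUI output folder

def latest_image(directory: Path, pattern: str = "*.png") -> Optional[Path]:
    """Return the most recently modified image in the directory, if any."""
    images = sorted(directory.glob(pattern), key=lambda p: p.stat().st_mtime)
    return images[-1] if images else None

img = latest_image(WATCH_DIR)
if img is not None:
    # Placeholder node path; in COP2 the File node's parm may be "filename1".
    file_cop = hou.node("/img/comp1/file1")
    file_cop.parm("filename1").set(str(img))

The same function can sit in a TOPs Python Script node so each new image triggers downstream work items.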

🔧 I’m still in the early phase, so I’m experimenting with:

integrating ONNX inference directly inside Houdini (first sketch below)

controlling generation from within Houdini via Python (second sketch below)

proceduralizing based on 2D-to-3D logic or tagged structure (third sketch below)
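
For the ONNX part, this is roughly the shape of it, using onnxruntime from Houdini’s Python – the model path, input size, and normalization are assumptions you’d match to your actual depth model (e.g. a MiDaS ONNX export):

import numpy as np
import onnxruntime as ort
from PIL import Image

MODEL_PATH = "D:/models/midas_small.onnx"  # placeholder ONNX depth model

session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def estimate_depth(image_path, size=256):
    """Return an (H, W) float32 depth map normalized to 0..1."""
    img = Image.open(image_path).convert("RGB").resize((size, size))
    x = np.asarray(img, dtype=np.float32) / 255.0    # HWC, 0..1
    x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]  # NCHW batch of one
    depth = session.run(None, {input_name: x})[0].squeeze()
    return (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)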
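
For driving ComfyUI from Houdini, its built-in HTTP API works: export the workflow with “Save (API Format)” and POST it to the local server. The node id "6" for the positive prompt is a placeholder – look it up in your own workflow JSON:

import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI server
WORKFLOW_JSON = "D:/comfy/vehicle_workflow_api.json"  # placeholder export

def queue_prompt(prompt_text):
    with open(WORKFLOW_JSON) as f:
        workflow = json.load(f)
    workflow["6"]["inputs"]["text"] = prompt_text  # patch the CLIPTextEncode node
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # contains the queued prompt_id

queue_prompt("rusty sci-fi rover, overcast desert, concept art")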
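
And for a first 2D-to-3D pass, a Python SOP can displace a grid by the cached depth map for a quick 2.5D relief – the grid extent, cache path, and height scale here are assumptions:

import numpy as np
import hou

node = hou.pwd()
geo = node.geometry()

depth = np.load("D:/cache/depth.npy")  # placeholder 0..1 depth map, shape (H, W)
h, w = depth.shape

for pt in geo.points():
    pos = pt.position()
    # Map the grid's XZ extent (assumed -1..1) to pixel coordinates.
    u = int((pos[0] * 0.5 + 0.5) * (w - 1))
    v = int((pos[2] * 0.5 + 0.5) * (h - 1))
    pos[1] = float(depth[v, u]) * 0.5  # height scale to taste
    pt.setPosition(pos)

(Per-point loops get slow on dense grids; geo.setPointFloatAttribValues is the faster route once this works.)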

I'm wondering:

Has anyone built something similar (AI → Houdini → UE pipeline)?
What would you recommend for handling image-driven procedural generation?
Any tips for integrating ComfyUI output dynamically into COP2 / TOPs networks?
Here’s a screenshot of the current system in action (ComfyUI + Houdini preview):



Thanks a lot – I’d love to share more as this progresses!

Pavel
https://www.novusion.eu/ | https://www.artstation.com/infinitex/

Attachments:
Houdini_-_ComfyUI.png (216.0 KB)
