Rafael Drelich
rafaeldrelich
About Me
Rafael Drelich is an Effects TD and educator based in Toronto, specializing in digital simulations and procedural solutions for VFX and games.
He believes that a collaborative mindset, rapid prototyping, and iterative processes can solve any creative challenge.
Previously worked at WeFX, Pixomondo, and Mr. X, contributing to high-profile projects such as "Guillermo Del Toro's Cabinet of Curiosities" (2023), "From" (2023), "Halo" (2022), "American Gods" (2021), and other notable shows.
Rafael holds a Bachelor's degree in Design from PUC-Rio and a Postgraduate Degree in Visual Effects from George Brown College.
He is currently a Professor at the Seneca Polytechnic School of Creative Arts and Animation, teaching procedural thinking for games using Houdini.
EXPERTISE
Technical Director
INDUSTRY
Film/TV
LOCATION
Toronto, Canada
Houdini Skills
ADVANCED
Procedural Modeling | Digital Assets | Crowds | Mantra | Lighting | Pyro FX | Fluids | Destruction FX
INTERMEDIATE
Environments | Motion Editing | Karma | PDG | VEX
Availability
I am available for Contract Work
Recent Forum Posts
Integrating ComfyUI AI-generated concepts into Houdini Jan. 19, 2026, 10:05 p.m.
Pavel Dostal:
Hi everyone,
I’m currently building a pipeline where I generate concept images (vehicles, terrain, etc.) using ComfyUI (Stable Diffusion) and then feed them into Houdini for further processing – either as reference or for semi-automated geometry creation (through ML-enhanced TOP networks, depth maps, segmentation, etc.).
The idea is to have a semi-automatic flow:
Input prompt in ComfyUI → image generated
Houdini picks latest image from directory (using COP2 or TOPs)
(Later) Use ML to extract depth / structure → create procedural model or layout
Final output goes to Unreal Engine as asset/environment
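For the "Houdini picks latest image" step, this is roughly what I'm prototyping: a minimal sketch that grabs the newest render from ComfyUI's output folder and points a File COP at it. The output directory, COP path, and parameter name below are placeholders for my local setup, not a finished tool:
```python
# Rough prototype: find the newest ComfyUI render and feed it to a File COP.
# OUTPUT_DIR and FILE_COP are placeholders for my local setup.
import glob
import os

import hou

OUTPUT_DIR = "$HIP/comfyui_output"   # where ComfyUI writes its images
FILE_COP = "/img/comp1/file1"        # existing File COP in the scene

def latest_image(directory):
    """Return the most recently written .png in the directory, or None."""
    images = glob.glob(os.path.join(hou.expandString(directory), "*.png"))
    return max(images, key=os.path.getmtime) if images else None

def push_to_cop(image_path, cop_path=FILE_COP):
    node = hou.node(cop_path)
    if node is None:
        raise RuntimeError("File COP not found: " + cop_path)
    # 'filename1' is the File COP's filename parameter in my build;
    # check the parameter name if your version differs.
    node.parm("filename1").set(image_path)

img = latest_image(OUTPUT_DIR)
if img:
    push_to_cop(img)
```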
🔧 I’m still in the early phase, so I’m experimenting with:
integrating ONNX inference directly inside Houdini
controlling generation from within Houdini (via Python)
proceduralizing based on 2D-to-3D logic or tagged structure
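For the "controlling generation from within Houdini" part, I'm testing something along these lines: a sketch that queues a prompt on a local ComfyUI server through its HTTP API (default port 8188), using a workflow exported with "Save (API Format)". The workflow path and the node id holding the positive prompt are assumptions specific to my exported JSON:
```python
# Sketch: queue a ComfyUI generation from a Houdini Python shelf tool.
# Assumes a local ComfyUI server on the default port and an API-format
# workflow JSON; node id "6" for the positive prompt is specific to my
# exported workflow and will differ in other graphs.
import json
import urllib.request

import hou

COMFYUI_URL = "http://127.0.0.1:8188/prompt"            # default ComfyUI endpoint
WORKFLOW_FILE = "$HIP/workflows/vehicle_concept.json"   # placeholder path

def queue_prompt(workflow_path, positive_prompt):
    with open(hou.expandString(workflow_path)) as f:
        workflow = json.load(f)

    # Override the positive prompt text on the CLIPTextEncode node.
    workflow["6"]["inputs"]["text"] = positive_prompt

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        COMFYUI_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

result = queue_prompt(WORKFLOW_FILE, "rusty exploration vehicle, desert terrain")
print("Queued prompt:", result.get("prompt_id"))
```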
I'm wondering:
Has anyone built something similar (AI > Houdini > UE pipeline)?
What would you recommend for handling image-driven procedural generation?
Any tips for integrating ComfyUI output dynamically into COP2 / TOPs networks?
Here’s a screenshot of the current system in action (ComfyUI + Houdini preview): [image not available]
Thanks a lot – I’d love to share more as this progresses!
Pavel
https://www.novusion.eu/ | https://www.artstation.com/infinitex/
Hi Pavel!! That's super cool. I don't know if you saw this already, but I have a similar project and open-sourced it recently.
Let me know your thoughts
https://youtu.be/KB7gsxHXVFM
https://github.com/CapybaraCrowporation/houdini-comfyui-bridge
Rafa
Day 1 | Elements: Earth July 1, 2020, 11:29 p.m.
My entry