element33


Recent Forum Posts

Maze of Doom (self-changing) Apr 21, 2026 20:30



Continuation of my R&D into virtual creatures in strange environments. This time it's a procedural mutating maze, built in Houdini. The initial state uses an Aldous-Broder/Wilson hybrid algorithm to generate a nice, perfect maze. The maze then keeps changing its internal walls via an Edge Swap algorithm, targeting walls likely to invalidate the current solution path. The outer boundary also shrinks inward over time, reducing the playfield. One or two agents navigate using the Trémaux algorithm, leaving breadcrumb markers that become stale or invalid as the maze shifts. The viewer sees the solution path; the agents don't. The video goes over the setup.
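For anyone curious how the Aldous-Broder/Wilson hybrid works: this is a minimal Python sketch on a rectangular grid, not the actual Houdini setup. The switch ratio and function names are my own assumptions; the common heuristic is to run Aldous-Broder (fast early) until some fraction of cells are in the tree, then finish with Wilson's loop-erased walks (fast late).

```python
import random

def hybrid_maze(w, h, switch_ratio=0.3, seed=0):
    """Perfect maze on a w x h grid: Aldous-Broder until switch_ratio
    of the cells are in the tree, then Wilson for the remainder.
    Returns the set of carved passages (frozensets of two cells)."""
    rng = random.Random(seed)
    cells = [(x, y) for y in range(h) for x in range(w)]

    def neighbors(c):
        x, y = c
        return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h]

    edges = set()
    in_tree = {rng.choice(cells)}
    cur = next(iter(in_tree))
    # Phase 1: Aldous-Broder -- random walk, carve a wall on first entry.
    while len(in_tree) < switch_ratio * len(cells):
        nxt = rng.choice(neighbors(cur))
        if nxt not in in_tree:
            edges.add(frozenset((cur, nxt)))
            in_tree.add(nxt)
        cur = nxt
    # Phase 2: Wilson -- loop-erased random walks from unvisited cells.
    for start in cells:
        if start in in_tree:
            continue
        path = [start]
        while path[-1] not in in_tree:
            nxt = rng.choice(neighbors(path[-1]))
            if nxt in path:                      # erase the loop
                path = path[:path.index(nxt) + 1]
            else:
                path.append(nxt)
        for a, b in zip(path, path[1:]):
            edges.add(frozenset((a, b)))
            in_tree.add(a)
    return edges
```

Because the result is a spanning tree of the grid, a w x h maze always has exactly w*h - 1 passages, which is a handy sanity check for a "perfect" maze.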

Field Amoeba - a virtual creature simulation Apr 1, 2026 21:42



I'm continuing my R&D on virtual creatures living in abstract spaces (geometric graphs, etc.). The current project switches from graphs to fields. The creature ("Field Amoeba") lives in a field generated by repulsors (red dots, dangerous), an attractor (the food, a green dot), and the creature's own preference for a radial rest shape. Its body is a point cloud driving anisotropic Voronoi cells, which align to the field. A single central "brain" point governs navigation. It runs as a simulation in Houdini. Main challenges:

1. Alpha shape extraction from a point cloud
2. Detection of body continuity (has a chunk been severed?)
3. Automated selection of "interesting" scenarios
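Challenge 2 (body continuity) boils down to clustering the point cloud and checking what is still attached to the brain. This is a minimal Python sketch of one way to do it, with union-find over a distance threshold; the threshold and function names are illustrative assumptions, not the original Houdini network.

```python
import math

def severed_fraction(points, link_dist, brain_idx=0):
    """Single-linkage clustering via union-find: two points are linked
    if their distance is <= link_dist. Returns the fraction of the body
    disconnected from the 'brain' point (index brain_idx).
    Illustrative sketch, not the original Houdini setup."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= link_dist:
                union(i, j)

    brain_root = find(brain_idx)
    attached = sum(1 for i in range(n) if find(i) == brain_root)
    return 1.0 - attached / n
```

With a rule like "catastrophic severing = ~50% of the body lost at once", the death check is then just `severed_fraction(pts, d) >= 0.5` evaluated per frame.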

Possible outcomes: the creature gets the food, dies from accumulated damage, dies from catastrophic severing (~50% of the body lost at once), or dies when a red dot touches the "brain". The YT video walks through the Houdini setup and the Voronoi R&D, then shows sample scenarios.

Rule-based vs ML agents compete on graphs (Houdini vs Unity) Feb 13, 2026 3:25



I’m developing an idea called Competitive Organisms on Networks: AI agents constrained to geometric graphs (not free space), where the graph is a "habitat".

The current project is a 2-agent pursuit/evasion scenario: the prey must collect all food to win, the chaser must catch the prey to win.

The rule-based agents were created in Houdini (VEX). Then the environment was re-created in Unity to train ML agents.
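For a sense of what a rule-based policy like this looks like, here is a minimal Python sketch of the pursuit/evasion loop on a graph: the prey greedily heads for the nearest food, the chaser heads for the prey, both via BFS shortest paths. This is my own toy reconstruction, not the actual VEX or Unity code, and it omits the speed advantage discussed below.

```python
from collections import deque

def bfs_next_step(adj, src, goals):
    """First move along a shortest path from src to the nearest goal.
    adj maps node -> list of neighbor nodes."""
    goals = set(goals)
    if src in goals:
        return src
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                if v in goals:
                    while prev[v] != src:   # walk back to the move after src
                        v = prev[v]
                    return v
                q.append(v)
    return src  # no goal reachable: stay put

def play(adj, prey, chaser, food, max_turns=100):
    """Greedy pursuit/evasion: prey wins by collecting all food,
    chaser wins by landing on the prey. Prey moves first each turn."""
    food = set(food)
    for _ in range(max_turns):
        prey = bfs_next_step(adj, prey, food)
        food.discard(prey)
        if not food:
            return "prey"
        chaser = bfs_next_step(adj, chaser, {prey})
        if chaser == prey:
            return "chaser"
    return "draw"
```

Even this naive version shows why balancing matters: with equal speeds the outcome is mostly decided by the starting positions and the graph itself, which is the "environment defines the ceiling" point from the takeaways.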

- Verdict: both approaches can create competitive agents

- Biggest mistake: to equalize chances for the rule-based agents, I gave the chaser a small speed advantage. This unintentionally created a "super chaser" in Unity.

- Takeaways: 1) it's better to equalize chances with environmental conditions, not built-in advantages; 2) the environment defines the performance ceiling more than the "type" of brain (hand-coded vs. ML).

The YT video walks through the setup, shows a few sample games and what was learned.