I’m developing an idea called Competitive Organisms on Networks: AI agents constrained to geometric graphs (not free space), where the graph is a "habitat".
The current project is a two-agent pursuit/evasion scenario: the prey wins by collecting all the food; the chaser wins by catching the prey.
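To make the setup concrete, here is a minimal Python sketch of the habitat idea: agents constrained to graph nodes, plus the two win conditions. The `Habitat` class, the `networkx` graph, and the node-level API are illustrative assumptions, not the actual project code.

```python
# Minimal sketch of the habitat idea (illustrative, not the project code):
# agents occupy graph nodes and may only move along edges.
import networkx as nx

class Habitat:
    def __init__(self, graph: nx.Graph, food_nodes):
        self.graph = graph
        self.food = set(food_nodes)

    def legal_moves(self, node):
        # Being constrained to the graph means the only options are
        # adjacent nodes (or staying put); there is no free-space movement.
        return list(self.graph.neighbors(node)) + [node]

    def step(self, prey_node, chaser_node):
        if chaser_node == prey_node:
            return "chaser wins"        # prey was caught
        self.food.discard(prey_node)    # prey collects food by standing on it
        if not self.food:
            return "prey wins"          # all food collected
        return None                     # game continues
```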
The rule-based agents were created in Houdini (VEX); the environment was then re-created in Unity to train ML agents.
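As a rough illustration of what a rule-based policy pair could look like on such a graph, here is a sketch assuming a greedy shortest-path chaser and a nearest-food prey; the function names and the greedy behaviour are assumptions, not the actual VEX logic.

```python
# Hypothetical rule-based policies on the habitat graph (assumed behaviour,
# not the actual VEX rules): greedy shortest-path pursuit and food seeking.
import networkx as nx

def chaser_policy(graph: nx.Graph, chaser_node, prey_node):
    # Step one node along the shortest path toward the prey.
    path = nx.shortest_path(graph, chaser_node, prey_node)
    return path[1] if len(path) > 1 else chaser_node

def prey_policy(graph: nx.Graph, prey_node, food_nodes):
    # Head for the nearest remaining food node.
    target = min(food_nodes,
                 key=lambda f: nx.shortest_path_length(graph, prey_node, f))
    path = nx.shortest_path(graph, prey_node, target)
    return path[1] if len(path) > 1 else prey_node
```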
- Verdict: both approaches can create competitive agents
- Biggest mistake: to equalize chances between the rule-based agents, I gave the chaser a small speed advantage. Carried over into Unity, that advantage unintentionally produced a "super chaser"
- Takeaways: 1) it is better to equalize chances through environmental conditions rather than per-agent advantages (see the sketch below); 2) the environment defines the performance ceiling more than the choice between hand-coded and ML agents.
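As an example of takeaway 1, here is a sketch of environment-based balancing: instead of a chaser speed multiplier, bias the spawn positions so the prey starts far from the chaser and near food, while both agents keep identical speed. The `spawn_positions` helper and its thresholds are assumptions for illustration, not the project's actual balancing code.

```python
# Sketch of environment-based balancing (assumed approach, for illustration):
# both agents keep the same speed; fairness is adjusted via spawn placement.
import random
import networkx as nx

def spawn_positions(graph: nx.Graph, food_nodes,
                    min_separation=4, max_food_dist=2, tries=1000):
    # Assumes a connected habitat graph.
    nodes = list(graph.nodes)
    prey, chaser = random.sample(nodes, 2)
    for _ in range(tries):
        far_from_chaser = nx.shortest_path_length(graph, prey, chaser) >= min_separation
        near_food = min(nx.shortest_path_length(graph, prey, f)
                        for f in food_nodes) <= max_food_dist
        if far_from_chaser and near_food:
            break
        prey, chaser = random.sample(nodes, 2)
    return prey, chaser  # falls back to the last random pair if nothing qualifies
```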
The YT video walks through the setup, shows a few sample games, and summarizes what was learned.
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.