# orr

## Recent Forum Posts

#### Thoughts on NeuralVDB/AI Simulation Tech (April 7, 2023, 11:33 p.m.)

**wyhinton1:**
Honestly I might give applying a basic PINN a go.

ChatGPT gave me this nice example of the general steps for using a PINN to speed up a physics solve, btw!

I think the tough part of using the PINN, from my very basic understanding, is that unless you spend the time to build a very large data set, the PINN you train will have a fairly narrow set of applicable use cases.

But let's say you need to refine and iterate on a simulation many times, or create many instances of a certain kind of simulation; then the PINN could greatly accelerate that.

Step 1: Define the geometry of the system that you want to simulate.

In Houdini, you can create a 3D model of the system that you want to simulate using its built-in modeling tools. For example, you could create a simple geometry, such as a ball falling onto a plane.
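Step 1 could be sketched with Houdini's Python `hou` module. This only runs inside a Houdini/hython session, and the node names here are purely illustrative:

```python
# Runs only inside Houdini/hython, where the hou module exists.
# The node names ("ball", "ground") are illustrative placeholders.
import hou

geo = hou.node("/obj").createNode("geo", "ball")
sphere = geo.createNode("sphere")        # the falling ball

ground = hou.node("/obj").createNode("geo", "ground")
grid = ground.createNode("grid")         # the plane it lands on
```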

Step 2: Use a physics engine to simulate the system and generate observational data.

In Houdini, you can use its built-in physics engine to simulate the system and generate observational data. For example, you could set up a rigid body simulation where the ball falls onto the plane and bounces off.
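As a toy stand-in for Step 2 (pure Python with a hand-rolled integrator rather than Houdini's RBD solver, so the numbers are illustrative), generating the observational data could look like:

```python
# Hypothetical Step 2 sketch: generate "observational data" for a ball
# dropped onto a plane, using semi-implicit Euler with a restitution
# bounce at y = 0 instead of a real rigid body solver.
def bounce_data(y0=10.0, g=9.81, e=0.8, dt=0.01, steps=300):
    y, v = y0, 0.0
    samples = []
    for i in range(steps):
        v -= g * dt                # gravity
        y += v * dt                # integrate position
        if y < 0.0:                # hit the plane: reflect with energy loss
            y, v = 0.0, -e * v
        samples.append((i * dt, y))
    return samples

data = bounce_data()
# Each entry is a (time, height) pair -- the kind of input/output sample
# the PINN would later be trained on.
```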

Step 3: Train a PINN using the observational data and the physical laws that govern the system's behavior.

To train the PINN, you need to provide it with both the observational data and the physical laws that govern the system's behavior. For example, in the case of the ball falling onto a plane, you could provide the PINN with the initial position and velocity of the ball, as well as the laws of gravity and collision.

In Houdini, you could use a machine learning framework, such as TensorFlow or PyTorch, to create and train the PINN. You would need to define the PINN architecture and loss function, and then train the network using the observational data and physical laws as inputs.
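A minimal sketch of the Step 3 training idea, with a big caveat: instead of an actual neural network in TensorFlow/PyTorch, this toy fits a three-parameter quadratic model with hand-written gradients. The point is only the physics-informed loss, i.e. a data-misfit term plus a residual term for the law y'' = -g:

```python
# Toy "physics-informed" fit (not a real PINN): model the ball's height as
# y(t) = a + b*t + c*t**2 and minimize data error plus a penalty on how
# badly the model violates y''(t) = -g (here y'' = 2c, so penalize 2c + g).
import random

g = 9.81
random.seed(0)

# Noisy observations of a ball dropped from 10 m (stands in for sim data).
data = [(t / 10.0, 10.0 - 0.5 * g * (t / 10.0) ** 2 + random.gauss(0, 0.05))
        for t in range(11)]

a, b, c = 0.0, 0.0, 0.0
lr, w_phys = 0.01, 0.1
for _ in range(20000):
    ga = gb = gc = 0.0
    # Data loss: mean squared error against the observations.
    for t, y in data:
        r = (a + b * t + c * t * t) - y
        ga += 2 * r / len(data)
        gb += 2 * r * t / len(data)
        gc += 2 * r * t * t / len(data)
    # Physics loss: w * (2c + g)**2, gradient 4w(2c + g) w.r.t. c.
    gc += 4 * w_phys * (2 * c + g)
    a -= lr * ga; b -= lr * gb; c -= lr * gc

# After training, a is near the initial height and 2c is near -g.
```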

Step 4: Use the trained PINN to predict the system's behavior under different conditions, instead of simulating it using the physics engine.

Once you have trained the PINN, you can use it to predict the system's behavior under different conditions. For example, you could use the PINN to predict how the ball would behave if it was dropped from a different height or if it was made of a different material.

In Houdini, you could create a new simulation where you input the different conditions and use the trained PINN to generate the simulation results.
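The payoff in Step 4 is amortization: once a surrogate is trained, predicting a new condition is a single function evaluation rather than a fresh solve. A deliberately trivial sketch (the "trained model" here is just free fall with a g value assumed to have come from training, not a real PINN):

```python
# Hypothetical Step 4 sketch: querying a trained surrogate is just a
# function call, so new conditions cost almost nothing to evaluate.
g_hat = 9.8   # pretend this was learned from the observed trajectories

def predict_height(h0, t, g=g_hat):
    """Predicted ball height when dropped from h0, clamped at the ground."""
    return max(0.0, h0 - 0.5 * g * t * t)

# A condition never simulated: drop from 5 m, query the height at t = 0.5 s.
y = predict_height(5.0, 0.5)   # 5 - 0.5 * 9.8 * 0.25 = 3.775
```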

To summarize, in this basic example, you could use Houdini to create a simple 3D model, simulate it using its built-in physics engine, train a PINN using the observational data and physical laws, and use the trained PINN to predict the system's behavior under different conditions. This approach can significantly reduce the computational cost of the simulation, as the PINN can make predictions much faster than the physics engine can simulate the system.

Besides PINNs, differentiable simulations are a very interesting field right now. You can check out NVIDIA's Warp, for example, here [developer.nvidia.com]. One of the ideas is that you can compute the gradient of your input parameters through the simulation. This would in theory allow you to optimize the initial parameters of your simulation from a given final state.
For example, as was shown in a talk by Miles Macklin: say you want a piece of cloth to land on a ball; using the automatically computed gradient, you can optimize for the initial velocity and forces that will cause the cloth to land on the ball. This is a very simplistic example, but I think it gives an idea of where this could go, and it is already used in the deep RL community.
There is a Houdini Warp plugin, but as far as I know it does not allow you to perform gradient computations yet.
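The gradient-through-the-simulation idea can be sketched without Warp at all: carry the sensitivity of the state with respect to an input parameter through each integration step (forward-mode differentiation), then run gradient descent on that parameter. A hypothetical 1-D toy, optimizing the initial velocity so a dragged projectile reaches a target distance:

```python
# Toy differentiable simulation (hypothetical, not Warp): a 1-D projectile
# with linear drag, integrated with explicit Euler while carrying the
# sensitivities dx/dv0 and dv/dv0 in forward mode.
def simulate(v0, k=0.5, dt=0.01, steps=100):
    x, v = 0.0, v0          # state
    sx, sv = 0.0, 1.0       # sensitivities dx/dv0, dv/dv0
    for _ in range(steps):
        x, sx = x + v * dt, sx + sv * dt          # position update
        v, sv = v * (1 - k * dt), sv * (1 - k * dt)  # drag decay
    return x, sx

target = 2.0
v0 = 0.0
for _ in range(200):
    x, sx = simulate(v0)
    grad = 2.0 * (x - target) * sx   # d/dv0 of the loss (x(T) - target)**2
    v0 -= 0.5 * grad

# v0 has been optimized so the final position x(T) lands on the target.
```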

#### Thoughts on NeuralVDB/AI Simulation Tech (Oct. 13, 2022, 1:16 p.m.)

I agree, it would be nice for Houdini not to take its eye off innovation. Other than implementing the most recent methods in neural graphics, I would love to see Houdini enable users to make tools of their own. A Tensor Operator context (TEOps) with automatic differentiation, or a differentiable operators context (DIFOps) where every operator can compute gradients for backpropagation (like NVIDIA's Warp), would be great. If you look at neural fields, differentiable simulations, and PINNs, there is a lot out there that Houdini's toolset is not adapted to yet but that will soon become relevant for VFX.

#### changing parms from an external process (Aug. 4, 2019, 4:13 p.m.)

**tpetrick:**
I think the problem in your case is that you need to save the .hip after setting the parms. The parm you're setting is on the ROP node targeted for the cook, not on the TOP node itself. The parms on that ROP are not evaluated by PDG, but by ROPs itself when the job is cooking out of process. E.g.:

1. PDG generates a ROP work item to cook the node “/obj/topnet1/ropgeometry1”, and sets the target hip to `$HIP`
2. The job is scheduled locally/on the farm
3. The job starts hython, loads the specified .hip file, and cooks the ROP

So if the .hip is not saved, the parm change won't be available.

The reason it works when you change the parms in the current Houdini session is that Houdini will normally pop up a graphical dialog prompting you to save your .hip before cooking the TOP network. There's a button on that dialog to “Do this every time”, which makes the saving happen automatically, but that option is stored as part of the dialog system. In a non-graphical session, it doesn't enter the code path that checks whether the .hip file needs to be saved.

I think we can fix it so that it always saves the .hip before cooking the TOP network in a non-graphical session, but in the meantime you can do that through hou.hipFile.
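That manual workaround might look like this inside the Houdini session (requires the `hou` module, so it only runs in Houdini/hython; the node path and parm name are placeholders, not from the original thread):

```python
import hou

# Set the parm on the ROP node that the TOP cook will target.
# "/obj/topnet1/ropgeometry1" and "sopoutput" are placeholders.
hou.parm("/obj/topnet1/ropgeometry1/sopoutput").set("$HIP/geo/out.bgeo.sc")

# Save the .hip so the out-of-process hython cook sees the parm change.
hou.hipFile.save()
```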

I see. That makes sense. Thanks so much for clarifying.