Shaderball speed with unused subnets

John Coldrick · Member · 4140 posts · Joined: July 2005
We've noticed that if you have a subnet in VOPs that is not being used, there still seems to be a serious effect on shaderball render times. It's not as noticeable, of course, on simple things, but with some of the uber vopnets we're using this is becoming an issue.

I can understand that this sort of app sometimes requires including a little bit of dead code “just to be safe”. However, try plunking down a subnet, filling it with 5 or 10 “Stone Wall” VOPs, then going up and connecting a simple Lighting Model to the output - without connecting the subnet in any way. You should see very slow performance - delete the subnet and, boom, it's back.

On a related note, our understanding is that rotating the shaderball doesn't cause a re-compile of the code, but we've noticed another odd thing - just changing a param in any VOP *does*. That doesn't appear to make sense and shouldn't trigger a recompile…

Cheers,

J.C.
John Coldrick
Mark (mtucker) · Staff · 4517 posts · Joined: July 2005
As you say, this is a case of us wanting to be completely sure that all required code gets put into the shader. So even if they aren't connected to any output, we always generate code for Print VOPs, Inline VOPs, Parameter VOPs, and, as you noticed, subnet VOPs. This last one is mostly because a subnet VOP may contain a Print VOP, or a Parameter VOP, or an Inline VOP. But we could probably do some more work to cut subnet VOPs out of the code generation process if none of these conditions apply.
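
To make that concrete, here's a rough sketch in hand-written VEX of the sort of thing that ends up in the shader - the names and numbers are made up, and this is only an illustration, not the actual output of the code generator:

    // Rough illustration only -- not real generator output. The subnet's
    // noise work is still emitted even though nothing it computes ever
    // reaches Cf, so the shader ball pays for it on every sample.
    surface
    uber_test(vector diff = {0.8, 0.8, 0.8})
    {
        // --- code emitted for the unconnected subnet ---
        float  bump1 = float(noise(P * 10.0));
        float  bump2 = float(noise(P * 37.0 + {3.1, 1.7, 5.2}));
        vector wall  = lerp({0.4, 0.4, 0.4}, {0.7, 0.6, 0.5}, bump1 * bump2);
        // 'wall' is never used below...

        // --- code emitted for the connected Lighting Model ---
        Cf = diff * diffuse(normalize(N));
    }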

As for changing a parameter of a VOP causing a recompile, this is because we recompile the network whenever anything changes that might change the code. And pretty much every parameter of every VOP is going to affect the code in one way or another. I'm not sure if there's a particular example you're thinking of where the recompile isn't necessary, but most changes really do require a recompile, so again we're just being safe.
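
For example (again just a sketch with made-up names, not our actual generated code), a shading value that hasn't been promoted to a real parameter typically ends up as a literal in the VEX source, so nudging it changes the source itself:

    surface
    baked_example()
    {
        // An unpromoted VOP value frozen into the source as a constant.
        // Changing 0.30 to 0.31 changes this text, hence a recompile.
        float rough = 0.30;
        Cf = diffuse(normalize(N))
           + specular(normalize(N), normalize(-I), rough);
    }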

I think perhaps a better solution to both these problems is not to change the recompile behavior, but just to make it less disruptive. The problem isn't that changing a VOP parameter causes a recompile. The problem is that a recompile basically freezes the interface. You can't continue making changes to your VOPs while it recompiles. You have to wait until it's done, then it updates part of the shader ball, then you're able to make another change to a VOP parameter. If the recompile didn't freeze the interface you could just continue your editing of the network, and when you're done and ready to see the results you could stop typing for a while and actually wait for the shader ball to update. Similarly with the first problem, if a recompile didn't disrupt your workflow it wouldn't matter so much that the compile included the code for all the subnets.

Just a thought.

Mark
John Coldrick · Member · 4140 posts · Joined: July 2005
Yup, you're right about the param change thing causing a recompile. One of the downsides of all this interactivity is a consistent misunderstanding of exactly what's going on.

As for the “redundant” subnets - the code isn't actually recompiling as I tumble the shaderball, is it? I didn't think it was - so the slow speed is the compiled code being re-run, and the slowness comes from the redundant code that's part of it?

Yes, if it's only a handful of VOPs like Print, Inline or Parameter that could be a problem, it would be great if their absence could trigger a little faster behaviour…

Cheers,

J.C.
John Coldrick
Mario Marengo · Member · 941 posts · Joined: July 2005
mtucker wrote:
“I think perhaps a better solution to both these problems is not to change the recompile behavior, but just to make it less disruptive.”

Hey Mark;

Yes, I agree! If you guys could background the recompiles (much like the redraw/rerender is now), it would be wonderful!

As an example, I'm working on some stuff now that uses lots of LUTs in the form of many spline() functions with hundreds of parameters each. The actual code is not heavy at all, but those tables slow the compile down to a crawl (several seconds, in fact). In this case, the shader ball is unusable due to the recompile time, not the shader execution time.
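
A trimmed-down sketch of what I mean, with made-up knot values - the real networks carry hundreds of values per spline() call, and many such calls:

    surface
    lut_example()
    {
        // Every knot is a literal in the source; with hundreds of them per
        // call, the compile time swamps the (trivial) execution time.
        float remapped = spline("linear", s,
                                0.000, 0.012, 0.047, 0.105, 0.188,
                                0.295, 0.427, 0.583, 0.764, 1.000);
        Cf = remapped * {1, 1, 1};
    }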

As far as param changes invoking a recompile: you're right that param-value changes end up modifying the code, but that's because all params that are not explicitly defined as such by the user get turned into constants inside the code. I understand why it's being done this way, since, unless they've been set up by the user as actual params, that's exactly what they are: constants. But here's a crazy idea:

What if you treated *all* parameters as actual parameters to the context function (even the ones that aren't set by the user as actual parameters through the Parameter vop)? This would be just for an “internal” version of the shader that gets used to update the shader ball; the final shader still only contains the parameters exposed by the user.

Just thinking out loud here, and there are likely a lot of issues I'm not aware of, but at least on the surface, it looks to me like:

1. You already have default values for each parameter as part of the dialog definition. These would become the defaults of each non-user-defined parameter in the internal version.

2. The internal version of the function then gets written as a fully parameterized function, with each “non-official” parameter receiving the defaults set in the dialog definition. Then the current values for each param (possibly different from default) get used for the actual calls to the shader when rerendering the ball.

This would mean a recompile need only get triggered when an Op gets either added, deleted, or rewired, no?
It also means that my hypothetical “internal” version gets defined in the reverse way to what happens now: it would start with a full set of params, with each user-defined Parameter vop simply overriding that parameter's default….
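
To put that in concrete (if oversimplified) terms - the names here are hypothetical, the two shaders are shown side by side for comparison only, and this glosses over a lot:

    // What (I gather) gets compiled today: the unpromoted value is a literal.
    surface
    my_shader_final()
    {
        float rough = 0.30;     // baked in -> any tweak means a recompile
        Cf = specular(normalize(N), normalize(-I), rough);
    }

    // The hypothetical "internal" build used only for the shader ball:
    // every VOP value becomes a real parameter whose default comes from the
    // dialog definition. Tweaking a value then only changes the arguments
    // supplied when the ball is re-rendered, not the compiled code.
    surface
    my_shader_internal(float rough = 0.30)
    {
        Cf = specular(normalize(N), normalize(-I), rough);
    }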

Again, I'm sure there are a tonne of things I'm not taking into account, but I just thought I'd throw it out there… Would something along those lines be just too crazy?

Thanks,

– Mario.
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.
Mark (mtucker) · Staff · 4517 posts · Joined: July 2005
Mario Marengo wrote:
“Again, I'm sure there are a tonne of things I'm not taking into account, but I just thought I'd throw it out there… Would something along those lines be just too crazy?”

Yes, it would be a little crazy

That's a very cool idea for avoiding recompiles, but I think the implementation would be quite difficult. One problem would be keeping track of all those parameters (there could be thousands of them), keeping them in the right order, and so on. Another problem is disabled parameters - do they get passed as parameters or not? Then there are VOP signature parameters, which change both which parameters appear in the code and the data types that get used. And VOPs that use “ifdefs” in their code based on parameter values can't be simulated by passing parameters - they would require recompiles anyway. You could say all of those are exceptions which should still trigger recompiles, but I don't think there's any way to know, for a given VOP and parameter, whether one or more of these conditions applies…
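
As a contrived example of that last case (made-up names, not an actual VOP's code template), a menu parameter that the code generator turns into a preprocessor switch changes the structure of the code, so no runtime parameter could stand in for it:

    #define USE_FRACTAL 1    // hypothetical toggle driven by a VOP menu value

    surface
    switched_noise(vector freq = {8, 8, 8})
    {
        float amp;
    #if USE_FRACTAL
        amp = 0.5 + 0.5 * float(anoise(P * freq));   // this branch gets compiled in...
    #else
        amp = float(noise(P * freq));                // ...or this one, never both
    #endif
        Cf = amp * diffuse(normalize(N));
    }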

Plus then there are all the internal issues of actually generating the two types of code, generating the shader calls with all the appropriate parameters, etc. And all this work just to reduce the number of recompiles. Again, I would rather just see recompiles being shoved further into background processing so as to prevent them from disrupting the interface.

Mark
Mario Marengo · Member · 941 posts · Joined: July 2005
mtucker wrote:
“Yes, it would be a little crazy”

As I suspected… a tonne of things…

Oh well; it's fun to dream…

Thanks, Mark!
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.