There are nodes for some of these steps. The Attribute Reorient node can rotate the deltas given a reference pose and the animated one, and Transform by Attribute can apply the delta. The other two steps, computing deltas and masking, are best done with attribute wrangles or attribute VOPs, depending on your comfort with writing code.
The delta would be the difference between the reference pose and the sculpt pose. (Delta = After - Before)
Applying a mask means multiplying the delta attribute by the mask value, which could be either a constant animated value or a painted mask.
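The delta and mask math above can be sketched in plain Python (standing in for an attribute wrangle; the point arrays and mask value are made-up example data):

```python
# Sketch of the delta/mask math, using Python lists of tuples in place
# of point attributes. In Houdini this logic would live in wrangles.

def compute_deltas(reference, sculpted):
    """Delta = After - Before, per point, per component."""
    return [tuple(a - b for a, b in zip(after, before))
            for after, before in zip(sculpted, reference)]

def apply_masked_deltas(positions, deltas, mask):
    """Add each delta scaled by a 0-1 mask (constant here; could be per point)."""
    return [tuple(p + mask * d for p, d in zip(pos, delta))
            for pos, delta in zip(positions, deltas)]

reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # rest pose
sculpted  = [(0.0, 1.0, 0.0), (1.0, 0.5, 0.0)]   # sculpt pose

deltas = compute_deltas(reference, sculpted)
half   = apply_masked_deltas(reference, deltas, 0.5)  # mask = 0.5 blends halfway
```

A mask of 1.0 reproduces the full sculpt, 0.0 leaves the reference untouched, and animating the value fades the sculpt in or out.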
Edited by jsmack - yesterday 17:20:45
What I would like is to get a subnetwork tree like I get when I import it via File - Import - Alembic Scene.
Then use File - Import - Alembic Scene.
There isn't really an alternative if you must have the subnets.
Most users don't import that way, though. The Alembic read node can import all of the shapes in the Alembic as separate packed primitives in Houdini, with path attributes to differentiate them and represent where they are located in the source hierarchy.
Cameras are the exception: they must be imported at the object level, since the geometry-level tools cannot import cameras. An Alembic Archive node must still be used to import cameras from Alembic (without creating custom tools to do so).
If you have Houdini 18, another way to go is importing the Alembic in LOPs. The hierarchy and cameras are all visible in the scene graph tree. LOPs is mostly for rendering, so the utility of importing it that way mostly depends on what you plan to do in Houdini.
Yes, but the floating window has to have at least one pane or it will close.
Convert the sculpt position to a delta attribute, and use attribute reorient to rotate the deltas as the mesh moves/deforms. Then apply the reoriented delta to the point positions. Transform by attribute is a convenient way to apply deltas. Multiplying the delta with an animated mask can allow fading out over time.
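The reorient step boils down to rotating each stored delta by the rotation that carries the rest-pose frame to the deformed frame. A toy version in plain Python (the fixed 90-degree z rotation is a stand-in for the per-point rotation a reorient node derives from the deforming mesh):

```python
import math

def rotate_vector(m, v):
    """Apply a 3x3 rotation matrix (row-major nested lists) to a vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def rot_z(angle):
    """Rotation about the z axis by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c,  -s,  0.0],
            [s,   c,  0.0],
            [0.0, 0.0, 1.0]]

delta = (1.0, 0.0, 0.0)                 # sculpt delta stored in rest space
reoriented = rotate_vector(rot_z(math.pi / 2), delta)
# after a 90-degree turn about z, the delta points along +y
```

The reoriented delta is what gets added to the deformed point positions, so the sculpt "rides along" as the mesh moves.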
Unified noise is just that: analyzed noises remapped from their statistical output range to 0 to 1.
Use a unified noise, change the type to Perlin and Bob's your uncle.
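The remapping itself is just a linear fit from the noise's analyzed output range to 0-1; a quick sketch in Python (the -1..1 input range here is an assumption, standing in for whatever range the analysis reports):

```python
def fit(x, old_min, old_max, new_min=0.0, new_max=1.0):
    """Linearly remap x from [old_min, old_max] to [new_min, new_max]."""
    t = (x - old_min) / (old_max - old_min)
    return new_min + t * (new_max - new_min)

# e.g. a signed noise value in [-1, 1] becomes a 0-1 value
print(fit(0.0, -1.0, 1.0))   # 0.5
```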
They're lined up with ‘up’ because the y axis was used as the up vector. Use the local y axis of the target shape to get the local up vector, although that isn't necessarily trivial.
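One way to read off the local up vector: transform the world y axis (0, 1, 0) by the target's rotation. A sketch in plain Python with a hand-built rotation matrix (in Houdini you would pull the matrix from the target's transform instead):

```python
import math

def local_up(rotation):
    """Local up = rotation matrix applied to the world y axis."""
    y = (0.0, 1.0, 0.0)
    return tuple(sum(rotation[r][c] * y[c] for c in range(3)) for r in range(3))

# a shape tilted 90 degrees about x: its local up now points along +z
a = math.pi / 2
tilt_x = [[1.0, 0.0,         0.0],
          [0.0, math.cos(a), -math.sin(a)],
          [0.0, math.sin(a),  math.cos(a)]]
```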
I wonder about the reason too. This message must pop up when one creates an HDA from a Geometry object.
It happens whenever you save changes to an HDA that modify the parm dialog, or create a new HDA from an existing node that has a different parm dialog than the source.
Generally, you want to select "No changes". When rearranging the parameter layout, it might be necessary to "Revert Layout" for the existing instances of the HDA to reflect the changes saved to the asset. Note that this might cause spare parameters that are not part of the definition to lose their place in the layout and appear at the bottom of the dialog.
It might be good to re-instantiate the node to be sure it was saved with all the parameters that were intended to be saved.
I have tested karma with a static empty scene. It still needs 70 seconds for 10 frames.
Because Karma can render in the viewport in seconds, I guess it is husk or some other translation process that eats the time.
Rendering from the usdrender rop still starts a separate process for each frame. Running husk from the command line should allow batching all of the frames in one process.
Even when running a separate process for each frame, it only takes 30 seconds to render out 10 empty frames for me.
There's a checkbox on the usdrender node to render all the frames in one husk process. When enabling it, 10 blank frames render in 4 seconds for me.
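From the command line, a husk invocation along these lines batches a frame range in one process (file names and frame range here are placeholders; check `husk --help` for the exact flags in your build):

```shell
# Render 10 frames of shot.usd in a single husk process.
# -f is the start frame, -n the frame count, <F4> a padded frame token.
husk -f 1 -n 10 -o 'render/beauty.<F4>.exr' shot.usd
```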
Edited by jsmack - July 3, 2020 17:26:46
I don't think OpenGL can do matte shading.
1. AOVs: is it correct that I need to create one "rendervar" node per AOV I want to export? I have tried both the Raw type with "RGBA", "N", etc. in the source field and LPEs ("C<RD>A"), but don't get any output. The name shows up in the "render output" drop-down next to the viewport, but is black. Also, the default "color" disappears when I add a new rendervar, so I know I'm doing something wrong.
It's up to the render delegate to support render vars. I don't know the status of Arnold with support, but maybe check with the vendor on that one.
2. Is it correct that after the rendervars are defined, we add a renderproduct node to reference all the rendervars and give the image a name? Is this where we add frame padding and the file format (.exr)?
Yes, that is correct. The Karma node serves as a good example of how the parts interact.
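Roughly, the prims relate like this (a hand-written sketch of the UsdRender schema for illustration; the prim names and output path are made up, and a Karma LOP generates the real thing):

```usda
#usda 1.0

def Scope "Render"
{
    # Settings point at one or more products.
    def RenderSettings "rendersettings"
    {
        rel products = </Render/beauty>
    }

    # The product names the image file (frame padding lives in the path,
    # the format is implied by the extension) and gathers its render vars.
    def RenderProduct "beauty"
    {
        token productName = "render/beauty.0001.exr"
        rel orderedVars = [</Render/Vars/color>, </Render/Vars/N>]
    }

    def Scope "Vars"
    {
        def RenderVar "color"
        {
            string sourceName = "Ci"
        }

        def RenderVar "N"
        {
            token dataType = "vector3f"
            string sourceName = "N"
        }
    }
}
```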
3. I have one question about rendering with Arnold standalone on the farm. After the "renderproduct" we add "rendersettings" and "usdrender_rop", but neither of those has any settings for exporting .ass files.
Do we render the final shot-usd file directly?
And in that case what do we submit to deadline, a husk-job?
If so what sort of licenses are used to pick up husk jobs?
There are no options for translating to other scene formats, because Houdini works in pure USD. The husk command accepts the USD file as input and generates the images. It's not out of the question for a render delegate to produce a scene file in another format instead of, or in addition to, the image, but I don't think any renderers ship with that functionality.
I'm not sure about Deadline support, but in the general sense, yes, you would submit a USD output job followed by a husk job. A generic Houdini job that executes the usdrender node would work too, although it would consume a Houdini license. Husk does not consume a Houdini license at this time, as far as I know.
Edited by jsmack - July 2, 2020 13:54:49
I rendered a sequence of nothing (an empty scene), but it is still slow (5-6 seconds per frame). The render is for a sequence, so Mantra should not need to cold-initialize for every frame. The CPU usage is always below 10%.
What's the reason? Is it because of the Windows 10 system?
Mantra cannot batch a sequence in a single process; that's why it has to start cold for each frame. Most of the time when Mantra is used, it only renders a single frame, with each frame spread out over the many computers of a render farm, so the start time doesn't matter that much. The overhead of starting is also very small compared to typical render times, which are usually more than a few minutes.
The next-generation lighting and lookdev engine, Solaris, is more suited to rendering sequences: husk, the command that translates USD to the renderer, can accept a range of frames to batch, eliminating much of the overhead. Some renderers will obviously still have overhead for starting and stopping, but it should be less than a cold-starting executable.
Support can help you convert a file when you buy a license.
Thanks everyone. I'm fully convinced now that there needs to be a check box (on both the USD and USD Render ROPs) to control the behavior around Layer Breaks. I think the only question is what the defaults should be. Unless there is a strong argument against it, I would leave the defaults such that current behavior is preserved (strip layers for the USD ROP, don't strip layers for USD Render)…
Sounds most prudent to me.
Why are there layer breaks in the graph in the first place? Isn't the idea of a layer break that the layer will go back over another layer at some point? Maybe the usdrender should be fixed so that it doesn't ‘ignore’ the layer break and require layering over the root layer before rendering?
Is that something? What does it mean?
How does it apply to a Crowd sim?
I can't see anywhere on the SOP Import LOP where it says 'usdinstancerpath',
so, is this a custom attribute you have to create somewhere?
It's not clear at all (to me)
instancerpath is for point instancers, used for copying geometry to particles, for example. SOP Import can optionally translate packed geometries to point instancers. I don't think it's applicable to crowd packed agent prims, which are a different kind of packed primitive. They should be translated to native UsdSkel characters, unless I'm misunderstanding your premise.
Sorry I didn't explain myself properly, I'm not trying to control the falloff from the light I'm trying to control the shape of the light with a ramp or gradient parameter so it gets darker towards the edge of the surface. From my research it seems to be a legit workflow in C4D for product lighting. Is there a reason it would be bad practice?
I misunderstood your request. Would a spotlight work?
Spectron sounds like the ticket. I'm not sure of the status of their implementation in the Houdini plugin though.
Edited by jsmack - June 29, 2020 16:03:28
That's NVIDIA being dumb and thinking all of the Houdini UI elements are different games. Try turning off the ShadowPlay HUD elements.
Does anyone know how to do this with Octane in Houdini? https://youtu.be/wsHO23Af3TE?t=1955
Basically controlling a light shape with a texture. I know there are IES textures, but it would be nicer to edit it quickly using a ramp.
Octane is a physically based renderer, it doesn't make sense to control light falloff artistically.
bumping, need this question answered or I'll have to delete my account and make a new one. That's not ideal.
Did you contact support?
no, not yet, but I'm pretty sure Kai said it was planned.