Here is my take on Mantra
I have been using it pretty much full time for around 7 years now, strictly for FX rendering: volumes, RBDs and particles.
Apart from these effects having their own workflows for optimized rendering, I have learned quite a lot of tricks along the way.
The IFD workflow is just a licensing matter, of course; the mantra executable does not need Houdini to render, just like ass or rib. SideFX doesn't do site licensing (AFAIK), but Engine licenses are a godsend, and the fact that Mantra is literally free more than makes up for it.
There is quite a deep explanation of instancing and rendering packed objects in the documentation.
Packed objects are a godsend, and now that Mantra can manipulate geometry with the Mantra Engine procedural, they are even better.
But again, I work in a studio environment. If I were a freelancer, Indie comes with 3+ Engine licenses, and 3+ PCs are definitely well worth the price they offer.
Houdini Lounge » Thoughts on mantra
- tricecold
- 258 posts
- Offline
Technical Discussion » Wayland or Xorg
I was curious to know if SideFX is doing any work on Wayland integration, or if it's something the OS handles by itself. Even though Wayland is brand new and much cleaner, it is known to be a slower renderer compared to the Xorg server.
Should we stick to Xorg, or use Wayland and file tickets?
Houdini Lounge » Threadripper 1950X Production Style Benchmark Scene
nisachar
Why does the CPU speed read as 2.2 GHz?
The Threadripper can boost two cores (I think) to around the 4.2/4.0 GHz mark. That should take care of fast cores when multithreading isn't the better option. In renders, it can reach 4 GHz on all 16 cores.
Also, is the RAM ECC?
Looking forward to the results.
I am myself looking to build a quad/tri-GPU setup on the Threadripper platform.
Hi, it shows the minimum CPU clock, similar to what my OS reports too:
lscpu | grep MHz
CPU MHz: 2200.000
CPU max MHz: 3400.0000
CPU min MHz: 2200.0000
Houdini Lounge » Threadripper 1950X Production Style Benchmark Scene
LoL, I am also not a fan of it, but for now it is the best way to keep my cores busy for any job other than rendering pyro or flip, though indeed you would be limited by RAM. At work the workflow is exactly the same, except we have quite a few Houdini Engine licenses and
quite a few machines with 128 GB of RAM.
But it is not really that easy to make it fail; as long as you have an idea of the RAM usage per frame, it isn't difficult to adjust accordingly.
The problem is not really the hardware. I understand parallel computing is not an easy task for a programmer, but until more tools are rewritten for the Compile SOP, this seems like a temporary solution.
Edited by tricecold - Sept. 18, 2017 21:07:33
Houdini Lounge » Threadripper 1950X Production Style Benchmark Scene
Hi community,
After 6 years, I finally upgraded my 4.5 GHz overclocked 2600K to a Threadripper 1950X; here's to another 6 or so years on one CPU.
For those who don't know me, I am a hardware enthusiast, a VFX hobbyist for the last 20+ years, and I also do it professionally at MPC as a lead FX TD.
I have been confused by the general claims about these many-core consumer CPUs. People have been saying, “Yes, but I render on the GPU; I need fast single threading for SOPs and don't need slow cores,” and some think it's the other way around. I am here to show that more cores at 3+ GHz is the way to go as a VFX artist/freelancer.
So I prepared one of my recent scenes to share for benchmarking at a whole new level. This file mimics a real-life scenario.
I tried to make the scene OS independent; please let me know if you catch anything.
Requirements
Project File:
Download Project… [goo.gl]
A spinning drive if possible
Houdini 16.0.705
Deadline Monitor and a slave on the same machine, with the Houdini plugins installed: DEADLINE - INSTALL ME [downloads.thinkboxsoftware.com]
Min 32 GB RAM
Why do we need Deadline? Because in production we very often cache every step, usually on a farm, and a lot of SOP operators are not well multithreaded. With Deadline, I can tell it to divide a 100-frame task into 10 subtasks and run them simultaneously; it launches 10 Houdini processes in the background to churn through the data. As long as it fits into your RAM, you get almost full threading.
This makes a huge difference in scenarios like cleaning up caches, meshing geometry, or caching collision geometry or VDBs before pyro or FLIP simulations.
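The chunking described above is simple to reason about. Here is a minimal Python sketch of splitting 100 frames into 10 subtasks; the function name and the command hint are illustrative, not Deadline's actual API:

```python
# Hypothetical sketch: split a frame range into fixed-size chunks, the
# way a farm manager hands each chunk to its own background Houdini.
def split_frame_range(start, end, chunk_size):
    """Return (first, last) frame pairs covering [start, end] inclusive."""
    chunks = []
    frame = start
    while frame <= end:
        chunks.append((frame, min(frame + chunk_size - 1, end)))
        frame += chunk_size
    return chunks

tasks = split_frame_range(1, 100, 10)
# Each pair could become something like "hython cache.py -f 11 20"
# running on one slave (command shown only as an example).
print(len(tasks), tasks[0], tasks[-1])
```

As long as each chunk's peak memory fits in RAM alongside the others, the subtasks run fully in parallel.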
Basically, after you run the submit and then one COP job, you should get a sequence of EXRs resulting in this,
but in 1080p and with better sampling.
Mine is still running, but I will post the final video and results here; current progress is attached.
Deadline will be able to tell you the memory usage, CPU utilization, actual task run time, etc.
This is a long benchmark; it will take a few days to complete. I would be very happy to finally have some real numbers to compare.
Edited by tricecold - Sept. 18, 2017 09:19:11
Houdini Engine for Maya » Pyro to Maya Fluids Caching and Custom Fields
Thanks for the replies. No, I am not using out-of-the-box shelf nodes for the rig.
I thought the for-each in Import Fields was somehow breaking the vels, so I fetched each field separately with dopimport1; the display flag was set properly where the data was.
The thing is, the tool has to be used in Maya and re-rendered in Houdini, because we want to use the volume procedural and I need the rest field to stick the noise.
I would be very happy to get BGEOs back too, but a File Cache node doesn't work, because it cannot tell Maya to change frames, unless there is a way I don't know about; I would be happy to hear it.
I checked the mcx cache from Maya and it outputs the color channel, which I already tried renaming rest to in Houdini, but it overwrites the values. I wonder if the Connection Editor would allow me to rewire these manually.
I will continue the research tomorrow.
In terms of scale, I will make a little script to reverse the values from scale to size etc., which seemed to work, but ideally we would like to keep the original values.
Thanks
Tim
Houdini Engine for Maya » Pyro to Maya Fluids Caching and Custom Fields
Hi guys,
I am trying to figure out a few kinks in a rig I am working on.
I have a rig in Houdini with pyro that works just fine with colliders etc., both in Houdini and in Maya through Houdini Engine.
Houdini 15.5 and 16, Maya 2015.
The problem is that when I cache out a Maya fluids cache and load it back in, I cannot see the velocity field. I will also need custom fields like rest; Maya fluids has a color field that can be cached out. Regarding all this, I have a few questions.
My other problem is the scale: by default the fluid container size is set to 1 and its transform is scaled instead, and it also picks up transforms. This becomes a problem for some of our tools, since the voxel position is wrong in object space. Was this an intended design?
Before my output node:
Do I have to rename vel to velocity, or will the Engine fluid do the conversion?
What workflow would you suggest to pass through the rest field? I currently rename it to color, since Maya fluids supports color caching, so I was thinking of hijacking that channel since it's a vector too.
Thanks
Houdini Lounge » Some Threadripper results
You need to be careful about how to squeeze the maximum performance from your hardware. Test with OpenCL both on and off, write straight to disk as bgeo.sc, and never render your simulations without caching first. Why? Because you may want to play with shading, lighting, etc. TR will open an even bigger gap with higher-resolution simulations, because you keep the threads busy longer instead of occupying the CPU with thread management.
It was the same when small simulation times were compared between dual Xeons and one fast i7. There is no magic button that makes everything faster. With the compiled workflow especially, TR will be so much faster in Houdini, because the Compile SOP compiles your many small SOPs into an imaginary SOP that multithreads much better. I've had speed improvements of 5 to 10 times after converting old tools to the Compile SOP on the same CPU.
The grain solver works best with a fast GPU, so make your comparisons with it turned both on and off.
Edited by tricecold - Sept. 1, 2017 13:48:02
Technical Discussion » Houdini and Fedora 26
I installed 26 two days ago and it works fine here, with a few exceptions: I'm running the Qt4 version, disabled Wayland, and installed the official NVIDIA drivers. Current OpenGL performance is slightly better than on Windows 10.
Technical Discussion » Point instancing workflows
Hi, you can create a point string attribute named something like instancefile; each point can also have an integer attribute for the frame number you want to instance, and then you can rebuild your string with sprintf().
Gather these points in an empty Geometry node.
Drop in a SHOP network and create an empty Point Instance procedural.
At the geometry level, assign this procedural shader on the Geometry tab.
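Outside Houdini, the per-point path building described above is just printf-style formatting. A minimal Python sketch of the idea (in Houdini you would do this with sprintf() in a wrangle; the attribute values, base path, and file pattern here are illustrative):

```python
# Illustrative sketch: rebuild a per-point instancefile path from a
# base path and the point's frame-number attribute (names are examples).
def instance_path(base, frame):
    """Return a frame-padded geometry file path for one point."""
    return "%s.%04d.bgeo.sc" % (base, frame)

# One path per point, driven by each point's own frame attribute:
frames = [12, 13, 14]
paths = [instance_path("/cache/debris/debris", f) for f in frames]
print(paths[0])
```

Each point then carries its own string, and the procedural loads the matching cache at render time.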
Technical Discussion » math q: director vector to spherical coords
Wikipedia says:
In mathematics, a spherical coordinate system is a coordinate system for three-dimensional space where the position of a point is specified by three numbers: the radial distance of that point from a fixed origin, its polar angle measured from a fixed zenith direction, and the azimuthal angle of its orthogonal projection…
So you are missing two variables, no? The distance and the azimuth angle.
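To make the three numbers above concrete, here is a small Python sketch converting a direction vector to spherical coordinates, assuming a Y-up zenith convention (conventions for which axis is the zenith vary between sources, so treat this as one possible choice):

```python
import math

# Sketch: direction vector -> (radius r, polar angle theta from the
# +Y zenith, azimuth phi of the projection in the XZ plane).
def to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(y / r)   # polar angle from the zenith (+Y)
    phi = math.atan2(z, x)     # azimuth of the orthogonal projection
    return r, theta, phi

r, theta, phi = to_spherical(0.0, 1.0, 0.0)  # straight up the zenith
print(r, theta, phi)
```

A unit direction alone fixes theta and phi; the radial distance is the third, independent number.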
Technical Discussion » 2017 MacBook Pro and Houdini?!?!
The problem is not the card but macOS shipping its own OpenGL and OpenCL drivers.
https://support.apple.com/en-ca/HT202823 [support.apple.com]
The latest version Apple supports is 4.1 for GL and 1.2 for CL; the latest GL version is 4.5, which NVIDIA and AMD support on recent-generation cards on Linux and Windows.
Houdini recommends 4.0+, so it works with 4.1, but I don't know whether that loses any features introduced between 4.1 and the latest, 4.5.
Technical Discussion » Importing textured asset to Houdini for VFX effect? What aspects to consider when creating a textures for vfx?
For a realistic character render you need diffuse, displacement, and bump or normal maps, 3 layers of SSS, and a specular pass; all of these textures can be used in any renderer, including Mantra.
Technical Discussion » Particle "clustering" - Flour explosion
Unless you have pieces becoming bigger than about 20% of screen space, I would literally scale them to 0 and use their area to emit points, bazillions of them. You should also wedge the particle pass.
Technical Discussion » Launching hrender from python script in Houdini
Maybe something like this (a rough sketch; the ROP path is only an example):
Define the filename as a variable:
import subprocess
import hou
filename = hou.hscriptExpression("$HIPFILE")
Then launch hrender on your output driver:
subprocess.Popen(["hrender", "-d", "/out/mantra1", filename])
This should work.
Technical Discussion » Matrix orientation using wrangles / vops
Everything you ever need is available here:
http://www.tokeru.com/cgwiki/index.php?title=HoudiniVex#Example:_Rotation
Technical Discussion » material(texture) overrides using variable to define number of image sequence to use as texture
Hi
You need a point attribute holding the frame number you want to load.
So, for example, create the frame number on the points:
int frametoLoad = i@offset + int(@Frame); // create an offset of some sort; you may or may not need it
s@texturepath = sprintf("texturepath.%d.jpg", frametoLoad); // builds a string, replacing %d with the frametoLoad value per point
Now at SOP level, plug in a Material node, assign your shader, and create a local override with the map parameter selected.
It will create a material_override.
In the string section you can call the attribute using the point() VEX function.
OK, I read your message again: you don't have sequences but random images,
so modify the VEX like this:
i@id = @ptnum; // (if you don't already have an id)
int mapnumber = i@id % 5; // returns a map number per point, from 0 to 4
s@texturepath = sprintf("texturepath.%d.jpg", mapnumber); // builds the string, replacing %d with the mapnumber value per point
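The modulo trick in the wrangle above is plain integer math; here is the same mapping sketched in Python (the five-map count and the file name pattern are just examples):

```python
# Map each point number onto one of 5 texture files via modulo,
# mirroring the VEX snippet above (pattern and count are illustrative).
def texture_for_point(ptnum, num_maps=5):
    """Return a texture path cycling through num_maps files."""
    return "texturepath.%d.jpg" % (ptnum % num_maps)

paths = [texture_for_point(i) for i in range(7)]
print(paths)
```

Points 0 through 4 get maps 0 through 4, then point 5 wraps back to map 0, and so on.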
Edited by tricecold - June 9, 2017 15:18:40
Technical Discussion » Ocean Spectrum Custom Combed directions
Technical Discussion » Point Instancing without writing the points into IFD
Yes, we are just trying to optimize some really big scenes here, so we are trying to avoid IFDs with loads of points in them. And yes, we could call it nested instancing.
Technical Discussion » Point Instancing without writing the points into IFD
Hi
I have a question regarding point instancing: I have cached points on which I instance geometry (disk-based, via a string attribute).
Is there a way to make this work on a farm without writing the points into the IFD, kind of like delayed-loading the points and instancing on top of them at render time?