Hi, unknown,
> I want to do small scale ‘abstract’ flip simulations and also large scale fluid sims (oceans, explosions, etc…
… I cannot give any recommendation for that, because those examples seem to illustrate the exact opposites of one another. “Small abstract” smells like a high-clock-rate 4-core machine, “large scale fluid” smells like a network-distributed server farm … And since, in the end, simulation is always *time* *consuming*, for me, personally, UXP comes first, second and fifth. That means: I want the software to respond to my input. When it's time to simulate, I will definitely waste my own lifetime somewhere else, not in front of the computer, so 12 hours or 15 hours don't matter.
To me.
I am not talking about big time-pressure pipeline scenarios here.
For me, performance while *I* use the system is of the highest value, so I'd go for fast single-thread performance (why “would”? I did exactly that).
Also, my bet would be on Houdini and other tools leveraging GPUs more and more, so I'd probably make sure that my system can easily handle a bunch of 1080s or whatever comes next (memory pipelines etc.).
Personally - this is by no means a suggestion you should base your life insurance on - I only see heavy multi-core performance pay off in CPU rendering. That's something I, again personally, would outsource anyway; I don't need that noise and energy consumption in my closet. Read: I don't see much use in many-core tech. 6 cores: fine. 8 cores: fine. But that is already scratching at data-management bottlenecks; running 16 threads over data will put heavy strain on your data IO. Without fast data IO (PCIe SSD) I consider that somewhat academic …
Marc
Technical Discussion » PC for Houdini and 5000 euros max.
Hi,
I was wondering why someone would honestly quote that whole text wall of mine - maybe to make a point or something - but …
> Just a quick question (…)
… there isn't even a *question* in your comment :-D
OK, seriously: like I tried to point out, it depends. It is NOT about the number of cores, but about how the software you use makes use of them, what cache advantage you get, how much data you juggle, and how you access that data for reading and writing.
The best information you could gather would come from people doing the kind of work you do. Maybe someone chimes in and can give some numbers here, because I, personally, biased as I am, doubt that there is a “perfect answer”.
Marc
Technical Discussion » PC for Houdini and 5000 euros max.
That multicore discussion sometimes tends to involve religious beliefs … For sure it is wrong to assume that “everything” in programs like Maya or Houdini is “threaded” - that would not make much sense, since all multi-threaded evaluation comes with additional computation costs.
Imagine you have a specific task - like folding a single sheet of paper once - and you have to do it leveraging your crew of 120 men and women; everyone *has* to be involved. It will take considerably longer to execute that task while actively involving 120 people than if you did it yourself, single-threaded.
Multi-threading makes sense if
a) you have the data volume to actually make use of more than one processor digging through it (in the example: 120 sheets of paper seem like a better fit than a single one),
b) the data can be worked on separately (if every one of those 120 people has to wait for the person before her to finish, 120 people don't give you much of a boost; only if each task is “exactly identical to all others” - possibly exaggerating here, but you get the picture - can you distribute the tasks to arbitrary cores), and
c) you have the protocol and tech to combine the results without losing the speed advantage you gained by distributing the jobs (example: if piling up the paper takes you longer than folding and piling it yourself - because every one of those 120 people finishes at a different time, you have to wait, maybe drive to their homes to pick stuff up, drink some … - you had better do it yourself after all).
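As a toy illustration of points a) and c) - a minimal sketch in Python (picked here purely as a convenient illustration language; absolute timings depend entirely on your machine): handing one tiny job to a pool of workers costs more than the job itself, while a big job that splits into independent chunks can actually profit from the pool.

import time
from concurrent.futures import ProcessPoolExecutor

def fold(n):
    # stand-in for some CPU-bound per-chunk work
    return sum(i * i for i in range(n))

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print(label, round(time.perf_counter() - t0, 4), "s")

if __name__ == "__main__":
    tiny, huge = 1_000, 8_000_000
    with ProcessPoolExecutor(max_workers=8) as pool:
        # one sheet of paper, 120 people: pool and IPC overhead dominate
        timed("tiny, serial:", lambda: fold(tiny))
        timed("tiny, pooled:", lambda: list(pool.map(fold, [tiny // 8] * 8)))
        # many independent "sheets": distributing starts to pay off
        timed("huge, serial:", lambda: fold(huge))
        timed("huge, pooled:", lambda: list(pool.map(fold, [huge // 8] * 8)))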
In heavy simulations, especially grid/net-based ones like FLIP, you can often leverage multi-threading quite well. But if you want every particle's velocity to influence other particles within the same evaluation step, multi-threading won't work that well.
These are just basic examples of where “multi-threading is always better” just doesn't hold. Developers have to weigh pros and cons for every single job to maximize performance and minimize the risk of errors.
Another issue with multi-threading is data access. If you need to access large amounts of shared data from every single thread running in parallel, you need a data pipeline that allows for that kind of parallel random access. If your data is stored on a mechanical HDD and every thread constantly pushes the read/write head back and forth, you will definitely lose more speed than you would gain over a well-balanced sequential access model.
Again, this is just an idea with a big “if” inside.
In sum: for certain tasks a higher single-core clock rate may well outperform multi-threaded low-frequency CPU solutions. In other situations multi-threading may give you the better outcome. In theory - and unfortunately it does not really work that way, this is just to illustrate the point - a 4-core/8-thread CPU at 4.4GHz should outperform an 8-core/16-thread CPU at 2.2GHz, because less multi-threading overhead is involved.
If your data, your means of data access, the jobs that get distributed AND the thread management all allow for multi-threading, more threads quite often perform slightly better, because each thread's smaller working set can stay in first-level CPU cache, giving you faster access than larger data sets in lower-level cache or plain RAM.
Before I write a book on this, I'll stop - I am not saying that single-threading is faster, don't get me wrong. I am simply trying to underline that it isn't always about having the most cores; it's about the right combination of task and power.
A lower clock rate on your main CPU will always slow down your UXP, though. And since most programs, including the ones you picked, have single-threaded GUIs, a 2.2GHz model would not do the job for me if I can go for 4.4GHz instead. This, again, is an over-simplification, just to make the point.
Marc
Technical Discussion » Cloth Simulation help
Hmm … I don't really see any sucking effect or vacuum. I see two things: a figure pressing against some material, deforming it towards the camera - and then the figure, as a second step, turning its head, tightening/stretching the material with its chin/jaw …
Marc
Technical Discussion » PC for Houdini and 5000 euros max.
Moin,
not pretending to have the perfect answer, I would make a few assumptions:
- there is a considerable trend towards leveraging the GPU for number crunching. Depending on the kind of simulation you are doing, investing in a 1080 or the like might make sense.
- RAM can only be replaced by more RAM, ideally MUCH MORE RAM. When in doubt, use even more RAM.
- writing huge simulation caches to disk (or reading them back) can suffer from mechanical-HDD bottlenecking, so a few TB of SSD would be next on my list
- more cores (Xeon) are only useful if your simulations actually leverage multi-threading. Personally (this is probably very much debatable) I would go for higher core frequencies rather than for more cores.
But, seriously, for specific requirements I would opt for using Azure or AWS resources when needed instead of investing in a single machine. I do know the feeling of having “it all at your command”, but it's very much possible that just renting horsepower is a lot cheaper and more productive - that probably depends on how your licensing works, though.
Marc
Technical Discussion » OpenGL fatal error
Moin,
try setting Houdini to use the Nvidia card in the Nvidia settings panel. It might be that it is, for whatever reason, trying to use the Intel GPU.
Marc
Technical Discussion » About xyzdist vex function.
Moin,
it might be an unhappy rounding issue with 32-bit floating-point math that only kicks in with those specific position values.
Do you get the same result when channel-linking the position of the point to that of the polygon's point position? If yes, it's a math thing and you may have to do some rounding. If no, then one of the positions is not what you think it is …
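To illustrate the rounding point - a minimal sketch in Python (numpy assumed available, as it ships with Houdini's Python; the concrete value is just an example): a position stored as a 32-bit float is, in general, not exactly the number you typed, so compare with a tolerance instead of testing for equality.

import numpy as np

p_typed = 1013.2504                     # the (64-bit) value you expect
p_stored = np.float32(p_typed)          # what a 32-bit attribute actually holds
print(p_stored == p_typed)              # False: not exactly representable
print(abs(p_stored - p_typed))          # on the order of 1e-5 at this magnitude
print(abs(p_stored - p_typed) < 1e-3)   # True: tolerance-based comparison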
Marc
Houdini Learning Materials » Recast required in vex ?
I agree with Thomas: the code snippet you posted makes it way harder to grasp what you want to do. Maybe a screenshot of your desired result, with some scribbling, would help. Or write your code in any other programming language (JS, Java, C, C++, Python, Perl, PHP, Basic, Fortran, Pascal, Logo - we'll figure it out, just let it make sense, as Thomas puts it).
*If* you mean “how can I adjust the color by three separate floating point sliders”, then you could do something like this:
// one chf() per channel, each reading its own float slider by parameter path
v@Cd.r = chf("path-to-r");
v@Cd.g = chf("path-to-g");
v@Cd.b = chf("path-to-b");
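(Assumption from memory, so double-check: if the three float parameters live on the wrangle node itself, the chf() paths can simply be the parameter names, e.g. chf("r").)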
Marc
Houdini Learning Materials » Recast required in vex ?
Moin, Christopher,
I think your question is ambiguous - you ask about “recasting a vector to a float”, but if you want to “cast” a vector with 3 components to a float with, obviously, only one component, it is unclear which component you want to extract.
One possible interpretation of your question could be “I want to get the luminance value of a RGB color that is stored as a vector” - and for that there are various solutions, depending on color space and whitepoint.
The example you gave:

> @Cd = {0,0,0}; ch("ctrl",@Cd = {0,0,0};

… doesn't work in VEX: you seem to be trying to set a parameter from VEX, which - unfortunately - Houdini does not support (it's not clear what you actually want to do, since the closing “)” bracket is missing).
If you want to extract a single channel from a vector, you can access it through .x/.y/.z. I think that is what Thomas meant: if the color is grayscale, it effectively has only one channel (the x-channel), and “casting” is simply done by accessing that single component's value, which is a float (or could be an integer). If the color has more than one channel, you need to define *how* you want to “recast”, because there is no universal way to turn a 3-component value into a 1-component value (think of complex numbers, which consist of two components).
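To make the luminance interpretation concrete - a minimal sketch, written in Python only for readability (a VEX version would use the same weights; the 0.2126/0.7152/0.0722 weights are the Rec. 709 coefficients and assume linear RGB):

def luminance(rgb):
    # Rec. 709 weighting: collapses an RGB vector into a single float
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luminance((1.0, 1.0, 1.0)))   # 1.0 for white
print(luminance((1.0, 0.0, 0.0)))   # 0.2126 for pure red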
Marc
Houdini Learning Materials » I can't use help documents
Interesting … which version are you using?
I had the same thing happen to me last night - a couple of times. What fixed it for me was to quit Houdini, make sure that all Houdini processes actually ended, then start a new session and build a very simple, fundamental scene (a geometry node, done). THEN “help” would work again.
Since I could not reproduce the lock-up situation, I did not file a bug - but if it happens to more users than just one or two, there must be something specific we can figure out …
Marc
Houdini Lounge » Indie Documentation
To add to what Jonatahin said, which was to add to what Matt said: the documentation *is* searchable (like a PDF). And it is constantly being edited and improved, which would require constantly re-downloading a PDF, if it were a PDF.
The best way to get started, if you only want to study documentation instead of getting your fingers dirty with the actual application, is to read some of the cookbook examples in the documentation or to browse through the (explained) example files.
Marc
Houdini Learning Materials » Houdini nodes reference videos ?
> so videos walking through the example files available for each node?
… might be a start, but (that's my perspective, open to discussion) maybe spiced up with whatever the respective video creator has available.
What I find important with “snippets” is to point out caveats with specific channels, if present. A trivial example:
I am still working on the VEX version of the “animation creating leg rig” and noticed that, if a solver is set to “cache simulation”, you don't get the actual *output* of every frame's execution, but only the accumulated result from simulation start to the current frame, which (when debugging) can cost you a bunch of hair …
Marc
Houdini Learning Materials » Houdini nodes reference videos ?
Well … if 10 video contributors all discuss the “Alembic” node, we may get a library of “Alembic node discussions”, but not much more.
I will jump in as soon as we can find an agreement on how to distribute the work. Whoever commits to doing a video should deliver within a specified time, else her commitment should be deleted, so that the node can be discussed by someone else.
If at least 5 cooperators “sign up” here, I also volunteer to create the database and a frontend for that.
Marc
Houdini Learning Materials » Houdini nodes reference videos ?
I volunteer for this endeavor - do we need to set up a database where video makers can sign up for a node?
Marc
Work in Progress » 3d Fractals ...
I have put this little project on my list of video-tutorials-to-do, I hope to get it done this month.
Marc
Houdini Lounge » Houdini 16
I am not sure, but I understood that they were going to upload their own recording because they noticed the internet connection glitches. So here's hoping …
Marc
Work in Progress » A Freshman's Approach to Learning Houdini - or "Houdini for Dummies" ...
Thanks, Edward - that's encouraging, since I wasn't sure my humble greenhorn stuff was worth submitting to that section. I'll take care of it pronto.
Marc