Found 1209 posts.
Search results
Technical Discussion » Intrinsic transform & Motion Blur
- symek
- 1390 posts
- Offline
I suppose you need Geo Time Samples > 1 to get packed prims blurred properly. Other than that, it seems to work as expected here.
Houdini Lounge » point cloud confusion
- symek
- 1390 posts
- Offline
alexcb
1. Why is the point cloud to search through called the 'point cloud texture'?
Because in a wider context this is what point clouds were designed for. As you probably know, a common procedure in computer graphics is to access images stored previously in files and use them as textures to drive various properties of a surface or other parameters of objects.
Images, as they are stored inside computers, are a very convenient type of data, because they are well parameterized. That is, images have just a few dimensions (two, sometimes three), which can be discretized (divided) into equal chunks (pixels/voxels) and placed linearly, that is one after another, in a computer's memory. In other words, they map very well onto the memory model of computers, which makes them an efficient companion for most image algorithms. This may sound trivial, but it's actually a rare property in the world of data structures. Thanks to it, accessing even a huge number of pixels (even more than fits in a computer's memory) is possible and quite efficient. Thus they are used a lot in CG.
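(Concretely: in such a linear layout, pixel (x, y) of an image W pixels wide lives at offset y * W + x, so any pixel is one multiply and one add away.)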
Quite often, though, you don't have the luxury of well-organized data like images. Point clouds, which are very common in 3D, are such a case. If you want to use them efficiently, you need to pre-process them and build a data structure suitable for fast access and processing afterwards. This is what the point cloud machinery in VEX is all about.
Most of the time, not only in VEX but in other applications as well, this data structure is a KD-tree [en.wikipedia.org] (or an octree in the case of the pcopenlod() VEX call), and most of your questions actually concern the technicalities of working with kd-trees in Houdini. Secondly, your questions come from the fact that the Houdini documentation regarding VEX doesn't always distinguish clearly between the two domains for point clouds: Houdini and Mantra.
So, the short answer to your questions: point clouds are often referred to as textures by analogy, since they are in fact data structures created to be used similarly to raster textures (images), but on different data types, like point clouds or any scattered measurements of a 3D domain.
2. Why are the various bits of data you look up in the ‘point cloud texture' called ‘channels'? Why are they not called attributes? When the pcopen VOP asks for what the bit of data is called in the point cloud texture that stores each point's position, why is it labelled and referred to as the ‘position channel'? Suddenly, with point clouds, attributes become channels?
This comes from the previously mentioned analogy, I would say. Channels are the attributes of textures. Also, you should know that the origin of point clouds in Houdini is the rendering context, in which channels/textures are first-class citizens. In SOPs land this may sound a little less obvious.
3. Why are the returned points called the 'point cloud handle'?
A handle is just an integer telling Houdini which point cloud, among the many possibly created previously, you are referring to. Not sure how much you know about programming, but VEX is not (was not?) an object-oriented language back then, and this idiom is a popular workaround for working with persistent objects in memory in languages without a notion of objects. Especially in cases where multiple threads are going to access a single object they don't own, you provide a handle to an object which is otherwise private. To put it shortly: the whole point of creating point clouds is to amortize the cost of searching through your data set. A KD-tree is a temporary data structure which is costly to create but super fast to access, and it can be reused endlessly as long as the underlying data stays constant. When you use the Point Cloud Open VOP, you create a point cloud texture object in memory, and then all subsequent functions refer to that object via its handle.
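To make that concrete, a minimal VEX sketch of the handle idiom (written here as a Point Wrangle; the search radius and point count are arbitrary):
// Build (or reuse a cached) kd-tree over input 0; all we get back is an integer handle.
int handle = pcopen(0, "P", @P, 1.0, 8);
// Every subsequent query refers to the same in-memory structure via the handle.
vector avg = pcfilter(handle, "P"); // filtered average position of the found points
int found = pcnumfound(handle); // how many points the search returned
pcclose(handle); // release the handle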
Pc filter
1. What ‘channels’ can be queried? In the attached hipfile, pc filter successfully queries P and successfully queries the created attribute, ‘pointnum’. As I am only returning 1 point from the pcopen search, no weighting or averaging takes place so I get back P or pointnum with the same value from the closest single point returned. That's fine. Why doesn't it return anything for the other attribute, ‘myrand’? Does pcfilter only return channels/attributes of specific names like P and pointnum? If that's the case, are these the only 2, are there more? Are they documented?
Nope. Any attribute of the points you created your point cloud from can be queried. I'm not sure why your scene doesn't work, but it seems to behave correctly once you specify a file in pcopen with Opinput2 instead of the expression you use.
One thing about point clouds is that they are not always deterministic. They were designed to allow multi-threaded, very fast access and interpolation of heavy data sets, but not to be fully predictable, so for example the order of points returned by pciterate may vary from execution to execution.
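A minimal sketch of both points, assuming the source points carry the float attribute myrand from the scene above (Point Wrangle form):
int handle = pcopen(0, "P", @P, 1.0, 1);
// pcfilter can query any channel present on the source points, not just P:
vector pos = pcfilter(handle, "P");
float rnd = pcfilter(handle, "myrand");
// pciterate visits the found points one by one; don't rely on their order:
while (pciterate(handle)) {
    float r;
    pcimport(handle, "myrand", r);
}
pcclose(handle);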
pcunshaded/pc export
Now this one I'm totally in the dark about. The helpcard states that this is somehow different to pc iterate in that it ‘only iterates over each point once’.
1. How is this different? It was my understanding that pc iterate iterated over all points one by one, hence once per point?
pcunshaded is solely for the rendering context, in which Mantra creates points for shading purposes. Unlike with pciterate, inside a pcunshaded loop you will most probably write data to the point cloud, not read from it. Most often this data is shading output, like the color of a surface or light energy (photons in Mantra are implemented with the point cloud infrastructure).
I'm not sure, but it's quite probable that Mantra doesn't know how many points it will iterate over when it enters a pcunshaded loop.
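A minimal sketch of that write pattern inside a Mantra shader (the file name and the constant color are placeholders for a real cache and a real, expensive computation):
int handle = pcopen("shading.pc", "P", P, 1.0, 100);
// pcunshaded only yields points whose "Cd" channel hasn't been written yet,
// so each point gets shaded exactly once, no matter how many pixels touch it:
while (pcunshaded(handle, "Cd")) {
    vector clr = {1, 0, 0}; // stand-in for an expensive shading computation
    pcexport(handle, "Cd", clr); // write ("export") the result back to the cloud
}
pcclose(handle);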
2. Again I am confused by the terminology. The helpcard says you should then use pc export to store ‘shaded information’. What does it mean by ‘shaded’ and ‘unshaded’? I have absolutely no idea!
See above, this applies to Mantra shaders, not SOPs.
It seems some parts of the Point Clouds docs are outdated, though.
Houdini Indie and Apprentice » Composite frames into a grid
- symek
- 1390 posts
- Offline
phtj
I have a short animation. I would like to automatically combine the frames into a single image, where the frames are arranged in a grid (or matrix).
The Mosaic COP does that. Also, in case you'd like to build your own version, a similar task was worked through here:
http://forums.odforce.net/topic/24636-show-all-layers-comp-node/ [forums.odforce.net]
Technical Discussion » What is Houdini's equivalent to Maya's objExists()
- symek
- 1390 posts
- Offline
Since a Houdini scene has the structure of (nested) directories, you have to rely on a recursive search to find an object in any of the subdirectories (subnets in Houdini lingo), similar to searching for files on disk.
So instead of the usual:
hou.node("/obj").glob("box*") -> /obj/box1, /obj/box2
you would rather use:
hou.node("/").recursiveGlob("box*", filter=hou.nodeTypeFilter.NoFilter) -> /obj/box1, /obj/geo/box1, /shop/box_material, etc.
More help:
http://www.sidefx.com/docs/houdini15.0/hom/hou/Node [sidefx.com]
hope this helps,
skk.
Houdini Lounge » H15 daily builds?
- symek
- 1390 posts
- Offline
I hardly see any advantage of incremental updates besides saving time. And time, like any value, is relative to other factors.
In my work such a value is sandboxing. That's why I referred to working on Linux. In Linux-based environments, sandboxing is A) crucial, B) easy to achieve. It is a straightforward, coherent, logical and reliable way of controlling chaos. If I open a scene created two years ago in my environment, it should look exactly the same. So far this has held for every professional application I am aware of; otherwise it couldn't be properly deployed and I would avoid it. Maya, Nuke, Houdini, a whole bunch of renderers and utility programs, plus compilers, libraries and interpreters, are all read-only after installation and set up with reproducible, scriptable steps. Thanks to that I'm able to deploy them on any computer or site without third-party intervention. It's a matter of security and reliability for any cluster, web server, file server or CG pipeline I am aware of.
Incremental updates would break that very principle of pipeline reliability. If it works for Adobe, it is only because the majority of their user base are freelancers who don't know they're doing it wrong, nor would they be able to do it right: they would have to invest a lot of money and time in Windows application deployment architecture (Windows Server, domains, etc.). I constantly hear how they are scared to update their applications, how they wait for the first security patches, how they are stuck with an old or buggy version of an application for an entire show, or keep alternative hard drives with many variants of the system and/or apps. Madness. For the rest of the bunch (studios), Adobe-like deployment is plain stupid and creates a lot of problems with virtually zero advantages.
That said, creating a separate install path for Indie users, to help them deal with technical constraints, is yet another story and a different sort of decision. Not a very technical one, so to speak.
Technical Discussion » Jetson Embedded Computing
- symek
- 1390 posts
- Offline
Isn't it like an ARM on steroids? With the power efficiency of mobile devices in mind and only 192 CUDA cores, it hardly competes with any computing platform out there. It doesn't appear to be the best match for rendering / simulation.
How many Tegras would one have to stack to get the gflops of a single PC server with 5 Titans (2048 cores each) on board?
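Naively counting cores alone, and ignoring clocks, architecture and memory bandwidth entirely: 5 × 2048 / 192 ≈ 53, so on the order of fifty Tegras just to match the core count.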
Houdini Lounge » Why SESI doing this?
- symek
- 1390 posts
- Offline
It's not a question of whether it's implemented in nodes or in code. We would all appreciate nodes for their obvious advantages, provided minimal assumptions about usability were met. These tools are simply too slow to be useful for production assets.
I don't think choosing nodes over C++ was a conscious decision, Alexy. It looks more like an intern's exploration or a developer's toy project to check new possibilities. It shouldn't have been advertised as a new feature of H14, but rather as a prototype.
The thing about interactive artist tools is that, afaik, they are particularly hard to develop without constant production insight, so shaping them with nodes must be very handy, I guess…
Technical Discussion » Pixel Based Shading?
- symek
- 1390 posts
- Offline
Thanks for the thorough answer and the encouragement. Yes, demoedgedetect gave me some tough moments. I was trying to think about a CVEX filter without extending VEX, but for now that leaves me with naked arrays and handmade indices.
I'm planning to repack the source as a small array per pixel, extended by a user-specified width, as it is now for the built-in filters (-w x).
But I agree that to make it friendlier, some precomputed object or function would be handy. I'm not sure whether I (an HDK end user) can build such a setup, but one could try to make something similar to UT_PointGrid from the full source (all samples) available in the CVEX filter (as a third party, I think I could try to send an all-samples array and write a VEX function precomputing a static object, similar to pcopen).
Then I could easily refer to an exact pixel's or its neighbours' samples, query proximity, etc. What would be even cooler is auto-precomputing additional channels like an integral image of the grid, gradients, etc. The cost shouldn't be high, and most advanced filters would benefit.
Ideally this would be a function similar to texture(), but referring to the underlying VRAY_Imager with all rasters, and with special indexing in destination space. It could return either an array of samples or the computed result of an operator like sum, min, max, std-dev, average, etc.
Not sure if this is worth the effort.
Technical Discussion » Pixel Based Shading?
- symek
- 1390 posts
- Offline
ndickson
Mantra now allows custom pixel filters in the HDK.
Funny enough, I've just thrown this new VRAY_PixelFilter.h into my blender together with CVEX_Context.h… I'm still struggling with details, like how to split the data (give each pixel only its own samples?), but the overall impression is great. It makes many interesting effects possible.
Houdini Lounge » Procedural, Parametric. Differences?
- symek
- 1390 posts
- Offline
The procedural guy says: I design a procedure which produces the result I want. Once the procedure is created, I can repeat its run effortlessly, or change some part of it to get a different result easily.
The parametric guy says: I create using parametrized stuff; that is, I use measurable characteristics (parameters) to describe something more complicated than the parameters themselves (like just a few numbers describing an endlessly smooth NURBS surface). This way I use less space and time to create more advanced stuff.
Procedural creation can be parametrized or not, and vice versa.
Technical Discussion » Distributed rendering Help needed
- symek
- 1390 posts
- Offline
SreckoM
Few questions:
1. Is it possible to do that (using the -H option) with an Escape license on one machine, or do I need to have a licensed Houdini on every machine?
Afaik you just need to install Houdini remotely and have more than one Mantra license.
2. Is it possible to do DR with HQueue?
I'm currently having a hard time with HQueue, having little experience with it, but it looks like it supports distributed rendering. What else would the "Min/Max Clients per Frame" option on the Mantra Options tab of the HQueue ROP mean…
SI Users » vop sop: getting the average of all points
- symek
- 1390 posts
- Offline
pusat
Or easier to do it in VEX:
vector computeCenter ( )
{
(…)
}
These will not be very fast though.
I would have said this code has a good chance of comparable performance with a C++ node, but apparently not (it's ~3.5x slower than AttribPromote in average mode, which is not bad, as it's still below a second for 5 million points, but still…).
Pure speculation here, but this is probably due to the point()/import() functions, which force atomic access to the attribute's array and per-element computation (bye bye SIMD), unlike the HDK, which these days has very cache- and SIMD-friendly batch access to attributes. It looks like LLVM/VEX doesn't do its best here.
I think avg() on a big array of x, y, z's might be way faster, but the cost of creating such an array makes no sense. VEX functions like min(string attr) / max(string attr) / avg(string attr) would be helpful.
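For reference, a minimal sketch of the access pattern being measured: a Detail Wrangle looping over every point with point(), which is exactly the per-element fetch speculated about above:
// Detail Wrangle: average all point positions the per-element way.
vector sum = {0, 0, 0};
int n = npoints(0);
for (int i = 0; i < n; i++) {
    vector p = point(0, "P", i); // one attribute fetch per point, no batching
    sum += p;
}
v@avgP = sum / n;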
SI Users » vop sop: getting the average of all points
- symek
- 1390 posts
- Offline
In VOPs, use the Bounding Box VOP to compute the centroid (centroid = min + (max - min)/2). Note that neither that nor the centroid() expression gives the average of the positions; they both return the center of the bounding box, which is not the same thing.
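In wrangle form, a minimal sketch of the same bounding-box center using getbbox():
// Detail Wrangle: center of the bounding box (what centroid() returns),
// which is not the average of the point positions.
vector bbmin, bbmax;
getbbox(0, bbmin, bbmax);
v@bbcenter = bbmin + (bbmax - bbmin) * 0.5;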
edit: there is also the pointavg() [sidefx.com] expression.
Edited by - April 27, 2014 08:15:13
Houdini Learning Materials » Redering tutorials
- symek
- 1390 posts
- Offline
Korny Klown2
I can now officially say that Mantra is a pretty bad renderer. Very sad.
Your horribly arrogant attitude really doesn't help in these discussions, and it is rather astonishing considering the fundamental lack of knowledge you display, not only of Houdini's concepts but of the general concepts of current CGI too.
Answering in such a thread is an ultimate waste of time, methinks, but for the sake of other people whose time you might waste: the main difference between those scenes is that you force the Mantra cornell box to use specular rays, while mental ray uses only diffuse ones. Now, if you were even close to a position that gave you the right to render such harsh statements about any rendering engine, you would know what difference this makes, especially given the path-tracing nature of Mantra.
Not to mention that using final gather in a comparison against brute-force path tracing is pretty much meaningless, but how would you know that, right?
The thing is, this setup shows precisely why Mantra is light years ahead of your favorite render engine; you simply can't see it, as an expert in openmindn(l)ess perhaps. Any decent path tracer out there can render the cornell box scene much faster and at better quality than mental ray, especially if you help it with a final-gather-like optimization, which can be done in Houdini with a single click, but you don't know that either, right?
Ask your Maya friends why they spend tens of thousands of dollars on Arnold before making any further assumptions.
Then compare the render times of equal-quality images from mental ray, Mantra and Arnold.
Edited by - April 12, 2014 07:49:34
Houdini Learning Materials » AttribCreate basic help
- symek
- 1390 posts
- Offline
MartybNz
Lol - Nuke may appear extremely powerful but it too has significant issues - talk to any developer for it and it's got big issues!
Not to mention a horde of compositors who truly hate it. The main reason for Nuke's vast adoption is its engine. You will hardly find another compositor capable of working on any size and number of layers, in 32-bit, with any number of channels…
In terms of GUI, Shake was nicer