By importing the cloth halves as two separate objects I was able to get all but one of the seam constraints working. It doesn't matter which constraint is bypassed; there just needs to be one pair of points left unconstrained.
The workaround would be to run the sim for a few frames on the whole cloth, then switch to a sim on the constrained cloth with one pair of points unconstrained.
Technical Discussion » Limited number of cloth constraints??
- MichaelC
- 344 posts
- Offline
Houdini Lounge » BUG: vop sop ignore cache limit preference
- MichaelC
- 344 posts
- Offline
You're probably right; I didn't really investigate the problem when it happened. Houdini crashed with a memory error, and I just opened a new scene and rebuilt it this way and everything worked fine. I thought you might like to try it and see if it made a difference.
Houdini Lounge » BUG: vop sop ignore cache limit preference
- MichaelC
- 344 posts
- Offline
Oddly, I ran into this same issue earlier tonight. Vista 32, Core Duo, Quadro 2500. I had an animation stored in picnc format, 100 frames, 480 pixels square, and was transferring the image to a grid with 200x200 divisions using a VOP SOP.
As a workaround, I am now using a pic() expression in the color channels of a Point SOP. This works fine and my grid is even more dense, 500x500 divisions.
If you require UVs to map the image to the geo properly, you can use the Vertex Split SOP to split the mesh on its UV seams and use a Point SOP (UV values in the position channels) to morph the mesh into its UV space. Apply the Point SOP with an appropriate pic() expression, then either transfer the attributes back to the mesh or morph the unwrapped mesh back to its original shape. An example of the workaround is attached.
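For reference, here is a minimal sketch of the kind of expression I mean, one per color channel of the Point SOP. The COP path is just a placeholder for wherever your image actually lives, and $MAPU/$MAPV assume the mesh has a uv attribute:
pic("/img/comp1/OUT", $MAPU, $MAPV, D_CR)
pic("/img/comp1/OUT", $MAPU, $MAPV, D_CG)
pic("/img/comp1/OUT", $MAPU, $MAPV, D_CB)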
Technical Discussion » Returning an index in a list
- MichaelC
- 344 posts
- Offline
I really don't know the answer to this, but as a hack solution you could branch off in your network, blast away everything except the group, and always grab the third point from what remains. If you need to reference the original point number, just add it as an attribute to the group before you blast the points outside the group.
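An untested sketch of the idea: say an AttribCreate SOP stamps an id attribute with the value $PT before a Blast SOP (here called blast1, a made-up name) deletes everything outside the group. The original point number of the third remaining point would then be:
point("../blast1", 2, "id", 0)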
Houdini Lounge » Mouse turns red?
- MichaelC
- 344 posts
- Offline
This sounds like a possible bug. You should provide information about your hardware, OS, driver version, and which version of Houdini you are using, and maybe one of the devs can help.
Houdini Lounge » Recommend 3rd Party Renderer
- MichaelC
- 344 posts
- Offline
The most common third-party renderer by far is PRMan. I've been told it's now possible to integrate just about any third-party renderer using SOHO, Houdini's Python binding for rendering, but I haven't looked into it myself. Mental Ray is supported by Houdini.
The question, though, is why not use Mantra? Mantra is as capable as, if not more capable than, any third-party renderer you could want to use, and with its tight integration with the rest of Houdini's tools you've got a pretty compelling reason to choose it over a third-party renderer.
Technical Discussion » two quesetions about vex
- MichaelC
- 344 posts
- Offline
He means something like
vector(rand($ID+3), rand($ID+12), rand($ID+234))
You can use any expression to generate a seed for the rand function.
It doesn't matter what values you add to $ID so long as they are different for each channel. Using a scalar in combination with $ID should give you vectors that are different for every particle and every channel.
Edit: Take a look at the pop network in this file.
Technical Discussion » TOOO BAD!!
- MichaelC
- 344 posts
- Offline
Yeah, but old habits die hard. I feel like Houdini could support every file format known to mankind and people would still insist on using Maya for rigging and animation. You might be able to win previz and layout with a Collada exporter, but those animators and riggers are a tough nut to crack. They don't know a superior product when they see one :twisted:
Technical Discussion » TOOO BAD!!
- MichaelC
- 344 posts
- Offline
If the exporter forces the user to be explicit about what is being exported it makes things much easier, but even then there's a lot that can go wrong, so you end up writing a tool that forces a user to be explicit AND format their scene a certain way. That's fine if you are in a studio where there is a prescribed workflow, but it's going to fall over quickly and often out in the wild.
I wrote an exporter for Houdini that exports meshes, capture rig hierarchies, vertex animation, bone transforms, particles and simple material information to a custom XML format for a personal game project. It didn't take long at all; however, it has pretty limited capabilities and it forces a certain workflow. It forces users to remember to do things like not delete their capture attributes (which Houdini does by default), forces users to use a custom Material HDA, forces users to use takes… etc.
Collada is great; it supports a lot of information. However, there are fundamental features of Houdini I'm not so sure Collada can represent in the current spec. MetaCapture, for instance: assuming you could write metacapture data to Collada, is there any piece of third-party software available that can parse that data in a meaningful way?
I'm not down on the idea of a Collada exporter here. I just know it's going to require a considerable amount of effort, so I don't think anyone should fault SideFX for the lack of Collada export support at this time. In fact, I'd be willing to try to repurpose my little game exporter for Collada. My only concern is that SideFX might not be too happy about someone making such an exporter available fully functional with Apprentice.
Technical Discussion » TOOO BAD!!
- MichaelC
- 344 posts
- Offline
I found it's not difficult at all to write an exporter for Houdini 9 using Python. The problem I find is that Houdini is such a departure from other applications in the way it represents scene data, and in what a scene may contain, that determining what data is important to the exporter can be incredibly difficult. It's not as simple as just walking the network and writing out a mesh, bone information, some lights and a camera. I don't envy the person who ends up trying to write a Collada exporter at SideFX. It's going to be an absolutely infuriating task, and it's a project that will probably never end.
Technical Discussion » constrian object to viewport
- MichaelC
- 344 posts
- Offline
What we were trying to do is put labels (Font SOP) on locators we are importing from Maya. We thought it would be a nice feature to have the locator's label always face the screen regardless of which camera the user was viewing the scene through (even no camera). We don't want to end up with a complex solution for this, so we'll probably just orient the label to the locator's Z axis.
Technical Discussion » Python in a String Parameter field
- MichaelC
- 344 posts
- Offline
OK, I have a solution, but I would still like to know the answer to the question above.
Right now we run the Python function as a callback on the filename parameter and have it set a hidden string parameter that we reference.
Technical Discussion » Python in a String Parameter field
- MichaelC
- 344 posts
- Offline
What is the syntax for executing a Python function stored in the Python hdaModule of a digital asset within a string parameter field of that asset?
Long story short, we are trying to strip the path and extension off a filename parameter and are using a Python function embedded in the HDA to do it. If anyone has other ideas of how to go about doing this, they would be welcome as well.
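For the record, the closest thing we've found to an inline syntax is bridging from a backtick expression into Python. Completely untested sketch: this assumes your build has the pythonexprs() expression function, and stripName() is just a stand-in for whatever function lives in the hdaModule:
`pythonexprs("hou.pwd().hdaModule().stripName(hou.pwd().parm('filename').eval())")`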
Houdini Lounge » Outputing Object Normals
- MichaelC
- 344 posts
- Offline
Perhaps you need normals in a different space. If so you'll need to determine what space the normals should be in and put together a VEX shader that renders the proper normals.
Technical Discussion » normal map question
- MichaelC
- 344 posts
- Offline
I don't think a normal map is particularly more efficient than a bump map. I'll try to be a bit clearer…
With a normal map you can have normal vectors in the map that point in any arbitrary direction. So say you have a single polygon; that single polygon only has one true surface normal. With a normal map you could potentially be replacing that single normal, as far as the renderer is concerned, with thousands of unique normals, possibly several normals for each pixel in the final image, all pointing in slightly varying directions. It can give the illusion of a very complex, high resolution, smooth surface (provided the normal map is of a sufficient resolution) on a single polygon. However, since the geometry is not actually being modified, you will see sharp polygon edges around the silhouette of the model.
With a bump map, on the other hand, the existing single surface normal is multiplied by the value stored in the bump map at each point on the surface. The direction of the normal used in the lighting calculations stays the same as the original surface normal; so you have a polygon with one normal whose direction never varies, and only the magnitude of that normal varies per pixel. This gives the surface a simple embossed look. Again, with bump maps the surface is not actually being modified, so you will see sharp polygon edges on the silhouette of the model.
Whether a normal map is more efficient than a bump map depends on how the normal map is generated and used. In some cases you may just swap the surface normals for those in the normal map. In other cases the shader may have to do a calculation using tangents and whatnot stored on the model. At any rate, with hardware these days it's not something to get concerned about.
You asked about tangents earlier… For each normal on your surface, you also have a binormal and a tangent. The binormal is a vector perpendicular to the normal, and the tangent is a vector perpendicular to both the normal and binormal. They can be envisioned as a little coordinate system for any point on the surface, similar to the X, Y, Z coordinates you are used to seeing in 3D applications. The tangent space can be envisioned as a plane perpendicular to the surface normal.
You will see talk about camera space normal maps, object space normal maps, and tangent space normal maps… They all have their uses but for a myriad of reasons, tangent space normal maps are the most commonly used in games. Without going into a lot of detail the main reasons are that tangent space maps can be lit from any direction, they can be reused on opposite sides of a model, and they can be compressed to two channels.
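On that last point, the two-channel compression works because a tangent space normal is unit length and points away from the surface, so you only need to store X and Y and can rebuild Z in the shader, roughly:
nz = sqrt(1 - nx*nx - ny*ny)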
Technical Discussion » How to control a source to birth exactly one particle?
- MichaelC
- 344 posts
- Offline
You say you have the impact data; did you try an expression in the Activation field? If collided 1, else 0, something along those lines…
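A rough sketch of the kind of expression I mean, completely untested; the node path and the numhit attribute are placeholders for wherever your impact data actually lives. With the birth rate set to 1, something like this in the Activation field:
if(point("../impact_points", 0, "numhit", 0) > 0, 1, 0)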
Technical Discussion » normal map question
- MichaelC
- 344 posts
- Offline
I think maybe you need to understand the difference between a normal map, a bump map and a displacement map.
All surfaces have normals. A normal is a vector that tells which direction a surface is facing, and it is used in the lighting calculation to determine how light is reflected off the surface.
With a normal map, what you are doing is replacing the actual surface normals with normals that are stored in a 2D image map. When these normals are used in the lighting calculation it can make it appear as though there is more detail on the surface than there is.
A bump map, on the other hand, is used to modify the existing surface normals, pushing them in or out in the lighting calculation. It can create the appearance of raised and recessed areas on the surface, but in many cases the result is not as effective as a normal map, because the true surface normals are being used.
A displacement map is similar to a bump map, except that it actually displaces the surface along its existing normals. If you have a micropolygon renderer, such as the one in Houdini, this will be the most effective way to create fine surface detail, and it's the method most often used in film, though sometimes a combination of methods will be used in the same asset.
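In rough shorthand (mine, not any particular renderer's notation), the displacement part is just moving each surface point along its normal by the map value:
P' = P + height(u, v) * N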
Normal mapping is popular in game development because it's a very effective and, in terms of resources, inexpensive way to make a very simple model that can be drawn quickly appear to have much more surface detail than is actually there. The method is not used in film to the extent it is in games, and in many cases it's used to a different end in film than it is in games. In film, for instance, they may use a normal map to create a fake fur or feathered surface that sits underneath the high-res fur and feathers of an animal, or on characters in the distant background. They do this because it's a cheap way to fill these surfaces with fur and feathers without the expensive overhead of actually generating and rendering fur and feathers.
Technical Discussion » Lowest cost houdini rig
- MichaelC
- 344 posts
- Offline
I don't think RBDs are multithreaded in Houdini 9, but I'm sure someone will correct me if I am wrong. There are many parts of Houdini that can take advantage of multiple processors though, so it's definitely worthwhile to have a dual core machine. If nothing else, you can solve an RBD simulation on one core and still have use of your machine to do something else.
The Core 2, I think, offers the most bang for the buck. I read a few weeks ago that Intel was going to slash prices on them this month, and it looks like they did. I'm seeing Core 2 Quads for as low as 260 dollars. It's very affordable; however, you'll need to really load up the machine with memory to make use of it. I'd recommend at least going with a dual core so you can still make use of your machine while it solves a sim in the background. So go with a Core 2, Duo or Quad is up to you, at 2.33 GHz or higher.
So you'll spend about 350 dollars or so on a decent motherboard and processor, up to about 300-400 for a really nice video card, another hundred on a hard drive, a couple hundred or so for some dual channel memory, and 50 dollars or so on a DVD drive. Throw it in a 100 dollar case, install Linux, and you've got a very spiffy workstation for far less than 5,000 dollars.
Technical Discussion » Lowest cost houdini rig
- MichaelC
- 344 posts
- Offline
If your main concern is simulation, you are going to want to blow most of your budget on your CPU: 2-4 gigs of RAM, the fastest CPU possible, and a decent Nvidia card. It doesn't really matter if it's a Quadro or a gaming card. I run Houdini on 4 machines, two with higher end gaming cards (7900 GTS, 8600 GTS) and two with Quadros (4500, 2500 mobile), and honestly there's not a huge difference that I can notice, save when interacting with dense geometry in the viewport with a Sculpt SOP or something of that nature. The 8600 I can tell you is definitely the slowest, the 4500 is the fastest (not by much), and the 7900 and 2500 seem about the same.
You can build something cheap that will run Houdini well.
Houdini Lounge » Houdini vs Maya
- MichaelC
- 344 posts
- Offline
Actually his two cents is worth quite a bit more these days with the Canadian exchange rate.
But yeah, I agree: convincing your boss to pick up maybe one Master license and one or two Escape licenses plus a bunch of Mantra tokens will probably be the way to go. If you guys are going to Siggraph, convincing them shouldn't be too hard. Just check out the booth presentations, the free experts panel on Monday at 1 PM, or the user group on Tuesday night. If they go back to the old User Group Meeting format with presentations of recent work, you'll surely see some pretty clever stuff that'll have your boss scratching his head and wondering if it's even possible to do something like that in Maya.
Sorry, it's not really an answer to your question, but in my experience no bullet point list is going to convince your boss to shell out money on new software. You just have to show them. Maybe what you could do is take a recent project, or part of one, and rework it in Apprentice and present it as a case study to your boss. Show them how this tool could've improved that project or saved them money. Think down the road; show them how you can easily build reusable tools without having to maintain thousands of lines of C++ or MEL.
The biggest complaint is usually the cost of training. If you could build a tool that accomplishes something very complex, and show your boss that you can hand this tool off to some junior artist with Escape, and within minutes have them generating this very complex effect with just a bit of instruction, that might help a bit as well.