>Under your Collision POP, under Hints, make sure you have specified that it is translating geometry.
Do this first!
>Then, depending on how fast it's moving, try specifying an oversampling of more than 1 (e.g. 4) on the actual POP network.
This changes the number of times per frame your simulation is evaluated. If your collision geometry is moving quickly, particles may pass right through it between frames, and those in-between positions never register a collision because the simulation isn't being evaluated there. Raising your oversampling to 4 or higher will give you more accurate collision results.
ALSO…. If you can, triangulate your collision geometry. When all else fails, triangulating your surface can certainly help. You can triangulate a surface many ways: one is procedurally as you're building it; another is to use a Convert SOP, convert your mesh to polygons, and turn "Triangles" on under Connectivity.
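The tunnelling problem the oversampling advice addresses can be shown with a little arithmetic. This is a generic sketch, nothing Houdini-specific (the function and numbers are made up for illustration): a fast-moving point never lands inside a thin collider when sampled once per frame, but sub-frame samples catch it.

```python
# Toy illustration of why oversampling helps collisions: count how
# many simulation samples actually land inside a thin collider slab.
def samples_inside(x0, v, slab, frames, oversample):
    """Count samples whose position lies inside the collider slab.

    x0: start position, v: speed in units per frame,
    slab: (lo, hi) extents of the collider along the axis of motion.
    """
    lo, hi = slab
    dt = 1.0 / oversample              # fraction of a frame per sub-step
    n = frames * oversample
    return sum(1 for k in range(1, n + 1) if lo <= x0 + v * k * dt <= hi)

# A point moving 2 units/frame toward a wall 0.1 units thick at x = 1.0:
coarse = samples_inside(0.0, 2.0, (1.0, 1.1), frames=2, oversample=1)
fine = samples_inside(0.0, 2.0, (1.0, 1.1), frames=2, oversample=4)
```

At one sample per frame the point is seen at x = 2 and x = 4 and the wall is never hit; at four samples per frame one sample lands exactly inside the slab.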
Found 132 posts.
Technical Discussion » What wrong with Collision POP ?
- the_squid
- 132 posts
- Offline
Technical Discussion » Scripting Q
- the_squid
I have a file a friend sent me to debug. The first thing I noticed is that he has about 50 POP networks, and each of them has an incorrect reference in the Source POP's impulserate parameter.
Since his popnet1 needs to reference an object called…/sort1, I figured I should write a little script to go in there and fix the bad paths for him.
So far I have this:
for i = 1 to 21
    chrmkey -t 0 /obj/geo1/break_shit/popnet${i}/source${i}/impulserate
    chkey -t 0 -v 0 -m 0 -A 0 -F 'nprims("/obj/geo1/break_shit/sort${i}")/1' /obj/geo1/break_shit/popnet${i}/source1/impulserate
end
================
This works great except for one thing… it puts this into the parameter:
nprims("/obj/geo1/break_shit/sort${i}")/1
================
What I want it to do is, for each ${i}, put the value in its place, like so:
nprims("/obj/geo1/break_shit/sort1")/1 (where the 1 in sort1 is ${i} evaluated on the first pass).
================
Obviously it's the single-quoted string passed to chkey that's throwing this off; the single quotes keep ${i} from being expanded. But how do I get around this?
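A likely culprit (hscript follows csh-style quoting, where single quotes suppress variable expansion): everything between the single quotes is handed to chkey literally, so ${i} never expands. A common workaround is to close the single quotes just around the variable and let the adjacent quoted pieces concatenate. The same behavior can be demonstrated in a plain POSIX shell; this is a sketch of the quoting trick, not something tested against Houdini itself:

```shell
i=1
# Single quotes suppress ${i}, so the literal text survives:
literal='nprims("/obj/geo1/break_shit/sort${i}")/1'
# Closing the single quotes just around ${i} lets it expand;
# the adjacent quoted pieces concatenate back into one word:
expanded='nprims("/obj/geo1/break_shit/sort'${i}'")/1'
echo "$literal"    # nprims("/obj/geo1/break_shit/sort${i}")/1
echo "$expanded"   # nprims("/obj/geo1/break_shit/sort1")/1
```

Applied to the script above, that would mean writing the -F argument as 'nprims("/obj/geo1/break_shit/sort'${i}'")/1' (assuming hscript concatenates adjacent quoted strings the way csh does).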
Houdini Lounge » Dynamic Operators
- the_squid
I have asked SESI… but they're very tight-lipped on the subject of their new dynamic operators (DOPs). We Houdini artists are all eager to know what these new tools are about and how they'll behave with the rest of Houdini. No amount of me begging SESI has yielded much information on this new toolset… but I have been asking around.
…And I found some answers.
I'm not going to give much away, but from what I've heard Houdini's new RBD's (which will be one of the solvers available in the new toolset) kick some major butt. Infact, Maya artists are already admitting their current tools for rbd's dont come anywhere close to what the Houdini guys have up their sleeves. I haven't heard much about cloth, so I assume that's still being worked on. A fluid solver is also being worked on to ship with later releases of the DOP toolset down the line.
A lot of what I just said is already known… except the whole part about RBD's kicking major butt (which we assumed, but hadnt been confirmed.) Now more than ever I want to know more… for now I'm satisfied knowing the results are already speaking for themselves for those lucky enough to test these new tools.
The rest of us wait in anticipation…
Technical Discussion » Motion Blur
- the_squid
It depends on how your bug is set up. Velocity motion blur will only work if your bug has velocity. Is he being copied onto a point from a particle system, giving him velocity? If the wings are animated at the SOP level (i.e. in a Transform or Copy SOP within the object), they should render with motion blur if you turn on "deformation motion blur."
If you want both velocity motion blur AND deformation motion blur, I think you may have to separate your objects out.
You could also use a Trail SOP to calculate your entire bug's velocity (including the wings) and motion blur it based off velocity, after you've baked velocity into all of your bug's points using the Trail SOP.
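The Trail SOP's velocity computation amounts to a backward difference of point positions scaled by the frame rate. A rough sketch of the idea in generic Python (the function name is illustrative, not Houdini's API):

```python
# Sketch of velocity-from-positions, conceptually what the Trail
# SOP's "Compute Velocity" mode does: a backward difference of a
# point's position over one frame, scaled up to units per second.
def compute_velocity(p_prev, p_curr, fps=24.0):
    """Velocity in units/second from two consecutive frame positions."""
    return tuple((c - p) * fps for p, c in zip(p_prev, p_curr))

# A point that moved 0.1 units in X over one frame at 24 fps
# is moving at 2.4 units/second.
v = compute_velocity((0.0, 1.0, 0.0), (0.1, 1.0, 0.0))
```

The renderer then smears each point along that baked velocity vector over the shutter time, which is why wing deformation shows up in the blur once the Trail SOP has seen it.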
Technical Discussion » point clouds... not i3d
- the_squid
So basically, i3D is great if you have the time to write your own shaders for it. In some cases you can get around its shortcomings by using a post approach to add motion blur, or you can use it to visualize density data or vector fields you want to force simulations through. This is all well and good, but after all these years of development you'd think we'd have a better (FASTER!!!) way to achieve volumetric effects in Houdini WITHOUT having to write custom shaders. I'm a little upset because we just did a Houdini presentation here, and they showed "volumetric smoke" that they'd rendered with Houdini using i3d and, let's just put it this way: apparently I'm not the only guy who doesn't get good results out of i3d.
I don't like plugins. I don't like out-of-the-box solutions and Fisher-Price software. But I do like having a good starting point, and i3D just doesn't deliver in a lot of situations.
Also, I asked the guys doing the demo to show point-cloud rendering techniques for volumetric smoke. They showed up with a render of a Buddha statue using sub-surface scattering. Not exactly what I was looking for. Something tells me I'd better start learning how to use jig….
:cry:
Houdini Lounge » Learning Houdini...No, really... learning Houdini
- the_squid
The online material helps me a lot actually, I use the video tutorials all the time.
IMHO The best way to learn Houdini is to make friends with a Houdini artist :wink:
The forums can be helpful or they can get you nowhere, just depends.
If you like pain you could work at DD :twisted: and learn that way. Or Sony. Or R&H. (DD has a great Houdini community.)
In my experience it's always been best to learn Houdini with a friend. It cuts down on frustration, and you're able to learn twice as fast by sharing what you've done with the other artist. A community is helpful.
No matter what, though, it takes a lot of patience. You're not going to learn it in a week, so don't even try. The most important thing you can do is try to get your head around the way it thinks. Once you understand the basic logic you can start to figure things out on your own.
I use the sop-specific help all the time. If you don't know what that is, just open the params on any op (press p or switch to parameter view) and click on the question mark to bring up help for that op. BTW, when I say op I'm referring to anything with *op in the name: sops, cops, rops, etc.
There are a lot of ways to learn Houdini. A lot of the info on the web is highly technical. It depends on what you want to use it for, but that info can be helpful as well. Obviously http://www.odforce.net/ can be a good source (but search doesn't work! :x !!!)
These forums can be great as well for help (if you have a specific question.)
Your most valuable asset during the first few days/weeks of learning will be patience.
We'll keep an eye out for your posts so we can be of help.
Good luck and welcome to the community!
Technical Discussion » point clouds... not i3d
- the_squid
“…However we do use them pretty extensively to mimic our voxel density data generated from our voxel software and we use them to represent vectorfields from fluid simulation and tailor-made force fields. ”
This is cool and something I never thought of, because I have no idea how I would represent vectorfields using i3d… I can see how that would be great if it's fast.
“Why use them? They're FAST to access randomly and Houdini handles caching them when you're accessing them in a sequence very well.”
Definitely true. It's the writing-out process that can take a long time. I guess the same goes for any volume renderer, really. We have jig, but that's another beast itself. I'd probably get a lot further faster with i3d.
“Many voxel formats - like ours - are most efficient when reading them in a certain order, say slice by slice or something similar. i3d can query a thousand points scattered in a volume very quickly and easily. We pushed plenty of debris down fluid simulation vectorfields with playback speeds way over realtime.”
Again this is a really cool use of i3d that I'd never even considered. Now that you bring it up, I'm trying to imagine how that would work.
“We've cast shadows on objects rendered in Mantra using i3d copies of our voxel data and the integrate3d() function is really simple to use and still very fast. ”
I'm not at all familiar with the integrate3d() function. I looked in the text editor for help but couldnt find it. Is this a function I'd use at the VEX level? As far as using i3d's for shadows, thats definately a plus. Being able to grab an i3d texture in an isosurface is awesome.
“The power of accessing i3d's via the VEX interface in all contexts is very cool and powerful. We've written VEX interfaces to our own format but it is nowhere near the same speed as what we can get out of i3d.”
Another clue for me to get my VEX knowledge on. To this day I've never written a single line of VEX code. I've thrown shaders together in VOPs, but that's different.
“i3d's are cool, no denying it. Point clouds are cool - and perhaps cooler.”
Motion blur… I would love to start playing with point clouds more.
“I know this doesn't really help you and your rendering task. Please be aware that your problem isn't with i3d's - it's probably more with the mantra raymarching shader you're rendering with and perhaps the i3d generation shader.”
I figure pretty much anything in Houdini is going to be well thought out and more than just skin deep. However, the more accessible parts of i3D need to be expanded on. Like you say, we need more built-in shaders to generate and render i3ds with.
“i3dgen is a pretty slow method of generating voxel data, unfortunately - and populating chunk by chunk (buckets) is really the only way to allow huge datasets to be generated without blowing your RAM.
I'm convinced you can get good results with it but it takes some hard work and clever thinking, and then a dollop of patience. ”
Speaking of which, my cigarette smoke is looking better. Took some time but I'm pretty happy with the results. I'm at the point where it's good but could be so much better if I knew more about how to really get in there and build my own custom VEX tools. The deadlines on this show and the fact I'm having to bounce back and forth to do shots in Maya and then tests in Houdini keeps me from being able to really dig into the books and school myself right now unfortunately.
“Jig is the only commercial volume renderer out there. And it takes some learning too. Take a look at that if you have the time and/or money. This also renders by buckets but doesn't require you to generate the entire dataset beforehand. On the other hand, all that computation of the density is lost and must be regenerated. (Unless they've made persistent caches now?)
The tools available to the lone artist for rendering volumes are not fantastic”
Jig is great when it works. It takes a lot to dial it in and find that sweet spot. It works great for low-density smoke, but once you start cranking the density it goes to hell. We had all sorts of problems using jig; for one thing, their support is usually too busy working on other things to support the software.
Anyway, thanks for the response. Well thought out, and you bring up some methods I hadn't even thought of. If you could elaborate, that would be great. If not, I understand 8)
Technical Discussion » point clouds... not i3d
- the_squid
Man….. this thread just died. I'm gonna make one last attempt at re-opening the i3D subject. I've been working with i3d simply because I don't really have any other choice, trying to create realistic-looking cigarette smoke. The smoke has to be very dynamic and needs to react to lighting in a scene with lightning strikes, high-contrast key lights, very dark shadows, etc. As much as I'd love to get away with sprites, they just aren't cutting it in the comp.
i3D works on a few levels: 1.) it's volumetric and reacts to light, and 2.) it's easier to get close to the smoke look faster (compared to working with sprites).
However, this smoke is being blown out at a high velocity, blown back by the air conditioning in a car, and then needs to settle in the back area of the car. When it's initially blown out it needs to be heavily motion blurred, and i3d's motion blur hack (please don't take offense to that term) doesn't cut it.
Another problem I'm running into is the i3d texture itself. If I had more control over it, and maybe more variations besides "VEX_metacloud", I'd have more interesting ways to achieve the look I'm going for. As an example, Lightwave's "hyper voxels" come with tons of variations.
Another thing that sucks about i3d is having to write out the i3Ds themselves. I'm sure this wouldn't bother me so much if I were working solely on this smoke, but I'm doing a lot of work in Maya as well and testing things on the side in Houdini. The process of writing out the i3ds is so slow (even at 64x64x64) that it's easy to lose track of where I was when I kicked off the i3d, after working in other files for a few hours and then returning to a completed range.
Another thing really bugging me is that for some reason my point colors on my particles don't seem to transfer into my i3D smoke. This makes sense on a level to me, because i3D works with density and not color, but there should be some way to transfer point colors to density or something along those lines. Even if it's a multiplier that lets you choose the range of color you want and how much you want it to affect the density.
A lot of these issues may be addressed by somebody more familiar with i3d, but I can't help but wonder why most facilities that own Houdini end up writing their own volumetric renderers if i3D really is all that.
:roll:
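One workaround for the color-to-density wish above would be exactly the multiplier the post describes: collapse each point's color to a scalar with a luminance weighting and blend the density toward it. This is a sketch of the idea in generic Python, not anything i3d provides; the weights and function name are illustrative.

```python
# Sketch: modulate a point's density contribution by its color.
# Rec. 601 luma weights collapse RGB to a scalar; "influence"
# controls how strongly color affects density, as the post suggests.
def color_to_density(cd, base_density, influence=1.0):
    """Blend density toward density*luma by 'influence' in [0, 1]."""
    luma = 0.299 * cd[0] + 0.587 * cd[1] + 0.114 * cd[2]
    return base_density * ((1.0 - influence) + influence * luma)

full = color_to_density((1.0, 1.0, 1.0), 2.0)        # white leaves density alone
dark = color_to_density((0.0, 0.0, 0.0), 2.0)        # black kills it entirely
half = color_to_density((0.0, 0.0, 0.0), 2.0, 0.5)   # half influence halves it
```

With influence at 0 the density is untouched; at 1, dark points contribute nothing, which is roughly the control the post is asking for.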
Technical Discussion » Kaleidoscope
- the_squid
Excellent suggestions, thanks! I was giving my model way too much credit and wanting to blame the results on a limitation. With some tweaking I was able to achieve the effect I was looking for after reading your post. Very cool. Render times leave a bit to be desired though 8)
Another nice way of achieving a kaleidoscope is with a series of clips and copies on the geometry you want to mirror. I won't get into specifics, but basically you can create a fairly quick kaleidoscope using this method, doing the reflections as actual geometry instead of actual rendered reflections.
My next project: A compound telescope.
Wish me luck… :wink:
Technical Discussion » Kaleidoscope
- the_squid
BTW, the "reflect bounce" parameter in question is located at the object level, in the parameters for the kaleidoscope model (not in Mantra or a shader).
Technical Discussion » Kaleidoscope
- the_squid
I've modeled what should function as a kaleidoscope (modeled based on a real kaleidoscope I took apart.)
My first render showed no promise. Then I remembered the “reflect bounce” parameter under the “Render” tab and cranked that baby up to 10. My next render looked better… but not good enough. I took the value up to 100. No change…. so I figured I'd test the limits and cranked the value up to 10000. I thought for sure this would take my machine to the knees, but it didnt. My render also looked exactly the same… so my question is, is there a limit to the number of reflection bounces one can have in Houdini? I assume there is… and if I'm right, is there a way to bypass this limitation?
Technical Discussion » point clouds... not i3d
- the_squid
What production facilities actually use i3D?
Digital Domain has its own volume renderer; R&H either has their own or uses some version of jig. I'm not sure about Sony.
Technical Discussion » point clouds... not i3d
- the_squid
It's entirely possible my oppinion is the result of a lack of knowledge of what i3d is capable of. But, I've heard rumors that even people at SideFX arent completely happy with it and would like to someday see it replaced by either point cloud rendering or some other method.
Since I'm just doing tests on the side I dont have 100% if my time to put into it. I get nice results sometimes… but the element I was trying to create really needs motion blur since it moves so quickly. The i3d motion blur may work in a few circumstances but definately doesnt work in my case.
If you could send out a file that has an example of i3d working effectively with a good amount of control built into it I'd love to see that. As it is I'm doing a very basic i3D setup.
Technical Discussion » point clouds... not i3d
- the_squid
If i3D was a girl, I'd have to say she's just “not good for me.”
:?
-i3D's motion blur is an elegant hack, but still a hack.
-The process of writing out the i3d files is redundant. Even Lightwave has a more streamlined way of achieving volumetrics.
-Writing out a 128x128x128 i3d file (with a lot of particles) creates a sequence of large files
(in addition to the actual rendered frames)
-It's difficult to get i3d to inherit point attributes like color (or impossible? I still can't get it to work…)
Enough with the complaining, though. Is there a better solution? Thousands of transparent, motion-blurred sprites look nice, but geez, what a hit to the render time! I've heard there is a better way… and it is using point clouds.
Can anybody point me towards a good file that shows how to use point-cloud rendering with particles? I've gone through the tutorials for mantra final gather passes, which involve point-cloud data sets, but I've never seen an example where point clouds are used to render volumetrics like smoke. If it works for sub-surface scattering, I'm sure it's more than capable. Any examples?
Thanks in advance,
TheSquid
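For anyone wondering what point-cloud volume rendering amounts to, here is a toy sketch of the core idea in generic Python (pure illustration; Mantra's actual point-cloud functions and any production renderer work very differently): march a ray in steps, treat each point as a small Gaussian density splat, and accumulate opacity front to back.

```python
import math

# Toy sketch of volume rendering from a point cloud: step along a
# ray, sum a Gaussian density splat from each point near the sample,
# and accumulate opacity with the usual front-to-back over operator.
def raymarch_opacity(points, ray_org, ray_dir, steps=32,
                     step_len=0.25, radius=0.5, density=1.0):
    opacity = 0.0
    for i in range(steps):
        p = [o + d * step_len * i for o, d in zip(ray_org, ray_dir)]
        local = 0.0
        for pt in points:
            d2 = sum((a - b) ** 2 for a, b in zip(p, pt))
            local += density * math.exp(-d2 / (radius * radius))
        absorb = 1.0 - math.exp(-local * step_len)   # Beer-Lambert per step
        opacity += (1.0 - opacity) * absorb
    return opacity

# A ray through a small clump of points picks up opacity;
# a ray that misses the clump stays essentially transparent.
cloud = [(0.0, 0.0, 2.0), (0.1, 0.0, 2.2), (-0.1, 0.1, 2.4)]
hit = raymarch_opacity(cloud, (0, 0, 0), (0, 0, 1))
miss = raymarch_opacity(cloud, (5, 0, 0), (0, 0, 1))
```

The appeal over i3d's baked voxel grids is that the density lives on the points themselves (along with any other attributes, like color), so there is no separate write-out pass.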
Technical Discussion » Adding reflection
- the_squid
Experiment…. keep your mantra mplay window open while you render frames, then adjust parameters, then render again. VEX_Supermaterial has some pretty generic settings on its main "Shader" tab, and the "Texture" and "Properties" tabs are pretty much just the same thing over and over again (a map for your specular… a map for your ambient… a map for your reflections… etc.)
The best advice I can give when dealing with shaders is experiment as much as possible. Again, just leave mplay up and do quick renders as you change parameters to get a better idea of what they do.
You probably already know this, but if you don't, it could help: each shader control panel (where all the parameters are) has a little ? icon in the upper right-hand corner. If you click on that, you can get help related specifically to that shader.
Same goes for pretty much any other op in Houdini.
As far as texturing your CD goes…… did you try using a UVTexture or UVProject sop to control how your image is mapped? If you play with the values in those sops (scale, offset), you should be able to scale your image up so you no longer see the square edges.
A CD should be as simple as a circle sop followed by a copy sop followed by a skin sop. After the skin sop you should append a uvproject sop, and after that a shader sop to apply your VEX_supermaterial.
Good luck!
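If it helps to see what the scale/offset tweak is doing, here's a minimal Python sketch of UV mapping math. The function name and the exact convention (offset first, then divide by scale) are my assumptions for illustration – not Houdini's internal implementation – but the idea is the same: scaling up shrinks the UV range the image covers, so the texture appears larger and the square edges move off the surface.

```python
# Hypothetical model of a UV scale/offset adjustment (illustration only,
# not Houdini's actual UVProject code): each coordinate is shifted by the
# offset, then divided by the scale, so a scale > 1 stretches the mapped
# image across the geometry.
def adjust_uv(uv, scale=(1.0, 1.0), offset=(0.0, 0.0)):
    u, v = uv
    su, sv = scale
    ou, ov = offset
    return ((u - ou) / su, (v - ov) / sv)

# Doubling the scale in both directions halves the UV extent the image
# spans, so the texture looks twice as large on the CD.
print(adjust_uv((1.0, 1.0), scale=(2.0, 2.0)))  # (0.5, 0.5)
```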
Technical Discussion » render problem--probably something simple =P
- the_squid
- 132 posts
- Offline
I would need to see the file to really troubleshoot this. In the meantime maybe I can help with a few suggestions:
Off the top of my head, here are some possibilities:
1.) You are rendering using the wrong camera. Highly unlikely since you say you see a white outline in the greyscale channel. Speaking of which, what greyscale channel are you talking about? Are you sure you're not looking at the alpha channel?
2.) If you're seeing the object you render in the alpha, it could be that your scene has no lights, so you see black in RGB and then solid white in alpha. This means your object is in the scene, just not being lit. You'll need to add some point lights, or an ambient light, if you want to see anything.
3.) There's always the possibility your scene was set up to render as a matte… although this too is unlikely. In your shaders, create a VEX_Supermaterial and use that as your object's surface. VEX_Supermaterial will create a generic white phong shader. As long as you point to this as your shader you can rule the shader out as being a problem.
4.) If all else fails you may want to throw a Facet SOP at the end of your SOP chain and select “compute normals.” If your normals are messed up this should help, and if they're reversed, throw a Point SOP after the Facet SOP and set all your normals to 0-$NX, 0-$NY, 0-$NZ (this will reverse the normals).
5.) Are you sure you're not just closing your eyes when you render? Just kidding.
6.) There's also the possibility (if you've checked all other possibilities) that your object is set to render as a matte object in its render settings. These are separate from shaders, I believe. At the object level, look at the parameters for the object you want to render. Go to the render tab and make sure the “matte” parameter is set to 0.
7.) If you've checked every single thing so far and it's STILL not rendering, maybe you have a light mask set in your object that is keeping it from recognizing the lights in your scene. At the object level, look at the parameters for the object you want to render. Under the shading tab there's a parameter for “light mask” – make sure it's either set to * or set to use the lights in your scene. You can always click the + button to the right to select the lights you want to affect your object.
Other than those suggestions, the only other thing I could recommend is giving your file out to one of us so we can take a look at it. Good luck!
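On point 4 above: the 0-$NX, 0-$NY, 0-$NZ expression is just per-component negation of the normal vector. As a quick sanity check outside Houdini, here's the same operation as a tiny Python sketch (the function name is mine, for illustration):

```python
def reverse_normal(n):
    """Flip a normal vector: the same per-component negation
    as 0-$NX, 0-$NY, 0-$NZ in a Point SOP."""
    nx, ny, nz = n
    return (0 - nx, 0 - ny, 0 - nz)

# A normal pointing straight up now points straight down.
print(reverse_normal((0.0, 1.0, 0.0)))  # (0.0, -1.0, 0.0)
```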
Technical Discussion » VOPs Viewing
- the_squid
- 132 posts
- Offline
Is there a way to view a shader that was NOT created using VOPs within VOPs? I figure there probably isn't, but I wanted to ask… :?:
Technical Discussion » Building i3D shader in VOPs
- the_squid
- 132 posts
- Offline
Uh… oh….. I'm starting to get the feeling nobody's ever actually built an i3d shader using the VOPs toolset. Too bad I'm working all weekend, otherwise this could become a nice little side project for me. :roll: Anyway, looks like this topic should be discussed further…. sidefx video tutorial maybe? :?:
nudge nudge
Technical Discussion » Building i3D shader in VOPs
- the_squid
- 132 posts
- Offline
Thanks DeeCue
This looks like a tutorial going over the old fashioned way of building i3D shaders… i.e. VEX code. I'm attempting to soften a few 3D Studio MAX users up to using Houdini instead of MAX's Afterburn plugin for rendering volumetric smoke.
If I show them this http://odforce.net/tips/shader_wiritng_i3d_1.php [odforce.net] I will be laughed out of the office. It's very good for me; however, it's a bit too complex for the people I'm trying to help out.
Anybody have a link or have experience setting this sort of thing up using the VOP operators?