“Eight” cores (quotes courtesy of Hyperthreading Inc.). That seems somewhat comparable with your results, yeah? My clock speed is a little faster…
Cheers,
J.C.
P.S. The full blurb on the CPU is “Intel i7 Quad Core Enhanced Performance V2”.
P.P.S. Well, no, it doesn't seem comparable. You're running two CPUs; I'm running one with HT. According to the scuttlebutt I keep reading here, I'm supposed to be penalized more heavily, yes? This is why I've been mystified by all the negative talk about HT.
covers most of it. Sounds like you want to select multiple keyframes, then hit ‘y’ to bring up the scale handle. The dope sheet and table views (hit 1, 2, 3 to cycle through them) offer other methods.
Shot in the dark, but I've had issues with some camera values being improperly exported by the PFTrack → Houdini module. I have a bug report in with them on this. Anyway, try outputting a .cmd file too and take a look at it; the focal length and aperture in there should be correct. My guess is the FBX route is outputting zero for one of those parameters.
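If you want to sanity-check what actually landed in Houdini, here's a minimal sketch you can paste into a Python shell (assuming the import created a camera at /obj/cam1 -- adjust the path for your scene):

    import hou

    # Print the camera parameters that commonly get mangled on import.
    # "/obj/cam1" is an assumed path -- point it at your imported camera.
    cam = hou.node("/obj/cam1")
    print("focal: %g" % cam.parm("focal").eval())        # focal length in mm
    print("aperture: %g" % cam.parm("aperture").eval())  # horizontal aperture

If either prints zero, you've found your culprit.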
SYmek, P.S.: you say that whenever you run hscript on a workstation, you set it to use a GUI license? Yes, that would be a solution for some of our headaches. Thank you!
You'll want the hserver options. We have a standard location on all nodes that we search for hserver.opt files; that way we can make sure no GUI licenses try to run on the render nodes and vice versa. There are some other useful settings in there too.
Just a heads-up: that file, when loaded, seemed to have that insidious little bug in it, with the Edit/Color Settings/Color Correction LUT entry filled with endless slashes. I believe the problem where the slashes actually multiplied with every save has been addressed, but just in case…
I thought mplay only checks that a valid license exists, but never actually eats a token? That's been my experience. Are you saying you've seen a bug scenario where mplay demands a token?
I haven't seen hython snag an additional license if Master is running, but then we use hscript options to set a workstation as GUI-only, so hython only grabs a GUI token. Is that what's different about your setup?
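For what it's worth, another knob I know of (not necessarily what we use here) is the HOUDINI_SCRIPT_LICENSE environment variable, which steers which token hbatch/hython ask for at startup. A rough sketch of a wrapper -- treat the "hescape" value as an assumption and check which license names your server actually serves:

    import os, subprocess

    # Set HOUDINI_SCRIPT_LICENSE before launching hython so it requests that
    # token type at startup. "hescape" is an example value, not a guarantee.
    env = dict(os.environ, HOUDINI_SCRIPT_LICENSE="hescape")
    subprocess.call(["hython", "/path/to/script.py"], env=env)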
DaJuice, I believe Physical SSS is added as an emission, as opposed to having a BSDF output. I'm not sure about the other VOPs you refer to. There is indeed a fakealbedo VOP, undocumented, but that's not the VOP that's in the Surface Model VOP; the one in there is just called that, and is a multiply VOP. Direct and indirect lighting are all tied up in PBR.
I hear you about black box shaders, which is exactly why SESI didn't do it - they are exposed at the atomic level. Now, it's perfectly true that digging through them isn't necessarily trivial. One of the contentious issues during testing was the decision to make a ‘one model fits all render engines’ solution. The upside is, you can throw one down regardless of render engine. The downside, which many of us pointed out, was a more cluttered network and some parameters that just “don't work” because they don't apply to a particular engine. Personally, I would have liked a Surface Model for each engine. <shrug>
After the first time you render, be sure to change /shop/dragon/surfacemodel1/Subsurface/Point Cloud Mode to “Read From File” to speed up renders. Note that with non-deforming surfaces, you only want to write out one point cloud and reuse it throughout the animation, even though the model is translating.
I did kick open the Surface VOP node and change a couple of things: go into the “if2” node inside the SSS section of the network and look at the physicalsss2 node. I clicked on the two Optimize Secondary Rays toggles in the single and multiple scattering tabs. This has a massive effect on render times with a negligible difference in look; IMHO they should default to on. Also, if for any reason you find you need to bump the samples on your lights up to fix SSS problems, take a look at the Multiple Scattering/Global Light Quality setting, which defaults to 1. Basically, this cranks up the sampling on the lights *just* for SSS, and nothing else, so you don't need to take a global hit just to fix an SSS issue.
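If you're applying the same tweaks to a bunch of shaders, they're scriptable. A rough sketch using the paths above -- the parm names here (“pcmode”, “optimize_single”, etc.) are guesses on my part, so middle-click each parameter label in the UI to get its real internal name first:

    import hou

    # Node paths from the scene above; parm names are hypothetical --
    # verify them against your build before running.
    surf = hou.node("/shop/dragon/surfacemodel1")
    surf.parm("pcmode").set("file")       # Point Cloud Mode -> Read From File

    sss = hou.node("/shop/dragon/surfacemodel1/if2/physicalsss2")
    sss.parm("optimize_single").set(1)    # Optimize Secondary Rays (single scattering)
    sss.parm("optimize_multi").set(1)     # Optimize Secondary Rays (multiple scattering)
    sss.parm("lightquality").set(2)       # Global Light Quality, default is 1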
Other than that, the Surface Model VOP sort of speaks for itself - you can see reflections of the envlight in the surface, etc. Hope it's useful.
It's very different, yes. Really, it's the lighting models that have changed the most. There are, however, new basic shaders you can start with if you want to roll your own. The official SESI suggestion is to start with a Mantra Surface from the shader gallery; I personally find that too constricting, because the moment you want to customize it you're stuck with all that multi-renderer-compatible crap inside and it just becomes messy. It's handy for fast exploring, though. What I prefer to do is make a Material Shader Builder and stuff a Surface Model VOP inside it. That gets you a good chunk of the way along: you have two layers of reflectivity you can assign, and there's SSS in there. I still need to crack that open to get the speed acceptable for SSS, but it's a good start.
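If you'd rather script that starting point than dig through the tab menu every time, something like this should do it. The node type names (“vopmaterial”, “surfacemodel”) are my best guess for current builds, so verify them against the tab menu:

    import hou

    # Build the "roll your own" starting point: a Material Shader Builder
    # in /shop with a Surface Model VOP inside.
    shop = hou.node("/shop")
    mat = shop.createNode("vopmaterial", "my_surface")
    model = mat.createNode("surfacemodel")
    mat.layoutChildren()
    # From here, wire the model's outputs into the builder's output node
    # and start customizing.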
Cheers,
J.C.
P.S. If I get a sec, I'll clean up and upload an SSS file I was playing with during beta. It's not marble, it's more ceramic, but things like reflections and SSS are all there. I've become a PBR evangelist, but out of the box it very frequently underperforms; it typically needs some tweaking, and there are several potential pitfalls.
Marble seems busted to me; I've already moaned about it, but no response. It's not you. Basically, it takes forever to generate a map pass (which needs optimization), and then when it does render it typically looks solid white - the subsurface layer defaults to far too hot. Also, no work has been done yet to put in veins, which I think the typical user expects. Most of the other shaders are in a reasonable state, though; you just hit a minefield. I really think it should be removed until it's somewhat usable. Cheers, J.C.
Just so there's no misunderstanding: the forums are here for folks to talk amongst themselves. It's not support, though… if a one-day wait is causing you grief, you might want to contact support…
I think the challenge is for SESI to get as many of the HOWTOs, caveats, and gotchas into the docs as possible, so setting up quite sweet-looking renders isn't too harsh. I know that at the moment, if you just dive in and start throwing down env and area lights and fire up some of the existing sample shaders (beware marble!), you may well be hit hard with questionable render times and results.
There is some good reading in the docs at the moment - the new material on lights, rendering, and PBR is worth your time.
In the middle of something atm, but I would point out that 10-second renders and very simple scenes aren't a good metric for this. The power of the new lighting, especially with PBR (which I assume you're not using), is that it can give you very good-looking stuff quite quickly. I understand you're comparing 10 to 11 directly, and that's perfectly valid, but I'd definitely look at something a little more detailed for befores and afters.
Managing noise is indeed important with area and env lights, and it has its own little tricks. I find that for some things, actually using PBR will give me better-looking results faster than micropoly. There are definitely growing pains with the light changes, though; I'm not trivializing that. When you're doing stuff for final, with larger datasets than a couple of prims, I think you'll see the advantages.
I assumed this was an educational experience. Peter is correct, it's a rabbit hole… a *huge* topic. We use SGE here for our solution; there are other solutions out there, of course, and it sounds like you already have something in place. The guts of scheduling - resources, rules of engagement - that's the hard stuff to get right. Personally, I have zero interest in re-inventing that particular wheel. Wrapping a particular software product into such a system is typically more straightforward; that's just implementation and a bunch of work.
I'm unclear exactly what you're trying to do here. Whatever you do, don't go the route of ‘mantra -H foobar’ - that particular feature is very limited and quite old, and it won't give you the flexibility you need. There's plenty of flexibility in Houdini to fit it into pretty much any scheduling system out there: it spits out discrete chunks, and you can modify how they're written out and pre-process them before rendering, all quite trivially. Houdini is designed for maximum scriptability.
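As a rough illustration of the ‘discrete chunks’ idea (the hip path, ROP path, and output locations below are made up - substitute your own), the generation side might look like:

    import hou

    # Sketch: write one IFD per frame instead of rendering directly,
    # then hand each mantra invocation to your scheduler.
    hou.hipFile.load("/jobs/shot010/lighting.hip")
    rop = hou.node("/out/mantra1")
    rop.parm("soho_outputmode").set(1)                        # write IFDs to disk
    rop.parm("soho_diskfile").set("/jobs/shot010/ifd/frame.$F4.ifd")
    rop.render(frame_range=(1, 240))

    # Each farm task then renders one chunk independently, e.g.:
    #   mantra -f /jobs/shot010/ifd/frame.0001.ifd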
We render out IFDs, and each one spawns its own process. You're quite correct that for some renders the IFD generation can easily exceed the render time, and the I/O is brutal. We just make sure there's a mechanism in there for splitting up the hbatch processes. It's still handy to keep the IFD tasks separate from the hbatch ones…
Well, that feature was screamed for in the early testing days. While it's not technically accurate, it makes lighting scenes a helluva lot less work: every time you sized up an area light, you used to have to manually re-fiddle the intensity. I agree that doesn't match the real world, but you resize area lights a lot when setting up a scene. I personally like it on by default.
This will become a big issue in the near future. Someone wanting to learn Python will understandably gravitate towards 3000, but there's a really big world out there that won't be adopting it in large projects for a very long time, simply because the cost outweighs the benefits. I'm not really sure what's going to happen.
The good news, Ordiza, is that while the new things in 3000 tend to break an *awful* lot of older scripts, it typically whittles down to ten or so very basic reasons why. So you can safely learn 2.x and use it now in almost all contexts out there, and when the time comes, learning the fundamental changes in 3000 won't break your brain.
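To give a flavor of those basic reasons (a few of the classic ones, not an exhaustive list), shown here as 3.x code with notes:

    # print became a function; `print "hello"` was a 2.x statement
    print("hello")
    print(7 / 2)     # 3.5 -- `/` is true division in 3.x (2.x gave 3)
    print(7 // 2)    # 3   -- floor division behaves the same in both
    # Other usual suspects: str is unicode in 3.x (2.x had separate
    # str/unicode types), and dict.keys()/values()/items() return views
    # instead of lists.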