Found 128 posts.
Search results
Solaris and Karma » Sphere is not a sphere.
- JOEMI
- 128 posts
- Offline
Btw, Arnold 7 now uses USD files as its scene description, with a good degree of success. Yes, there is a render delegate, but I hope SideFX will split Karma's execution into the same two branches.
Solaris and Karma » Sphere is not a sphere.
- JOEMI
- 128 posts
- Offline
You are all right that it is still developing. But it has been developing for a considerable amount of time, and it still doesn't demonstrate a convincing raison d'être.
It takes about two months for a qualified developer to implement a renderer addon that translates USD data into the renderer's API calls, with a much wider range of features supported. All production renderers' APIs have similar patterns and subjects of processing. If you need task-oriented extensions, you just implement them too, as you would for a render delegate. OpenNSI demonstrates a modern approach and doesn't offer itself as the "greatest common divisor".
It seems to me Hydra is a strange branch of the RIB-filtering idea, but we used Rifs to extend content interpretation, not to cut it down dramatically.
Very interesting: when Pixar themselves get a USD file for rendering, do they really use Hydra? They offer such a feature. Do they really cut everything that doesn't fit the Hydra pipeline out of THEIR renderer? And what do customers say about this situation?
All the noise I make is not about the defective geometry - I hope it will be fixed someday (though in the meantime H19 will render squashed 90-faced eggs for you instead of spheres). Right now I am preparing for a possible full production pipeline, and I have to explain why "we can/cannot use such a modern and fast renderer". And I find that I cannot say any calming words. Should we think about the Solaris workflow, or go pray to Katana (yes, it uses Hydra for preview only, nothing more)? Or just go and spend two months on a translator? (Tamerlane ordered his sons: "learn the languages of the provinces you rule - then translators cannot lie to you.") What else will I discover, at the worst possible moment, that is always assumed to work effectively and robustly but is "just not implemented yet"?
I mentioned NVIDIA before - it seems to me this explains the imbalance between USD and Hydra development: NVIDIA uses USD and pushes its development, but they were never interested in Hydra per se. Their RTX driver uses it, but they changed the sources, so it is incompatible with the main development branch.
Let there be a Hydra zoo! Why not another one for SESI too? I could develop yet another one, if someone asked.
Maybe it is simpler just to feed the one head we actually need, then?
(Very interesting: do they have squashed spheres in the Omniverse Marbles demo, or did nobody notice? Or did they see it, warn no one, and just stop using this primitive?)
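The "direct translator" idea above (USD data straight to renderer API calls, with a handler per prim type) can be sketched minimally. `FakeRenderer` and the dict-based scene below are illustrative stand-ins, not a real SDK; an actual translator would traverse a `Usd.Stage` and call the renderer's own API:

```python
class FakeRenderer:
    """Stand-in for a production renderer's scene-description API."""
    def __init__(self):
        self.calls = []

    def declare_sphere(self, path, radius):
        self.calls.append(("sphere", path, radius))

    def declare_mesh(self, path, points, indices):
        self.calls.append(("mesh", path, len(points), len(indices)))


def translate(scene, renderer):
    # One handler per prim type; supporting a new type means adding
    # an entry here, the same work a render delegate needs anyway.
    handlers = {
        "Sphere": lambda p: renderer.declare_sphere(p["path"], p["radius"]),
        "Mesh":   lambda p: renderer.declare_mesh(p["path"],
                                                  p["points"], p["indices"]),
    }
    for prim in scene:
        handler = handlers.get(prim["type"])
        if handler is not None:  # silently skip unsupported prim types
            handler(prim)


scene = [
    {"type": "Sphere", "path": "/ball", "radius": 0.5},
    {"type": "Mesh", "path": "/quad",
     "points": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
     "indices": [0, 1, 2, 3]},
]
r = FakeRenderer()
translate(scene, r)
```

Note that in this scheme the analytic sphere reaches the renderer as a sphere; nothing forces it through a mesh-only bottleneck.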
Edited by JOEMI - 2021-11-01 02:34:37
Solaris and Karma » Sphere is not a sphere.
- JOEMI
- 128 posts
- Offline
So, it was obsolete before it was born. OK, then why does SESI pay so much attention to it? It is a completely independent story from USD, and nothing forced them to build their newest, most modern renderer on top of such strange and obviously restrictive rendering mechanics. I know of about 6 published render delegates, and just one scene delegate - the USD delegate. To be fair, no more are required, since USD really does cover most needs as a data storage / interpretation / access technology. 18 years ago we developed an XML-based group of technologies dedicated to the same purposes - producing and transforming scenes for RenderMan - so I was very glad to later find many similar approaches and concepts in USD. Now I try to use USD as a data exchange format for a lot of research projects, and even the usdviewq widget for analyzing results (and I spent a lot of time understanding why I saw skewed geometry, for example, when it was not a dynamics engine failure). And I was disappointed to find that if I want to push a more complex structure to the renderer - tetrahedral meshes, for example - I have to go into the USD sources to implement its adapter, even though I have already implemented the type as a USD schema. And even then, what gets rendered is not the data I was actually dealing with in the scene.
And lastly: Hydra is a renderer proxy layer. But why did we decide that rendering is just converting geometry to raster buffers? What about vector layers? Is a physical simulation a rendering process too? Why not? It would be better for Hydra to be an analog of XSLT rather than of WebGL - then it would have practical potential.
Very interesting: since NVIDIA uses USD so widely with its Omniverse, do they pay as much attention to Hydra too?
Solaris and Karma » Sphere is not a sphere.
- JOEMI
- 128 posts
- Offline
1. There are two types of delegates: render delegates, and scene delegates, which translate your geometry to Hydra - the most valuable being the USD scene delegate. It is what produces this kind of defective geometry; you can find it in the sources. OK, it could be fixed by altering the point positions, but that would not dramatically change the amount of memory - or the rendering speed. The suggested workaround of using procedurals is the worst possible one. What stops me from writing a procedural that takes a USD file and builds the whole scene procedurally - and then what is Hydra needed for? And what do I do for the OTHER renderers, the OpenGL preview, and so on?
2. Nothing prevents your renderer from rendering subdivs adaptively - I hope Karma does. But can you imagine how much memory is wasted during rendering? How much rendering time is spent just pushing and pulling excessive data, on the excessive 10x10 matrix computations required for subdivision, instead of a trivial point-to-surface distance test?
It is not about spheres. It is about "what for?". We have a famous Russian tale about "axe porridge". Right now Hydra looks like the axe. You will spend time setting up nicely light-dressed scenes in Solaris, and you would be surprised that they could be rendered much faster without Hydra's assistance. But you will never know about it.
It seems to me Hydra is part of something bigger, mostly hidden from our eyes for now. It is very kind of Pixar's R&D teams to share some ideas about integration - but right now it does not look like one.
Karma is built over Hydra, so it lacks analytic geometry too. If your DCC uses NURBS or T-splines heavily - like Rhino, for example - why should it be impossible to get good results through Hydra, even with renderers that fully support these kinds of geometry? You may insist on converting analytic geometry to subdivision approximations, but that is not possible (or reasonable!!!) every time. Patches are important, and trimmed quadrics too. Even hierarchical subdivision meshes and blobbies fall outside these primitives (mesh/subd, points, curves, volumes) and their compositions. Yes, this is not a Houdini story - but then why is Hydra a Houdini story, if it has such unexplainable limitations? Should renderer vendors really trim their possibilities twice - first to the capacities Houdini supports (which are not so heavy, where they exist at all), and then further, to Hydra's? There are SOHO and the ROPs - those layers are much more adaptive and mature as rendering adaptors. Yes, possibly they need modernizing. But to be honest, I wonder why SESI bothers with this and doesn't separate Karma from Hydra. RenderMan itself is separate, and its delegate is not the one general hub into the renderer. Maybe it is testing and research - possibly. Those damned sphere primitives were realized in 18.5 as real geodesic polygons, not as the USD primitive - which suggests SESI's developers possibly knew about this issue.
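To illustrate point 2: intersecting a ray with an analytic sphere is a single quadratic solve, with no tessellation or subdivision tables involved. A minimal plain-Python sketch (not taken from any renderer's sources):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Analytic ray-sphere hit test: solve |o + t*d - c|^2 = r^2
    for the nearest non-negative t, or return None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

# a ray from z = -5 toward +z hits the unit sphere's surface at t = 4
t = ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0)
```

Against a 90-face proxy mesh, the same query means a BVH traversal plus many ray-triangle tests, and the silhouette is still faceted.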
Solaris and Karma » Sphere is not a sphere.
- JOEMI
- 128 posts
- Offline
Did you know that Hydra (and, downstream, Solaris) lies to you about sphere geometry? It is not a Solaris issue - it is really Hydra that provides you the worst approximation possible. Just put down a sphere and a cube, merge them, and tweak the cube size to see the inconsistency.
When OpenGL draws something like a lat-long divided sphere with two 10-valence poles for you, it is not just a display approximation. It is the REAL geometry every renderer gets instead of a sphere. 82 points. 90 polygons. 2.3 KB (minimum) of data per mesh - almost 600 times more than the single float required to describe a sphere's radius. So if you place a sphere primitive, you just HEAVILY WASTE memory. All because "we support only meshes, curves, points and volumes through Hydra", as they say at Pixar. The most trivial shape type, implemented even in educational renderers - and this near-standard of geometry handling has lost it. I cannot imagine why production-oriented renderers should cut their functionality down to this lowest common denominator.
It would be better to kick Hydra out of rendering pipelines until it becomes more controllable. It is full of less-needed features, yet still does not support richer, useful (and memory-saving :/) geometry types. Since render delegates themselves declare which types of geometry they can handle, why are there just 5 geometry types after 6 years of development - and, even for those, only the worst approximations, without any correctly (and carefully) subdividing geometry?
To be honest, I see no reasons for these critical simplifications. Yes, USD itself is a nice structural and data-providing technology, but Hydra itself introduces more headaches than it solves problems. I cannot come to any production house and say: "guys & girls, today we start using Hydra; it lacks some things - some light types, some geometry primitives, some render gizmos like clipping geometry or CSG - but it gives us... WHAT?" The ability to see geometry with the same settings across different renderers? What for? Are there no good, production-ready translators for those renderers? And if not, are they really production-ready renderers?
I have experience with the OSPRay and Cycles delegates for Hydra. The first is not a production renderer at all right now; the second is possibly closer to production-ready - but passed through Hydra, we get inadequate data to render. I wonder: at Pixar, do they render spheres with such excessive data (as if taken from a default Maya polysphere), or do they simply never need them? Did they drop analytic surfaces entirely in favor of subdivision surfaces? Do all other renderers do the same? Karma, for example? Yes, Karma should support instancing with a good degree of freedom when we copy a lot of spheres, but then it loses the intrinsic parametrization of analytic primitives, which is sometimes a very useful feature. Moreover, even with instancing, the cost of ray tracing a subdivided mesh with 90 faces (AND TWO 10-VALENCE POLES!!!) is not comparable to that of a trivial analytic sphere.
Why are we here after almost 6 years of (public) Hydra development? OK, Solaris itself is a nice layout and light-dressing system, but why should it push these elaborate setups through a filter that cuts features so dramatically? If you look at the latest RenderMan, it still has all those useful geometry primitives available: quadrics, NURBS, metaballs, CSG. But if you try to pass them through Hydra, you get only a very restricted set of features. Do they really not use them at all? I see they eliminated rendering brickmaps as geometry - but it seems to me that was the one elimination in 25+ years of RIB evolution.
Karma looks fully oriented towards being Hydra-based. The reasons described above make me think it may be a less optimal solution than it could have been.
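The figures above can be reproduced with a back-of-the-envelope estimate. The exact tessellation is Hydra's internal detail; the layout below (10 columns, 8 interior rings, 2 poles) is an assumption chosen to match the quoted 82 points and 90 faces:

```python
# Rough footprint of a lat-long sphere tessellation versus the
# single radius float of an analytic sphere primitive.
FLOAT = 4  # bytes per 32-bit float
INT = 4    # bytes per 32-bit index

def latlong_mesh_bytes(cols=10, rings=8):
    points = cols * rings + 2               # ring points plus two poles
    quads = cols * (rings - 1)              # faces between adjacent rings
    tris = 2 * cols                         # triangle fans at the poles
    faces = quads + tris
    positions = points * 3 * FLOAT          # xyz per point
    counts = faces * INT                    # faceVertexCounts array
    indices = (quads * 4 + tris * 3) * INT  # faceVertexIndices array
    return points, faces, positions + counts + indices

points, faces, mesh_bytes = latlong_mesh_bytes()
analytic_bytes = FLOAT                      # a sphere is one radius float
print(points, faces, mesh_bytes, mesh_bytes // analytic_bytes)
# 82 points, 90 faces, 2704 bytes: 676x the single float
```

Normals, uvs and other primvars would push the real figure higher still, which is consistent with the "2.3 KB minimum" quoted above.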
3rd Party » Viper Style Muscle system
- JOEMI
- 128 posts
- Offline
https://www.patreon.com/posts/57747223 [www.patreon.com]
Finished the deformer subsystem. It uses harmonic capturing (whatever that may mean), and it works plausibly.
There were some issues introducing the shape-matcher constraint, so I decided to substitute it with triangle matching, which works much faster and parallelizes better. It requires more sophisticated preparation of the base rod geometry, it seems, but I have solid ideas about the implementation.
3rd Party » Viper Style Muscle system
- JOEMI
- 128 posts
- Offline
https://www.patreon.com/posts/57581379 [www.patreon.com]
1. Figured out the cross-section shape-matching constraints. Viper provides two kinds: one is faster but weak with bending; the other is slower but much more robust and plausible. Now all muscle data has per-muscle controls for whether shaping is used, and of which type. Since this constraint is really slow - and not as redundant as bend or stretch - it should be processed differently from the computationally cheap ones. It lacks parallelization, and there are some possibilities for that; I will research it a bit later. I have plans to create such membrane constraints with more control - with the ability for the rigger to add them wherever needed and configure them more freely.
2. Contraction and hardening computation moved to CVEX procedures; it is now more comfortable to set up these phenomena, and more Houdini-style. To be honest, hardening is not implemented yet - I have no ideas about it. Yes, it is about straightening the rest pose of the bend constraints, but that is buried deep inside the bend constraint implementation and the overall solver concept.
Next step: capturing and mesh movement.
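This is not Viper's actual implementation, but the core of a cross-section shape-matching step can be sketched as a best-fit rigid transform (the Kabsch/SVD method), which is also the part that is naturally parallel per muscle:

```python
import numpy as np

def shape_match(rest, current):
    """Best-fit rigid transform (Kabsch): find the rotation R and
    goal positions that map the rest cross-section onto the current
    points in the least-squares sense."""
    rest = np.asarray(rest, dtype=float)
    current = np.asarray(current, dtype=float)
    rc = rest - rest.mean(axis=0)          # centered rest shape
    cc = current - current.mean(axis=0)    # centered current shape
    H = rc.T @ cc                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    if np.linalg.det(Vt.T @ U.T) < 0.0:    # reject reflections
        Vt[-1] *= -1.0
    R = Vt.T @ U.T
    goal = rc @ R.T + current.mean(axis=0)
    return R, goal

# a rigid 90-degree twist about Z plus a translation is recovered exactly
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
rest = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0], [-1.0, -1.0, -1.0]])
current = rest @ Rz.T + np.array([2.0, 3.0, 4.0])
R, goal = shape_match(rest, current)
```

Each cross-section's solve is independent, so dispatching them across threads (or a VEX/CVEX foreach) is straightforward; it is the constraint projection step around this that is harder to parallelize.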
Edited by JOEMI - 2021-10-19 23:50:48
3rd Party » Viper Style Muscle system
- JOEMI
- 128 posts
- Offline
https://www.patreon.com/posts/57499195 [www.patreon.com]
Next iteration: more sophisticated contraction control. I have ideas about reimplementing it as a CVEX procedure.
3rd Party » Viper Style Muscle system
- JOEMI
- 128 posts
- Offline
https://www.patreon.com/posts/57427411 [www.patreon.com]
Progress at this moment: consistent frame tracking is implemented - it is now possible to use a strand as an object - and SDF collision processing has been added too (for nodes only for now; for pills, possibly later). This is new to the VIPER system.
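How the system tracks frames internally is not shown here, but "consistent frame tracking" along a strand is commonly done with parallel transport (rotation-minimizing frames), which can be sketched in plain Python:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(x / l for x in v)

def rotate(v, axis, angle):
    """Rodrigues rotation of v around a unit axis."""
    c, s = math.cos(angle), math.sin(angle)
    cv = cross(axis, v)
    d = dot(axis, v)
    return tuple(v[i] * c + cv[i] * s + axis[i] * d * (1.0 - c)
                 for i in range(3))

def transport_frames(points, normal0=(0.0, 1.0, 0.0)):
    """Carry a normal along a polyline by rotating it with the
    minimal rotation mapping each tangent onto the next one."""
    tangents = [normalize(sub(points[i + 1], points[i]))
                for i in range(len(points) - 1)]
    n = normal0
    frames = [(tangents[0], n)]
    for t_prev, t_cur in zip(tangents, tangents[1:]):
        axis = cross(t_prev, t_cur)
        s = math.sqrt(dot(axis, axis))
        if s > 1e-9:  # parallel segments keep the previous normal
            angle = math.atan2(s, dot(t_prev, t_cur))
            n = rotate(n, normalize(axis), angle)
        frames.append((t_cur, n))
    return frames
```

Because each frame depends only on the previous one, the frames stay consistent from segment to segment and never flip the way naive per-segment frames can.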
Edited by JOEMI - 2021-10-15 03:00:40
3rd Party » Viper Style Muscle system
- JOEMI
- 128 posts
- Offline
https://www.patreon.com/posts/57059475 [www.patreon.com]
And about collision support (remember that the in-between "pills" are still not visible; a bit later).
3rd Party » Viper Style Muscle system
- JOEMI
- 128 posts
- Offline
https://www.patreon.com/posts/57058430 [www.patreon.com]
Implemented a special data type for controlling contraction per muscle. It was originally missing from the Viper system sources.
3rd Party » Viper Style Muscle system
- JOEMI
- 128 posts
- Offline
Based on these sources
https://github.com/vcg-uvic/viper [github.com]
I've implemented a first attempt:
https://www.patreon.com/posts/56970585 [www.patreon.com]
If anyone intends to test it, you're welcome to PM me - sorry, only Windows binaries are available for now.
It will not be a commercial product; I will publish the sources after polishing, once all the licensing terms are settled.
Keep in touch - it evolves daily now.
Thank you.
Solaris and Karma » OSPRay HdDelegate and delegate options
- JOEMI
- 128 posts
- Offline
The OSPRay team recently published a superbuild script for building the OSPRay delegate against the current Houdini libraries, so it can be easily built and attached to Solaris or the usdview utility - a good chance to try it. Not too much is supported yet, but with good feedback it may help them move in the right direction.
I am interested in one thing. Each Hydra delegate has its own settings (for OSPRay, for example: whether to run a denoise pass or not). How can we make Hydra initialize or configure a delegate with custom options?
Thank you
Technical Discussion » Deform HairGen by new guides...
- JOEMI
- 128 posts
- Offline
Yes, we have figured it out more precisely, but it is a really artful task to prepare a good set of controlling guides. We decided to try combining a dynamics-driven set with a skin-driven set where needed. Most of the work goes into automation rather than technical issues. Hopefully it works much more robustly than Yeti.
Solaris and Karma » How to collect only required changes?
- JOEMI
- 128 posts
- Offline
We have the following situation: an asset's topology and a lot of static data are saved to a layer with the rest pose. We import them at the SOP level, then spool back another file which collects the changes made in SOPs - for example: take the statically posed asset, apply point deformations, then save everything as a layer. Everything works fine, except for the forced first-time sampling of almost everything that hasn't changed - except maybe some topological data, if we turn on the corresponding switch in the SOP Import LOP.
If we wish to "mute" the overriding of other attributes - like uv etc. - how should we tell this overriding layer to pay no attention to them?
Thank you
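On muting overrides: USD layers compose sparsely, so an attribute is only overridden if the layer actually authors it. A hypothetical override layer (the prim names and values here are illustrative) that authors only the deformed points, and deliberately says nothing about `primvars:st`, so the uvs from the rest-pose layer stay in effect:

```usda
#usda 1.0

over "asset"
{
    over "geo"
    {
        over "mesh"
        {
            point3f[] points = [(0, 0, 0), (1, 0.2, 0), (0.5, 1, 0.1)]
        }
    }
}
```

So rather than muting anything after the fact, the usual approach is to avoid authoring the unchanged attributes into the new layer in the first place; if I recall correctly, the SOP Import LOP exposes attribute-pattern parameters for filtering what gets imported.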
Technical Discussion » Deform HairGen by new guides...
- JOEMI
- 128 posts
- Offline
We have the same task. Better to say: how do we generate hierarchical guide animation? We have a simplified set of guides which drives another set of guides, down to the final hairgen - or not, if you import an already-generated strand set.
Thank you.
Edited by JOEMI - 2021-08-19 02:02:36
3rd Party » CGAL Soft Deformer SOP
- JOEMI
- 128 posts
- Offline
I had some fruitless attempts about two years ago to implement services from the CGAL library as HDK modules. Today I am having a bit more success, starting with less complicated nodes like Convex Hull, Optimized Bounding Box, and Surface Reconstruction. I tried to implement its triangulated surface deformation technique as a SOP node, and I have something to show after just a week of research and development:
https://vimeo.com/569182688 [vimeo.com]
This is an unoptimized, almost "out of the box" version of the module, and even so I am impressed by its performance and stability. I have plans to implement it with more parallel-processing-oriented libraries than Eigen; I hope that will allow more interactivity on complex meshes. I also plan to implement edge weights to eliminate non-smooth transitions between areas.
Later, on my Patreon, I will publish the source code so anyone interested can play with this fun feature.