That question somehow makes less than no sense.
“Is a Golf IV suitable for taking my daughter to school? Note that the question is targeted at a primary school.”
One would wonder how big your daughter is, what shape the Golf is in, how far the school is from your daughter's bedroom, whether cars are allowed to access the school, whether cars built by a caught-in-the-act cheater are allowed access to the school, whether your daughter would want to be seen in any connection with a Golf IV and, last but not least, whether you actually have a daughter or just a cat.
If you were able to define WHAT you want to do with a tool instead of saying “is it suitable for the purpose it has been made for”, one could be tempted to try to answer the question. I can assure you from my own experience that Houdini (including Indie) has been used for short films, in small studios and any kind of mixture thereof. Whether that has any relevance whatsoever to your situation is up to Grandma Ellie's guess. But she's still riding her motorbike out in the chicken coop.
Marc
Houdini Lounge » Houdini Indie Suitable as Primary Tool for Short Film?
- malbrecht
- 806 posts
- Offline
Technical Discussion » Import facial rig whit facerig ? fbx included
Hi,
I am German (only using my bio-bubble-brain-brainslator), so I apologize for what seems like rudeness
Depending on your concrete setup, what I would do (and have done in the past) is use a VEX wrangle that looks for near points (“nearpoints”). That way your lower resolution mesh “attaches” itself to “suitable” points on the higher resolution mesh. Again, depending on the setup, you may have to create a “driving weight map” on the higher resolution mesh.
You might be able to use a Ray node, which allows you to wrap the above into a single node - it does not always work perfectly (for topology reasons, obviously), but it could be a good start.
In essence: It depends. The problem with those “hobbyist face rigs” is that they look good enough “in the lab” or for quick show-off videos, but if you want to use them outside a comic environment, you have to put in a LOT OF WORK to make them look acceptable.
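A toy, plain-Python version of the "attach to nearest point" idea above. In Houdini you would use VEX's nearpoint()/nearpoints() inside a wrangle; this brute-force sketch only illustrates the logic, and all names are made up:

```python
# Brute-force nearest-point lookup, mimicking what nearpoint() does in VEX.

def nearest_index(target, candidates):
    """Index of the candidate point closest to target (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(candidates)), key=lambda i: dist2(target, candidates[i]))

def attach(low_res, high_res):
    """Snap every low-res point onto its nearest high-res point."""
    return [high_res[nearest_index(p, high_res)] for p in low_res]

low = [(0.1, 0.0, 0.0), (0.9, 1.1, 0.0)]
high = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 0.0)]
print(attach(low, high))  # each low-res point snaps to its closest high-res point
```

In a real setup you would store the found point number as an attribute and blend positions via the driving weight map instead of hard-snapping.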
Marc
Technical Discussion » Houdini Crashes
Moin,
I had a ton of crashes with H17.5 recently. Most of them have been almost impossible to reproduce; others are clearly FBX-related (and from older experience I know that it is often the FBX import library that is causing the issues, so I am usually able to work around the crashes by converting the geometry to Houdini-native *.bgeo and working on that exclusively).
You may have to file a bug report including your test data - I know this can be painful especially with large files, but there's no “easy way out” if you only get crashes in a specific setup and cannot provide a way to reproduce them “from scratch”.
Marc
Technical Discussion » Import facial rig whit facerig ? fbx included
Moin,
I am not clear about your question - you can “somehow import” almost everything into Houdini, so what is the answer you are looking for?
Creating a facial rig like the one in the YouTube video is quite simple - just use a reference rig with known facial landmark points and drive that by an OpenCV facial landmark detection setup, so creating the “driver data” is pretty straightforward. An FBX with animation baked in should be fine, as long as topology and point order remain the same - if all else fails, you just grab the point positions from a null pose, reference those to your own driver rig and Bob's your uncle.
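A hypothetical sketch of the "grab positions from a null pose" step: the tracked landmark offsets (current minus rest) drive the matching points of your own rig. Plain Python, invented names and numbers throughout:

```python
# Landmark deltas relative to the null pose drive a rig with matching point order.

def landmark_deltas(rest_pose, current_pose):
    """Per-landmark offset of the tracked face relative to the null pose."""
    return [tuple(c - r for c, r in zip(cur, rst))
            for rst, cur in zip(rest_pose, current_pose)]

def drive_rig(rig_rest, deltas):
    """Apply the tracked offsets to the corresponding driver points."""
    return [tuple(p + d for p, d in zip(pt, dl))
            for pt, dl in zip(rig_rest, deltas)]

rest    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # null-pose landmarks
tracked = [(0.0, 0.2, 0.0), (1.0, -0.1, 0.0)]  # e.g. brow up, jaw down
rig     = [(0.0, 2.0, 0.0), (1.0, 2.0, 0.0)]   # same point order as rest

print(drive_rig(rig, landmark_deltas(rest, tracked)))
```

This only works as long as topology and point order stay fixed, which is exactly why the baked FBX has to keep them the same.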
Whether the results you get are “usable” or not depends on the quality of YOUR rig: Do you have adequate automatic corrective morphs and a driver system for those? What about driving a hires mesh from a lowres mesh? That's all pretty standard when you are doing rigging/animation, but if you don't have any experience in that area, you might want to start by setting up your own pipeline and refining that to a point where you are comfortable with using external “low end” solutions (that you have to improve on anyway).
Not a REAL answer, but, again, your question isn't very specific either.
Marc
Houdini Indie and Apprentice » Houdini Consistent Crashes
To add to what Mr. McSpurren said: You won't have much fun with 2GB GPUs either. I am running an RTX 2080 with 8GB and can easily drive Houdini into a wall (because my jobs require me to work with today's resolutions, i.e. around 100 million polygons).
Consider upgrading your RAM, too. 16GB is OK to do some simple tests for sure, but if you are doing heavy calculations, remeshing etc., it can get tight (examples: Skype eats up 600MB here, Chrome easily touches 800-1000MB - that's just two applications. Windows itself is “hungry” like the next dog - 16GB for a 3d system just sounds very, very, VERY basic).
Again, for running simple tests it should be fine.
There are massive issues with FBX based geometry in Houdini (I filed several bug reports over the years, unfortunately, it isn't really all to blame on Houdini, most FBX libraries aren't “nice to your sister”), so wherever possible, convert geometry and rig and what you need to native data (using a file node to write stuff out) and get rid of the FBX nodes ASAP.
Marc
Edited by malbrecht - April 29, 2019 14:20:41
Technical Discussion » Parameter Expression to return value of a string attribute
Hi,
I have long given up trying to understand any “why” when it comes to Houdini “internals”. There seem to be at least three or four different philosophies at work (why anyone would want to have THREE completely different “languages” for scripting purposes inside one and the same application is beyond my horizon, for example).
My guess (and it's only a guess, really) is that for historical or philosophical or religious reasons strings are handled differently in Houdini's core compared to vectors (“vectors” in a programming sense of the word), so that *maybe* “point(…,0)” would return the first character in a string, where it's supposed to return the first string POINTER in a vector of strings (that could have a size of 1).
Therefore “the powers that be” introduced additional functions (point*s* etc.) to cater for strings, instead of doing an overload that handles strings and e.g. floats the same way.
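A mental model of that split (nothing to do with Houdini's real core - this is pure speculation in code form): the numeric path can hand back a value directly, while the string path has to resolve a reference into a string table, which is one plausible reason for separate functions instead of an overload:

```python
# Invented toy "core": floats stored directly, strings stored as table indices.

string_table = ["red", "green", "blue"]

points_f = {"width": [0.1, 0.2, 0.3]}   # numeric point attributes
points_s = {"color": [2, 0, 1]}         # string attrs stored as indices

def point(attr, idx):
    """Numeric lookup: returns the float value directly."""
    return points_f[attr][idx]

def points(attr, idx):
    """String lookup: resolves the stored index through the string table."""
    return string_table[points_s[attr][idx]]

print(point("width", 1))    # a float comes straight out
print(points("color", 0))   # a string needs the extra indirection
```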
Again, that's shooting blindly in the dark without knowing the weapon. If it *works* for you one way or another, roll with it :-)
Marc
Technical Discussion » Parameter Expression to return value of a string attribute
Moin,
I may be completely confused … why would you want to read the attribute named “r” (as in your example) if the attribute you want to read is named “myAttr”? Shouldn't “r” be “myAttr”?
That said - have you checked that the path is correct?
Have you tried using “details” instead? (http://www.sidefx.com/docs/houdini/expressions/details.html)
I hope any of this is of any help anyway (any overhanging any are due to Easter)
Marc
Technical Discussion » Prepurchase questions, animation/character related
I have worked with DAZ and similar “game-style” characters. It is possible to get them over into Houdini (you can even get the hires versions from DAZ, which are basically subdivided versions with additional deformations, probably from displacement information) and keep the rigs' weight maps intact (contrary to what is said on the DAZ forums, this is absolutely possible).
If you want to keep the DAZ rig, you have to put in some work though, since one uses joints, the other uses bones. You can set up a CHOP net or use a helper rig, but - as in most software packages - additional nodes bog down Houdini dramatically and make any “real-time animation” impossible.
Cloth: In most cases those look incredibly bad in the DAZ world anyway. The best you can do with it is to subdivide it and use one of Houdini's dynamic simulations to make the best of it.
Inter-penetration protection: Possible in Houdini, but not quite out of the box. This depends on how you set up your rigs and on your animation needs. One situation may work best with corrective morphs, another with a muscle setup and a third with some dynamic simulation. I don't see a silver bullet here, and I am not impressed by the “one size fits all” solutions other packages offer. The results usually are works-in-the-lab (and nowhere else).
Hair: You can use DAZ hair products as guide/generation helpers for Houdini's hair systems. Recent versions of Houdini have a set of game-developer tools that allow you to convert higher quality hair into texture mapped objects, which I would consider helpful for the animation step, as it is much faster to deal with.
This obviously can only reflect my subjective perspective. You may find that Houdini is a toolbox that MAKES you find your own point of view pretty quickly.
Marc
Technical Discussion » Creating Tears Welling Up effect in Houdini along with Tears rolling down the cheeks
Hmm … you see me slightly confused: If it is a production shoot, you surely have colleagues you can ask for modelling help quickly across the table. Intersections with geometry: Yeah, but so what? I mean, the tear is probably getting its own shader/render pipeline anyway, so what problem is intersection causing? You can easily hook the additional tear geometry up to the main mesh object so that transformations are recognized. If the eye area is deformed by a rig, simply add influences from that rig/the weights to the tear model etc.
The thing is: I am incapable of running your production shot for you, that is what you are getting paid for. In order to give you “general help”, I would need to better understand your problem. If the problem is “I cannot do that”, then MAYBE it would be a good idea to create a TEST SCENARIO in which you can try things out, learn how you can solve the problem in a “neutral environment” and apply what you experimentally learned to your production shot, which you get paid for.
But most likely I am just confused.
Marc
Technical Discussion » Creating Tears Welling Up effect in Houdini along with Tears rolling down the cheeks
The Vimeo link does not work …
I was able to quickly see the 3:49-3:51 snippet on Youtube before another ad kicked in and I immediately closed the window …
Depending on what “quality” you want (I am not a fan of that style of animation/look), this, to me, actually *does* look like a longish tear-bubble that has been modelled in its biggest shape and then a blend shape/morph has been created to push the geometry into the eyesocket, where it remains invisible until “dialed out”.
Marc
Technical Discussion » Creating Tears Welling Up effect in Houdini along with Tears rolling down the cheeks
I cannot watch the video because it gets replaced by overly stupid advertisements - can you upload it to a professional video service instead?
That said, could you please describe your (technical) problem in more detail: “Tears rolling down cheeks” as such sounds like a liquid simulation - is that the problem? “Welling up” could be a morph/blend shape - is that the problem? “Teary eyes” sounds like a shader problem or maybe a lighting thing - is that the problem?
Where exactly do you need help?
Marc
Houdini Indie and Apprentice » Conforming cloth
Moin,
depending on what you need to simulate, which part is animated, how complex your collision requirements are, whether you are doing a hero-character or background … yes, yes and yeah, definitely.
The simplest way usually is to do a quick pre-sim to get the cloth more or less snug to the character (e.g. switch off gravity and apply some constraints that pull points in), freeze the sim into your new rest-state and then use hard-constraints to tie anchor points to your animated mesh.
Your needs define how you would cater for simulations-over-animations (collisions etc). With low-poly game-characters like the DAZ figures you can easily get away with setting the whole figure as a collision mesh.
You might want to have a look at the “thousands” of demos/tutorials “how to create a flapping flag in Houdini” that show hard constraints in action.
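The pre-sim step above can be sketched in a few lines. This is a toy version in plain Python - gravity is off and every cloth point is relaxed a fraction of the way toward an anchor on the body each iteration. Purely illustrative, not Vellum's solver; all names and numbers are invented:

```python
# Constraint-style relaxation: pull cloth points toward their anchors.

def shrink_wrap(cloth, anchors, strength=0.5, steps=4):
    """Relax each cloth point toward its anchor point (gravity disabled)."""
    pts = [list(p) for p in cloth]
    for _ in range(steps):
        for p, a in zip(pts, anchors):
            for k in range(3):
                p[k] += strength * (a[k] - p[k])
    return [tuple(p) for p in pts]

cloth   = [(0.0, 2.0, 0.0)]   # a cloth point hovering away from the body
anchors = [(0.0, 1.0, 0.0)]   # its closest point on the character's skin
snug = shrink_wrap(cloth, anchors)
print(snug)  # y has relaxed most of the way from 2.0 toward 1.0
```

Freezing the relaxed positions corresponds to writing them out as the new rest state before the actual simulation starts.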
Marc
Work in Progress » Tree growth phyllotaxy - an experiment
Moin,
… you could say so. I did it in FabricEngine where everything was everything and you could do, well, literally everything :-)
Unfortunately, with the removal of FE from the market, I cannot provide any screenshots any more, but I can send you a link to an old YT video that I made in modo (my YT material is not public any longer) - it's not “that impressive”, though …
The light-searching was extremely simple, just a one-bounce-max attempt to see if a “node” could “see the light”, if yes, it got promoted (could move on), if not, it stayed.
I had a couple of ideas about how to make intertwining branches cooperate better but never found the time to dig deeper into it. Might be a fun project to try in Houdini, actually.
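The promotion rule described above ("can this node see the light?") can be sketched like this. Everything here is made up for illustration - a crude segment-vs-blocker visibility test instead of real ray tracing:

```python
# One-bounce-max "light searching": shadowed nodes stay, lit nodes grow.

def visible(node, light, blockers, radius=0.25):
    """Crude occlusion test: is any blocker near the node-to-light segment?"""
    for b in blockers:
        seg = [l - n for n, l in zip(node, light)]
        rel = [x - n for n, x in zip(node, b)]
        seg_len2 = sum(s * s for s in seg) or 1.0
        # parameter of the closest point on the segment to the blocker
        t = max(0.0, min(1.0, sum(r * s for r, s in zip(rel, seg)) / seg_len2))
        closest = [n + t * s for n, s in zip(node, seg)]
        if sum((c - x) ** 2 for c, x in zip(closest, b)) < radius ** 2:
            return False
    return True

def grow(nodes, light, blockers, step=0.2):
    """Promote (move toward the light) only the nodes that can see it."""
    out = []
    for n in nodes:
        if visible(n, light, blockers):
            d = [l - x for x, l in zip(n, light)]
            ln = sum(x * x for x in d) ** 0.5 or 1.0
            out.append(tuple(x + step * y / ln for x, y in zip(n, d)))
        else:
            out.append(n)  # stays put, shadowed
    return out

light = (0.0, 10.0, 0.0)
nodes = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
blockers = [(1.0, 1.0, 0.0)]  # shadows the second node
print(grow(nodes, light, blockers))
```

Keyframing the light position (or several lights) then art-directs where the plant grows, which is the effect described in the post.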
Marc
Edit/PS
Two screengrabs from the video mentioned, one showing two light sources I used to “art-direct” the growth of the vine, one showing a render.
bdav: “Sounds great, did you compute illumination in SOP context? Would love to see some screens!”
Edited by malbrecht - Feb. 18, 2019 03:22:51
Work in Progress » Tree growth phyllotaxy - an experiment
I love it! I am “so into CG-plant-growing” :-)
A few years back I created a plant-growing system that used energy sources (light) as guidance. By increasing or decreasing (via keyframes) the amount of light available from a certain direction (and by avoiding shadowy areas), you could direct the growth and development of the plant (with the advantage of growth not being too linear, but more “natural”/randomish).
Your experiment makes me want to go back to that project.
Time … oh … someone send me a bottle of time.
Did I say that I love what you did there?
Marc
Houdini for Realtime » Reality Capture Plugin - Open Beta
Moin,
I wonder if there is any progress on the plug-in. My current R&D project involves RC (again) and I stumbled over at least one critical bug in RC (the principal point for the lens undistortion coefficients seems to be calculated wrong) and one serious issue (undistorted images are exported with arbitrary crops, whereas other exports link original camera files, so there is no reproducible pixel-to-pixel relation between original images and undistorted exports, which makes any reprojection impossible).
Most of these issues could be worked around more easily if I could access RC parameters through Houdini (since I am trying to use Houdini for the R&D part). Also, the ground-point and reference-point setting mentioned before has proven to be most significant for any cooperation with “the outside world”.
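To illustrate why a wrongly calculated principal point is critical: radial undistortion is computed around the principal point, so an offset there shifts every remapped pixel. A minimal Brown-style sketch - the coefficients and coordinates are invented numbers, not RC's actual model:

```python
# Radial distortion around a principal point c: x' = c + (x - c) * (1 + k1*r^2 + k2*r^4)

def distort(pt, principal, k1=0.1, k2=0.01):
    """Apply a simple two-coefficient radial distortion model."""
    dx, dy = pt[0] - principal[0], pt[1] - principal[1]
    r2 = dx * dx + dy * dy
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return (principal[0] + dx * f, principal[1] + dy * f)

p = (0.5, 0.0)
good = distort(p, principal=(0.0, 0.0))
bad  = distort(p, principal=(0.1, 0.0))   # principal point off by 0.1
print(good, bad)  # same input point, different result
```

The same mismatch is what breaks the pixel-to-pixel relation between original images and the arbitrarily cropped undistorted exports.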
Marc
Houdini Learning Materials » Cloth basics on animated character with vellum
Moin,
have a look at this great video from Jeff:
https://vimeo.com/299987046
He shows how to blend from T-Pose into a walk-cycle, I guess that covers everything you need.
Marc
Technical Discussion » Not working geo2_deform
Hi,
I just found this discussion - could have spared me some time if I had seen it before :-)
Yes, the problem is that the guide-groom-node in your setup is “locking” the grooming to one single frame. H's documentation even mentions that the guide-groom-node expects static geometry. If you want the grooming to follow your animation, you need to groom on the rest geometry and then have a deform-node make the output follow the skin's (the animated geometry's) deformation.
I find that more than clumsy to set up, so I didn't dive deeper into your scene - the solution you show in the video is kind of the same I came up with (bypassing the lock-down-groom node).
Marc
Work in Progress » 3 point align of 2 similar meshes (like in Maya)
Hi, Olivier,
it's been ages that I used Photoscan (not because it's bad, but simply because of dev requirements) - so I cannot tell for sure if Photoscan has a world-lock-function like RealityCapture.
The idea is this: When you do photogrammetry, the points in space you get have no “origin”, even if you have a metric cube in your scanned scene, you first have to tell your pipeline, which points (on the cube) belong to the reference system. In RC this is what ground points are for: By defining points (2d on images, relating to 3d in the point cloud) to sit at given coordinates in your world space, the model you get “simply” gets rotated/scaled/transformed to that universe. Actually, it doesn't get transformed, it's just that the point coordinates are matched up.
In your case - with a rotating object - you'd need to adjust the reference (world space) accordingly, meaning you'd need to rotate the rock in a “known way”, so that your ground points rotate along. I'm certain that Photoscan has something like this “hidden somewhere”.
Since this *should* be part of any photogrammetry pipeline that works with “interlocking” meshes (aligning from one scan to another or, bob beware, even combining Lidar with Ph*metry), aligning meshes - in theory - “is part of the process”.
(I say “in theory”, because in reality ph*metry comes with its own pitfalls in terms of large-model-precision (floating point issues, from my perspective, being the most likely reason).)
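The math behind ground points can be sketched quite compactly: three known point correspondences pin down a similarity transform (uniform scale + rotation + translation), and the whole point cloud follows. This is plain Python with invented names - real photogrammetry tools solve this far more robustly (least squares over many points), but the geometric idea is the same:

```python
# 3-point align: map points so three src reference points land on three dst ones.

def _sub(u, v): return [a - b for a, b in zip(u, v)]

def _cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def _norm(u):
    l = sum(a * a for a in u) ** 0.5
    return [a / l for a in u]

def frame(a, b, c):
    """Orthonormal frame at a: x toward b, z normal to the plane a-b-c."""
    x = _norm(_sub(b, a))
    z = _norm(_cross(x, _sub(c, a)))
    y = _cross(z, x)
    return x, y, z

def align(src_tri, dst_tri, pts):
    """Express pts in the src frame, re-emit them in the dst frame."""
    sx, sy, sz = frame(*src_tri)
    dx, dy, dz = frame(*dst_tri)
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    s = dist(dst_tri[1], dst_tri[0]) / dist(src_tri[1], src_tri[0])  # uniform scale
    out = []
    for p in pts:
        rel = _sub(p, src_tri[0])
        u, v, w = (sum(r * e for r, e in zip(rel, axis)) for axis in (sx, sy, sz))
        out.append(tuple(dst_tri[0][k] + s * (u*dx[k] + v*dy[k] + w*dz[k])
                         for k in range(3)))
    return out

src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # markers, scan A
dst = [(2.0, 0.0, 0.0), (3.0, 0.0, 0.0), (2.0, 1.0, 0.0)]  # same markers, scan B
print(align(src, dst, [(0.5, 0.5, 0.0)]))  # lands at (2.5, 0.5, 0.0)
```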
Marc
Work in Progress » 3 point align of 2 similar meshes (like in Maya)
Hi,
haven't looked at the HIP (but will do ASAP), but:
> The result is 2 meshes of different scale/rotation/translation/topology.
… if you are using a tool like RealityCapture, this should not happen if your workflow is set up correctly. Groundpoints are used to make sure that your (arbitrary) local space gets mapped to a common global space correctly, so that (small) objects align pixel-perfect.
With large scale objects (landscapes of >400m with fine detail) the nature of photogrammetry may still introduce issues (lens calibration errors, floating point limitations etc), but everything that fits inside your room should be fine.
Marc
Houdini Indie and Apprentice » for(int i = 0; i < points; i++) explanation?
- malbrecht
Moin,
BabaJ's explanation is complete, but maybe a different wording helps, too …
“for” is one way of creating a loop. A loop is a block of program code that gets executed over and over again - usually, with start conditions being set, end conditions being checked and something being done after each iteration of the loop has ended.
In the case of “for”, these three components of a loop are defined as:
for ( Setup ; End-Condition-Check ; What-To-Do-After-Each-Run )
Setup: You do not NEED to set up anything if you can check for something having happened inside the loop. However, in most cases you want a loop to run a defined number of times, so you need a “timer”. The variable “i” is that timer. A timer has to start somewhere: you can either set it to your maximum number of runs/iterations (the number stored in the variable “points”) and then subtract 1 from your timer after each run of the loop, OR you start at a given value (0 or 1, usually) and increase the timer after each iteration.
Saying “i=0” sets the timer to 0. It's 0, because in the computer world things start at 0, not at 1.
That's all the for-loop sets up in your case. You get a local (temporary) variable (your timer i) and after each run that timer is increased by 1 (the short form “i++” being lazy-programmers-make-it-look-like-magic speak) and compared against your end-definition.
End-Condition-Check: Before an iteration is run, this condition is checked. Saying “i &lt; points” is TRUE as long as your timer “i” is smaller than the number in “points”, and FALSE once it is equal to (or larger than) “points”. The loop stops once the condition check returns FALSE; as long as it returns TRUE, the loop continues.
You could have a statement like “1==1” in that second part of the loop-definition. That comparison would always be true (1 always equals 1). So the loop would run indefinitely. You could BREAK it from inside the loop once some other condition is met.
You could have a statement like “1==0” in that second part of the loop-definition. That comparison would always be false (1 is never equal to 0), so the loop would NOT RUN, because BEFORE the iteration starts, this condition is checked and, being false, the loop would not be allowed to do anything.
What-To-Do-After-Each-Run: Once an iteration is done, the program code in this block is executed and the next iteration is started (if the end-condition check still returns TRUE). Saying “i++” is the same as saying “i = i + 1”. But since programmers are lazy AND want non-programmers to think that they “know things” and “can do magic”, they came up with a lot of crazy ways of typing things that no sane person would understand.
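Put together, the three parts look like this in C-style syntax (the same shape VEX borrows); the helper names are my own, purely for illustration:

```c
/* Counts how many times the loop body runs, mirroring the loop from
   the question: for (int i = 0; i < points; i++) { ... } */
int count_iterations(int points) {
    int runs = 0;
    /* Setup: i = 0    Check: i < points    After each run: i++ */
    for (int i = 0; i < points; i++) {
        runs = runs + 1; /* the body executes once per iteration */
    }
    return runs; /* for points = 5, i took the values 0, 1, 2, 3, 4 */
}

/* The same loop spelled out as a while-loop, so the three parts
   of the for(...) header become separate lines. */
int count_iterations_while(int points) {
    int runs = 0;
    int i = 0;             /* Setup */
    while (i < points) {   /* End-Condition-Check */
        runs = runs + 1;   /* loop body */
        i = i + 1;         /* What-To-Do-After-Each-Run, i.e. i++ */
    }
    return runs;
}
```

Note that for points = 0 the body never runs at all, exactly like the “1==0” case described above: the check fails before the first iteration.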
It's like learning how to understand women. It does not have to make sense, it's about accepting the facts of life and that some things are just … weird.
I digress.
I hope this, even though it is basically the same that BabaJ said, is of some help. If you like this style of explain-o-matic, check out my book about “how to become a programmer” [www.sidefx.com]. It's not about VEX, but does say a lot of things about loops, variables, women and life.
Marc