Ah, “render in camera space” sounds like they first apply the camera matrix to the world-space construction before projecting textures - that would get around the massive floating-point issues you would otherwise have at such huge scales (assuming you are using meters / SI units in the scene).
It *might* help to use a scene-dependent unit like “diameter of the sun” instead of something like meter or kilometer. However, I fear that with the differences in sizes and distances in a solar system, ANY “fixed” system will run into limitations if you want to apply really fine detailed textures - I haven't looked at your source data, but I assume you are using meter- or kilometer-sized pixels there and are trying to project those onto immense areas. That's always going to be a problem - don't even think about SIMULATIONS, which would need specifically tailored solvers for such setups :-D
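To illustrate the scale problem, here is a minimal sketch (using NumPy single precision as a stand-in for the 32-bit floats a renderer typically uses internally - an assumption, your renderer may differ) of how a one-meter detail simply vanishes at planetary distances expressed in meters:

```python
import numpy as np

one_au = np.float32(1.496e11)   # roughly the Earth-Sun distance in meters
detail = np.float32(1.0)        # a one-meter texture detail

print(one_au + detail == one_au)   # True - the meter is swallowed entirely
print(np.spacing(one_au))          # ~16384.0, the smallest representable step at this magnitude
```

Rendering in camera space sidesteps this because the interesting numbers stay small relative to the camera.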
Marc
Technical Discussion » Improving 8K textures close-up (Redshift)
Hi,
the “broken” images don't look like a problem that a filter could fix; instead, they look like a stripe-offset error in reading/displaying the actual data. It might be coming from your UVs (possibly a floating-point issue?) or it could be a bug in RS.
Have you tried narrowing down whether the error also appears on a simplified model (like a plain grid)? If it does, have you tried raising the issue in the Redshift community?
Marc
3rd Party » WIP: FBX HD & SD importer, Joints to Bones Converter and Morph-Helper (was: DAZ to Houdini converter)
Hi, “Snowflake71”,
I am not quite sure that I fully understand what you are saying - it's not exactly my “job” to post to social media sites or pay for Google AdWords to make Houdini users see a Houdini extension being tested, is it :-)
This forum here is the core Houdini user meeting place - where else would I post about a Houdini project? Every minute I waste on some third-party website is a minute lost for the project. ONE forum: fine. Wasting time posting on the interwebs: not fine.
To clarify: I have shut down the DAZ bridge (my idea was to actually convert the DUF files, without the limitations that the current DAZ bridge imposes) because of ZERO INTEREST (I have made it pretty clear what I consider “interest”; anonymous postings here in the thread are NOT what I count as “interested users”).
I have NOT ended work on the FBX helpers. That project is very much alive, but is concentrating on usability in future Houdini versions right now, for reasons that will become obvious soon enough.
> 3 feedback from 30 is actually a pretty cool turnout.
Well, you're free to have a different perspective on that, of course :-)
Marc
Houdini Indie and Apprentice » Bas Relief from photo
The problem with creating bumps or displacements from COLOUR ONLY is that you most often cannot be sure that ONLY those areas that you want to “emboss” use the colour you define for “extend outwards”. The same colour may be used all over the place, either creating unmanageable noise or offsetting the displacement height massively so that manual clean-up after the fact takes longer than doing it by hand from the beginning.
Since I do this kind of R&D for a living, I cannot provide an out-of-the-box solution. But think along these lines: Can you separate your colours better in non-RGB spaces so that the “embossed” colours are more “unique-ish”? Can you apply a high-pass (or maybe a low-pass) filter to mask out areas where you do NOT want embossing to appear (hint: this is a key process in getting good results)? Can you deduce light directions from your image data (hint: this is the GOLD STANDARD for physically plausible results)? Can you use GRADIENTS instead of colour?
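As a rough illustration of the high-pass masking idea, here is a minimal sketch (sigma and threshold are made-up illustration values, and the single-channel luminance input is an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_emboss(luminance, sigma=8.0, threshold=0.02):
    """Keep only fine detail: subtract a blurred copy, then mask out weak responses."""
    lowpass = gaussian_filter(luminance, sigma=sigma)
    detail = luminance - lowpass              # high-pass component
    mask = np.abs(detail) > threshold         # areas allowed to "emboss"
    return detail * mask

# 'luminance' would be a float image in [0, 1], e.g. 0.2126*R + 0.7152*G + 0.0722*B
```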
If you are looking for a “cheap” solution, have a look at GitHub or do a search run on the interwebs; there are dozens of tools that claim to do photo-to-bump-map conversions. Their results CAN BE good enough (especially with “lab condition” photos), and depending on your output needs it may be quite straightforward to reproduce what those tools do. However, as long as you ONLY have a colour photo (maybe even with destroyed detail, read: a JPEG), you are limited to “faking it”. OR you apply a machine-learning computer-vision step, which might give you “better” results.
Marc
Technical Discussion » Isn't it time for Houdini to deal with Importing better?
Joining in on the discussion part of this - while I do agree that workarounds are possible for most import issues in Houdini, the statement that Houdini needs better imports all over the place is definitely true.
I am not quite sure what snapag is saying, but it might be something I have been struggling to solve in my FBX-import-enhancer tool: Houdini tends to rename imported material tags in a way that isn't always predictable. For one, Houdini doesn't support blanks in tags. If it were ONLY that, it would be easy enough to assume that “_” is a blank. However, there are material tags that have both blanks and underscores - and the screw-up only starts there.
If Houdini cannot or does not want to deal with a broader range of characters (UTF-8 would be a minimum in my world), fine, that's something I can accept. But PLEASE make it USER-definable; don't always pretend that Houdini “knows better”. Let the user set up a replacement matrix instead of brute-forcing underscores onto everything.
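Something along these lines is what I mean by a replacement matrix - purely a hypothetical sketch, not how Houdini currently handles tags (names and mappings are made up):

```python
# user-defined replacement table instead of a hard-coded "everything becomes an underscore"
REPLACEMENTS = {
    " ": "_",     # blanks
    "-": "_",
    "ä": "ae",    # a broader character range than plain ASCII
}

def sanitize_tag(tag, table=REPLACEMENTS):
    """Apply the user's replacement table; leave every other character untouched."""
    return "".join(table.get(ch, ch) for ch in tag)

print(sanitize_tag("Skin Material-älter"))   # -> "Skin_Material_aelter"
```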
Also, if there is a mix of material tags stemming from different geometry “layers” in the FBX, don't start a COUNTER for those tags. That's just unnecessarily crude if a script tries to make sense of the output.
Don't get me wrong, I will continue filing RFEs every once in a while, despite the fact that I can neither check existing RFEs nor even see my own (which makes me file RFEs about 0.00001% of the time I would like to). But I welcome an exchange of perspectives BEFORE I do so, since some RFEs I have seen before they got filed (and thus vanished from my view) are kind of contrary to what I consider a good idea and could have been improved by a short discussion.
Marc
Technical Discussion » What is the best choice for soft bodies ? fem vs vellum
Like @toadstorm said, there is no “accuracy”. Take, as an example, wool: the way the “fabric” you see is made up of strands of individual threads that support and inhibit each other's movement would be insane to “simulate” (and, in reality, it LOOKS like the fabric behaves as a more or less coherent mass anyway), so “accuracy” isn't really what you are after. What you want is “close enough to reality” - or, if you are working in the movie industry, “close enough to what the director wants to see” (cynical remark deleted).
Grain solvers (and, in an over-simplified view, Vellum is to some extent one of them) are fast and “look good enough” in many cases, especially for simulating cloth-like material, whereas FEM can sometimes be more “flexible” (pun intended) for tricky situations where setting up Vellum constraints or controlling areas of influence would mean investing much more time into fiddling with the scene than just simulating it away in a slower solver.
It's been some time since I have done production-ready “soft body” stuff, but in my experience FEM sometimes took me closer to what I wanted than Vellum. Obviously, that may have been because of old habits that I didn't want to break just for getting a job done.
In my experience, more solvers (that behave differently, providing predictably different visual outcome) are better. Especially when their inner workings are as distinguishable as Vellum and FEM.
Marc
Technical Discussion » ACES/OCIO
Hi,
> when working with exact color values?
… “exact color values” and “ACES” don't fit into one context in my world :-) (tongue in cheek). ACES, like all other attempts to agree on something, is a “forced definition”, not something exact. Besides, due to some obscure “640kB will be enough for everyone” philosophy, ACES uses 16-bit floating point, limiting the (necessary) headroom for floating-point rounding errors.
That ranted, a well-defined color workflow depends on every device being set up properly: from your monitor's calibration and a matching profile for your display pipeline (your operating system may apply its own “correction”, and so may your graphics card driver), through your tool (Houdini), to your input data (do you need a “de-gamma” applied to get into anything linear?). Without knowing your precise workflow it's hard to tell why exactly you are seeing different colors when flipping ONE switch in the whole process.
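For reference, the “de-gamma” step for an sRGB-encoded texture is just the standard piecewise transfer function - a minimal sketch, assuming your input really carries the sRGB curve and not a plain 2.2 power curve:

```python
def srgb_to_linear(c):
    """Decode one sRGB-encoded channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.5))   # ~0.214 - mid grey decodes to about 21% linear reflectance
```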
Marc
3rd Party » WIP: FBX HD & SD importer, Joints to Bones Converter and Morph-Helper (was: DAZ to Houdini converter)
Well, I guess that's settled then.
ZERO interest in a “REAL DAZ Bridge to Houdini” without the limitations that the DAZ bridges for other tools currently show.
Interesting, but that's freeing up time for other projects :-)
Marc
Technical Discussion » Questions on Mantra materials and megascans
Hi, Guy,
if I may join in: Regarding your comment about “having to tweak especially when scaling the model” - jsmack answered that in the previous comment:
A displacement “map” (texture file) is usually treated as a scaling FACTOR. A JPEG won't work well here, since it's only 8-bit and would therefore only give you 256 different “distance” values. Even EXR, when used with half precision, will often give you “jaggies” due to the lack of intermediate values. Anyway, the value read from the displacement “map” will usually be something between 0 and 1 (greyscale), which is “mapped” to -1 to +1 (halving its precision in the conversion). If you apply this “-1 to +1” directly to a displacement (without any scale), you end up with displaced geometry that can be offset by at most 1 unit.
Now, if you SCALE your model, this displacement does not get scaled with the model (if you haven't set the displacement up to respect model scale). If your model was, say, one meter in size and your displacement “correctly” had a maximum offset of 1 meter, scaling the model to 10m in size will, obviously, make that displacement way too small.
That's where the read-out value from a displacement “map” will usually be treated as a factor: It's to be multiplied by your (model dependent) “size factor” (or “max value”, depending on your naming scheme). 0.5 times your model scale will give you a “consistent” displacement outcome.
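Put as a tiny sketch (the [0,1] → [-1,+1] remap and the model-dependent scale factor are the convention described above; the function and parameter names are just for illustration):

```python
def displace(P, N, map_value, displacement_scale):
    """Offset point P along normal N by the remapped map value times the scale factor."""
    offset = (map_value * 2.0 - 1.0) * displacement_scale
    return [p + n * offset for p, n in zip(P, N)]

# scaling the model by 10 and the displacement scale by 10 keeps the result proportional
print(displace([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.75, 1.0))    # [0.0, 0.5, 0.0]
print(displace([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.75, 10.0))   # [0.0, 5.0, 0.0]
```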
If you set up your pipeline accordingly, any scaling of the model will be respected by the displacement and it will always yield physically correct results. Which, unfortunately, may not be what you need for the job assigned, since LOOKS often don't go along with physical correctness :-)
As for gamma: A proper colour pipeline setup will define whether you have to correct a non-color-corrected input “map” (diffuse, albedo etc.) or not. Applying a 0.4545 gamma curve to a texture means that the texture is expected to have a 2.2 gamma “baked in” (which shouldn't be the case in a “professional” setup). “Linear” textures are textures that do NOT have a gamma curve baked in but can be represented as (again) color values (per channel) from 0 to 1. Those linear colors do not have any specific “color space” assigned - the color space is a conversion that creates “human readable” or “understandable” colour output from purely mathematical linear colour values (hinting at a brute-force gamma curve of 0.4545 being most often insufficient, since red/green/blue most likely need different curves applied for human-friendly colour spaces).
In short: It makes a lot of sense to take an hour and read about colour spaces, linear colour space and the different options of colour correction models. Even if you don't need this knowledge “constantly” in your job, having a basic understanding of what colour actually IS will help you tremendously with setting up render pipelines.
Marc
Houdini Indie and Apprentice » Importing and using skeletal animations
Well, what I said above - alternatively, with H18 you could have a look at the Crowd Animation system in Houdini.
Marc
3rd Party » WIP: FBX HD & SD importer, Joints to Bones Converter and Morph-Helper (was: DAZ to Houdini converter)
Here's some news from the farm:
The next iteration of the tool (again another rewrite from scratch that aims for things to come) solves a bunch of problems with various FBX sources/issues. It will “recreate” high definition geometry instead of using parallel imported OBJ (it still allows for OBJ imports to DEFINE the HD version, but also enables users to create high definition layers like clothes for better simulation results), it will feature a material creation system (with the ability to use different render engines) and it will, again, improve on animation performance.
That said, I have also looked deeper into DUF files. While this tool is NOT intended to be a DAZ-only-thing, DAZ is doing some nasty shortcuts when creating their FBX exports that are the cause of more than 50% of the problems I have solved by now.
I am now able to write a DUF-importer system for Houdini, which would be able to
- automatically create material setups with texture files and value settings (not limited to diffuse channels)
- sail around most of the FBX problems (including some random UV-error-deformation slips)
- work with static and deformed meshes, meaning it would work with characters, animals, environments, buildings etc. (as opposed to the infamous “DAZ bridge tools” that are currently hardcoded to work mainly with Genesis3 and Genesis8 figures, nothing else)
- use the OBJ HD workaround and recreate geometry OR, if so desired, create displacement maps tailored for individual polygon groups (to get around the deformation issues a simple bake would create)
The reason for writing about this in public instead of sending a mail to the over 30 people who “test” the tool is that, out of those over 30 people, the overwhelming number of 3 have actually given any feedback, 1 reporting problems that I was, partially, able to solve. I would not need to write a special “DAZ DUF TO HOUDINI” system for myself and ONE other user. Creating such a tool and providing support for it, extending its features over time and adjusting to ongoing development in Houdini would require a significant amount of time, so I would have to SELL such a tool (I haven't found a sales channel yet, but haven't really searched either).
INTERESTINGLY it would, technically speaking, be possible to PORT BACK changes made to DUF-based geometry in Houdini into a new DUF. Meaning, it would, technically speaking, be possible to DEFORM, say, a building in Houdini and create a new version of the base asset for DAZ Studio (writing back to a new DUF). This includes changes to materials (imagine using a paint system in Houdini to add dirt etc. onto DAZ characters), sculpting etc.
The same caveat as above applies: Developing this only makes sense if development time “is worth the investment” - I am way too old to “just do it and see how many people are buying into it”. I've only got so many hours left of my lifetime :-)
TLDR: Please provide feedback about whether you would be willing to pay real money for a Houdini-DAZ-connectivity system that, at minimum, supports the features listed above and would aim for the back-exporting features mentioned above.
Thank you.
Marc Albrecht
Houdini Indie and Apprentice » Importing and using skeletal animations
Hi, anonymous user,
you are not giving any context about what you mean by “importing a skeleton” - this could be importing a Houdini scene file (in which case you would already have all the answers you are looking for), an FBX or anything else. Depending on WHAT you mean by “importing”, these thoughts might be helpful:
1) you could write a script that copies/links your deformed mesh's skeleton's bones to your imported bones. If you have a naming match, that script would be pretty straightforward (see the sketch after this list).
2) without an example I find this hard to understand. If you import a scene you created yourself, you are the only one who can tell what “empty transforms” you have placed where. If, however, you are importing an FBX, what you are describing is the “joints-based rig versus bones-based rig” issue that I have discussed in detail in my thread about “converting joints based rigs to bones based rigs” in the “third party” part of this forum. Basically, what you want to do is recreate the rig based on the joints' (nulls') weights but using Houdini bones. Or you implement a workflow that uses joints INSTEAD of bones.
3) I am not clear about what you mean by “sequence”. Blending animations would be simple if you followed the idea outlined in 1): with your deform mesh bound to the animated skeleton, you could use a global float value that specifies how much each animated skeleton contributes to the “final output” on your deformed mesh.
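For idea 1), here is a hedged sketch of what such a name-matching script could look like - the node paths /obj/import_rig and /obj/deform_rig are made-up examples, and it assumes both rigs consist of Houdini bone objects with matching names:

```python
import hou

src = hou.node("/obj/import_rig")   # the imported, animated skeleton
dst = hou.node("/obj/deform_rig")   # the skeleton your mesh is captured to
src_bones = {n.name(): n for n in src.children() if n.type().name() == "bone"}

for bone in dst.children():
    if bone.type().name() != "bone":
        continue
    match = src_bones.get(bone.name())
    if match is None:
        continue
    for channel in ("rx", "ry", "rz"):
        # channel-reference the rotation of the identically named imported bone
        bone.parm(channel).setExpression('ch("%s/%s")' % (match.path(), channel))
```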
Marc
Houdini Lounge » MIDI Timing/Tempo Accuracy
Hi,
I am starting an extremely busy week tomorrow, so I won't have time to dive into your sample file any time soon - but I do remember having read about export issues with Ableton before. It *might* just be that Ableton isn't creating the MIDI file “correctly”. Have you tried creating a simple test “beat” in another DAW, just for comparison?
Otherwise what I wrote above may still be the correct approach: Resampling (using CHOPs) the MIDI file could solve the issue.
Marc
Houdini Lounge » MIDI Timing/Tempo Accuracy
Hi,
without a sample scene and sample data it's hard to tell what you are doing and what might be a problem.
Zeroth, MIDI doesn't contain sound, only control data. There is no “direct” way to “sync” MIDI “audio” to movie/picture data; you have to use the MIDI trigger data (Note On) to “start” something that creates sound.
First, MIDI isn't a “set in stone” synced system; it usually gets its heartbeat from a sync device, which would have to be implemented in Houdini for proper syncing. On top of that, (classic) MIDI hardware runs at ~31 kbps, which, depending on your “note resolution”, may not be enough for precise syncing. Speeding this up or down is feasible.
Second, if you are using a built-in MIDI-to-Audio conversion, lags from “Note On” events to actual sound can well go up to 300ms depending on how you create the sound (“rendering” MIDI makes more sense in this context).
And that's only from the outside … depending on how Houdini implements MIDI sync, there may be more issues at hand. My bet (I haven't read the source code to Houdini's MIDI implementation) is that you'd get “better” syncing by either outputting a new MIDI file that is synced to your movie data/SMPTE OR by implementing a sync clock for your MIDI playback inside Houdini.
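If you go the “resample/retime it yourself” route, the arithmetic for mapping raw MIDI ticks to frames is simple - a minimal sketch, assuming a fixed tempo (PPQ and BPM would come from the MIDI file's header and tempo event):

```python
def tick_to_frame(tick, ppq=480, bpm=120.0, fps=24.0):
    """Convert an absolute MIDI tick to a frame number at a fixed tempo."""
    seconds = tick / float(ppq) * (60.0 / bpm)   # ticks -> beats -> seconds
    return seconds * fps

print(tick_to_frame(480))   # one quarter note at 120 BPM = 0.5 s = frame 12.0
```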
Marc
Houdini Lounge » Running Clarisse, Houdini and Nuke simultaneously
My remark above may seem sarcastic (because I often am), but it holds my true opinion: Of course, you can RUN those programs simultaneously, but without any definition of “incredible scenes”, without ANY data to base assistance on, even the best support can only say “erm”.
From experience, I can say that Clarisse can eat up A LOT OF RAM and then some. So can Houdini. My personal “experimental” system only has 64 GB, which is maxed out almost all the time running ONLY Houdini (and maybe Visual Studio). 128 GB of RAM does not seem like a lot for simulation, and it surely isn't a lot for “Clarisse-typical scenes”, read: LOTS of points.
In short, if I were asked that question without ANY proper data (the kind I asked about above), I'd say you are talking about “starting machines” for entry-level work in those programs (Nuke may be less hungry in “normal” usage, though). Simultaneous work? Usually not, except for setting up some more or less simple tasks.
If you don't have any experience (and it sounds like that, apologies) with the software you are asking about, I recommend talking to some studios that do jobs similar to those you want to do and asking them about the typical requirements on their hardware. This may sound like a completely off-the-board question in this industry (while it is pretty normal in other industries to talk to people), but, again, speaking from experience: asking politely, talking to the right people and not being a German Jerk actually will get you answers.
(Then there's the fact that almost no software is bug-free. Running different BIG TOOLS simultaneously feels like asking for trouble to me, but that's a different topic, that is NOT hardware related.)
Marc Albrecht
Houdini Lounge » Running Clarisse, Houdini and Nuke simultaneously
Please provide the exact details about your task (RAM usage listed by application, processor - both CPU and GPU - drain listed by application, IO bandwidth usage listed by application) and your available hardware (especially IO throughput capabilities).
Please provide a scientific definition of “incredible environment” (I am most interested in a reliable explanation of “incredible”).
Otherwise it doesn't make sense to ask for a crystal-ball readout - except if you want to read “yes, you can, but it won't be much fun” (you didn't ask whether it's FEASIBLE, you only asked if you CAN).
An answer to your question that matches the quality of your current problem description would be: Of course you can, as long as your ogloophoom-conversion is tightly encrypted in reverse contradicting selfprovisions of artificial RAM usage predictions, your core processing unit has a vapor-inverter that reduces steam induction on catalized memory ions and your user is politically incorrect.
Marc Albrecht
Technical Discussion » Take a screenshot of the Network Editor
Hi, Julien,
if you are on Windows, you should be able to find the screen coordinates of client windows using the win32gui module. It's been some time since I used Python for C++ jobs (sorry, couldn't resist), but it should work somewhat along this line of thought:
- get your window handle (probably Houdini's main window handle, you can look that up from the project name and Houdini version) using win32gui.FindWindow(…)
- get the main window's screen estate from win32gui.GetWindowRect(handle)
- get the client windows (all “sub-windows” in Houdini's UI) and iterate over them until you find the network editor
- inside the window screen estate, get relative/offset positions for the client window using “ClientToScreen”, I think it's win32gui.ClientToScreen(clienthandle, (x, y))
If that's too academic, I'll have a peek at some old code and try to come up with a rudimentary “listing”!
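In the meantime, here is a hedged sketch along those lines. It assumes the main window title contains “Houdini” and that the network editor pane exposes a recognizable caption, which may not hold for Qt panes (you may need to match class names instead); also note that GetWindowRect already returns screen coordinates, so ClientToScreen is only needed if you start from client-relative positions:

```python
import win32gui

def find_houdini_window():
    """Return the handle of the first top-level window whose title mentions Houdini."""
    hits = []
    def on_top_level(hwnd, _):
        if "Houdini" in win32gui.GetWindowText(hwnd):
            hits.append(hwnd)
        return True                      # keep enumerating
    win32gui.EnumWindows(on_top_level, None)
    return hits[0] if hits else None

def list_child_windows(parent_hwnd):
    """Collect (caption, screen rect) for every child window of the given window."""
    children = []
    def on_child(hwnd, _):
        children.append((win32gui.GetWindowText(hwnd), win32gui.GetWindowRect(hwnd)))
        return True
    win32gui.EnumChildWindows(parent_hwnd, on_child, None)
    return children

main = find_houdini_window()
if main:
    print("main window:", win32gui.GetWindowRect(main))
    for caption, rect in list_child_windows(main):
        print(caption, rect)             # look for the pane hosting the network editor here
```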
Marc
EDIT: Sorry, this forum's editor screwed up my text, deleting and rearranging what I typed …
Houdini Indie and Apprentice » Applying FBX scale (pre) transforms. Like in Blender.
Hi,
resizing influence zones is only necessary if you don't have weights stored in the points, since without them you wouldn't capture any geometry.
If you import DAZ models (I have done months of R&D on that topic, see this thread [www.sidefx.com]), the weights are niftily stored in the points by the FBX importer, so rescaling actually is a piece of cake.
Marc
Houdini Indie and Apprentice » Applying FBX scale (pre) transforms. Like in Blender.
> There is no such thing as a natural scale, it just has to be known.
If I wanted to be pedantic (which I am but don't want to be), I'd say: “Natural scale 1 means 1m everywhere except, maybe, in the US”. But that's semantics, I guess.
> Set the scale on the simulations to the scale of the world.
Can do, sure. But why hassle with additional settings if you can have things “right” to begin with? I am not saying you're wrong, I'm just saying that sometimes a personal workflow is simply what works best for the person using it. For me that's “natural scale”, as in “1 = 1m”, period. :-)
> There is no more or less ‘eye-balling’ with one scale over another, when the scale is known.
I'd love a world where this is so. Most of the places I have dealt with in “the industry” are eyeballing most things most of the time. Because, at the end of the day, it has to look good “on screen”. ESPECIALLY with scaling. I have worked with movie people who insisted that “changing character scales from shot to shot is totally acceptable if it suits the visual narration” - if that ain't eyeballing, I don't know what is.
> Unfortunately scaling rigs in houdini is virtually impossible without causing major pain.
I disagree to some extent. I fully subscribe to your “there's no easy way to rescale the rig”, for sure, but “virtually impossible” is a different ballpark. Yes, you have to rescale influence zones as well as transforms/positions, but “virtually impossible” would only refer to “without scripting it”.
Marc