Thanks for the explanation, Paul. I'm still having issues with height maps, as I might not be understanding the logic correctly. I've attached a simple scene that attempts to generate a height map on a grid from a scanned wall model. In normals mode it works for some parts, but the rest of the texture is just a gradient. I hope this example can illustrate the issue I'm having.
https://drive.google.com/file/d/1G6IQLtovkvn03kSgupmX64GQUFZWS_VV/view?usp=sharing
Houdini for Realtime » Map Baker Displacement texture workflow
Technical Discussion » Displacement and Vector Displacement from Labs map baker
Hello,
I'm still trying to figure out how to use the map baker to produce a plain displacement map like the one from the Mantra baker. Any thoughts? The Height output seems to produce a gradient representing the general height of one surface over another, but there are never any displacement details.
Houdini for Realtime » Map Baker Displacement texture workflow
Hello,
I'm trying to generate a simple displacement map for a low-to-high-poly model pair (experimenting with planes at the moment). The map baker's “height” output doesn't seem to work in nearest distance mode, and not properly in normal trace mode either; not sure what's going on there. I did, however, manage to bake a vector displacement map by using the attribute mapping feature to bake the high poly mesh's world position into the low poly mesh's UV space (using P as the attribute). I did the same thing for the low poly mesh's world position by wiring the low poly mesh into both inputs, then subtracted one from the other to generate a vector displacement map. That seems to work, but details are not captured accurately and they seem to be slightly offset from the original model. My questions are:
1) Why is the height map producing empty results? In nearest distance mode I keep getting a gradient, which seems to indicate the average height of the low poly mesh over the high poly one, but there are no high-resolution details in the map. It doesn't matter whether the high poly mesh has normals or even UVs; I'm always getting this weird gradient result. In normal mode I can't use it, and I understand why it doesn't work, as both models basically overlap (just like they would in a LOD situation).
2) What's the position map supposed to be? I assumed it was the world position of the high poly mesh baked into the low poly's UV space, but I keep getting an empty teal map with no details, again regardless of the trace mode settings.
3) Is there a better/more accurate way of generating vector or scalar displacement maps using the tool?
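For reference, a minimal Point Wrangle sketch of the subtraction step described above, assuming the two baked world positions have already been transferred onto the bake grid as point attributes. The names v@hi_P, v@lo_P and the "range" parameter are placeholders, not something the tool creates for you:
// Point Wrangle sketch (hypothetical attribute names v@hi_P and v@lo_P):
// subtract the low-poly baked position from the high-poly one to get a
// world-space vector displacement, then remap it into 0..1 for image export.
vector disp = v@hi_P - v@lo_P;
float range = chf("range");                        // assumed maximum displacement, e.g. 0.1
v@Cd    = disp / (2.0 * range) + {0.5, 0.5, 0.5};  // remapped color for 8/16-bit maps
v@vdisp = disp;                                    // keep the raw values if writing float EXRs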
PDG/TOPs » Generic Processor and Nuke
Thank you so much, that makes sense and works perfectly! Would the same concept apply when using the HQueue scheduler? Does the executing node need to have its own environment variable parameters in order to execute successfully on remote machines?
PDG/TOPs » Generic Processor and Nuke
I tried wrapping the Nuke command in a shell script and executing that instead, and now I'm getting a segmentation fault in the log file when the command runs. Any idea what's going on?
PDG/TOPs » Generic Processor and Nuke
Hello,
I'm trying to call the Nuke executable from a Generic Processor to run a Nuke script, passing it some arguments. I type the fully qualified path for everything, including the arguments, and from the work item log I can see the Nuke version and build message, so I know the command has run, but the work item fails and the log shows no other messages or errors. Is there a special way to write the custom command string?
The command line string I currently have follows the structure below:
/opt/Nuke12.0v5/Nuke12.0 -xi pathtomyscript argument1 argument2 1-300
Technical Discussion » Packed geometry to packed alembic with attributes?
Hello,
I've been looking into how to export packed geometry to Alembic (to import as packed Alembic) without needing to unpack. Exporting works just fine; it's just that packed or point-level attributes don't get transferred, and I would like to avoid unpacking if possible as it's very slow. Most topics on this forum and others are from 2015 or so; I'm not sure if there are new features since then that make this possible.
Technical Discussion » Sample channel at frame inside CHOP
Is it possible to sample a channel at a particular time inside a CHOP network (i.e. freeze it at a frame or sample number)? For example, if I use a Fetch Parameter to get a channel's values for an entire frame range, I then want to offset that channel by subtracting its value at a particular frame.
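One way to do the subtraction is inside a Channel Wrangle; below is a minimal sketch, where "ref_sample" is a hypothetical spare parameter holding the sample to freeze at, and which assumes the CHOP sample rate matches the scene FPS so the sample index lines up with the frame number (convert first if your rates differ):
// Channel Wrangle sketch: offset every sample of the fetched channel by the
// channel's own value at one fixed sample index.
float ref_sample = chf("ref_sample");          // the frame/sample to freeze at (hypothetical parm)
float ref_value  = chinput(0, C, ref_sample);  // input 0, current channel, at that fixed sample
V -= ref_value;                                // shift the current sample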
Technical Discussion » Remap image from one uv to another
malbrecht
Hi, anonymous user,
without having looked at your project, the general idea of a “baking” (reproduction) process is to imagine a virtual camera sitting “on the normal” of each (target) texture UV pixel over the (source) object and “attracting” (in the purest sense of the word) that single pixel.
What most people do wrong when reinventing that wheel is either to directly calculate one UV position from the other, ignoring that you have to “cross over” the target UV map onto the source “virtual camera position” and only at render stage actually USE the source UV map - or to ignore different UV tile rotations (not doing a trace through a camera but trying to calculate the correct pixel offset only on UVs).
The standard approach is to x/y (or u/v) walk over the target UV map, calculate camera positions from P using those UV offsets, offset the camera slightly by the local normal and read the source (source-UV based) texture information (which may be interpolated).
Marc
Thanks Marc. I didn't take camera position into consideration since I have two sets of UVs already: one mapped from a camera projection and the other geometrically as a polar mapping. The idea of my wrangle node is that for each UV value (represented by an individual point in a 512x512 grid), I find the world position of that UV value on the surface of the object (using uvsample). I then use that position with another uvsample to find the second UV set's value at this surface position. Finally, I use that found value from the second UV set to assign the color value from the grid at that new position (in UV space, obviously) to my current UV position. Does that make any sense?
It seems the first uvsample returns erroneous values if the UV attribute value exists in multiple positions, which is obviously the case when doing a camera projection (i.e. UV tiles overlap). Forcing uvsample to return a vector array doesn't do anything either… I think that's my problem. To test the idea, I tried mapping textures from a grid in orthographic projection to the same grid in orthographic projection, and in that case it works as expected (nothing happens on the color-mapped grid, basically).
The goal at the end, like you said, is to bake a 512x512 map from one UV mapping to another.
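For reference, a minimal Point Wrangle sketch of that lookup, run on the 512x512 bake grid with the object wired into the second input. The attribute names "uv" (target layout) and "uv2" (source layout) and the texture parameter are placeholders, and instead of a second uvsample it uses xyzdist/primuv to read the source UV set at the found surface position:
// Point Wrangle on the bake grid (one point per target texel), object in the second input.
// Note: if the target UVs overlap, uvsample still returns only one of the
// candidate surface positions, which is the ambiguity described above.
string src = chs("src_texture");                  // texture painted in uv2 space (placeholder parm)
vector pos = uvsample(1, "P", "uv", v@uv);        // 1) where does this target UV sit on the surface?
int prim;
vector primuvw;
xyzdist(1, pos, prim, primuvw);                   // 2) locate that position on the mesh...
vector uv_src = primuv(1, "uv2", prim, primuvw);  //    ...and read the source UV set there
v@Cd = colormap(src, uv_src.x, uv_src.y);         // 3) fetch the source texture at those UVs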
Regards,
IA
Edited by Eyecon - May 26, 2020 14:24:17
Technical Discussion » Remap image from one uv to another
Hello,
I'm looking for a way to essentially implement texture baking in SOPs. I came up with the attached solution using uvsample on a map loaded onto a grid object, where each point represents a pixel at the desired texture resolution. This simple approach seems correct to me, but for some reason it doesn't work. Any idea what I'm doing wrong?
Technical Discussion » COPs to SOP attribute?
Thanks Martin. I was already using absolute paths as you suggested, but I guess my COPs network was a bit too much for Houdini to handle… honestly, I have no idea why it wouldn't work. Based on what you say, baking out the frames and just importing them seems to be the way to go.
Technical Discussion » COPs to SOP attribute?
Is there a way to use a COP network to set an attribute value in SOP land? I've tried the Attribute from Map node and the Color Map VOP with the op: qualifier. Attribute from Map didn't work at all, and the Color Map VOP works if I set the channel name in {}, but it doesn't seem to refresh from frame to frame. I've even tried
op:/.../[`$F`]{ChannelName}
but that didn't work.
What's the best way to bring COP data back into the SOP context?
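For reference, a minimal Attribute Wrangle sketch of the same lookup in VEX. It rebuilds the op: path with the current frame baked into the string, using the [frame]/{ChannelName} syntax quoted above; the network path /img/cop2net1/OUT and the plane name are placeholders for your own setup:
// Attribute Wrangle sketch: sample a COP plane into point colors per frame.
// /img/cop2net1/OUT and {ChannelName} are placeholder names - substitute yours.
string map = sprintf("op:/img/cop2net1/OUT/[%d]{ChannelName}", int(@Frame));
v@Cd = colormap(map, v@uv.x, v@uv.y);   // look up the COP image at this point's UVs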
Edited by Eyecon - May 19, 2020 02:09:10
Work in Progress » Hair Like Smoke? Looking for suggestions
Thanks Daniel. I've tried the smoke sim route but wasn't able to get the viscous breaking effect seen in the reference, which is why I decided to go with FLIP to get the best of both worlds between pure particle physics and non-divergent smoke dynamics. My main issue is achieving the hair-like fine details. As you'll see in my test attached below, which uses a rasterization technique similar to what you suggest, the general movement is similar but it still looks a bit too “blobby”. I've even added a second layer of detail using a Vellum wire sim advected by the results of the main sim. I'm just very curious about other ideas for getting super-fine details without necessarily having to simulate at insane resolutions, which I'm not sure would even achieve this look.
To clarify: I'm referring to the details in the black smoke layer of the reference; the fiery bit is not that hard to achieve even in a low-res smoke sim.
Edited by Eyecon - May 15, 2020 04:27:03
Work in Progress » Hair Like Smoke? Looking for suggestions
I've been trying to recreate this lovely portal effect from Star Trek: Picard, specifically the hair-like black smoke around the edges.
From the attached video reference, you can see the black smoke behaves like hair/trails but also like a liquid. My approach to getting a smoke-like sim while maintaining an insane level of detail is to use FLIP (i.e. advected particles plus a non-divergent velocity field for nice swirls and so on).
I've attached my simple implementation of an attribute-controlled, viscosity-based FLIP sim. I'm struggling to achieve the hair-like streaks and would appreciate your feedback and suggestions. A sparser flipbook of the sim results is attached as well.
Edited by Eyecon - May 14, 2020 11:04:51
PDG/TOPs » Run partitions sequentially
Understood, but I guess this process may not be deterministic for my setup, because I want all 63 cores available to the scheduler to be used for the network in general. Since I set up the ROP Geometry node for simming, it's only using 2 slots per sim at any given point in time. So the slots available divided by 2 for the ROP Geometry node would determine the total number of concurrent sims, if I'm understanding your explanation correctly.
PDG/TOPs » Run partitions sequentially
Thanks Chris. I thought this only limits the number of CPUs per work item, not the number of work items per node. Am I confused as to what “slots” means? I understand that the work items in my case are the input jobs (from the wedge partitions) that I'm trying to limit, but I want all system resources to be available for those work items. I was just asking whether, instead of running a single work item at a time, I could run, say, exactly two or three at a time.
When I tested single in the setup described above, it ran the simulations one at a time as expected, but the system was mostly idle in the meantime (even though all other downstream TOP nodes were running in parallel). I want to tell my ROP Geometry node to cache a maximum of 3 sims at a time, for example, because beyond that I run out of RAM.
PDG/TOPs » Run partitions sequentially
Yeah, I saw that, but does that mean I have to keep the default scheduler for everything I still want to run in parallel, while creating a local scheduler with “single” selected just for the ROP Geometry node?
I wonder if there's a way to specify the total number of input jobs that should be processed by a given scheduler at once; that way I could choose 1 (single) or a specific number depending on my system resources.
PDG/TOPs » Run partitions sequentially
Hello,
I have a relatively simple PDG network driving a pyro sim using wedges. In the network, I partition the various wedges based on wedge index and feed the output of the partition node into a ROP Geometry Output that runs and caches the sims. My main issue is that when I have a larger number of wedges, I run out of RAM because all the sims run at the same time. However, I have some downstream render and image processing nodes that I'd like to run in parallel as their frames become available. Is there a way to force the simulations (the ROP Geometry node) to run sequentially rather than in parallel, while leaving everything else as is?
Edited by Eyecon - Feb. 25, 2020 11:04:31
Houdini for Realtime » Python from Gplay?
Is there a way to run Python inside a gplay session? I'm exploring an idea that requires UI events in gplay to generate simple viewport transform information, in order to automate some tasks in Houdini (e.g. using Python to recook a PDG network).