Found 51 posts.
Houdini Lounge » Game example
- axebeak
- 51 posts
- Offline
Thanks, here is updated OTL: http://kinnabari.googlecode.com/svn/wiki/GLSL_toon_fix.zip [kinnabari.googlecode.com]
Technical Discussion » Fit to range in VOPPOP
- axebeak
- 51 posts
- Offline
I think the problem here is that while the expected bounds are documented and it's known roughly what range to expect, the actual generated noise may be biased one way or the other. That is, as I understand it, what's needed here is to remap the range of actually generated values, not the expected range.
You can try an altogether different approach.
See the attached example: the noise values are first scaled and filtered with a logistic function [en.wikipedia.org], which brings them roughly into range, then snapped to a grid with 0.001 resolution to fill the range exactly (min at 0, max at 1). The snapping step can be skipped if you don't need the exact min/max values.
The setup in this scene is best suited to “Original Perlin Noise” (onoise expected range -1 to 1); it will work for “Sparse Convolution Noise” as well, but more values will tend to 0 or 1 (snoise expected range -1.7 to 1.7). For other noise types an additional remapping is needed before the sigmoid transformation.
Still, the two VOP approach is probably better and more general.
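The logistic-plus-snap idea can be sketched outside of VOPs like this (plain Python; the scale and snapping step here are illustrative values, not the ones from the attached scene):

```python
import math

def logistic_remap(x, scale=3.0, step=0.001):
    # Logistic (sigmoid) squash: maps any real value into (0, 1).
    y = 1.0 / (1.0 + math.exp(-scale * x))
    # Snap to a grid so extreme inputs land exactly on 0 and 1.
    return round(y / step) * step

# Noise in the expected -1..1 range ends up pinned near the boundaries.
print(logistic_remap(-1.0), logistic_remap(0.0), logistic_remap(1.0))
```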
Technical Discussion » Fit to range in VOPPOP
- axebeak
- 51 posts
- Offline
Maybe you can split your VOP into two?
That is:
1) Generate the noise values in the first VOP;
2) Find the range for all the generated values;
3) Fit to a new range in the second VOP;
Here is a simple _SOP_ example to illustrate the idea.
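Steps 2 and 3 boil down to a plain min/max fit over the actually generated values; here is a Python sketch of the idea (not the attached SOP network):

```python
def fit_to_range(values, new_min=0.0, new_max=1.0):
    # Pass 2: find the range of the values that were actually generated.
    lo, hi = min(values), max(values)
    if hi == lo:  # degenerate case: all values identical
        return [new_min for _ in values]
    # Pass 3: remap that actual range (not the theoretical noise bounds).
    return [new_min + (v - lo) / (hi - lo) * (new_max - new_min) for v in values]

# The smallest input maps exactly to 0, the largest exactly to 1.
print(fit_to_range([-0.4, 0.1, 0.8]))
```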
Technical Discussion » dynamic loading of custom operators (.so or .dll)
- axebeak
- 51 posts
- Offline
There is probably no way to _load_ plugins once Houdini is already running, but at least in some cases they can be _reloaded_ with the following hack.
On Windows, open Python console, and type the following:
import _ctypes
handle = _ctypes.LoadLibrary(…your library path…)
_ctypes.FreeLibrary(handle)
If this is the first reload during the session, invoke FreeLibrary once again.
Then recompile your plugin, do LoadLibrary again, and so on.
On a Unix system use dlopen/dlclose instead. Of course, you can invoke load/free in any other way (calling an external program, etc.), but it's already exposed via _ctypes (which is used by inlinecpp).
There are apparently limitations to this, e.g. you can't change operator name/label etc. but at least in some cases it works and it's faster than restart.
Technical Discussion » Python Houdini
- axebeak
- 51 posts
- Offline
Sure, just put one of those statements at the end of your script, something like:
myCam = hou.node("/obj").createNode("cam", "myCam")
hou.hscript("viewcamera -c " + myCam.name() + " *.*.world.persp1")
Technical Discussion » Python Houdini
- axebeak
- 51 posts
- Offline
You need an instance of GeometryViewport to call setCamera, it's not static. I think the full expression is something like this:
hou.ui.paneTabOfType(hou.paneTabType.SceneViewer).curViewport().setCamera(hou.node("/obj/cam1"))
You can also execute hscript commands from Python:
hou.hscript("viewcamera -c cam1 *.*.world.persp1")
Technical Discussion » Camera Frustrum Handle Geometry
- axebeak
- 51 posts
- Offline
Not sure if the handle geometry can be accessed somehow, but a frustum representation suitable for culling or clipping can easily be calculated from the camera parameters directly.
For a perspective camera see the attached example; there are 5 Python SOPs:
“Make Frustum” calculates the corner points;
“Cull AABB” - fast culling check for boxes (plane “sidedness”);
“Cull AABB Ex” - exact culling test for axis-aligned boxes;
“Cull Sphere” - fast sphere culling;
“Cull Sphere Ex” - exact sphere culling;
Note that the sphere and the box in the scene are positioned so that they are marked as visible (green) by the fast tests, but culled by the exact versions.
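For reference without the attachment, the fast “sidedness” check for boxes is the standard p-vertex test; here is a minimal Python sketch of the idea (my own reconstruction, not the attached code; planes are assumed to be (nx, ny, nz, d) tuples with the inside where n·p + d >= 0):

```python
def aabb_outside_plane(bmin, bmax, plane):
    # Pick the AABB corner furthest along the plane normal (the "p-vertex");
    # if even that corner is behind the plane, the whole box is.
    n = plane[:3]
    p = [bmax[i] if n[i] >= 0.0 else bmin[i] for i in range(3)]
    return sum(n[i] * p[i] for i in range(3)) + plane[3] < 0.0

def cull_aabb(bmin, bmax, planes):
    # Conservative: a box can pass every plane test yet still be outside
    # the frustum, which is why the exact "Ex" variants exist.
    return any(aabb_outside_plane(bmin, bmax, pl) for pl in planes)

# Cull against a single half-space x >= 0:
print(cull_aabb((-2, -2, -2), (-1, -1, -1), [(1.0, 0.0, 0.0, 0.0)]))  # culled
print(cull_aabb((-1, -1, -1), (1, 1, 1), [(1.0, 0.0, 0.0, 0.0)]))     # visible
```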
Technical Discussion » HDK: RGB to XYZ/LAB conversion
- axebeak
- 51 posts
- Offline
Oh yes - correctness is the right word here.
The conversion chain above is correct in the sense that it accounts for the white point in the original RGB->XYZ transformation. I just can't believe UT_Color was intended to be used this way!
Can't really comment on the matrix difference yet - after all, maybe there is some imprecision in my own calculations. The HDK matrix is probably precomputed or taken directly from some reference documentation (however, I can't seem to find a precomputed NTSC/Ill. C matrix anywhere).
Technical Discussion » HDK: RGB to XYZ/LAB conversion
- axebeak
- 51 posts
- Offline
Thanks for the info!
In my case I need to perform these conversions on GPU in a real-time rendering system, so I was looking if I can use some of these HDK functions in my Houdini-side tools. But UT_Color seems too problematic indeed.
Well, the HDK conversion matrix can be obtained like this:
float m[9];
UT_Color(UT_RGB, 1, 0, 0).getXYZ(&m[0], &m[1], &m[2]);
UT_Color(UT_RGB, 0, 1, 0).getXYZ(&m[3], &m[4], &m[5]);
UT_Color(UT_RGB, 0, 0, 1).getXYZ(&m[6], &m[7], &m[8]);
cout << UT_Matrix3(m) << endl;
And, in principle, this matrix can be used for my needs, but it seems wrong to use it if it's not documented (it's slightly different from the one computed from NTSC values).
PS:
I think a more accurate RGB->LAB conversion can be done like this:
float xyzv[3];
float labv[3];
UT_Color(UT_RGB, 1, 1, 1).getXYZ(&xyzv[0], &xyzv[1], &xyzv[2]);
UT_Vector3 xyzIN = UT_Vector3(1.0f/xyzv[0], 1.0f/xyzv[1], 1.0f/xyzv[2]);
UT_Color(UT_RGB, r, g, b).getXYZ(&xyzv[0], &xyzv[1], &xyzv[2]);
UT_Vector3 xyz = UT_Vector3(xyzv[0], xyzv[1], xyzv[2]) * xyzIN;
UT_Color(UT_XYZ, xyz[0], xyz[1], xyz[2]).getLAB(&labv[0], &labv[1], &labv[2]);
UT_Color Lab = UT_Color(UT_LAB, labv[0], labv[1], labv[2]);
For example, by default, RGB(1,1,1) is converted to LAB(100, -3.30344, -11.4058) - that's too far off center in the (a, b) plane, while the fragment above produces LAB(100, 0, 0).
Technical Discussion » HDK: RGB to XYZ/LAB conversion
- axebeak
- 51 posts
- Offline
Does anyone have any practical experience with using UT_Color and can comment on its usefulness for color space conversions? Is it documented somewhere how UT_Color::getXYZ does its conversion, what are the primaries and what's the white point?
I did some tests and the closest match I found is the NTSC primaries and CIE Illuminant C - see the code snippet below. I'm certainly not a color expert, but some of UT_Color's conversion methods don't seem very useful without this additional information.
For example, what's the proper way to use UT_Color::getLAB? Apparently this method doesn't use any reference white point at all - say, when doing RGB->LAB, the method supposedly performs RGB->XYZ internally, and this implies the white point mentioned above, but the XYZ->LAB step seems to just assume white at XYZ(1,1,1).
The test below compares UT_Color's RGB->XYZ conversion results with “NTSC” and HDTV/Rec709/sRGB matrices.
“HDK” & “NTSC” values are very close, while “HDTV” results are quite different, so I presume that Houdini's internal matrix is based on the NTSC values (maybe sRGB would be a better choice these days?).
Knowing the white point makes getLAB somewhat more useful, I think: I can do RGB->XYZ, divide by the white point, then XYZ->LAB.
Thanks in advance for any comments on this.
void testColor(float r, float g, float b) {
    float hxyz[3];
    UT_Color c = UT_Color(UT_RGB, r, g, b);
    cout << c;
    c.getXYZ(&hxyz[0], &hxyz[1], &hxyz[2]);
    cout << " HDK: " << UT_Vector3(hxyz[0], hxyz[1], hxyz[2]) << endl;
    static const UT_Matrix3 primNTSC(
        0.67f, 0.33f, 0.0f, // Rxyz
        0.21f, 0.71f, 0.08f, // Gxyz
        0.14f, 0.08f, 0.78f // Bxyz
    );
    static const UT_Vector3 whiteNTSC(0.31f, 0.316f, 0.374f);
    UT_Matrix3 CI = UT_Matrix3(primNTSC);
    CI.invert();
    UT_Vector3 J = rowVecMult(whiteNTSC * (1.0f/whiteNTSC[1]), CI);
    UT_Matrix3 T;
    T.identity();
    T.scale(J[0], J[1], J[2]);
    T *= primNTSC;
    cout << "NTSC: " << rowVecMult(UT_Vector3(r, g, b), T) << endl;
    static const UT_Matrix3 T709(
        0.412453f, 0.212671f, 0.019334f,
        0.357580f, 0.715160f, 0.119193f,
        0.180423f, 0.072169f, 0.950227f
    );
    cout << "HDTV: " << rowVecMult(UT_Vector3(r, g, b), T709) << endl;
}
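The NTSC/Illuminant C derivation above can be cross-checked in plain Python (same row-vector convention and the same chromaticity numbers; this is a standalone sketch, not HDK code):

```python
def invert3(m):
    # Adjugate-based inverse of a 3x3 matrix (row-major lists).
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[v / det for v in row] for row in adj]

def row_vec_mult(v, m):
    return [sum(v[i] * m[i][k] for i in range(3)) for k in range(3)]

def rgb_to_xyz_matrix(prims, white):
    # Normalize the white point so Y = 1, solve J = W * prims^-1 for the
    # per-primary luminance weights, then scale each primary row by J[i].
    W = [c / white[1] for c in white]
    J = row_vec_mult(W, invert3(prims))
    return [[J[i] * prims[i][k] for k in range(3)] for i in range(3)]

primNTSC = [[0.67, 0.33, 0.00],
            [0.21, 0.71, 0.08],
            [0.14, 0.08, 0.78]]
whiteC = [0.31, 0.316, 0.374]
M = rgb_to_xyz_matrix(primNTSC, whiteC)
# Sanity check: RGB white must land exactly on the normalized white point.
print(row_vec_mult([1.0, 1.0, 1.0], M))
```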
Houdini Indie and Apprentice » UI question : Middle Mouse Button
- axebeak
- 51 posts
- Offline
Also RMB -> “Extended Information…” displays essentially the same info just organized in a tree-like fashion.
Technical Discussion » Simple Texture GLSL shader
- axebeak
- 51 posts
- Offline
It's hard to tell why the binding fails without the actual scene, but try the attached example and see if it works for you.
Note that the OGL renderer is a bit different in H11: instead of writing to gl_FragColor, it's necessary to take render passes into account and use HOUassignOutputs - see the docs for more info.
(Also I posted a couple of simple GLSL examples here: http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=22491) [sidefx.com]
Technical Discussion » Face normals: 32-bit vs 64-bit
- axebeak
- 51 posts
- Offline
It seems that, at least under Windows, face normals computed by hou.Face.normal() in Python (and GEO_Face::computeNormal() in HDK) vary slightly between 32- and 64-bit builds.
For example try loading quad.bgeo from the attached archive and then execute something like this from the Python console:
print hou.node('/obj/geo1/file1').geometry().prims()[0].normal()
For me the results are slightly different under Win32 and under Win64.
The difference is small, but enough to cause some trouble - e.g. see ck_planar.py in the archive - it checks if a polygon is planar using the normal reported by HOM; the result is planar under Win32, non-planar under Win64.
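The planarity check amounts to testing every point against the plane defined by the first point and the reported normal; here is a sketch of that idea (not the actual ck_planar.py code, and the tolerance is an arbitrary illustrative value):

```python
def is_planar(points, normal, tol=1e-6):
    # A polygon is planar if every point lies on the plane through the
    # first point with the given normal (dot product within tolerance).
    p0 = points[0]
    for p in points[1:]:
        d = sum(normal[k] * (p[k] - p0[k]) for k in range(3))
        if abs(d) > tol:
            return False
    return True

# A flat quad passes; lifting one corner by 0.5 fails:
print(is_planar([(0,0,0), (1,0,0), (1,1,0), (0,1,0)], (0,0,1)))    # True
print(is_planar([(0,0,0), (1,0,0), (1,1,0.5), (0,1,0)], (0,0,1)))  # False
```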
Well, I think I know exactly why this happens, but I wonder how to deal with it in Python. Python runtime itself apparently suffers from the same problem, so computing the normal in Python produces yet another result (and again it's apparently different between platforms). I solved this using explicit SSE code via inlinecpp (essentially emulating Win64 results), but this seems too heavy for something this simple.
Thanks!
Edit:
In case anyone ever needs something like this, here is the inlinecpp code mentioned above:
import inlinecpp
fnmod = inlinecpp.createLibrary(
    name="face_norm",
    includes="#include <UT/UT_Vector3.h>\n#include <VM/VM_Math.h>\n",
    function_sources=[
"""
void newell_calc(UT_Vector3D* pNml, UT_Vector3D* pVI, UT_Vector3D* pVJ, int mode) {
    v4uf n((float)pNml->x(), (float)pNml->y(), (float)pNml->z(), 0.0f);
    if (mode == 0) {
        v4uf vi((float)pVI->x(), (float)pVI->y(), (float)pVI->z(), 0.0f);
        v4uf vj((float)pVJ->x(), (float)pVJ->y(), (float)pVJ->z(), 0.0f);
        v4uf dv = vi - vj;
        v4uf sv = vi + vj;
        // Newell's method: n += ((yi-yj)(zi+zj), (zi-zj)(xi+xj), (xi-xj)(yi+yj))
        n += v4uf(dv[1], dv[2], dv[0], 0.0f) * v4uf(sv[2], sv[0], sv[1], 0.0f);
    } else {
        v4uf nn = n * n;
        n *= v4uf(1.0f) / sqrt(v4uf(nn[0]) + v4uf(nn[1]) + v4uf(nn[2]));
    }
    (*pNml)[0] = n[0];
    (*pNml)[1] = n[1];
    (*pNml)[2] = n[2];
}
"""])
It's used like this:
def faceNorm(prim):
    nml = hou.Vector3(0, 0, 0)
    nvtx = len(prim.vertices())
    for i in xrange(nvtx):
        j = i - 1
        if j < 0: j = nvtx - 1
        vi = prim.vertices()[i].point().position()
        vj = prim.vertices()[j].point().position()
        fnmod.newell_calc(nml, vi, vj, 0)
    fnmod.newell_calc(nml, vi, vj, 1)
    return nml
At least on my test data this produces exactly the same _binary_ values on both platforms, and these are exactly the values computed by Houdini under Win64.
Technical Discussion » HDK: speedup creating geometry
- axebeak
- 51 posts
- Offline
The slowdown is probably because of the getVertex() calls - particles are stored in a linked list, and every call walks the list from the start up to the current i. Try using the iterate*() methods.
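To see why this is so costly: fetching element i of a linked list restarts from the head every time, so an indexed loop is O(n²) overall, while carrying the iterator (as iterateInit()/iterateNext() do) is O(n). A toy Python model of the difference (not HDK code):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.nxt = val, nxt

def nth(head, i):
    # What an indexed getVertex(i)-style call effectively does.
    node = head
    for _ in range(i):
        node = node.nxt
    return node

def visit_indexed(head, n):
    # O(n^2) total: re-walks the list from the head for every index.
    return [nth(head, i).val for i in range(n)]

def visit_iterated(head):
    # O(n) total: keeps the current node between steps.
    out, node = [], head
    while node is not None:
        out.append(node.val)
        node = node.nxt
    return out

head = None
for v in [4, 3, 2, 1, 0]:  # build the list 0 -> 1 -> 2 -> 3 -> 4
    head = Node(v, head)
print(visit_indexed(head, 5) == visit_iterated(head))  # same values visited
```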
But why don't you write .geo file directly from Maya? GEO format is very simple, especially since you only need to store a list of points with attributes - you'll probably only need minor modifications to your existing code.
edit
(You probably meant +1 for the 2nd index here: velList,velList[index+2],velList)
Here is some code:
Merely caching the vertex reference inside the loop makes it 3 times faster, as expected (note that this can't be optimized away by the compiler):
for (uint i = 0; i < numParticles; i++) {
    uint index = i * 3;
    GEO_Vertex& vtx = particle->getVertex(i);
    vtx.getPt()->setPos(…);
    vtx.getPt()->setValue<UT_Vector3>(…);
    vtx.getPt()->setValue<float>(…);
}
But what you really want is this:
GEO_ParticleVertex* vtx = particle->iterateInit();
for (uint i = 0; i < numParticles; i++) {
    uint index = i * 3;
    vtx->getPt()->setPos(…);
    vtx->getPt()->setValue<UT_Vector3>(…);
    vtx->getPt()->setValue<float>(…);
    vtx = particle->iterateNext(vtx);
}
The last fragment is almost 200x faster on my machine for 10K points.
Edited by - Aug. 29, 2011 17:21:51
Houdini Lounge » Houdini for Indie Game Devs
- axebeak
- 51 posts
- Offline
Well, sure… but that's too general, don't you think?
I mean, I understand the sentiment, but I don't think it's enough to get anyone interested.
(For the record, I do think that Houdini has a better extrude - because of the extrude groups.)
The original post says that Houdini is not even considered because of the missing export options.
But surely they will look if there are some demos that show how Houdini can make their life easier. I think a direct comparison with the apps they're already using is absolutely necessary here. What will replace their Animation Mixer and Ultimapper if they're switching from Softimage? In what ways is the replacement better?
Exporting is a minor problem really; a moderately experienced programmer should be able to write a basic Collada geometry/material exporter in 2-3 days using HOM.
BTW, Houdini is very programmer-friendly, and I think this aspect is often downplayed. But we're talking about developers here - isn't at least a significant portion of them programmers?
Also, I was under the impression that independent developers mostly work on smaller projects, something like Lugaru [en.wikipedia.org]. But a large amount of assets implies something bigger, it seems.
In fact, I'm a bit confused as to the definition of “independent developers”. At least for the purpose of this thread, what kind of groups are we talking about? 1-2 man teams, probably students, since there is a mention of the Academic License in the original post? Larger groups with some financial backing? What types of games are they making? Do they usually have someone on their team in the “technical artist” position?
Houdini Lounge » Houdini for Indie Game Devs
- axebeak
- 51 posts
- Offline
freeflyklown: “I think Houdini has lots of potential in game dev”
Can you elaborate on this?
What advantages do you think Houdini has over, say, 3DSMax?
I don't mean general gamedev, but for independent developers specifically.
Is it:
modeling (characters, environments?);
rigging and animation (char/mechanical?);
lighting (say, lightmap baking);
scripting, batch processing, versioning;
gameplay data editing;
image processing, particle effects, something else?
Houdini Lounge » Game example
- axebeak
- 51 posts
- Offline
I wanted to share some things I've learned while working on a game project of my own.
Unfortunately these things are rather difficult to explain in isolation and out of context, so some time ago I started putting together a much smaller example to illustrate, in as basic a form as possible, how various game data types can be created in Houdini, how to export the data, and how to use it at run-time.
Warning: this is a low-level technical example, this is not about saving out Collada files or using high-level interfaces in the existing middleware. This example is intended for those who want to write their own tools based on Houdini.
The project's Google Code page is here: http://code.google.com/p/kinnabari/ [code.google.com]
The run-time code is Windows/DX9 - no plans to port it to anything else.
GCode repository contains the run-time system sources (C, some C++, HLSL), Houdini tools (Python), and some additional utilities (C#).
This archive contains snapshots of the current sources, Houdini scenes, some additional Python scripts not in the repository, and a precompiled binary:
http://kinnabari.googlecode.com/files/kinnabari_test__Aug23_2011.zip [kinnabari.googlecode.com]
You can see how the test program looks on the first picture.
There is not much on the surface at the moment - you can move the character around the “room” using the cursor keys and bump into walls.
Here is the list of some things under the surface:
* using GLSL in Houdini to render the materials as you will see them at run-time. A complete material system is quite involved, I was looking for some way to make a really simple but illustrative example, so I made a kind of toon-material, which is controlled by a small number of parameters - see the second pic.
Also I've posted some GLSL examples here: http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=22491 [sidefx.com] - I'll try to post some more if time permits.
I'm not using textures at the moment, just colors on the geometry, that's mostly to avoid UV mapping and save me some time, but I think I will add textures at some point.
* the character setup uses a joint system with a custom IK solver as described here: http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=22527 [sidefx.com]
I've tried to make the setup as transparent as possible to make it easier for a programmer to understand what's going on. So the rig is very straightforward.
The IK is not baked into rotations, the effectors' positions are exported as-is and evaluated at run-time in the same way the Python solver does it in Houdini.
* an example of how to export and evaluate keyframe data with cubic() interpolation; this is used for both character animations and camera movement;
* geometry export examples for the character (weighted model), the scene (static model), collision geometry (export quadrilaterals), and special camera control “lanes” geometry;
* generating culling data for the character - the model is deformed by the skeleton, and the actual deformation is calculated on the GPU, that is, _after_ the drawcall is submitted. So it's not possible to recompute bounding volumes from the deformed geometry (which would be terribly inefficient on the CPU anyway). The example demonstrates how to generate bounding spheres for point clusters affected by a particular joint; at run-time the bounding volume for a primitive group is calculated by transforming the corresponding spheres and computing their new bounding box. This method is suitable for both frustum and occlusion culling. This is shown in the 3rd screenshot;
* spatial camera control system - lookup into camera animation channels based on character's position;
Unfortunately these things are rather difficult to explain in isolation and out of context, so some time ago I've started putting together a much smaller example to illustrate, in as basic form as possible, how various game data types can be created in Houdini, how to export the data and how to use it at run-time.
Warning: this is a low-level technical example, this is not about saving out Collada files or using high-level interfaces in the existing middleware. This example is intended for those who want to write their own tools based on Houdini.
The project's Google Code page is here: http://code.google.com/p/kinnabari/ [code.google.com]
The run-time code is Windows/DX9 - no plans to port it to anything else.
GCode repository contains the run-time system sources (C, some C++, HLSL), Houdini tools (Python), and some additional utilities (C#).
These archive contains the snapshots of the current sources, Houdini scenes, some additional Python scripts not in the repository and precompiled binary:
http://kinnabari.googlecode.com/files/kinnabari_test__Aug23_2011.zip [kinnabari.googlecode.com]
You can see how the test program looks on the first picture.
There is not much on the surface at the moment - you can move the character around the “room” using the cursor keys and bump into walls.
Here is the list of some things under the surface:
* using GLSL in Houdini to render the materials as you will see them at run-time. A complete material system is quite involved, I was looking for some way to make a really simple but illustrative example, so I made a kind of toon-material, which is controlled by a small number of parameters - see the second pic.
Also I've posted some GLSL examples here: http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=22491 [sidefx.com] will try to post some more if time permits.
I'm not using textures at the moment, just colors on the geometry, that's mostly to avoid UV mapping and save me some time, but I think I will add textures at some point.
* the character setup uses joint system with custom IK solver as described here: http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=22527 [sidefx.com]
I've tried to make the setup as transparent as possible to make it easier for a programmer to understand what's going on. So the rig is very straightforward.
The IK is not baked into rotations, the effectors' positions are exported as-is and evaluated at run-time in the same way the Python solver does it in Houdini.
* an example of how to export and evaluate keyframe data with cubic() interpolation; this is used for both character animations and camera movement;
* geometry export examples for the character (weighted model), the scene (static model), collision geometry (export quadrilaterals), and special camera control “lanes” geometry;
* generating culling data for the character - the model is deformed by the skeleton, and the actual deformation is calculated on the GPU, that is, _after_ the draw call has been submitted. So it's not possible to recompute bounding volumes from the deformed geometry (that would be terribly inefficient on the CPU anyway). Instead, this demonstrates how to generate bounding spheres for the point clusters affected by a particular joint; at run-time the bounding volume for a primitive group is calculated by transforming the corresponding spheres and computing their new bounding box. This method is suitable for both frustum and occlusion culling. This is shown on the 3rd screenshot;
* spatial camera control system - lookup into camera animation channels based on character's position;
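The per-joint bounding-sphere culling idea from the list above can be sketched in plain Python. All names here are hypothetical (the real generator lives in the repository's Houdini tools): offline, build one bounding sphere per joint's point cluster; at run-time, transform the sphere centers by the joint transforms and take the axis-aligned box of the transformed spheres.

```python
import math

def bounding_sphere(points):
    """Simple (not minimal) sphere: center = centroid,
    radius = max distance from centroid to any point."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    r = max(math.dist((cx, cy, cz), p) for p in points)
    return (cx, cy, cz), r

def spheres_to_aabb(spheres, joint_xforms):
    """spheres: one ((cx, cy, cz), r) per joint; joint_xforms: matching
    callables that map a rest-space point to world space. Returns the
    axis-aligned box enclosing all transformed spheres."""
    lo = [float("inf")] * 3
    hi = [float("-inf")] * 3
    for (center, r), xform in zip(spheres, joint_xforms):
        c = xform(center)
        for i in range(3):
            lo[i] = min(lo[i], c[i] - r)
            hi[i] = max(hi[i], c[i] + r)
    return lo, hi
```

The resulting box is conservative (spheres over-cover the cluster), which is fine for frustum and occlusion tests: a miss is guaranteed correct, a hit just means "draw it".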
Technical Discussion » Custom IK solver
- axebeak
- 51 posts
- Offline
Here is a small update.
I started working on some animations for my project, I'm using these specialized Python solvers, and it looks like it's working well - no obvious problems so far.
Here is an example scene, using two instances of the solver this time and animated.
I have a capture pose set at frame 600, that's because I'm using deform regions for weighting - have yet to figure out how to use _capture_ regions properly… not sure if this makes any difference, e.g. making things a bit slower (BTW, if animation playback seems too slow, enable frame-skip in the Global Animation Options dialog).
Animation curves use cubic interpolation - the only reason for this is that I want to make a small example, demonstrating how to export and evaluate Houdini keyframes and cubic() is the only non-trivial interpolation I have figured out thus far.
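For reference, a cubic() segment between two keys can be evaluated as a cubic Hermite curve. This is a minimal sketch, assuming each key stores (frame, value, in-slope, out-slope) with slopes in value-per-frame units - a simplification of Houdini's actual keyframe data, not the exported format itself:

```python
def hermite(t, p0, p1, m0, m1):
    # Standard cubic Hermite basis, t in [0, 1];
    # m0/m1 are tangents scaled to the normalized parameter.
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p0 + (t3 - 2 * t2 + t) * m0
            + (-2 * t3 + 3 * t2) * p1 + (t3 - t2) * m1)

def eval_channel(keys, frame):
    """keys: sorted list of (frame, value, in_slope, out_slope).
    Uses the out-slope of the left key and the in-slope of the right key."""
    for (f0, v0, _, s0), (f1, v1, s1, _) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            dt = f1 - f0
            t = (frame - f0) / dt
            # Slopes are per-frame, so scale by segment length for t in [0, 1].
            return hermite(t, v0, v1, s0 * dt, s1 * dt)
    # Hold the end values outside the keyed range.
    return keys[0][1] if frame < keys[0][0] else keys[-1][1]
```

With zero slopes at both keys this reduces to a smoothstep-shaped ease-in/ease-out, which is a quick sanity check when comparing against Houdini's channel viewer.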
Technical Discussion » Floor collision in CHOPs
- axebeak
- 51 posts
- Offline
Brian, thank you!
This works great - somewhat more complicated than I initially thought it would be… but only slightly so - I just tried replicating your version manually, and it's still a pretty fast setup! Moreover, for what I'm doing, having actual collision checks in SOP context will make certain things much simpler.
Technical Discussion » Floor collision in CHOPs
- axebeak
- 51 posts
- Offline
I have an animated object and some “floor” geometry, and I want to adjust the object's Y position to the floor - in other words, the object's world tx/tz come from key-framed animation, while ty is calculated from the floor geometry.
For the special case where the floor is just a single layer, without any overlaps in Y, this seems very easy to do in CHOPs. See the attached scene. The setup is sketched in the 1st picture - it's essentially a single VOP CHOP that fires a collision ray from high above using the VEX intersect() function. The idea is to cover the entire possible range in Y - in this scene the ray's origin is shifted 100 units upwards and its length is set to 200 units (arbitrarily large values).
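The idea behind that VOP setup can be transcribed into plain Python as a sketch (hypothetical names; the floor here is a list of triangles, and the ray test is the standard Möller-Trumbore intersection rather than VEX intersect()):

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(orig, d, v0, v1, v2, eps=1e-8):
    """Moeller-Trumbore ray/triangle test: returns distance along d, or None."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv
    return t if t > eps else None

def floor_ty(tx, tz, triangles, lift=100.0, length=200.0):
    """Fire a ray straight down from `lift` units above the object.
    The nearest hit from above is always the HIGHEST floor layer,
    which is exactly the overlapping-floors limitation described below."""
    orig, d = (tx, lift, tz), (0.0, -1.0, 0.0)
    hits = [t for tri in triangles
            if (t := ray_triangle(orig, d, *tri)) is not None and t <= length]
    return lift - min(hits) if hits else None
```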
This will not work in the general case though - see the 2nd picture, if there are overlapping floors the object will jump to the highest position (set /obj/floor/switch1 value to 1 in the example scene to see this in action).
The basic setup works fine for what I'm doing, but I thought it would be just as easy to extend it to support the general case. To do so the ray's origin must be placed somewhere near the recent object position - basically, this means using the adjusted ty from the previous frame.
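The previous-frame idea can be sketched in plain Python (hypothetical names; the per-frame candidate heights would come from intersect() lookups against each floor layer): each frame keeps the layer nearest to the last adjusted height instead of blindly taking the highest hit.

```python
def adjust_ty(prev_ty, floor_heights):
    """Pick the floor layer nearest the previously adjusted height,
    so the object stays on its current layer instead of jumping
    to the highest one."""
    return min(floor_heights, key=lambda h: abs(h - prev_ty))

def adjust_track(start_ty, floor_heights_per_frame):
    """Frame loop that carries the ADJUSTED ty forward - the feedback
    behaviour that reading the previous frame of the original channel
    does not provide."""
    ty, out = start_ty, []
    for heights in floor_heights_per_frame:
        ty = adjust_ty(ty, heights)
        out.append(ty)
    return out
```

With two overlapping layers at roughly y=0 and y=5, an object starting near 0 tracks the lower layer frame after frame rather than snapping to 5.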
I tried sampling the previous value with an expression like this: chopcf(".", 1, $F-1)
but it seems the returned value always comes from the original channel - it's not the adjusted ty from the VOP CHOP.
I'm also experimenting with Feedback CHOP, which seems to be designed for such situations, but no luck so far.
Thanks in advance for any ideas and suggestions!