Can't create RBD... (Solved)
dirkmitt
Hello,

When trying to create an RBD object of any type, even in a new project, I get this big error:

Bad node type found: odesolver in
/obj/AutoDopNetwork/rigidbodysolver1

Yet when I create a rigidbodysolver manually, I can enter the node and switch it back and forth between RBD and ODE. The shelf tools always create a new solver, so changing the order on a manually created one, to put RBD before ODE, doesn't help. And after the error, the rigidbodysolver created by the GUI has been removed again, so I can't modify its preference order manually either.

My Houdini version is still Apprentice 10.0, build 295, and I don't look forward to upgrading; all in all, I enjoy exploring its many features. I have also tried renaming the houdini10.0 settings directory in my home directory, which doesn't solve the problem.

There was one other person on the BB who ran into this; he was only able to solve his problem by upgrading, without ever identifying the cause.

My guess is that the cause might be an old, outdated version of ODE installed on my system, and that Houdini might be trying to use those system-wide libraries instead of its own. Is there any way I could tell Houdini to just ignore the system ODE installation? I need this exact version of ODE for other 3D graphics software.
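In case it helps, this is the kind of check I can run to see which ODE the dynamic linker currently knows about (standard tools, nothing Houdini-specific; the two lib directories are just where I'd expect copies to live):

/sbin/ldconfig -p | grep libode                               # entries in the shared-library cache
ls -l /usr/lib/libode* /usr/local/lib/libode* 2>/dev/null     # where the files actually live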

These are the details:

Traceback (most recent call last):
  File "dop_rbdobject", line 3, in <module>
  File "toolutils.py", line 603, in new_func
    return function(*args, **kwargs)
  File "/opt/hfs10.0.295/houdini/scripts/python/doptoolutils.py", line 1259, in genericTool
    theDopObjectTypeDict.prompt)
  File "/opt/hfs10.0.295/houdini/scripts/python/doptoolutils.py", line 1168, in genericDopConverterTool
    newdopnode = genericConvertToDopObject(objectnode, dopobjecttype, nodename)
  File "/opt/hfs10.0.295/houdini/scripts/python/doptoolutils.py", line 1126, in genericConvertToDopObject
    solvernode = createSolver(info.solvertype, info.mergeobjects)
  File "/opt/hfs10.0.295/houdini/scripts/python/doptoolutils.py", line 469, in createSolver
    solvernode = dopnet.createNode(dopsolvertype, dopsolvertype + "1")
  File "/home/prisms/builder-new/Nightly10.0/dev/hfs/houdini/scripts/python/hou.py", line 3679, in createNode
    return _hou.Node_createNode(*args, **kwargs)
MatchDefinitionError: Failed to match node type definition.
Error: Bad node type found: odesolver in /obj/AutoDopNetwork/rigidbodysolver1.
Warning: Unknown channel(s) "erp, cfm, oversample, rand, maxomega, usemaxomega" converted to spare parameter(s).

Warning: Skipping unrecognized parameter "parmop_erp".
         Skipping unrecognized parameter "parmop_cfm".
         Skipping unrecognized parameter "parmop_oversample".
         Skipping unrecognized parameter "parmop_rand".

…etc.

Dirk
Soothsayer
Sooo…why not download the latest version and see if it's fixed?
dirkmitt
I decided that your advice was good for the moment, uninstalled Houdini 10.0.295, and did a fresh install of Houdini 10.0.595, 32-bit, gcc 3.3. I also renamed my personal houdini10.0 settings folder, since there might have been some compatibility issues there.

But what I've found now is that I get exactly the same behavior, which is not what the other poster reported. Animations that worked before still work, but trying to create an RBD object gives me the same error message as before.

There are two possible ways to proceed now. One is that maybe you guys could tell me what else could be wrong; the other is that I just keep playing with Houdini Apprentice, but simply avoid trying to use RBD objects. :?

Dirk

BTW: The ODE version installed on my system is 0.11.1


Edit: One approach I've been working on is to chroot Houdini. One configuration convention my setup abides by is that custom-compiled software usually installs itself to /usr/local/lib, while the packages usually install their libraries to /usr/lib… Hence, I can set up a fake file system with repeated 'sudo mount --bind …' commands, which mirrors my whole FS, but which then mounts an empty directory over individual leaves of that same FS. Surprisingly, those leaves do not disappear from the FS they were copied from, because I'm careful never to use 'mount --rbind' under any circumstances. Then I can schroot into the edited FS.

But what I've already found is that the casual use of schroot just to run one command apparently fails to reload the shared-library cache via ldconfig, because the command to run ldconfig again would have to be given as root somewhere inside that environment. And so the only way to do this would be to isolate the environment of this 'read/write jail' further, and to start a new session under schroot. I'm learning that I could instead add a custom setup script to /etc/schroot/setup.d which just runs ldconfig, with no real guarantee about how this would affect the rest of my system, but without then requiring that I start a new session.

But there is also a limit to how far I'd want to mess with that. I'm not ready to ramp up my jail to start a new session, with /usr/local/lib removed, plus certain other directories. I'm hoping you guys could show me a better way.

AND, I may as well also ask whether doing so would go beyond my Apprentice license permissions. I mean, going this extra distance would begin to approximate a virtual machine… And then I might also need to prepare for some error messages related to this use of schroot, depending on which session-startup processes might accidentally try to run something in the masked-out directories.
dirkmitt
(Bump)

I'd hate to leave anybody else hanging who might face issues similar to mine, and whose Web search might lead them to this forum.

chroot and schroot are Linux tools, unrelated to Houdini. Now, I could have just accepted the documentation's word, according to which Houdini loads its own ODE library, but I'm inquisitive enough to try verifying even this (as long as something isn't working), and to do an experiment (or 5 of them) to be sure of it. And schroot is the tool that has enabled me to do this.

What chroot and schroot do at their core is let the system administrator select a specific directory, pretend that this directory is now the root directory, and then run some command as if that directory were in fact root. This has enabled me to create a "fake root" from which my native install of ODE is absent, and then to see how Houdini behaves.
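For readers who haven't used these tools, the basic invocations look roughly like this (the chroot name 'localbind' is only an illustration; it would have to match an entry in /etc/schroot/schroot.conf):

sudo chroot /media/localbind /bin/bash    # run a shell that sees /media/localbind as /
schroot -c localbind -- ~/houdini         # the schroot equivalent, via a named chroot definition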

My first attempt simply produced the same error message as before, but I took this to be inconclusive, because under Linux the dynamic linker's search for shared libraries is cached, in a cache built by ldconfig, and that cache could still have contained stale entries pointing to my native ODE library. Simply pruning the FS directories themselves would not curb this immediately, because the shared-library cache would first have to be rebuilt, which had not happened.
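One way to see whether that cache, as visible from inside the fake root, still lists the system ODE (ldconfig -p only prints the cache, it does not rebuild it; the path matches the setup I describe further down):

sudo chroot /media/localbind /sbin/ldconfig -p | grep libode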

So what I did next was to remove this potential artifact from my chroot environment. Readers should be aware, however, that there is some danger to system stability from running ldconfig indiscriminately within a chroot environment. I'd like to explain briefly where I saw this danger:

A properly set-up virtual machine will launch a new instance of the kernel to create a 'fully managed session'. Both the chroot and the schroot utilities allow for this kind of use, because the command run in this fake root could just as easily be 'init' as anything else… But schroot offers, as one alternative, 'simple session management', which means that scripts residing in /etc/schroot/setup.d are run, and these scripts define to what extent the fake session is isolated from the real one. The border isn't clear, and the same kernel instance continues to serve the processes created.

Finally, schroot also offers a mode in which a single command is run in the modified environment (unmanaged).

The problem with running ldconfig within a chroot is that this program exports its findings to all environments, which in reality means it affects everything running under the one kernel instance. Hence, if this command is given in a partially managed environment, it will probably change where programs look for their shared libraries across the whole computer, and not just for processes running within this 'fake root', which is an arbitrary notion affecting only the explicit opening of files… A disaster could therefore take place in which the rest of the system starts to look for its libraries within this chroot directory, where ldconfig last found them, and worse, after we unmount the FS again (once Houdini is done), all the shared libraries could seem to wink out! This may only be partially mitigated by the possibility that ldconfig stored references in RAM which point back to the real files, in the special case where the fake FS has 'mount --bind' commands as its basis: ldconfig could then have merely truncated the range of shared libraries available to the whole computer, instead of misplacing them.

In fact, I think I've heard of some blogs in which people complained about a hung ldconfig process in association with chroot.

Therefore, in my unmount script for this test, I have included a second call to 'sudo /sbin/ldconfig', to undo what the first call did within the chroot. The first call to ldconfig lives in /etc/schroot/exec.d, which is where the scripts are kept that affect the one command given within the fake root.
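A minimal sketch of what that hook in /etc/schroot/exec.d might look like (the file name 10ldconfig and its exact contents are only my illustration; whether schroot runs it inside or outside the fake root is the very uncertainty I come back to in a later edit):

#!/bin/sh
# /etc/schroot/exec.d/10ldconfig
# Rebuild the shared-library cache before the requested command runs,
# so stale entries for the masked-out system libode are dropped.
/sbin/ldconfig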

None of this would pose any problem with a fully managed session, or a true VM, because when a new kernel instance starts up, it runs its own init script, from where the ldconfig command given once will only affect that kernel instance, and therefore only affect that managed session.

But my experiment was just for the sake of running one command: ~/houdini .

——–

So, Houdini launched, and amazingly enough, it ran fine with my settings, except that it still produces that one error message about creating an RBD body from simple geometry. And I think I've by now fully eliminated the possibility that the native version of ODE could still be affecting Houdini.

So my main question to the forum for now is, since everybody has had a good laugh, can somebody please tell me what I'm doing wrong, whenever I just run Houdini normally and try to get an RBD body (by itself)?

Dirk

Edit: In fact, I changed the configuration again, by inserting my 'sudo /sbin/ldconfig' command into the script ~/houdini itself, just to make sure that this call takes place within the 'fake root', because there was even the possibility that the scripts in /etc/schroot/exec.d are run from outside it.

But this didn't make any difference in the behavior of the program either. Existing projects load, with volume fluid, terrain etc. animated properly, but I can't seem to create the RBD object.
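For reference, a rough sketch of what that ~/houdini wrapper now amounts to (the actual script isn't reproduced here, and the install prefix and the houdini_setup step are my assumptions about a typical Linux install):

#!/bin/bash
# ~/houdini -- launched by schroot inside the fake root
sudo /sbin/ldconfig          # rebuild the linker cache as seen from inside the fake root
cd /opt/hfs10.0.595          # assumed install prefix
source ./houdini_setup       # assumed: SideFX's shell environment setup script
exec houdini "$@"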
Pagefan
Dirk,

just out of curiosity, what kind of linux distro are you using that still uses gcc 3.3? It's old, really old……
dirkmitt
I'm using an old version of “Kanotix”, which was based on Debian/Etch, kernel version 2.6.24, 32-bit. Even though your download is labeled as being compatible with gcc 3.3, I have gcc 4.1 installed on this box. I have 2GB of RAM and a dual-core AMD processor running at 2.6GHz.

I wouldn't want to install the Ubuntu download, because Ubuntu isn't Debian, and because Ubuntu is based on GNOME, which I don't have installed. Kanotix is based on KDE.

Also, today's Kanotix builds are based on Debian/Lenny, with both a 64-bit and a 32-bit version available. However, I only have that Debian/Lenny build installed on a computer with limited hard drive capacity and absolutely no hardware 3D. Even though Houdini uses software rendering for its main output, it still needs hardware 3D for its GUI (which is normal by today's standards), so the Debian/Lenny box isn't suitable for installing Houdini on.

I once tried to launch Houdini with schroot having forgotten to overlay the directory /dev via 'mount --bind', and during that one session Houdini didn't detect hardware 3D either, and was not able to display its GUI properly.

It does so fine with the directory /dev remounted in my fashion.

Dirk

BTW, this is how I constructed my ‘fake root’:


#!/bin/bash

# This script will bind the root directory to
# /media/localbind
# including explicitly /home .

# It will next bind /empty to /media/localbind/usr/local/lib ,
# hoping that this does not affect /usr/local/lib .

# Then, houdini can be run within the schroot environment.

# Bail out if the target is already a mount point.
if mountpoint -q /media/localbind
then
    exit
fi

sudo mount --bind / /media/localbind
# DO NOT USE rbind !

sudo mount --bind /home /media/localbind/home

# Objective: remove the system ODE libraries and the old Python
# site-packages from the view that Houdini will get.
sudo mount --bind /temp_usr_lib /media/localbind/usr/lib
sudo mount --bind /empty /media/localbind/usr/local/lib
sudo mount --bind /empty /media/localbind/usr/lib/python2.3/site-packages
sudo mount --bind /empty /media/localbind/usr/lib/python2.4/site-packages
sudo mount --bind /empty /media/localbind/usr/lib/python2.5/site-packages

# /dev is needed for hardware-3D detection (see above); /proc and /sys
# keep the chroot functional.
sudo mount --bind /dev /media/localbind/dev
sudo mount --bind /proc /media/localbind/proc
sudo mount --bind /sys /media/localbind/sys


Obviously, each and every one of these mounts must be unmounted later, and before that my scripts execute 'sudo /sbin/ldconfig'. As all of my scripts are still only experiments, I've been running them in a console window, and I don't get error messages (only time delays). Also, I've gone into /media/localbind/usr/lib and verified via 'ls -l libode*' that the offending libs are gone there, while they are still present in /usr/lib …
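For completeness, a minimal sketch of the matching teardown, assuming the mount script above (the binds simply come off in reverse order, so the nested ones are released before the root bind):

#!/bin/bash
# Unmount the binds in reverse order, then point the shared-library
# cache back at the real filesystem.
sudo umount /media/localbind/sys
sudo umount /media/localbind/proc
sudo umount /media/localbind/dev
sudo umount /media/localbind/usr/lib/python2.5/site-packages
sudo umount /media/localbind/usr/lib/python2.4/site-packages
sudo umount /media/localbind/usr/lib/python2.3/site-packages
sudo umount /media/localbind/usr/local/lib
sudo umount /media/localbind/usr/lib
sudo umount /media/localbind/home
sudo umount /media/localbind
sudo /sbin/ldconfig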

And I'm sure that Houdini is running in its chroot, which is /media/localbind. For further verification, I've also used Houdini's GUI to visit directories such as /usr/local/lib, which would normally have subdirectories, but which appear completely empty to the app when it runs in the chroot.
dirkmitt
(Bump)

I think one fact I should mention is that my Debian/Etch is based on glibc 2.2, while your Houdini software seems to ask for glibc 2.3.4.
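For reference, a couple of standard ways to check the system side of this (both are stock glibc behaviors, nothing Houdini-specific):

/lib/libc.so.6     # executing glibc directly prints its version banner
ldd --version      # the ldd banner also reports the glibc version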

I vaguely remember having noticed the difference over a year ago, but taking my chances with it back then. In the months and years since, I had completely forgotten about this detail, because my experiments with Houdini mainly work.

Wouldn't it be odd, though, if the glibc version of my computer affected the Python import of exactly one module? In general, the application seems stable enough. It's just this one module that seems to create problems.

*** What the GUI error box seems to tell me is that odesolver has additional parameters which the calling context doesn't know how to handle. And yet within the GUI, I can create a rigidbodysolver, dismiss a brief warning message, and then switch this solver back and forth between RBD and ODE. Either way, one of the tabs will show me settings which I can change through the GUI. So it appears that the message I get is merely a warning. ***

However, it seems that when trying to wire up its AutoDopNetwork, the GUI takes this warning to be an error. If it simply let the 'strange' ODE object stay in the DOP network, I suspect the network would actually work. In any case, if the node were still there, I'd be able to switch off its ODE option and switch it to the RBD solver.

Dirk
dirkmitt
Hooray! I was able to solve my problem!

When I go to the download page for Houdini Apprentice, it turns out I had been overlooking the 32-bit Red Hat edition, because 32-bit Debian is only listed there as a secondary compatibility. And so I was selling myself short by selecting the last item on the list, the gcc 3.3 version.

I downloaded the Red Hat 32-bit version, because that one is also Debian-compatible, specifically listed with “Etch” (which is what I happen to be running). This is version 10.0.595, gcc 3.4! And it works, where the other one did not.

I can finally create an RBD object without problems, and my earlier, saved volume fluid still animates properly.

Isn't it silly how I just overlooked that choice?

Thank you for giving me the hint about gcc versions!

Dirk