VML Streamer:
- added support for video playback
- fixed crashes on Windows

Houdini HDAs:
- polished all node interfaces
- added help cards and HDA icons
---
I think this will be the last major update before release. I tried to implement OpenCV ArUco markers, but that will take a bit more work since it depends heavily on camera calibration.
Help cards got some love, examples attached.
Video playback demo:
Puppet rig demo: (Since MediaPipe inference on Linux is GPU accelerated, there are a lot of dropped frames due to the screen capture, sadly.)
I'm building a set of HDAs to bring computer vision and machine learning algorithms into Houdini.
It uses the SideFX Labs "Mocap Stream" node as a generic UDP client that receives data from a program I created called VML Streamer.
VML Streamer runs on both Windows and Linux, uses OpenCV, and for now it can stream the webcam image plus Google's MediaPipe hands/body/face realtime trackers from a simple webcam. (Body tracking still needs work!)
The code was designed so that anything can be plugged in easily and streamed into Houdini as a Python dictionary, with very little effort on the UI and data-handling side.
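I don't know the project's exact wire format, but the "Python dictionary over UDP" idea can be sketched like this. The JSON encoding and the field names are my assumptions for illustration, not the real protocol:

```python
import json
import socket

def send_dict(sock, data, addr):
    """Serialize a Python dict and send it as a single UDP datagram."""
    sock.sendto(json.dumps(data).encode("utf-8"), addr)

def recv_dict(sock, bufsize=65507):
    """Receive one datagram and decode it back into a dict.

    65507 bytes is the maximum UDP payload over IPv4.
    """
    payload, _ = sock.recvfrom(bufsize)
    return json.loads(payload.decode("utf-8"))
```

On the Houdini side, a UDP client (such as the Mocap Stream node's) would receive the datagram and hand the decoded dictionary to downstream nodes.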
In Houdini, a number of HDAs decode the received data into geometry, using OpenCL nodes where possible for better performance. For the realtime trackers, users can snapshot poses into a pose library and create triggers anywhere based on pose matching.
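The HDAs' actual matching metric isn't shown here; as an illustration, pose matching against a snapshot can be as simple as thresholding a mean landmark distance. The function names and the threshold value are made up:

```python
import math

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance between two landmark lists of (x, y, z)."""
    assert len(pose_a) == len(pose_b)
    total = 0.0
    for (ax, ay, az), (bx, by, bz) in zip(pose_a, pose_b):
        total += math.sqrt((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2)
    return total / len(pose_a)

def pose_matches(snapshot, live, threshold=0.05):
    """Fire a trigger when the live pose is close enough to a snapshot."""
    return pose_distance(snapshot, live) < threshold
```

A real implementation would likely also normalize for overall translation and scale before comparing, so the trigger fires regardless of where the performer stands.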
There are also a couple of ONNX helper nodes that facilitate transforming image grids to and from tensors.
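I don't know the exact layout those helper nodes use, but the conversion they would typically perform is interleaved HWC pixels to a planar, normalized CHW tensor, which is the layout most ONNX image models expect. A pure-Python sketch (the real nodes presumably operate on Houdini grids/attributes, not nested lists):

```python
def image_to_tensor(image):
    """Flatten an H x W x C nested-list image (0-255 ints) into a
    planar CHW float list normalized to the 0-1 range."""
    h, w, c = len(image), len(image[0]), len(image[0][0])
    return [image[y][x][ch] / 255.0
            for ch in range(c) for y in range(h) for x in range(w)]

def tensor_to_image(tensor, h, w, c):
    """Inverse: planar CHW float list back to an H x W x C image."""
    plane = h * w
    return [[[int(round(tensor[ch * plane + y * w + x] * 255))
              for ch in range(c)]
             for x in range(w)]
            for y in range(h)]
```

Models also usually want a leading batch dimension (NCHW), which is just this flat list viewed as shape (1, C, H, W).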
If time allows, I'd like to implement an OpenPose binding (to get better full-body tracking, since MediaPipe is not really suited to 3D world coordinates, as stated by the devs), and also OpenCV ArUco markers, which can help with rigid-object tracking and camera tracking.
Here's VML Streamer (Python); it needs a bit of testing in different environments, as I was mainly focusing on the alpha features for now: VML Streamer GitHub [github.com]
A sped-up demo (no sound):
Playground:
I'll be posting any updates here before the final submission.
This is super cool, great job! Thanks for sharing.
Does it stream translation as well, like getting xforms from some AR-type camera?
Btw, regarding your question (1): use parm.setKeyframes() instead, and pass in an array of Keyframe() objects. According to the docs, that's faster than adding keyframes at every loop iteration (see screenshot attached). So maybe you could find a way to store all transforms during the stream and set the keyframes at a later stage? (You'll need a frame range defined, of course, to know when it's time to gather xforms and bake.)
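A minimal sketch of that "buffer during stream, bake afterwards" idea. The hou calls assume Houdini's Python API and only run inside Houdini; how you obtain the parm and drive the buffering loop is up to you:

```python
def buffer_sample(buffer, frame, value):
    """Accumulate one streamed value per frame during playback."""
    buffer.append((frame, value))
    return buffer

def bake(parm, buffer):
    """Bake all buffered samples with a single setKeyframes() call,
    instead of one setKeyframe() per loop iteration."""
    import hou  # only available inside Houdini
    keys = []
    for frame, value in buffer:
        k = hou.Keyframe()
        k.setFrame(frame)
        k.setValue(value)
        keys.append(k)
    parm.setKeyframes(keys)
```

During streaming you'd call buffer_sample() for each incoming xform channel, then call bake() once the frame range is exhausted.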
Just came across this problem today, and ended up using this simple alternative, which handles crossing UV islands:
1. Create a grid > UV Texture.
2. Merge a couple of copies, with a "UV Transform" node before merging, so you construct the desired UDIM tiling grid.
3. UV Layout with the merged grid in the second input, and set Pack Into = Islands From Second Input.
It is that simple, and should definitely be an out-of-the-box option in the UV Layout node.
* Attached: an example, and a comparison between packing into UDIM tiles vs. packing into islands of the second input (using the method above).
I'm updating Houdini 18.0 from build 348 to 460, and it looks like there have been quite a few PDG-related changes since then. I have a TOP network that reconstructs a scene and then renders. Scene reconstruction fetches dependencies from many sources and deletes/modifies/creates nodes (and new work items) through a Python Processor TOP node. It used to work on 348, but on 460 I'm getting this:
First error (line 5): related to an HDA where I have an "OnLoad" module that hides a button parm dynamically. Second error (line 17): related to calling node.destroy() from inside the Python Processor node.
Hi, is there a way to take control of work item names in a ROP Fetch node? I'm sending a lot of renders to HQueue, but it is really difficult to check progress in the monitor when all your jobs are named “something_1_2”, “something_1_7”, etc. The ROP Fetch work items are named top node name + suffix. Ideally I'd like to use an upstream attribute + suffix instead, something like:
Is that possible? Or maybe event handlers are an alternative to this? Or Python changing HQueue job properties in a post-submit process? I'm not sure what the best approach would be.
I just upgraded from H18.0.416 to H18.0.460 and started getting errors in the event handler for “pdg.EventType.CookComplete”. The code inside the event selects the first work item, like this:
Hi, is there a way to get the hou.parm object for the parm currently under the mouse cursor in Python? I'm writing a tool to speed up channel linking and wanted to access that.
I think Qt already has methods that could be useful, like QApplication.widgetAt(QPoint()) (https://doc.qt.io/qt-5/qapplication.html#widgetAt [doc.qt.io]), but I'm not sure whether it works, or, if it does, how to translate the returned widget back into a hou.parm object.