A "Roll-Your-Own" VOP Packaging System

Member
941 posts
Joined: July 2005
Hi all,

I'm starting this thread as a result of a separate discussion here [sidefx.com].
My intent is to give a rough sketch of a VOP packaging system – a tool that can be used to build VOPs straight from text sources. This is a desirable thing when designing a build system for all things VEX while also supporting a version-control system in a “natural” way.

I may end up posting the whole finished thing in the forum some day, and if I do, this will serve as a road map for how all the bits fit together, so hopefully this is not a redundant effort… (I actually copy/pasted most of the stuff here from my packager's help, so I didn't have to type much)

REQUIREMENTS:

The overriding requirement is this: we wish to be able to write a “vop” file which, when “compiled”, results in an OTL that defines *every* *single* *aspect* of the VOP – no “dangling” bits allowed.
Furthermore, we wish to be able to take *full* advantage of the usual development tools: macros, includes, etc. In short, we want to be able to write these things in much the same way we write VEX or C++ code, and all that that implies. This, in turn, will address our other important requirement, which is that we want to keep all VOP sources under CVS control in the same way we keep all other sources – since they're written in the same way, it should follow that they can be maintained in the same way.

Now to the specifics….

A VOP is just a dialog script with “code”, “outercode”, “input”, “output”, and “signature” sections – a dialog script on steroids. That's it. All other adornments, such as icon images, callback scripts, related DSOs, etc., are the domain of the OTL that wraps them – they have to be dealt with as well, of course, but they don't actually *define* the VOP: they are just resources used by it.
So we'll leave those for later and concentrate on this dialog script thingy first

The dialog script is made up of different sections that represent different “kinds” of things (this is actually part of a bigger problem, since they are themselves a kind of OTL, but that's a separate thread ). It should come as no surprise then, that each section carries its own syntax. For VOPs, our packager needs to deal with the following sections:

ID:
Things that identify the OP. Some are defined in the dialog script, and some in the OTL. These include:
Name
Label
Max/Min Inputs
Icon (optional)
NetMask (optional)


Help:
A block that begins with “help {” and ends with a closing “}”. Each line in the block is quoted, and all embedded quotes are escaped with a backslash “\”.
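
For example, a small help block might look like this (purely illustrative – the text is made up):

help {
"My Noise VOP"
""
"Generates noise from an input position."
"Embedded quotes like \"this\" get escaped."
}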

Parameters:
Parameters are not contained within a block – they live in the main “body” of the script and follow their own syntax rules. The general syntax for a single parameter is:
parm {
name
label
……
}

Note that the following is equally legal (though much harder to read)

parm { name var1 label “My Input Var” … }

…meaning that macro expansions (which end up as a single line) can be exploited to great advantage

Parameters can live in the main client space of the UI, or be assembled into groups. Each group will become a “tab” in the final UI. Groups can be embedded within groups, ending up with tabs within tabs. A group block has the same general syntax as a parameter block, but (to my knowledge) only accepts two (required) token-value pairs. These are (like for parm{}) a “name” and a “label”. The label field is what gets displayed as the tab's name, and the name field is presumably used internally to id the group – although I suspect it's a placebo field at the mo. The keyword that starts a group block is, appropriately: “group”

Here then, is an example with one floating parameter, followed by a tabbed group with two parameters:
parm {
name var1
label “Anti-Social”

}

group {
name g1
label “Hello World”

parm {
name var2
label “Groupie I”

}
parm {
name var3
label “Groupie II”

}
}
Again; it is perfectly legal to write all that in a single line. Can you say “macro”?!

I've been writing ellipses (…) for the implicit guts of each parameter in the examples so far. What goes in there is really more token-value pairs giving more details about each parameter. Here's a complete list of (to my knowledge) all the allowable token value pairs and their corresponding meanings:
name
label
type
<opfilter>
size
default { val }
disable { {token value} {token value} … }
callback
range { lo_val hi_val }
export
invisible
menu { “tok” “val” “tok” “val” … }

The type is really either int, float or string. But with all the hints,
they expand to:
integer
float
vector
vector4
uv
uvw
string
toggle
button
angle
file
image
geometry
oppath
direction
color
color4

Inputs:
The “inputs” section describes the name and type of inputs expected by this VOP. A VOP can have more inputs than parameters or more parameters than inputs – it doesn't matter – although they are usually somewhat synchronized.
However, if your VOP has 100 parameters, you probably won't want to expose each and every one as an input!
The packager should take care of counting the number of inputs and setting that as the “max inputs” field in the OTL for you.
The syntax for each input line is quite simple:
input <vex_type> <parm_name> <label>
Where “vex_type” is an actual vex type: int, float, vector, vector4, or string (I haven't tried the matrix types yet). “parm_name” is the variable name for that parameter/input and “label” in this case is the string that gets displayed when you hold the mouse over the widget – the “tooltip”.

Here are some examples:
input vector pos Position
input float oct Octaves
input float fsize “Filter Size”

Outputs:
As you'd imagine, this section describes the number and types of outputs that this VOP will produce. Our packager should count them and set the “max output” field in the OTL for you.
Much like inputs, the syntax for these is very simple:
output <vex_type> <parm_name> <label>
And they carry the same meaning as with the inputs.
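
For example (“chaos” matches the noise VOP sketched further down; the rest is made up):
output vector chaos Noise
output float alpha “Blend Amount”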

Signatures:
Signatures describe variations in the allowed types for either the inputs or the outputs (or both) for a VOP. The so-called “default” signature, is simply the input/output types as defined within the dialog script (see the Input and Output sections above). But you are also allowed to specify alternative types for one or more of these inputs/outputs; each alternative set then becomes a different “signature” for your VOP (function).

The syntax for each signature is:
signature <label> <name> { input_types output_types }

Please note that “name” and “label” are in reverse order from most of the other dialog commands. Watch out!
The block of types is a space-separated list of the vex-type associated with each input and output respectively. Their order should match the order in which they were defined in their respective sections.
Each VOP must have at least one signature whose name is “default”; it must have an empty type block, but it can have any descriptive label.
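
So for a VOP with, say, one vector input and one vector output, the signature lines might look something like this (the labels and the alternate name are arbitrary):
signature “Vector I/O” default { }
signature “Float I/O” f { float float }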




IMPLEMENTATION:

First, let's arbitrarily say that VOP-type files will have the extension “.vop” – in case this can be used as a discriminant somewhere in our system, to differentiate them from, say, “.h” or “.vfl” or “.C” files.

One of our main requirements is that we be allowed to use “standard” VEX/C/C++ syntax, along with #include directives, macros, etc – the whole deal. The solution that immediately suggests itself is to do a C-Preprocessor pass as a first step in our process; this will do all the things we want, as well as something we don't want: it will, by default, pollute the result with all kinds of “File/LineNumber” entries. A quick look at the man pages for the cpp (c-preprocessor) that's bundled with Red Hat reveals that the option “-P” suppresses line numbers from the output (your version may be different). Great; full C++ syntax; done!
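
In other words, the first pass boils down to something like this (hypothetical file names and include path; adjust the flags to your cpp):
cpp -P -I./include myNoise.vop > myNoise.pre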

We'll then need a second pass that will interpret the contents of our file, and format/assemble them in a way that's recognizable to Houdini as a “dialog script”. The classical way to do this is to embed symbols that are only meaningful to our interpreter, but otherwise transparent to the cpp pass, and which don't show up in the final result. I've arbitrarily chosen the character '@' to signal the beginning of a keyword for my interpreter – lots of other equally successful possibilities here.
So; keywords for our interpreter will be formed by the character '@', followed by something meaningful. Let's also say that these keywords will only occur at the beginnings of lines; no other location is allowed – this just simplifies things
Given that some sections need begin/end code (such as the “Help” section), we'll need to formalize that syntax a bit. Let's tentatively say that our begin/end blocks will be defined by "@<token> BEGIN“ and ”@<token> END" pairs. Additionally, we'll say that if one exists, then so should the other – anything else represents a syntax error.

Armed with these specs, we can sketch what our vop syntax will look like; here's an early attempt:

@NAME myNoise
@LABEL “My Noise”
@ICON /some/image/file.pic
@CONTEXT “surface cop2filter cop2gen sop”

#include <useful1.h>
#include <useful2.h>

@HELP BEGIN
// This should be stripped by cpp, thank-you-very-much
This is the help for my vop… dig it!
@HELP END

parm {
name pos
label “Position”
type vector
}

input vector pos Position
output vector chaos Noise
signature “Default” default { }

@CODE BEGIN
vector $pp = $isconnected_pos ? $pos : P;
$chaos = $myOuterFunction($pp);
@CODE END

@OUTER BEGIN
vector $myOuterFunction(vector $p) {
return vector(noise($p));
}
@OUTER END
The amount of encoding going on is so minimal that even a passing familiarity with any scripting language (awk, perl, python, tcl, whatever) should be sufficient to write an interpreter for that!

So; here's the emerging picture of our packager:

  • 1. It's a shell script that takes one or more “.vop” files as input.
    2. For each file it:
    A) Runs cpp on it. The output of this step gets passed to…
    B) A custom interpreter which looks for ‘@’ tokens and formats them appropriately. Tokens that it doesn't understand (if any) are left alone so they can be interpreted by the caller. I ended up writing mine in about 40 lines of awk (haven't had time to learn perl yet), and embedded it inside the shell script so the whole thing is one file – another pretty standard trick with unix shell scripts. (A rough sketch of this step follows after the list.)
    3. Using “hotl”, the shell script creates a stub OTL and inserts the result of step #2 as the dialog script, and then adds other sections as indicated by the remaining commands embedded (through the same ‘@’ mechanism) in the .vop file that weren't consumed by step #2: things like adding an icon file, callback scripts, whatever – things that need to happen at the OTL-level, as opposed to the dialog-script level.
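
To make that concrete, here's a minimal sketch of what steps 2A and 2B might boil down to – not my actual packager: the file names are made up, there's no error handling, unrecognized ‘@’ tokens (and the whole hotl/OTL assembly of step 3) are left to the caller, and plenty of refinements are skipped:

#!/bin/sh
# vopbuild.sh -- sketch: cpp pass + quote/escape the BEGIN/END blocks
src="$1"
cpp -P "$src" | awk '
/^@HELP BEGIN/  { inblk = 1; print "help {";      next }
/^@CODE BEGIN/  { inblk = 1; print "code {";      next }
/^@OUTER BEGIN/ { inblk = 1; print "outercode {"; next }
/^@(HELP|CODE|OUTER) END/ { inblk = 0; print "}"; next }
inblk {
    gsub(/"/, "\\\"")       # escape embedded quotes
    print "\"" $0 "\""      # quote the whole line
    next
}
{ print }                   # parm/group/input/output/signature and any
                            # unrecognized @ tokens pass through untouched
' > "${src%.vop}.ds"
# ...the real script would now build a stub OTL with hotl, insert the dialog
# script, icon, event scripts, etc. (step 3) -- omitted here.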

    Looking at our vop fragment above, you'll no doubt note that the input, output, and signature sections needed no special treatment, so they were not accompanied by a ‘@’ keyword.
    Also; alarm bells should be going off when looking at the parameter definition – these things could get mighty lengthy for an actually useful VOP! Not only that; I need to learn a whole new syntax for defining parameters, yikes!
    Not to worry… we're in luck:)
    Remember we're doing a cpp pass… this means full macro expansions are allowed! (also recall that all these things can be defined in a single line – which is what macros expand to).
    Say we define a header like this:

    file: vop_stuff.h
    #ifndef vop_stuff_h_GUARD
    #define vop_stuff_h_GUARD

    #define VOP_PARM(TYPE,NAME,LABEL,DFLT) \
    parm { \
    name NAME \
    label LABEL \
    type TYPE \
    default { DFLT } \
    }

    #endif

    Now I can define the parameter for my vop as follows:


    #include <vop_stuff.h>


    VOP_PARM(vector,P,Position,0 0 0)



    One line, and now it sports a default value… that's *much* better
    Any programmers out there can, I'm sure, see what kind of automated “yummyness” this can lead to
    Same goes for the other sections – for example, in my current version, I can define an HTML table and a row entry with custom (embedded) style sheets in 4 lines of code (code that doesn't interfere with the main .vop file because it's included through #include<>)… you get the picture.

    Obviously; once everything is working and all the basic macros are written, this turns into a pretty slick and efficient way to write VOPs. Also; given that everything is now text-based source files, we can keep vop definitions in the same development structure as our VEX shaders and our C++ DSOs… and put the whole thing under CVS. Sweet!


    hmmmm…. I think I'll stop there for now. This should give an idea of where to start at least – it *looks* a lot more complicated than it really is; trust me!
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.
Member
51 posts
Joined: July 2005
Hey Mario,

Great stuff – just last week I was speaking with our tools guy about coming up with a VOP wrapper.

Your work will be really useful.

Cheers

Andre
Staff
2591 posts
Joined: July 2005
Mario Marengo
Hi all,


IMPLEMENTATION:

One of our main requirements is that we be allowed to use “standard” VEX/C/C++ syntax, along with #include directives, macros, etc – the whole deal. The solution that immediately suggests itself is to do a C-Preprocessor pass as a first step in our process; this will do all the things we want, as well as something we don't want: it will, by default, pollute the result with all kinds of “File/LineNumber” entries. A quick look at the man pages for the cpp (c-preprocessor) that's bundled with Red Hat reveals that the option “-P” suppresses line numbers from the output (your version may be different). Great; full C++ syntax; done!

As a note, on some systems (Win) there isn't a cpp implementation supplied. As an alternative, you might want to try “vcc -E” which simply runs the VEX CPP on the input file…. There's no option to suppress the # lines, but these can be useful since cpp errors are reported through this mechanism… You should be able to scan easily for “# error” or “# warning” and do appropriate stuff.
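
For instance, something along these lines (hypothetical file name, and assuming the messages land in the preprocessed output as described above):
vcc -E myNoise.vop > myNoise.pre
grep -E '^# (error|warning)' myNoise.pre && echo "preprocessor trouble" >&2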

Of course, there's also the issue that Houdini will also run cpp on the VOP definition file, so by pre-running cpp, you're limiting the ability to do cpp type stuff in VOP definitions (though the issue might be moot since you've already had the advantage of running cpp..)

P.S. Though vcc has a simple cpp parser, there might be a couple of advantages:
- There are CPP extensions like the environment(), access() and strcmp() functions.
- The output should be standard across all Houdini supported platforms
- The program is pretty much guaranteed to be there.
Member
941 posts
Joined: July 2005
mark
As a note, on some systems (Win) there isn't a cpp implementation supplied. As an alternative, you might want to try “vcc -E” which simply runs the VEX CPP on the input file…. There's no option to suppress the # lines, but these can be useful since cpp errors are reported through this mechanism… You should be able to scan easily for “# error” or “# warning” and do appropriate stuff.

DOH! I've been using the gnu tools for so long that it didn't even occur to me to use vcc itself!
Yes! Like Mark says: cross-platform, funky extensions, etc, etc… much goodness!
So replace every instance of “cpp -P” in my original post with “vcc -E”!

mark
Of course, there's also the issue that Houdini will also run cpp on the VOP definition file, so by pre-running cpp, you're limiting the ability to do cpp type stuff in VOP definitions (though the issue might be moot since you've already had the advantage of running cpp..)
Yes. Looking back and re-reading my post, I realize that there are quite a few gaping holes in my description – not to mention some copy-paste blunders! (but then the post was getting pretty long as it was )
What you mention is one such omission. So; let me see if I can cover some of these things here…

Here's a quick example of what Mark's talking about:

Say your VOP takes a “position” as an input (we'll call the parameter “pos”). But if the user doesn't connect anything to it, you want “pos” to graciously default to some “sane” value. Let's also say you want to get fancy and want your VOP to work “correctly” in several different VopNet types (sops, cops, surface, etc.). Suddenly, there's no single “sane value” for “pos” anymore – “sanity” is now in the eye of the VopNet within which your vop's being instanced!
The way to deal with this is by using a preprocessor switch in the “code” section of your vop; and it would look something like this:


code {
#if defined(VOP_COP2)
vector $pp = set(X,Y,0);
#elif defined(VOP_CHOP)
vector $pp = set(V,0,0);
#else
vector $pp = P;
#endif

$pp = $isconnected_pos ? $pos : $pp;
}

At the end of this, the local var “pp” has either the value of the input “pos” if something was connected to it, or a “sane” default value that's context-sensitive. Sweet! BUT… this is how it should look in its *final* form, not while it's going through our cpp pass! Did you notice all those hash-marks? cpp would try to make sense of them and fail (e.g., there's no VOP_CHOP symbol #defined anywhere in our module). Mayday! Mayday!

All that to explain why there are cases where we *need* to let some directives pass through our interpreter unscathed, so that they may be resolved only at the last possible moment (when symbols like VOP_COP2 are defined) by Houdini's cpp pass. Make sense?
All it means is we need to choose YAS (yet-another-symbol ) for our interpreter to “escape” directives. I'm afraid that just slapping a ‘@’ in front of any line containing a ‘#’ is not good enough – cpp is very jealous about its hash-marks. So I've opted for ‘@!’ (don't ask), which is replaced with ‘#’ wherever it is found. With this choice, the above snippet, as it should appear in the “.vop” file, is:


@CODE BEGIN
@!if defined(VOP_COP2)
vector $pp = set(X,Y,0);
@!elif defined(VOP_CHOP)
vector $pp = set(V,0,0);
@!else
vector $pp = P;
@!endif

$pp = $isconnected_pos ? $pos : $pp;

@CODE END


And after it's been through our interpreter, it should look like:


code {
“ #if defined(VOP_COP2)”
“ vector $pp = set(X,Y,0);”
“ #elif defined(VOP_CHOP)”
“ vector $pp = set(V,0,0);”
“ #else”
“ vector $pp = P;”
“ #endif”
“ ”
“ $pp = $isconnected_pos ? $pos : $pp;”
}

But I usually make a concerted effort to hide things that look *that* ugly! So; stashed away in some include file of mine, one may stumble across this hermetic little tidbit:

file really_useful.h:

// Context-sensitive ‘P’
#define VOP_CTXT_P(VAR) @BR \
@!if defined(VOP_COP2) @BR \
vector $##VAR = set(X,Y,0); @BR \
@!elif defined(VOP_CHOP) @BR \
vector $##VAR = set(V,0,0); @BR \
@!else @BR \
vector $##VAR = P; @BR \
@!endif


// Test connection
#define CONNECTED(TYPE,LHS,VAR,DFLT) @BR \
TYPE $##LHS = $##isconnected_##VAR ? $##VAR : DFLT

// Test connection with assign
#define CONNECTED_CTXT_P(LHS,VAR) @BR \
VOP_CTXT_P(LHS) @BR \
$##LHS = $##isconnected_##VAR ? $##VAR : $##LHS

//…


(More on that @BR business later…)
The important thing is that now our context-sensitive assignment to “pos” becomes:


#include <really_useful.h>
//…
@CODE BEGIN
CONNECTED_CTXT_P(pp,pos);
@CODE END

I can live with that


WARNING: If you end up using vcc -E as a preprocessor, note that now there are cases where hash marks are valid and part of the resulting code. So; careful how you go about discarding lines with ‘#’ in your interpreter!


Some other quick notes:

1. I have found that some part of the vex compilation process (vex cpp?) has a limit on line lengths. Not sure what the number is exactly (Mark?), but macros can get a little out of hand sometimes and will expand past this limit, so I add another symbol, ‘@BR’, which the interpreter simply replaces with a new-line character.

2. In my original post, I forgot to mention two of the most important sections: “code” and “outercode”! :shock:
They are quoted blocks that begin with “code {” and “outercode {” respectively, and end with a matching “}”. Their contents are straight vex. Simple.

3. When writing the “outercode” section, make sure that all local symbols (variables, function names, etc) are prefixed by a ‘$’. This will prevent name collisions when the VOP gets instanced multiple times within the same net. (or when other vops happen to use the same symbols – less likely, but possible).

4. I didn't quite explain why we need begin/end markers. It's not because they need to get replaced with “section_name {” and “}” respectively (it would be simpler to just put that directly in the vop file). It is because we need to signal to the interpreter that the contents need to be treated in some special way: in the majority of cases, this treatment is simply to quote each line and escape all embedded quotes. In fact, that's the case for all sections except “input”, “output”, parameters, and “signature”.

5. I breezed over how the OTL stub is created and populated with the different sections and all that, but this is (again!) getting too long, so I'll stop here
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.
Member
12469 posts
Joined: July 2005
Thank you very much for your ever-so-complete response! You hit the nail on the head as far as my train of thought was going; but you had already been around that block. I'll say that I am hesitant to place all sources inside OTLs and have to rely on Houdini as an IDE for those larger operators or shaders. The convenience of being able to just edit in place and embed VEX code right there in a Houdini session is really great though. Perhaps a convenient tool would be to configure your text editors (emacs? nedit? or the ghastly vim??) to be able to shell and extract/commit changes to OTLs using the hotl commands and hconnect to Houdini to cause a refresh of that OTL; and in this way go through your custom packager on the way to the OTL… anyway, just thinking out loud…

Up until now, we've been keeping any large custom shaders outside of VOPs, for no reason other than plain convenience. One thing that's marginally scary about VOPs sometimes is that you're still leaving the user to connect the output Cf/Of/Af or N/P up for themselves, and while you can do a lot of work to make this the easiest task in the world, there still can be mistakes easily made. Perhaps a VOP packager could contain an optional post script to make any connections to the Output VOP connectors too, I don't know.

I'm still mulling over this issue when I can find the time… thanks again for such a great post.
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
Member
941 posts
Joined: July 2005
jason_iversen
I'll say that I am hesitant to place all sources inside OTLs and have to rely on Houdini as an IDE for those larger operators or shaders. The convenience of being able to just edit in place and embed VEX code right there in a Houdini session is really great though.

Agreed. It is nice, but currently limited to small scale stuff. It quickly becomes hard to manage as things grow in complexity… but I admit this is an incredibly biased view from someone who's never really “got into” visual development environments :shock:

jason_iversen
Perhaps a convenient tool would be to configure your text editors (emacs? nedit? or the ghastly vim??) to be able to shell and extract/commit changes to OTLs using the hotl commands and hconnect to Houdini to cause a refresh of that OTL; and in this way go through your custom packager on the way to the OTL…

“…ghastly vim??”
That sound; that low ominous rumble… that apocalyptic whisper that made you catch your breath. That, sir, was the sound of 90% of the unix community turning around to give you a communal “Evil Eye”!

Now… where were we?… Oh yes:

Man; that would be a lot of coming-and-going and convertin'-on-th'-fly… hmmm… dunno. I know you're just thinking out loud, but… well; I suppose it would be easy enough to try and hack together a little prototype of your idea, but I'm not sure I'd feel comfortable with all that traffic.
Actually; hold on. What's the format of the thing you're editing inside Houdini, then?

jason_iversen
Up until now, we've been keeping any large custom shaders outside of VOPs, for no reason other than plain convenience.
Same here. I'm developing all these vops and they *are* getting used; but their applications have been pretty tiny so far. The big stuff is still in shader format (and most of it still in RSL, lol!).

jason_iversen
Perhaps a VOP packager could contain an optional post script to make any connections to the Output VOP connectors too, I don't know.
The packager creates a static thing – the OTL. It would be up to the mechanism that instances VOPs to auto-run scripts to do any dynamic things like wiring and such.
Incidentally; isn't that “Event” panel designed for just that type of thing? “On <event> DO <stuff>”? Haven't looked at it yet, but it looks like exactly what you're looking for – check it out.

Cheers!
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.
Member
12469 posts
Joined: July 2005
I was totally thinking of using the ever handy Event scripts for this, absolutely. Of course I have no idea of how to create one from the Outside, but I'm pretty sure some cool trickery with otlexpand could manage that pretty easily. I've made no research into it, but I seriously doubt there is any OTL “api” to attach these event scripts to an operator in an OTL, right?

I am also wondering if it'd be a difficult thing for SESI to allow us to have more than one Operator Properties… window open to allow easy copying. I sense not, since perhaps the Handle promotion gizmos might well rely on only one Properties.. window to be open. As well, shelling to an external editor freezes Houdini while you're busy editing, making it impossible to open 2 editors for copy/paste goodness. Undeniably, would be cool though?

BTW, some of my recent experiences with OTLs raise another point wrt the “publishing” of operators to common locations. I think I'll post those on the Early Access forum.

Cheers,
Jason
Jason Iversen, Technology Supervisor & FX Pipeline/R+D Lead @ Weta FX
also, http://www.odforce.net [www.odforce.net]
Member
941 posts
Joined: July 2005
jason_iversen
I was totally thinking of using the ever handy Event scripts for this, absolutely. Of course I have no idea of how to create one from the Outside, but I'm pretty sure some cool trickery with otlexpand could manage that pretty easily. I've made no research into it, but I seriously doubt there is any OTL “api” to attach these event scripts to an operator in an OTL, right?

API? … no. I don't think so. Or maybe “not yet?”
But after a very quick look, it's dead easy to add them from the “Outside”. It's actually no different than adding any other section (like IconImage, for example). This is one of those things I didn't get to mention when describing the packager, but it's all part of the same thing…

The actual (unpacked) OTL structure is pretty straight forward:
At the top – the root directory of the OTL – you get one sub-dir per embedded Op-OTL, and the expected indexing files that point to (and describe) the contents. This is the “outer bag”. There are two descriptive (index) files (as of this writing, natch!) – but one points to the other, so I guess that “technically”, there's only one:
  • INDEX__SECTION
    Holds one entry per embedded OTL. Each entry is made up of a bunch'o descriptive token-value pairs. This is mostly “ID” stuff such as Name (“Operator”), Label, Inputs, etc. I think of this as the “header” for each OTL.
  • Sections.list
    This is the actual index. One entry per embedded OTL, as well as an entry for the INDEX__SECTION itself – the one I just mentioned above.
    And inside each actual OTL directory you get another “Sections.list” (with the same structure and purpose as the one one level up that I just mentioned) which holds one entry per “section” of the OTL – these are the “sections” you see in the UI. Each entry, again, is a name-label pair. So, for example, if you had an embedded binary icon image, you'd see its entry as IconImage IconImage. In this case both name and label are the same; and the separator is a Tab character. Of course you'd need an actual file in the same directory called “IconImage” that holds the actual pic.
    So…. I'll give you one guess as to what you'd see if you had every one of those Event “sections” defined (tic-toc-tic-toc-tic-toc…) you win!
    PreFirstCreate PreFirstCreate
    OnCreated OnCreated
    OnUpdated OnUpdated
    OnDeleted OnDeleted
    PostLastDelete PostLastDelete

    And, again, an actual file with the appropriate name that holds the hscript commands that run on each event.
    Done.

    And, in my case, the “api” is three more lines to the packager, and a generic command (I'm using “@ADD_SECTION”) to add arbitrary sections.

    For Mark Tucker (I think): I'd love to hear why underscores in the file names (not in the labels) seem to need to get doubled up: foo_bar → foo__bar?

    jason_iversen
    As well, shelling to an external editor freezes Houdini while you're busy editing, making it impossible to open 2 editors for copy/paste goodness. Undeniably, would be cool though?
    How about multiple files in one editor? no go?


    I'd love to hear about it once you get it working… just to see the kind of issues you run into.
    Please keep the posts coming! 8)
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.
Staff
4438 posts
Joined: July 2005
Mario Marengo
For Mark Tucker (I think): I'd love to hear why underscores in the file names (not in the labels) seem to need to get doubled up: foo_bar → foo__bar?

That's because the “_” character is used as the “Escape” character when there are illegal characters in the section name. Because there are a number of characters that aren't legal within file names, we had to do some sort of replacement scheme to convert section names into file names. And here it is:

“/” -> “_1”
“\” -> “_2”
“\n” -> “_3”
“\r” -> “_4”
“\t” -> “_5”
“$” -> “_6”
“~” -> “_7”
“_” -> “__”

So if your section name was “f$o b_r”, the automatically generated file name would be “f_6o b__r”.

But of course none of this should really matter to you, since it is only used when we expand an OTL. For collapsing an OTL, that section list file lets you associate any file name you want with any section name you want. So you could do:

file_name “some section name”

If you were to collapse and re-expand this OTL, you'd get a different section list entry, but I assume you always keep your OTLs in expanded form, so this isn't really a problem. But I know how much people like to know the precise internal details, so there they are

Mark
Staff
4438 posts
Joined: July 2005
Mario Marengo
jason_iversen
I was totally thinking of using the ever handy Event scripts for this, absolutely. Of course I have no idea of how to create one from the Outside, but I'm pretty sure some cool trickery with otlexpand could manage that pretty easily. I've made no research into it, but I seriously doubt there is any OTL “api” to attach these event scripts to an operator in an OTL, right?

API? … no. I don't think so. Or maybe “not yet?”

If you have the HDK, there is an API. The FS_IndexFile and OP_OTLLibrary classes are the ones to look at. They aren't really documented in the HTML help, but their interfaces are pretty simple. The “hotl” application is only about 200 lines long, and about 3/4 of that is parsing and error messages and such. So if you find the one-step OTL collapse and expand operations don't give you enough flexibility or control, the HDK provides you with options.

Mark
Member
941 posts
Joined: July 2005
mtucker
That's because the “_” character is used as the “Escape” character when there are illegal characters in the section name. Because there are a number of characters that aren't legal within file names, we had to do some sort of replacement scheme to convert section names into file names. And here it is:

“/” -> “_1”
“\” -> “_2”
“\n” -> “_3”
“\r” -> “_4”
“\t” -> “_5”
“$” -> “_6”
“~” -> “_7”
“_” -> “__”

So if your section name was “f$o b_r”, the automatically generated file name would be “f_6o b__r”.

Aahhh; thank you, sir. Curiosity satisfied. And how!


mtucker
But of course none of this should really matter to you, since it is only used when we expand an OTL. For collapsing an OTL, that section list file lets you associate any file name you want with any section name you want. So you could do:

file_name “some section name”

If you were to collapse and re-expand this OTL, you'd get a different section list entry, but I assume you always keep your OTLs in expanded form, so this isn't really a problem.

Well, now; that explains a bunch of things, thank you.
But no, I don't “keep” (as in “maintain”) OTLs in any form – they are simply the things that come out the other end of a “make”. As such, they are in expanded form only during the briefest of moments during a build, then always collapsed thereafter – never again expanded (no write permissions for *anyone*) – until they get possibly replaced by a new version. To be perfectly honest, I don't really *care* what happens to an OTL after it gets built – that headache belongs to another part of the pipeline altogether.

mtucker
But I know how much people like to know the precise internal details, so there they are

Yes… “my bad”
Thanks for being a sport and playing along, much appreciated.
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.
Member
941 posts
Joined: July 2005
mtucker
Mario Marengo
API? … no. I don't think so. Or maybe “not yet?”
If you have the HDK, there is an API.

It is therefore accurate to say that Houdini has no API, since the HDK is a different product.

<running for cover!>

hohohoho …. well; I mean… C'mon! How could I possibly resist that, huh?
Mario Marengo
Senior Developer at Folks VFX [folksvfx.com] in Toronto, Canada.