How to divide in COPs

Member
207 posts
Joined: Nov. 2015
How does one divide one image by another in COPs? There seems to be no divide node; the VEX COP filter is very confusing and gives no syntax clues as to how I might write this operation in text, and I cannot figure out how I might use the VOP COP to do this, as it's unclear how to pull in channel information from any input other than the first (despite this node having 4 inputs).

I feel like an idiot...
Member
143 posts
Joined: May 2017
Create a vopcop2filter and connect your images to inputs 0 and 1 as shown in the figure below. (Turn on the display flag of the vop to see the result in the composite viewer).

Dive into the vopcop2filter and create a snippet (VEX) and a copinput. Connect them as shown in the figure below.

Select the copinput and set the input index to 1. That way we have access to the second (1) input.

Now select the snippet and set the name under variable name 1 to "x". This name will be used in the snippet to access the color channels.

Paste the following code into the snippet and you should get a division operation as a result.
// Global variables R, G, B and A are directly accessible.
// To make calculations easier, you can assign them to a local vector.
vector input0 = set(R, G, B);
// Color from the second input.
vector input1 = set(x.r, x.g, x.b);

// Calculation:
// - Add
//vector clr_out = input0 + input1;
// - Subtract
//vector clr_out = input0 - input1;
// - Multiply
//vector clr_out = input0 * input1;
// - Divide
vector clr_out = input0 / input1;

// Assign the calculated vector (clr_out) to R, G, B.
assign(R, G, B, clr_out);

I'm not sure if there is a function to directly access another input from inside the snippet, the way point(1, "Cd", ...) does in the SOP context. I didn't find anything at the moment.
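For completeness, here's a rough HOM sketch of the same wiring. The '/img/comp1' path is an assumption, and the exact copinput/snippet parameter names vary between builds, so treat the trailing comments as instructions rather than code:

import hou

comp = hou.node('/img/comp1')                # assumed COP network path
a = comp.createNode('file', 'input_a')       # dividend image
b = comp.createNode('file', 'input_b')       # divisor image

vop = comp.createNode('vopcop2filter', 'divide')
vop.setInput(0, a)
vop.setInput(1, b)
vop.setDisplayFlag(True)                     # show the result in the composite viewer

snippet = vop.createNode('snippet')
copinput = vop.createNode('copinput')
# In the parameter pane: set the copinput's input index to 1, wire its
# color output into the snippet, and name that snippet input "x".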
Edited by viklc - March 30, 2023 16:01:50

Attachments:
01.png (32.1 KB)
02.png (18.0 KB)
03.png (12.1 KB)
04.png (6.5 KB)

Member
7737 posts
Joined: Sept. 2011
dhemberg
How does one divide one image by another in COPs? There seems to be no divide node; the VEX COP filter is very confusing and gives no syntax clues as to how I might write this operation in text, and I cannot figure out how I might use the VOP COP to do this, as it's unclear how to pull in channel information from any input other than the first (despite this node having 4 inputs).

I feel like an idiot...

The Premultiply node set to 'unpremultiply' does division: it divides the color channels by alpha (C / A). Channel Copy can be used to marshal your divisor into the alpha channel accordingly.
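A rough HOM sketch of that chain, with the node-specific parameter setup hedged in comments (the network path and input node names reuse the assumptions from the earlier sketch):

import hou

comp = hou.node('/img/comp1')
a = comp.node('input_a')            # dividend image
b = comp.node('input_b')            # divisor image

chcopy = comp.createNode('channelcopy')
chcopy.setInput(0, a)
chcopy.setInput(1, b)
# Configure the channelcopy parms so one channel of the divisor lands in
# A (alpha). Alpha is a single channel, so a full per-channel divide
# takes three such passes, one per color channel.

unpremult = comp.createNode('premult')
unpremult.setInput(0, chcopy)
# Set this node's mode to 'Unpremultiply' so it computes C / A.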
Member
207 posts
Joined: Nov. 2015
Neat, thanks! The insane constraints in COPs make them oddly fun to noodle through problems like this. And I hadn't played with snippets; that's a great tool to learn about.

Here's another question: I would like to calculate the average brightness of an image. Ideally I could stash that somewhere in COPs as a single vector, but the image-y way to do it is to compute an image with a constant color value of the average brightness.

Tinkering with this today, I came up with this (see the attached average.png):
It's wildly inefficient, though; doing this in a VOP COP means I'm basically looping through every pixel to find an average, but repeating that loop for every pixel in the image rather than just a single time. I'm curious if there's a way to perform the evaluation once, then set it once?

Attachments:
average.png (114.4 KB)

Member
8516 posts
Joined: July 2007
dhemberg
It's wildly inefficient, though; doing this in a VOP COP means I'm basically looping through every pixel to find an average, but repeating that loop for every pixel in the image rather than just a single time.
Are you sure that's what's happening?
You don't seem to be using any varying inputs in your flow, so I'd assume it would run just once and write to all pixels.

If, however, you are experiencing slowdowns with this, you can potentially also try computing it on a 1-pixel image from the second input image.
Tomas Slancik
FX Supervisor
Method Studios, NY
Member
207 posts
Joined: Nov. 2015
Hm...to be honest, I'm not sure how to be sure. It's very slow (like, it locks up Houdini for 30 seconds or so).

I tried the much more sensible strategy of first downscaling my original image to 1 pixel, then scaling it back up again to its original resolution. Theoretically this should yield something like the behavior I'm after (and when I test it using a constant-colored test swatch, it works fine; the output of this process exactly matches the input). What I'm seeing, though, is that if the input image isn't uniform, it produces results that seem incorrect to me (though I don't understand why): the computed result is darker than I expect, and darker than, and different from, what's produced by the much more explicit VOP approach. I'm not sure if it's a filtering/sampling issue or what; I've tried jiggling the knobs on the scale nodes to see what clues I can find, but nothing seems to cause much of a change in the result.

FYI, this strategy is basically what's described in this very excellent paper [blog.selfshadow.com].
Member
207 posts
Joined: Nov. 2015
Like, here's an example: I make a 300x300 ramp from black to white. Using the VOP approach (which takes like 20 seconds to run on this small image), the resulting output is a 300x300 image that's 0.5 everywhere.

On the left, though, I scale down to a 1x1 pixel, and that pixel is totally black. Of course, when I scale back up again, I get a 300x300 black image.

Attachments:
Screenshot 2023-03-31 at 11.30.18 AM.png (134.9 KB)

Member
7737 posts
Joined: Sept. 2011
dhemberg
Hm...to be honest, I'm not sure how to be sure. It's very slow (like, it locks up Houdini for 30 seconds or so).

I tried the much more sensible strategy of first downscaling my original image to 1 pixel, then scaling it back up again to its original resolution. Theoretically this should yield something like the behavior I'm after (and when I test it using a constant-colored test swatch, it works fine; the output of this process exactly matches the input). What I'm seeing, though, is that if the input image isn't uniform, it produces results that seem incorrect to me (though I don't understand why): the computed result is darker than I expect, and darker than, and different from, what's produced by the much more explicit VOP approach. I'm not sure if it's a filtering/sampling issue or what; I've tried jiggling the knobs on the scale nodes to see what clues I can find, but nothing seems to cause much of a change in the result.

FYI, this strategy is basically what's described in this very excellent paper [blog.selfshadow.com].

Downscaling an image doesn't sample all of the pixels; it uses a sample filter and some subset of pixels. Otherwise, downscaling a high-res image to a thumbnail would take minutes, not the fraction of a second we expect it to take.
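A tiny numpy sketch of the difference (illustrative only; the exact filter the COP scale node uses isn't specified here): point sampling to 1x1 can land on a single dark pixel, while an iterative 2x box filter, the mip-chain approach from the paper linked above, preserves the mean exactly.

import numpy as np

# A 256x256 black-to-white ramp; its true mean is 0.5.
img = np.linspace(0.0, 1.0, 256 * 256).reshape(256, 256)
print("true mean:", img.mean())        # 0.5

# Point sampling down to one pixel can just pick one source pixel:
print("point sample:", img[0, 0])      # 0.0 -- the 'black pixel' effect

# Iterative 2x box downsampling (a mip chain) touches every pixel:
a = img
while a.shape[0] > 1:
    a = (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0
print("mip-chain result:", a[0, 0])    # 0.5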
Member
207 posts
Joined: Nov. 2015
jsmack
Downscaling an image doesn't sample all of the pixels; it uses a sample filter and some subset of pixels. Otherwise, downscaling a high-res image to a thumbnail would take minutes, not the fraction of a second we expect it to take.

Hm, OK; I understand that. But I've used this strategy from the Unity paper with success before, and thought replicating it in COPs would be a fairly simple affair, though clearly I am mistaken (for reasons I don't understand yet).
Member
28 posts
Joined: Nov. 2016
You could use a Python COP to generate the average pixel values and write them into an additional plane, then access that plane with a copinput in another VOP filter, or modify your color values directly in the Python COP.
Here is the documentation for it: https://www.sidefx.com/docs/houdini/hom/pythoncop.html [www.sidefx.com]

A caveat, though, since I was just trying to make this work: a COP node seems to cook twice when you modify the C plane, because C is also displayed as the small preview on COP nodes. It cooks once for the actual image (what is displayed in the composite view, at the expected image size), and once more for the small preview image.
The second cook expects a different number of pixels/resolution to be set with setPixelsOfCookingPlane() and errors out, which is really annoying.
You can collapse the image preview on COP nodes to avoid this, fix it with a little workaround (e.g. checking resolution[0]*resolution[1] against the length of the pixel array, then constructing a pixel array of the correct size and sampling it with getPixelByUV() from the original image, if that makes sense?), or not write to the C plane directly at all.

This will, however, only work until you try to write to the C plane in any node further down the stream, which then results in another error when cooking the image preview for the Python COP.
Maybe someone on the forum knows how to fix this or do it correctly?

I would maybe suggest doing image operations on points or 2D volumes/heightfields instead; that would be a bit easier to work with, since the COP2 context seems to be a bit outdated. In the end, just sample the volume into COPs when writing it out as an image, or do COP-specific operations on it there.
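If all you need is the average as a single vector, a hedged alternative is to pull the pixels out with HOM and average them in plain Python; the node path below is a placeholder:

import hou
import numpy as np

# Read the interleaved RGB floats of the C plane from any COP node.
pixels = np.array(hou.node('/img/comp1/input_a').allPixels('C'))
avg_rgb = pixels.reshape(-1, 3).mean(axis=0)
print(avg_rgb)   # one (r, g, b) average, no constant-color image needed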

Anyway, here's the example code I put into the Python COP to get the average value of each channel in the C plane and write it to the avgC plane, which can then be used for further processing in the network. There are plenty of other and better examples in the linked documentation as well:

import numpy as np

def output_planes_to_cook(cop_node):
    # This sample only outputs the avgC plane.
    #return ("avgC", "C", ) # Uncomment this to see the C plane issue
    return ("avgC",)

def required_input_planes(cop_node, output_plane):
    # This sample requires the avgC and C planes from the first input.
    if output_plane in ["avgC", "C"]:
        return ("0", "avgC", "0", "C")
    return ()

def cook(cop_node, plane, resolution):
    input_cop = cop_node.inputs()[0]

    color = input_cop.allPixels("C")

    # When writing to the "C" plane, the preview cook can pass a
    # resolution that doesn't match the pixel array; truncate each
    # channel to fit (this branch highlights the issue).
    if plane == "C" and resolution[0] * resolution[1] != (len(color) / 3):
        print("pixel difference: ", len(color) / 3, resolution[0] * resolution[1])
        n = resolution[0] * resolution[1]
        r = color[0:len(color):3][:n]
        g = color[1:len(color):3][:n]
        b = color[2:len(color):3][:n]
    else:
        r = color[0:len(color):3]
        g = color[1:len(color):3]
        b = color[2:len(color):3]

    # Average each channel, then fill whole planes with those averages.
    avg_vals = np.array([np.mean(r), np.mean(g), np.mean(b)])

    new_r = np.full_like(r, avg_vals[0])
    new_g = np.full_like(g, avg_vals[1])
    new_b = np.full_like(b, avg_vals[2])

    cop_node.setPixelsOfCookingPlane(new_r, component="r")
    cop_node.setPixelsOfCookingPlane(new_g, component="g")
    cop_node.setPixelsOfCookingPlane(new_b, component="b")
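And a quick, hedged example of pulling the result back out with HOM; the node path is a placeholder:

import hou

# Every pixel of avgC holds the same value, so the first RGB triple
# is the average itself.
avg = hou.node('/img/comp1/python_avg').allPixels('avgC')
print(avg[0:3])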
Edited by gorrod - April 2, 2023 16:07:46

Attachments:
python_cop2_test.hipnc (73.3 KB)
