Mike Lyndon has done a great overview of Cops, definitely worth watching: https://vimeo.com/247302953

Cops is Houdini's built-in compositing network. In theory it's powerful, but in practice it's one of the older systems within Houdini and needs some love. If you're in the market for an unstable ''and'' slow compositor, Cops has you covered.

It's a little frustrating, as there are tricks you can do in Cops that are hard or impossible in any other system. Writing vex and vops filters is one thing, but being able to query geometry attributes in 3d and read them into pixels directly, without requiring an expensive render to do raycasting, is a pretty cool trick. It's the secret sauce behind the SideFX Labs VAT tools and fast baking operations.

You can also refer to Cops nodes directly wherever you would otherwise use a texture path. Just put in the path to the node, eg /img/OUT_cool_cop, then go back and prefix it with op:, so the path would look like

op:/img/OUT_cool_cop

Just note that this cook chain can get unstable; sometimes you'll find the textures aren't updating when you want, or go the dreaded pale pink (meaning the texture couldn't be found), or you get garbled results... all the joys of working with Cops.

Still, every time I swear I'll never use Cops again, I end up using it. Bad cops bad cops, whatcha gonna do?

=== Disable thumbnails ===

A surprising performance hit comes from the thumbnails Cops uses by default, so it's best to turn them off. You can do this in the preferences (Edit -> Preferences -> Network Editor, Nodes and Trees): turn off 'Show previews on New COP Nodes'.

[[File:cops_thumbnails.png]]

On an existing network you'll need to select all the nodes, right-click, and choose Flags -> Thumbnail.

[[File:cops_thumbnail_rclick.png]]

=== Hide on screen controls ===

Some of the tools can get a bit busy and distracting on screen; you can toggle them with the stow menu on the right of the viewer.

[[File:cops_ui_hide.gif]]

=== Cops and Vex ===

[[File:cop_vex_screencap.gif]]

Download scene: [[:File:cop_vex.hipnc]]

There's no Cops wrangle yet, boo.

A Cops Vop with a snippet works though, yay!

It still feels a little crusty (why all capital letters? Why X Y R G B A as separate attributes rather than P and Cd?), but it's still fun to play with.

If you're doing straight colour manipulation, that's simple enough: read the incoming rgba, modify, write it out.
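
For example, a minimal sketch of a colour-only tweak (a hypothetical gamma adjustment; R, G and B here are the per-pixel float channel values wired into the snippet):

<source lang='javascript'>
// read the incoming channels, modify, write them back out
float gamma = 2.2;
R = pow(R, 1.0/gamma);
G = pow(G, 1.0/gamma);
B = pow(B, 1.0/gamma);
</source>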

If you're transforming pixels (say a rotate or a warp) the workflow is different; it's similar to a primuv call in that you request an xy position, and it returns a value.

The vex functions to do this are cinput(), binput() and finput(), which return low/mid/high quality results respectively. An easier way is to use the 'cop input' vop, which wraps all 3 and gives you a combo box to choose the quality.

So to do a sine ripple on an image, put down a cop vop filter, dive inside, make a snippet, wire in X and Y, and modify X like:

<source lang='javascript'>
vector2 pos = set(X, Y);
pos.x += sin(pos.y*60)*0.02;  // offset the x lookup with a sine wave based on y
X = pos.x;
Y = pos.y;
</source>

That then feeds the u and v inputs of a cop input vop. Set its output signature to 4D vector (ie rgba), then split that vector back into individual floats and wire them to R G B A.

==== Global inputs are directly available ====

Not many people do Cops stuff, so it was fun to watch [https://www.youtube.com/watch?v=iDeE_XtaKk0&t=3s&ab_channel=KonstantinMagnus Konstantin] at work and see what's possible. A handy thing I learned was that inputs are available without needing to wire them from the global inputs to the snippet:

float foo = R;

[[File:cops_snippet_get.gif]]

==== Global outputs are directly settable with assign() ====

Similarly, Konstantin showed you can export your results directly, no wiring required:

assign(R, G, B, mycol);

[[File:cops_snippet_set.gif]]

==== For emphasis, no really, cops wrangles are super clean ====

Previous attempts at Cops wrangles (which you can see below) used the inputs and outputs on a snippet. You don't need to: all the global inputs are available, and all the global outputs are assignable. Great for quick prototyping; you don't need to be distracted by any vops stuff until you want to start making UI, similar to working with geometry wrangles.

Dare I say it, this might be even easier and faster than using the [http://www.nukepedia.com/written-tutorials/expressions-101 Nuke Expressions node], which I've written a lot about!

Check it:

[[File:cops_snippet_expression.gif]]
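
To make that concrete, here's a minimal 'wrangle style' sketch (a hypothetical vignette; the only ingredients are the X, Y, R, G, B globals and the assign() trick above, no wiring at all):

<source lang='javascript'>
// darken towards the image edges, reading and writing globals directly
float d = distance(set(0.5, 0.5), set(X, Y));  // distance from the image centre
float mask = 1 - smooth(0.3, 0.9, d);          // fade out towards the corners
vector col = set(R, G, B) * mask;
assign(R, G, B, col);
</source>
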
==== Sample 3d geometry in cops ====

[[File:cops_geo_sample.png]]

Download hip: [[:File:cops_geo_sample.hip]]

As I write this, the SideFX Labs team have just released a wrapped-up Cops node called Attribute Import that does exactly this, but it's good to know anyway.

You have geo with attributes, it has clean non-overlapping uvs, and you want to unwrap it to a texture in Cops.

The vex '''uvsample''' function can do this for you. Just specify the geo to sample, the attribute you want, the uv attribute on the geo (probably @uv), and the actual uv coordinate to look up, which in Cops would be the 0-1 xy coordinate of each pixel. Set this as the pixel colour, done.

<source lang="javascript">
vector2 xy = set(X, Y);                     // current pixel as a 0-1 uv
vector col = uvsample(geo, 'N', 'uv', xy);  // sample the normal at that uv
assign(R, G, B, col);
</source>

Paul at SideFX will be the first to jump on me, in that this will be sliiiiightly off if you have really high res detail; you want to ensure you're sampling at the center of each pixel, not the corner, so you shift the uv lookup position by half a pixel. But if you're that precious about perfect results, you may as well use the Labs tool. :)
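
If you do want that correction without reaching for the Labs tool, it's a small change to the earlier snippet; a sketch, assuming X and Y land on pixel corners, and that XRES and YRES hold the image resolution (YRES is used the same way in the scanline example further down):

<source lang="javascript">
// shift the lookup by half a pixel so we sample pixel centres, not corners
vector2 xy = set(X + 0.5/XRES, Y + 0.5/YRES);
vector col = uvsample(geo, 'N', 'uv', xy);
assign(R, G, B, col);
</source>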

=== Latlong to cubemap ===

[[File:latlong_cubemap_preview.gif]]

Download scene: [[:File:latlong_to_cubemap_v04.hipnc]]

[[File:cop_vop_latlong_cubemap.jpg]]

Many thanks to Eetu for solving the last bit I was struggling with!

Been doing more and more pano experiments in Houdini lately, but I always have to keep Nuke open to do relatively simple things. A key thing is being able to transform from an equirectangular/latlong panorama to a cubemap. Having recently worked out how to put a vex snippet in a cop vop, this seemed a good thing to try.

First, I did some intense research on polar maths and space conversion, which led me to this post:

http://stackoverflow.com/questions/29678510/convert-21-equirectangular-panorama-to-cube-map

The trickiest part here was taking the python answer and translating it into vex. Not the language per se, but taking the sequential method of the python example ('for every pixel in the image do this...') and making a parallel processing version in vex. The python example also has lots of code for anti-aliasing, which wasn't a concern here as the cinput vop takes care of all that.

Anyway, got it all ported and... it was almost right. I could see the N/S/E/W planes were sort of working, but the top and bottom were skewed. At that point it was past midnight, so I posted my work in progress to the discord Houdini chatroom.

In the morning, Eetu found the fix, amusingly using a technique similar to mine: try to understand the logic behind it, work out where the fix should be, find it didn't work, then randomly insert multipliers here and there until one started to move things in the right way, then play with that number until it's fixed. At some point I'll go back and try to understand why, but not right now.

To show off, you can slide the pano horizontally (making sure wrap is enabled), and you get that cool cube tumble effect. I also show off that handy feature of Houdini where you can use http paths to images. I'd planned to use the panos that ship with Houdini in $HFS/houdini/pic/, but annoyingly they're in a Houdini cubemap format already, and to unpack those into a latlong, then back to a cubemap, seemed more effort than it was worth.

The idea behind the code is to treat the image as a new blank cubemap, and work out where to look up the correct values from the latlong. First it identifies the NSEW zones, which are every 1/4 across the image. Then it divides the image vertically into thirds, and defines the top third as the top of the cube, and the bottom third as the bottom of the cube.

Now that it knows the regions, it calculates the uv position on the sphere using the previously defined outImgToXYZ function. This does a conversion from the 2d cubemap positions into 3d sphere positions. This is then used to get the polar coordinates (ie, the compass direction, or theta, then the up/down angle, or phi), to find the pixel we need on the latlong, which is in turn used to drive the copinput vop.

The top and bottom regions will cover the entire top and bottom strips, so I make a mask based on the regions to multiply the results against later, to get a clean cubemap image. You can bypass the multiply1 node to see the effect of this.
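
To make the middle step less abstract, here's a sketch of just the 'front face' case of that lookup (a hypothetical simplification, not the exact code in the hip; it assumes the front face occupies the first 1/4 strip horizontally and the middle third vertically):

<source lang='javascript'>
// face-local coords -> 3d direction -> polar angles -> 0-1 latlong uv
float pi = 3.14159265;
float ux = fit(X, 0.0, 0.25, -1, 1);     // horizontal position within the face
float uy = fit(Y, 1.0/3, 2.0/3, -1, 1);  // vertical position within the face
vector xyz = set(ux, 1.0, uy);           // point on the front face of a unit cube
float theta = atan2(xyz.x, xyz.y);       // compass direction
float phi = atan2(xyz.z, length(set(xyz.x, xyz.y, 0.0)));  // up/down angle
float u = (theta + pi) / (2*pi);         // remap angles to 0-1 latlong coords
float v = (phi + pi/2) / pi;
// u and v then drive the copinput vop to fetch the latlong pixel
</source>
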
=== 80s stuff ===

[[File:stranger_effects.png]]

Download hip: [[:File:retro_cyan.hip]]

[http://forums.odforce.net/topic/32148-classic-retro-grid-80s-sci-fi-best-approach/?do=findComment&comment=177427 This odforce post on wireframe rendering] made me try a few things I've wanted to have a go at, which I'll sum up as 'retro kitsch'.

Scanlines are easy enough: vopcop filter, combine the R G B channels into a single Cd vector and X and Y into a P vector, then run this in a snippet:

<source lang='javascript'>
float scanline = clamp(sin(P.y*YRES),0,1);  // sine stripes, frequency tied to the vertical resolution
scanline = fit(scanline,0,1,0.6,1);         // remap to 0.6-1 so lines darken rather than go black
Cd *= scanline;
</source>

At one point I wanted to test an effect and needed a grid. Vopcop2 generator, snippet, thus:

<source lang='javascript'>
float linewidth = 0.002;
float gridsize = 0.04;
Cd  = P.x % gridsize < linewidth ? 1 : 0;  // vertical lines
Cd += P.y % gridsize < linewidth ? 1 : 0;  // horizontal lines
</source>

Next was a chromatic aberration effect, which is basically a radial distort that is mostly 0 at the center and increases at the edges, applied slightly differently to the r/g/b channels.

The core of the distort is this in a snippet:

<source lang='javascript'>
float d = distance({0.5,0.5}, P);  // distance from the image centre
d = smooth(0.2, 2, d);             // ramp the effect up towards the edges
d *= 0.05;
vector2 disp = set(d, d);
// flip the offset so it points away from the centre on each side
if (P.x < 0.5) {
    disp.x *= -1;
}
if (P.y < 0.5) {
    disp.y *= -1;
}

P += disp;
// rescale the lookup around the centre
P -= 0.5;
P *= 0.9;
P += 0.5;
</source>

That then drives 3 copinput vops as before (each being run through an addconstant to slightly increase/decrease the effect for each channel), then they're combined.

That, plus some blurs, convolves, and other hacky things, made something I was kinda happy with.

=== Using cops as a renderer ===

[[File:cops_render_screenshot.png]]

Download hip: [[:File:cops_render.hip]]

Cops lets you query info from sops via vex. The [https://www.sidefx.com/tutorials/game-tools-maps-baker/ SideFX Labs Maps Baker] tool uses this, looking up uvs, querying sops normals, positions and other attributes, and baking them down into images in cops.

'''uvsample()''' is one vex call to do this; give it a uv position, ask it for an attribute to return at that uv position, go do stuff. In cops you can use the current pixel X Y as the uv location to query, fun abounds.

It then raised the question: if you can do this in uv space, could you do it in other ways, say camera space? I asked [https://twitter.com/ambrosiussen_p Paul Ambrosiussen], who said yep, you just need to get your uv values via whatever means, and then you can do what you want.

This led me to the '''intersect()''' and '''fromNDC()''' vex functions. The idea being this:

# For each pixel in cops, get its position in camera space
# Project that pixel into the scene, and if it hits some geo, get the uv at that location

Early tests proved promising; I could feed those results to the copinput vop and see textured geo. While googling for vex help I stumbled across an odforce post and youtube vid by [https://www.konstantinmagnus.de/index_en.html Konstantin Magnus]:

https://www.youtube.com/watch?v=iDeE_XtaKk0&t=2s&ab_channel=KonstantinMagnus

Not only did he get the uv sample, Konstantin was running lights, shadows, reflection, occlusion... I was struggling to play with duplo blocks, then looked over and saw he'd made the Eiffel Tower out of Technics. Ouch.

At any rate, his video let me get through the process much faster, and it all worked great.

One thing Konstantin didn't touch (or that I didn't see a solution for) was linking the Houdini camera focal length to this setup. Konstantin had an arbitrary 'zoom' factor; I was hoping to find a way to have it directly driven by the camera.

This is where fromNDC() came in; you give it a camera path, and it will reformat the input point values as if they were rendered through the camera. Perfect, except... it doesn't work in cops. In sops and in a mantra render it's perfect, but in cops it gave glitchy results. A throwaway line in the help says

"fromNDC may not be well defined outside of sops/mantra/light context."

Doh.

After a day of swearing, I found the workaround: the '''perspective()''' call. This generates a matrix to do the perspective projection of a camera, but it too has a throwaway line in the help that was annoyingly obtuse:

"If you want the world-to-camera matrix, it's simply worldToNDC = transform * projection"

Much swearing later, I got it to work, helpfully by doing it in sops first, then matching colours between cops and sops until it did what I expected (hence this hip setup).

SO:

# Get your sample positions as a 0-1 square, centered at the origin
# Multiply the sample positions by the perspective() matrix, which scales the sample values the right amount for the focal length
# Move those values forward a bit in z, then multiply by the camera transform, which puts those sample positions in front of the camera, like a 'lens'
# Calculate a vector from the camera focal plane to each sample point; this is the ray direction to fire out into the scene
# Use intersect() to fire said ray from the sample point into the scene, returning the prim it hit and the intrinsic uv at that location (sketched below)
# Use primuv() to query the human friendly texture uv at that prim + intrinsic uv value
# Feed that to copinput to get a textured version of the scene
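
Steps 5 and 6 are the heart of it. Here's a rough sketch of just that part (hypothetical: a hand-rolled pinhole camera at the origin looking down -z, with a simple 'zoom' standing in for the proper perspective()/camera-transform handling described above, and geo being the same geometry reference as in the uvsample example earlier):

<source lang='javascript'>
// fire a ray through each pixel, camera at origin looking down -z
float zoom = 1.5;
vector orig = set(0, 0, 0);
vector dir = normalize(set(X - 0.5, Y - 0.5, -zoom)) * 1000;  // long ray into the scene

vector hitpos;
float hitu, hitv;
int prim = intersect(geo, orig, dir, hitpos, hitu, hitv);  // step 5

if (prim >= 0) {
    // step 6: intrinsic uv -> texture uv, reading the geo's 'uv' attribute
    vector uv = primuv(geo, 'uv', prim, set(hitu, hitv, 0));
    // uv.x and uv.y then feed the copinput vop (step 7)
}
</source>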

The devil is in the details (the focal length calculation needed fudging, and I still don't think I'm exactly right with some of it), but it works well enough for my needs, and more interestingly this is all open for play.

The comparison I keep making is that this is similar to the scanlinerender node in Nuke, but completely programmable. It's also interesting in that it sits somewhere between the opengl rop and a mantra rop: it's slower than opengl but faster than mantra, it's as programmable as a mantra material, and it sits within cops, so all the silly 2d tricks you might want to use are available, and the camera can be fudged as much as you like to do whatever crazy non-standard camera tricks you desire.

Fun stuff.

=== Using animated cops in sops ===

[[File:cops_update_anim2.gif]]

You can use cops directly with sops by drag-n-dropping the cops node onto a texture path, then prefixing it with 'op:'.

A key annoyance is that if the cops graph is animated, you often won't see it update in sops, forcing an annoying need to render the cops sequence out and refer to an on-disk path like c:/render/damnitcops.$F4.png.

Turns out [https://www.sidefx.com/forum/topic/79938/#post-343474 this is by design]. If you want the live path, append [$F]. So

op:/obj/geo1/cop2net1

becomes

op:/obj/geo1/cop2net1[$F]

You can also refer to any frame in the same way, eg pull in only frame 10:

op:/obj/geo1/cop2net1[10]

=== Extrapolateboundaries ===

[[File:cops_border.JPG]]

Edge smear, edge extend, max edges, edge padding, texture fill, fill borders UV alpha, expand image borders. Right, I think that's every possible combo of SEO terms I can think of for this very handy node.

Often when working with textures and uv seams, you need to smear colour beyond the uv shell borders to avoid artifacts. It also comes up when working with premultiplied vs unpremultiplied alpha, when you just want to extend the colour a little beyond the alpha edge.

I've done this too many times by using an edge detect to get the border on the alpha, copying that alpha back to the original source, blurring, dilating, more work, then compositing that result under the original image. That's what's shown in the image above on the left, with the red nodes.

Dave Brown was doing a similar thing and put down a single node, extrapolateboundaries, shown above on the right as the single green node. It does all the stuff I was trying to do, faster and better.

Don't make the same mistake I did. Stash extrapolateboundaries in your brain, it'll save you one day.