From cgwiki

Get houdini : http://www.sidefx.com/apprentice

I thought my collection of SOuP walkthroughs was a nice gentle intro for the average maya artist. Now that I've been using Houdini for a little while, and knowing that SOuP is heavily influenced by Houdini, it seemed like a good idea to try and port those tutorials over.

I've become the Houdini evangelist I always despised, and I apologise for nothing. It really is a great piece of software, warts and all, and it's easier to get into than ever. Do it!

If you've landed here and you're totally new to Houdini, maybe head to HoudiniGettingStarted first; it has a quick overview of how to use this site, and how to get comfy with Houdini as quickly as possible.


Attribute transfer and position

Attribute transfer hou.gif

Houdini scene: File:attribute_transfer_position.hip

Transfer the position of sphere to the points of a grid, with a smooth falloff.

  1. Create grid
  2. Create sphere
  3. Create attribute transfer node
  4. Wire grid into first input, sphere into second
  5. Enable 'Match P Attribute'
  6. On second tab set the distance low (say 0.1) and blend width up (say 2.0) to get a blendy warp like the soup version.

A few things to take note of here (or come back to later after you've tried a few more examples) :

  • P is the standard attribute for position, usually point position. In maya you'd say these are the vertex positions. In maya you rarely modify these directly (usually you stay one level higher and manipulate object transforms), but in houdini, all the fun stuff is down here, getting messy, pushing points around.
  • This setup reads the position of the sphere, and transfers it to each point on the grid, with falloff. If you had no falloff, the entire grid would disappear to a single point at the center of the sphere. (In fact you see this happen before you modify the distance and blend with parameters).
  • 'But hang on, you said we're basically manipulating vertices, how come this setup magically reads the transform of the sphere? And shouldn't the grid verts be warping to all the verts on the sphere surface?' Good questions. The answer is that a default houdini sphere (called a primitive sphere) isn't like a maya nurbs or poly sphere. It's more like a single maya particle rendered in sphere mode; ie it has position and scale, but no verts that make up the surface. Knowing that, this setup makes a little more sense. In maya-speak, we're taking a single particle (that we visualise as a sphere), reading its position, comparing it to each vertex in a grid, and warping those verts towards the particle if they're too close.
  • Yes houdini has regular poly and nurbs spheres too, look in the 'primitive type' dropdown of the sphere SOP.
  • The 'match P attribute' is required because usually people _don't_ want this; they'll be transferring colour, or some other attribute, but they don't want the points to actually move.
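For the curious, a point wrangle can mimic what the attribtransfer is doing here. A rough sketch, using the parameter values from step 6, and assuming the sphere is wired into the wrangle's second input:

```vex
// rough equivalent of attribtransfer with 'match P attribute' on.
// input 1 is the primitive sphere, ie a single point with a position.
vector target = point(1, 'P', 0);
float d = distance(@P, target);
// 0.1 is the distance threshold, 2.0 the blend width, as in step 6
float w = 1 - smooth(0.1, 0.1 + 2.0, d);
@P = lerp(@P, target, w);
```

Not identical to the node's falloff maths, but close enough to show the idea: measure distance, turn it into a 0-1 weight, blend.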

Attribute transfer and colour

Attribute transfer col hou.gif

Houdini scene: File:attribute_transfer_color_and_position.hip

Same as before, just type 'Cd' in the list of transferred attributes. If you want more than one attribute transferred, separate them with a space. Eg, you want to transfer colour, normal, and pscale, you'd type "Cd N pscale".

I added a colour node to the grid to make it red, another to make the sphere green, so you can see what's going on.

Why 'Cd'? Houdini uses a lot of naming conventions derived from prman. Cd is 'colour diffuse'. 'P' is position, 'N' is normal, and so on.

List of recognised houdini attributes (the amount surprised me, I use maybe 10% of these): http://www.sidefx.com/docs/houdini14.0/model/attributes#idm140573562786112

List of renderman SL variables for comparison: http://renderman.pixar.com/view/shader-global-variables-tables

Point Sop

Point hou 01.gif

Download scene: File:point_sop.hipnc

Use an expression to deform a grid into a sine wave.

  1. Create a grid
  2. Create a point node
  3. Set the TY expression to be sin($PT*$FF*0.1)

As well as the renderman style attribute names, Houdini often uses variables written with capital letters and a dollar sign. These are called local variables, and are used in hscript expressions.

Some nodes support only the absolute basics, eg, the current frame with $F; others like the point sop have many local variables defined. These can usually be found by looking at the help for that node and scrolling down to the 'local variables' section. Some older or esoteric nodes don't have any local variables defined.

Here, $PT means the point ID, $FF means the frame (expressed as a float).


Usually Houdini folk see these expressions and say 'ah, hscript'. Strictly speaking, houdini expression language and hscript are two different things (in the way that maya MEL and maya expression language are two different things), but no-one can be bothered to say 'houdini expression language' out loud, and HEL sounds wrong.

Similarly, people see '$VARIABLE' and say 'ah, local variable'. Houdini defines global, standard and local variables as different things, but again, local variable is the usual short-form.

Point Vop

Pointvop hou 01.gif

If you look in the scene above, you'll see next to the point sop is a point vop that does the same thing (in H13 and earlier called a vopsop, or an attribvop depending on what you needed to do). I explain vops a little more in Houdini Vops, but basically vops is a way to write very very fast operators.

Maya expressions, python and hscript are all interpreted scripting languages, and can run into performance problems as scene complexity rises. Vops are compiled multithreaded code, and scale much better. The closest Maya equivalent would be trying to write your own deformers in hypershade nodes, but more versatile.

Point sops and hscript are how people traditionally used houdini, but the more you can push stuff into vops and vex, the better you'll be.

In this case, it kind of helps to think of the vop network like a flowchart, reading left to right. First, the setup:

  1. Create a point vop
  2. Hit enter, or double click, to go into its sub network.

Here you find a new type of node graph, one that is also used for houdini shaders (SHOPS). They flow left to right rather than top to bottom. In a vops network, the houdini chant that 'everything is points' becomes very important. Anything you do in this graph is run on all the points of your geometry simultaneously, or at least as simultaneously as your computer will allow.

The leftmost node represents a single point, and all its standard built-in attributes; position, colour, normal, id etc. The rightmost node represents the final output of a point. With no other nodes in the graph, attributes are implicitly passed through, so input P goes to output P, input N to output N etc; ie, the geometry is unchanged. Once you start to wire in extra nodes, you change the state of the point. Because this graph runs not on one point, but ALL the points at once, the geometry too will be changed.

To replicate the behavior of the point sop, we'll do the following:

  1. Take the point id, current frame, and a constant of 0.002, and multiply them together
  2. Feed that result to a sine
  3. Get the point position, replace the Y component with the sine result
  4. Finally, set that as the new position of the point

Here's the workflow to set that up in vops. It's probably easier to just look at the image as a guide; tab complete the node names, drop them down, wire them together. Easy.

  1. Create a multiply node
  2. Create a constant node, float mode, set its value to 0.002
  3. Connect the global frame attribute to first input of the multiply node
  4. Connect the global ptnum (the point id) to the second input of the multiply node
  5. Connect the constant to the third input of the multiply node
  6. Create a sine node, connect the multiply to the sine
  7. Create a set vector component node
  8. Connect the global P attribute to the set vector component node
  9. Connect the sine to the value input of the set vector component node
  10. Set the vector component to 'component 2', ie of the XYZ of the point, we're setting Y, the second component
  11. Connect the set vector component to the global output P

Point Wrangle and Vex

Point wrangle.jpg

Vops nodes generate Vex under the hood, Houdini's native multithreaded language. You can write stuff directly in Vex if you want, and avoid Vops entirely. The syntax is pretty simple:

@P.y = sin( @ptnum * @Frame * 0.002 );

That multiplies each point id ( @ptnum) by the current frame (@Frame) and 0.002, and assigns it to each point's y-position ( @P.y ).

Hscript, Vops, Vex (and python), which to use?

Unlike the mel/python/pymel debate, this isn't so clear cut; each has pros and cons.

Hscript maps closest to mel. It's the original scripting language of Houdini, and pops up in a lot of places, largely due to legacy. Old tutorials tend to use it a lot, and I find its syntax hard to follow. Because it often appears where you might write a maya expression (ie, on an animatable channel), editing tends to feel a little cramped, and the end expressions feel like an odd combination of mel and tcsh scripts. Also, the older houdini nodes that only let you procedurally drive them with hscript tend to be single threaded, so don't scale well.

Hscript is still used a lot up at the transform level; setting properties on cameras, lights, objects, rops etc, so it's useful there. But any time you see it used to modify points (eg, with a point sop), alarm bells should be going off, and it's probably better to replace it with vops or vex.

Vop and vex (wrangles) are two views of the same thing; vex is the core language, vops is a node UI to generate vex code without you knowing it. Vex is designed to be multi-threaded and incredibly fast, so is ideal for manipulating geometry, or writing shaders, or making image filters... any time you need to run the same code on lots of things simultaneously. Can't stress enough how much vex is NOT mel or python; the only way you'd get close to what vex does in maya is to write your own shaders and deformers in C++.

Vops is great for starting out, and even as you get familiar with houdini, offers a drag-n-drop playground to experiment with. Bigger vop networks can get hard to follow though, and conditional operations (if/for/while) feel a little inelegant (though this has been made easier in H15).

Vex is quick to type and quick to run, but of course you need to know how to write vex to write vex. Being able to right click on a vop network and choose 'view vex code' is a good way to learn, as are the docs; eventually you start peppering your hip files with nice little 1 or 2 liner wrangle nodes.

You can always combine them too; make a vop network, then inside create an 'inline code' vop, and write little custom bits of vex inside. Best of both worlds!

Python in houdini is interesting. Generally speaking it's used as pipeline glue, or for manipulating the UI. If you want to create/destroy nodes, read from a central asset database, or parse text files and curious data in interesting ways... typically pythony stuff. The base mapping is pretty good, and you can drive most things from python, but occasionally (like with pymel), you have to drop back to hscript to get some things done. Also remember that python in houdini suffers the same problem as python elsewhere; it's designed for readability and general-purpose use, not for performance.

That said, these days I do very little python, way less than I used to in maya. I think the primary reason is that I'd use python in maya to work around a UI issue, or to repeat a workflow, or automate a process. In houdini, because the base methodology forces you to think in a procedural, generally reusable manner, there's way less scripting required.

So I'd say vops first, then hscript, then vex, then python.

Point Sop with if statement

Point if statement.gif

Houdini file: File:point_sop_if.hipnc

The houdini expression is a little more succinct than the soup one:

if ($PT % $F == 0, 1 , 0)

the syntax is 'if ( test_condition, value_if_true, value_if_false)', similar to nuke's TCL if statement.

Point Vop with if statement

The point vop version is a little less succinct, but again, performs better with more geometry. It's also in the scene file above.

Pointvop hou if.gif

Reading left to right:

  1. convert the point id to a float
  2. modulo it against the frame number
  3. see if it equals 0, get a true/false value
  4. pass this to a 2 way switch, where true sends the first value (a constant of 1), and false the second value (a constant of 0)
  5. set this as the Y value of the point position

"Wait, where's the 'if' vop?" You ask? There's a vop for that, all the control-flow vops (if/for/while), are subnets (like a group in nuke or a container in hypershade). If the incoming condition is true, the nodes inside are executed. This is fine, but if you're doing something simple like setting a value, its cleaner to keep the vop network flat. The compare vop and the 2-way vop function as an 'if' statement here, no subnets required, slightly faster workflow.

Point Wrangle with if statement

Put down a point wrangle node, and use this code:

if (@ptnum % @Frame == 0) {
    @P.y = 1;
} else {
    @P.y = 0;
}
I'll explain more of that on the HoudiniVex page.

Point rays, promote parameters

Rays and promote param.gif

Houdini scene: File:point_sop_rays.hipnc

Again, both point sop and point vop examples are provided.

This point vop uses promoted parameters. Because vops are compiled, each time you modify the vop network, the plugin has to be recompiled (houdini does this automatically for you). Even if you have a constant value and you just slide the number around, houdini has to recompile the vop network. Not too bad if you have a relatively small network and light geometry, but it can get heavy quickly. In the animated gif above, you can see the lag as I modify the constant. Here's a breakdown of what's going on in that gif:

  1. Sliding the constant value is a little laggy. Lame.
  2. Disconnect the constant
  3. Middle-click on the attribute input, choose 'promote parameter'. Makes a little stub node.
  4. Right click on the stub node (usually have to zoom in a bit, tiny hit area!), choose 'expose input' to see the node
  5. Give it a nice name, eg 'Num rays'.
  6. 'u' to go up and exit the vop network
  7. The vopsop now has a new param, 'Num rays'. Slide it, performance is much better.

What's happened? The vop network has been recompiled, but this time it now takes an argument, 'num rays'. The argument is external to the vopsop, therefore it doesn't need to be recompiled when the argument changes. This equates to much better performance. You can (and should!) expose as many arguments as you need, and you're not limited to just float values; there's handy ramp UI's, dropdowns, toggles, whatever you require.
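Wrangles get the same benefit via the ch() functions; the parameter lives on the node, outside the compiled vex, so sliding it doesn't trigger a recompile. The parameter name below is made up for illustration; hit the little slider/plus button next to the wrangle's code editor and houdini creates the matching parameter for you:

```vex
// 'num_rays' is a hypothetical parameter name; chi() reads it as an integer
int rays = chi('num_rays');
```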

Using point to move edges

Scene: File:Move edges.hip

Move edges.gif

Bit flakey, but it's the same as the SOuP one. An annoyance with houdini (and soup) is that edges aren't first class citizens, so you always have to think in terms of points. Not a dealbreaker, but now and then you find yourself with an edge based problem, and think 'shit... wish I could access edges...'. You could probably make a group and work that way, but it's still a workaround.

Of minor interest here was deleting every 2nd row from the grid, but in a way that my expressions would be happy. Double modulo to the rescue! If the grid is 20 polys across, I modulo the face id (primitive id, or $PR as a local variable) by 19, then again by 2 to get every second row; the delete expression is just those two modulos chained together.

To make it always work no matter what grid size you use, I use a channel reference to the grid rows (and drive the grid columns by the same thing, so it all stays aligned).


Also did a vop based one, like the previous examples it scales much better the more points you throw at it.

Point and attribute transfer together

Scene: File:point_and_attrib_transfer.hipnc

Point attribtransfer.gif

Again, point and vop examples. Realised the soup one wasn't quite right, so I tidied this one.

Houdini doesn't make a distinction between a single point, or many points, or objects, as the source for an attrib-transfer. In this case, if I set the input to be a line of points, the combined effect gives a capsule falloff region like the soup demo.

Beyond that there's nothing special. The main difference is that I create the same attr on the plane and a line called 'weight'. I set it to 0 for the grid, 1 for the line, then attribtransfer the line to the grid, choosing 'weight' as the attribute, with distance set to 0, and blend region wide enough to give a nice falloff.

Why set 'weight' on both the source and target? If you don't, no blending occurs. Try bypassing the attribcreate on the grid, you'll see the blend zone disappears, and you get a solid chunk of transfer that clips at the falloff distance.
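If you prefer wrangles to attribcreate sops, the same weight setup is two one-liners (one wrangle on each branch before the attribtransfer):

```vex
// wrangle on the grid (the target):
f@weight = 0;

// wrangle on the line (the source):
f@weight = 1;
```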

Usual story, the vop node performs much better than the point one blah blah.

Point and attribute transfer with lag via solver sop

Solver sop.gif

Houdini version: File:attrib_transfer_lag.hip

The solver sop is a good way to do accumulation effects.

Most houdini nodes work 'in the now', similar to formula's in an Excel spreadsheet. Change any value or formula (or sop), and the rest of the network tries to update itself as fast as possible. Both regular sops and excel spreadsheets have no knowledge of older value changes, or previous frames.

The solver sop looks like a subnet, when you dive inside it gives you 2 interesting inputs; the 1st input as you'd expect of a subnet, and a purple 'prev_frame' input, that gives you the result of the previous frame.

Armed with this, you can do things that would otherwise be difficult. Take the previous frame and add a number to accumulate a result, or compare the previous frame to the current frame, or do other interesting things.
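The simplest possible demo of this: a wrangle inside a solver runs on last frame's result, so a bare increment becomes a per-frame accumulator:

```vex
// inside a solver sop this sees the previous frame's geometry,
// so @count grows by 1 every frame rather than resetting.
f@count += 1;
```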

For this example, I do an attribtransfer of colour within the solver sop. Because this adds the result onto the previous frame, it leaves a trail. To make the trail fade, I use a point vop after the attribtransfer that blends the trails with the original grid colours. Altering the weight of the blend alters the length of the trail.

Outside and after the solver sop are 2 nodes to help visualise the trails more easily. The pointvop drives the y-height of the points with the trail colour. The second sets the point colour based on the bounding box.

Solvers can be a little unintuitive at first, and have a few caveats. They're actually a simple dynamics simulation in disguise (a dopnet, discussed elsewhere on the wiki), so they need things like a number of steps per frame and a start frame, and they need to be cached and simulated rather than updating instantly when you scrub the timeline. They also have an odd trait that takes some getting used to; you can't edit and view from inside the solver sop; make sure to pin the viewport before you dive inside.

Now that you know the Point Sop, STOP USING THE POINT SOP

A lot of pre-v12 Houdini tutorials rely heavily on the point sop, using it to set point positions, normals, colour, velocity etc. It has a few problems though:

  • It doesn't scale well to high point counts
  • It has a lot of local variables that aren't used anywhere else, and are hard to remember
  • You have to type expressions in 3 times for xyz/rgb, irritating

I'd say in nearly every case where people would use a point sop, it's better to use a point vop, or a point wrangle, or an attribcreate, or a randomize attribute sop. They all rely on vex so they're multithreaded, the consistency of using just point attributes rather than point+local+global+standard hscript variables is much cleaner, and the knowledge you gain from these maps directly into modifying attributes for shaders, or particles, or dynamics.

Sidefx are trying to move everyone over to the vex way (the ability in H15 to use @var syntax in hscript and group fields is a great bridging tool). They can't just remove the point sop because there's a lot of older setups that still rely on it, but at some point, it'll have to go. The sooner you stop using it, the better off you'll be.

Remember kids: "Friends don't let friends use Point Sops."

Attribute transfer via object

Houdini scene: File:point_and_objects.hip

Point and object scale.gif

Again, Houdini doesn't care if you feed it a point, or line, or object; it just converts everything to points and does the attribtransfer on that. Dead easy. Only thing to keep in mind is to have enough points in your source geo so that you get the result you expect; if the points are too sparse, you'll get lumps and artifacts.

Group and delete

Houdini scene: File:group_delete_hou.hipnc

Group delete hou.gif

A group in houdini is a selection of components, with a name. The closest analogy in maya would be a selection set of verts or faces. Most houdini operators have a group selector, meaning they'll only affect those components.

Unlike maya sets, a group can be defined in a variety of ways; a user selection like maya (boring), a pattern (every nth point), an expression (all points with their Y value > 4), or several other ways. In this case, I'm defining it via the intersection with a teapot. I can keyframe the teapot, and the group updates its membership on the fly. The delete node is set to use the group, meaning you get an animated teapot shaped delete.

This example also uses 2 transform nodes. A transform (in sops) is basically a cluster, ie, a transform handle at the shape level. It too has a group input, so if you wanted to use it to just translate a sub-section of geo, you'd group it first, then feed the group name to the transform.

Group and persistent delete

Houdini scene: File:group_delete_hou_accum.hipnc

Group delete accum2.gif

To make the delete accumulate over time so it leaves a trail, put the group inside a solver node. Here's how:

  1. Create a solver node
  2. Connect the grid to input 1
  3. Connect teapot to input 2
  4. ctrl-x the group node, go into the solver, ctrl-v.
  5. Connect 'Prev frame' to the group's input1
  6. Connect Input 2 to the group's input2
  7. Set the group's merge mode to 'union with existing'
  8. 'u' to go up a level
  9. connect the solver to the delete node
  10. set the display flag to the delete node, hit play

So what's going on here? As before, the solver gives you access to the previous frame. Sticking the group node within the solver means it'll add the current frame's membership to the previous frame's, building up a trail. Or it would, except the group's default behaviour is to always replace the previous contents of the group. Setting the merge mode to 'union with existing' ensures it adds to the group membership rather than replacing it.

Solvers are a little hard to grasp at first, I've written a separate long form tutorial to help: The_solver_sop

Attributes as groups, or groups with @ syntax

While you can create groups by selecting points, you can also create groups by attribute expressions. Say you want a group of all the points below 0 in Y. You can use the group expression @P.y<0.

What's powerful about this is you don't need a group node at all; in any sop that has a group field, you can use that expression where you'd normally type the group name, and it'll work.

Make sure you don't have spaces in the expression though! Group fields let you specify multiple groups like 'group1 group2 group3', so if you use the expression

@P.y < 0

Houdini parses that into 3 groups, one called '@P.y', another called '<', and one called '0', which of course won't work. If you need spaces (say you're matching against multiple point id's), enclose it in double quotes.

This is often used with a blast or delete node, eg delete all points that have full red in their colour ( @Cd.x==1 ), or whose id is 5 or 10 or 23 ( @id="5 10 23" ). The help for this is kinda buried in the docs.


Groups as attributes in VEX

The previous example uses attributes as groups. You can go the other way and use groups as attributes. Eg you have a (vex) attribute wrangle node, and you want to run some code only for points NOT in group 'mygroup':

if (!i@group_mygroup) {
    // do thing
}

Ie, you can refer to the group by using '@group_' as a prefix. Don't overthink the syntax btw, it's literally @, then the word 'group', then an underscore, then your group name, no spaces.

Calling it as a conditional here uses a little trick, cast it to an int, and negate it (that's the 'i' before the '@').
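For comparison, the longhand version of that conditional uses the inpointgroup() function, which returns 1 if the point is in the named group on the given input:

```vex
// same test without the @group_ syntax
if (inpointgroup(0, 'mygroup', @ptnum) == 0) {
    // do thing
}
```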

You can also create groups in a similar way with a wrangle node; just pretend you're creating an attribute and setting it to 1, houdini will make a group for you. Eg

if (@ptnum>chi('threshold')) {
    i@group_mygroup = 1;
}

You can see that in the following amazing animated gif:

Group wrangle.gif

I learned this ages ago, forgot, learned it again, forgot, learned again. If you're skimming this page for the first time, you'll forget too, and relearn it a few months later. :)

This is very briefly covered in the docs (along with lots of other handy vex tips).



Instancing

Houdini scene: File:instancer.hip

Instance sm.gif

(Edit, update 30 sep 2015)

Only realised recently that this scene and description are more complex than required. Previous builds of houdini treated the copy sop and instancing as separate things; instancing for low memory, fast rendering, but limited edits, and copy sops for full editability, but memory and cpu hungry.

The copy sop is now capable of both. It's explained elsewhere on this site in a little more detail, but essentially the trick is the 'pack geometry before copying' toggle on the stamp tab of the copy node. This puts the copy sop into fast instancer mode. The output geo just has a single point for each thing being copied, and the things themselves aren't editable. You can push incredible amounts of geo this way, and it's fast and easy to set up.

I'll tidy up this example at some point, but for now, I'll leave it as-is.

(end edit)

Instancing in Houdini is fairly straightforward. An instance node will take the geometry you specify as a parameter, and instance it onto the points inside the instancer node. A fresh instance node contains just one point, you're expected to delete it and make the points you want, or more likely, object merge the points from somewhere else.

The only other thing to set is fast or full point instancing (the default is off). Fast is good enough for most cases; full is if you need full control of each instance's shading properties.

But wait! There's more!

The instance node (and its cousin the copy sop) look for attributes on the points. These can control basic translate/rotate/scale, as well as material assignment. The full list is in the docs.


For this demo I use an attrib vop (called a point vop in h14) to create 'pscale' and 'orient' attrs, and drive them with combinations of time, sin, pointid. Some key parameters are promoted to control the animation. Hours of C4D mograph style fun, perfect for making trippy animated gifs.

Instance with rotation

Here's another demo to placate a mate of mine:

Houdini scene: File:instance_onto_normals.hip

Instance with normal.gif

He asked if it was possible to instance onto geo, and have the instances rotate to match the normals of the surface. Yes, yes it is.

In this scene, I have a grid, fed to a mountain sop with animated offset, to get a jiggly thing. I then append a facet sop with 'post-compute normals' turned on. This generates normals for each point, and updates per frame.

Now if I feed that geo to an instancer, and the geo I wanna instance has its up-axis along +Z, it'll track with the surface.

For bonus points, I use an attrib vop after the facet to add a bit of extra random (driven by the point-id so it doesn't jitter over time), just because.
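A sketch of that kind of time-stable jitter in a wrangle (the 'amount' parameter name is made up); seeding rand() with the point id rather than the frame is what keeps it from sizzling over time:

```vex
// rand() with a vector lvalue gives a different random value per component;
// @ptnum never changes, so the offset is constant over time
vector r = rand(@ptnum);
@N = normalize(@N + (r - 0.5) * ch('amount'));
```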

Enjoy Sandy!

Instance with effector


Download scene: File:effector.hipnc

This is something that C4D is good at, and apparently ICE in XSI too, so I thought I'd try it in Houdini.

Here I have a null and a grid. Houdini's null will give you a single point at its center, which is handy; it makes it easy to incorporate into vop networks. I channel reference the scale of the null onto the point to use later.

In a point vop I bring in the null point, and get the distance from each grid point to the null point. I use this to drive the 'scale' attribute (I also link this to the scale of the null), which is directly picked up by the instancer.

Instance with effector waves

Cube waves.gif

Download scene: File:box_waves.hipnc

Answering this forum post.

Similar to the previous example, just doing something different after we calculate the distance of each point to the null. In this case I'm mapping the distance through a parameter ramp setup in a decaying sawtooth pattern.

Parameter ramps are a nice way to drive things when you don't wanna think too hard about maths n stuff. If you can get your points with an attribute that has values between 0 and 1 where you need them, stick a parameter ramp in and you can quickly edit the ramp in a nice UI to get what you need.
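In a wrangle the same trick is the chramp() function; feed it a 0-1 value and it returns the ramp's value at that position ('shape' is a made-up parameter name here, created the same way as any other channel):

```vex
// remap a 0-1 weight through a UI-editable ramp
float w = chramp('shape', f@weight);
f@pscale = w;
```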

Effector for scale, rotation, colour

Effector img vex vops.gif

Download scene: File:effectors_vex_and_vops.hip

If you're gonna do it, may as well go all the way. Someone asked how to create a random effector C4D style in Houdini, the examples above didn't show how to add randomisation nor animation, so here they are.

Two different perfectly valid methods, one mostly node based with vops and attrib transfers, the other with vex wrangles.

The vops one goes something like:

  1. Create a cube of points, and a separate 'effector' point that will be used to drive the effect
  2. Set a weight attribute to be 0 on the cube points, and 1 on the effector point
  3. Attribtransfer weight from the effector to the cube points, using the attribtransfer parameters to control the blend falloff
  4. In a point vop network, setup a random colour (or scale or rotation) per point, and blend to it using the weight value
  5. Drive a copy sop to copy a small cube to each point.

The vex one is similar, but more of the grunt work is done in code:

  1. Create a cube of points, and a separate 'effector' point that will be used to drive the effect
  2. In a wrangle do the following...
  3. Measure the length between each point and the effector
  4. Fit that length to the required falloff distance, and pass it through a ramp to adjust the falloff shape, fit it to a value between 0 and 1
  5. Create a random scale/colour/rotation per point
  6. Blend to it using the processed weight value
  7. Drive a copy sop to copy a small cube to each point.
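The numbered steps above might look something like this as a wrangle. This is a sketch, not the exact code from the scene; the parameter names are assumptions, and it expects the effector point wired into the second input:

```vex
// 3. distance from each point to the single effector point on input 2
vector e = point(1, 'P', 0);
float d = distance(@P, e);

// 4. fit to the falloff distance and shape with a ramp; w is 1 near, 0 far
float w = chramp('shape', fit(d, 0, ch('falloff'), 1, 0));

// 5. random colour and scale per point (vector lvalue = per-component random)
vector rcol = rand(@ptnum);
float rscale = fit01(rand(@ptnum + 1), 0.5, 1.5);

// 6. blend towards the random values by the processed weight
@Cd = lerp(@Cd, rcol, w);
f@pscale = lerp(1.0, rscale, w);
```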

As per other examples, this takes advantage of certain point attributes that the copy sop will recognise and use to modify the copied geometry, in this case @Cd for colour, @pscale as a uniform scale, and @orient for rotation. The full list of attributes is here: http://www.sidefx.com/docs/houdini15.0/copy/instanceattrs

Even though the vops one is probably simpler on the surface, I'd end up using the vex one because... reasons? I guess I find having all the important stuff in one location is more useful to me, and the more vex I force myself to write, the more practice I get. :)

Instance with instance sop

Instance img randcolour.jpg

Download scene: File:instance_sop.hip

One way to drive different shapes into a copy sop is by feeding all the shapes to a switch sop, making the switch drive the left input of the copy, and then stamping the switch number. This works, but I've never liked the messiness of stamps, and stamps don't perform well as the number of copies increases.

A recent odforce thread mentioned the instance sop, which will look for an @instancepath attribute and use that as the geometry. This could be some .bgeo on disk, or use the op: syntax to point to something in your network. That's what I'm using here, works great.

You can even assign colours to the points, and if you use one of the new pack aware shaders like the principled shaders, that'll show up in the renders, as I've done in this demo scene.

Here's the wrangle I'm using to set the instance path randomly:

// the op: paths here are examples, point them at your own shape nodes
string geo[] = { 'op:/obj/shapes/box', 'op:/obj/shapes/sphere', 'op:/obj/shapes/torus' };
int inst = int(rand(@ptnum, ch('seed')) * len(geo));
s@instancepath = geo[inst];

Amusingly, if you dive inside the instance sop, you'll find.... a copy sop, a switch node, and a stamp expression. It's doing exactly the same thing as my 'naive' method!

So ultimately it's actually not that much faster at all; testing a million points, a macbook pro takes about 30 seconds processing the stamps, and another 30 seconds outputting IFD before the first pixels appear in a render. Oh well. Still, neater than having to setup the stamp yourself I guess, but there must be a more efficient way....

Attribute from Texture

Attrib from tex.gif

Download scene: File:texture_attributes.hipnc

Setup is as follows:

  1. Take a poly grid
  2. Point vop to translate the points around with noise
  3. Uv project node to create uv coordinates
  4. Another point vop, this loads a texture via the 'texture' vop, applies colour to @Cd, and the luminosity to an attribute @pscale
  5. A delete to remove the polygons but keep the points
  6. A copy that applies a disk to each point. Because the points have colour and pscale attributes, the discs will inherit colour and be sized accordingly.

This setup keeps the texture static while the points move; if you want the texture to stick with the points, just move the uv project node before the first point vop.
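If you'd rather do step 4 in a wrangle than a vop network, a rough equivalent would be the following (assuming the uv project has already run, and 'tex' is a string channel I'd add pointing at the image):

// sample the texture at each point's uv location
vector clr = texture(chs('tex'), @uv.x, @uv.y);
@Cd = clr;
// drive the disc size from the brightness of the texture
@pscale = luminance(clr);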

I tried a similar thing when I first got into Houdini: I applied the texture via a shader, and got confused when I couldn't access those colours in sops/vops. I could see the texture in the viewport, why wouldn't it work?

The reason is that materials apply their properties at render time, not within the houdini sop context. This is also why you can sometimes apply textures in materials and they somehow work in the render, yet you can't do any uv manipulation in sops; the renderer will create implicit uvs in certain cases where it can.

The viewport display of textures is an extra layer of confusion; that's the extra GLSL features of some materials. It's a convenient display feature, but that's it; for display. If you want to get stuff from textures, you need to explicitly load them like the example above.

Ball of Eyeballs with Copy and Packed primitives


Download scene: File:eyeball.hipnc

Fun silly thing to try. First I make a single eye, make it a packed primitive (so houdini treats it as a single point rather than a full poly mesh). Then there's a messy point vop network to do a few things:

  • Create a transform matrix that points each eyeball away from the origin (ie, along its normal if it had one)
  • Create another transform matrix driven by noise to do a skittish eye dart/saccade
  • Create a third transform matrix that makes all eyes look down the z-axis
  • Convert all matrices to quaternions, easier to manipulate that way
  • Add the normal-orient and the random-orient so that the eyes are all randomly darting roughly around their origin
  • Have another noise curve driven by time that does a random pulse between 0 and 1
  • Use that to quickly blend between the random-along-normal rotation and the look-down-z rotation
  • Use a few fit-range nodes driven by each eye's z-coordinate so that rather than all eyes switching at the same time, it goes in a wave from back to front
  • Take the final rotation, convert back to matrix, and push into the intrinsic 'transform' attribute

While the matrix->quaternion->matrix->intrinsic-transform trick is clever, I realized after the fact I didn't need it. I could have just pushed an orient and up value onto the points before the copy, same result with less hassle. Oh well. Still, handy to know how to manipulate packed primitives.
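For the record, that simpler alternative is tiny; a point wrangle before the copy along these lines would do it:

// aim each eye away from the origin, with a steady up vector;
// the copy sop builds the rotation from @N and @up for us
@N = normalize(@P);
v@up = {0,1,0};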

Explicit rotation/orient control for copy and instances

I breeze over this in a few examples on this page, probably worth writing it down as it's a common question, and used so often.

Both the instancer and the copy sop look for certain attributes on incoming points, with a certain order of precedence, listed on the following page:

http://www.sidefx.com/docs/houdini15.0/copy/instanceattrs

The one that takes priority is @orient, and it's the one I've come to use by default.

Orient is expected to be a quaternion (a 4-value vector), which tends to be more stable than euler values (no risk of gimbal lock or euler flipping). Quaternions are tricky to manipulate by hand, luckily we don't need to.

A transform matrix can be converted to a quaternion and back very easily, and a matrix is slightly easier to edit than a quaternion (ie, still not very easy to edit).

A rotation around an axis is nice and intuitive for humans, so that's what I start with, and use vex calls to convert that into a matrix, and then to a quaternion. Easier than it sounds.

Eg, say you have a flat ground plane, and want to scatter many copies of a tree, each with a random rotation around the y-axis. Here's what I put in a point wrangle before the copy sop:

float angle = rand(@ptnum) * 360;
angle = radians(angle);
vector axis = {0,1,0};
matrix3 m = ident();
rotate(m, angle, axis);
@orient = quaternion(m);

Breaking that down line by line:

 float angle = rand(@ptnum) * 360;

Define a variable 'angle', get a random number using the point-id as the seed. rand() returns a value between 0 and 1, so multiply it by 360 to get a full range of motion.

 angle = radians(angle);

Convert angle to radians, expected by the 'rotate' command later.

 vector axis = {0,1,0};

The axis we will rotate around, ie, the Y-axis. Often when scattering over geo you might use the normal instead (eg, vector axis = @N; ).

 matrix3 m = ident();

Create a matrix variable 'm'; ident() returns a clean transform matrix, ie, rotate and translate 0, scale 1. The type 'matrix3' is a 3x3 transform matrix; a plain 'matrix' is a 4x4 matrix, which includes extra elements to deal with shear and perspective transformation, which we don't need.

 rotate(m, angle, axis);

This function applies the rotation 'angle' around vector 'axis' to matrix 'm'. Ie, 'm' is now rotated. Note that you don't have to assign it, the rotation is done directly to m.

 @orient = quaternion(m);

Create attribute 'orient', convert matrix 'm' into a quaternion, assign it to orient.

Simple right?

Ok, a few caveats. First, to define a quaternion attribute in a wrangle, usually you need the prefix 'p' to tell the wrangle this attribute should be a 4-value vector:

 p@myquat = quaternion(m);
'p' is short for 'please tell me why it's p and not q'. I think.

But @orient, like @P, @N, @Cd etc are special, wrangles know what they are and what they should be, so you can skip the type definition.

Also, I'd usually not write it out this neatly, there'd be combined lines, and I'd probably add channels so I get sliders to drive the whole shebang. I'd probably be super lazy and use @attrs for everything, cos typing out 'type foo = blah' is boring. Eg:

@angle = radians( rand(@ptnum)* ch('range'));
v@axis = chv('axis');
matrix3 m = ident();
rotate(m, @angle, @axis);
@orient = quaternion(m);

Save that as a preset, blam. That said, I still haven't saved it as a preset; making myself type it out every time forces me to remember some vex, and that's good.

For-each node to make greebles and city blocks


Download scene: File:greeble.hipnc

The for-each nodes pre H15 were a little obtuse, even the Houdini docs now say 'meh... use a solver sop'. Still, folk at work say there's times they still come in handy, so figured it was time to try.

Similar to solvers, the for-node is a subnet, and it'll be looped over in various ways. It can iterate on:

  • each group
  • an attribute per point/prim that varies by a given threshold
  • a range of numbers
  • each primitive or point.

Inside the 'for' node you'll find an 'each' node. This is a placeholder that has parameters for group, attribute, or prim/point index. This will be updated each time the for loop runs, so if you channel reference these parameters from other nodes, it'll do a different thing each time.

In this example I create a plane with a few city-block like divisions. I split them so they're separate primitives, remove the ones that are too small, then feed it into a 'for' node, which is set to loop over each primitive.

Inside, I divide each poly randomly into building sized grids, then randomly extrude them up into blocks. By sticking a sort node just before the 'for' node, and putting it into random driven by time, I get a different layout of buildings on each frame.

This example uses way more hscript expressions than previous setups, for-each loops tend to lend themselves to this style.

Note that for-each loops don't multithread very well, so they're best avoided if performance is an issue, but sometimes you don't have a choice, and they're a good trick to keep in your back pocket.

The new H15 for loops are still kind of strange, but they don't make you dive into a subnet which is nice, and make it easy to test the loop at any single value, which is also nice. At some point I'll make some little examples of these new loops, but until then, the masterclass is probably the best way to get an idea of how they work:

https://vimeo.com/142534639 - H15 Masterclass | Loops with Jeff Lait

Slide points along edges

Houdini scene: File:slide_points_along_edges.hipnc

Slide points along edges sm.gif

So even though I said earlier that solver sops are good, the smug feeling you get from making stuff move without them is great. This example is totally procedural; no reliance on previous frames, no caching. Neat.

The aim here is given any geo, can points be made to slide along its edges like ants? They sure can.

  1. Create a grid, set its mode to 'rows' to make it wires (ie, poly edges that have no faces.. can also think of them as degree 1 curves, made of polys)
  2. Increase the number of points with a resample
  3. Wiggle it with a mountain sop (I inserted a jitter node, a mountain will have no effect unless the grid rows are jittered slightly)
  4. Create a u-coord attribute along the length of each wire with a point vop and a vex snippet (connect ptnum and numvtx) : u = vertexprimindex(0, ptnum) / float(numvtx);
  5. Colour it with a ramp based on u, cos it looks nice
  6. Scatter some points over the lines, they'll inherit the u-coord from the lines, as well as the ID of the line they're on (called sourceprim)
  7. Add a new random attribute, 'delta', between -1 and 1, for all the points
  8. Point vop to slide the points down the lines.

That point vop is where all the neat stuff happens:

  1. Take time, multiply it by delta. This means per point if the delta value is 1, they move at full speed, if delta is -1, they move in reverse at full speed, values in between are slower.
  2. Take the u-coord, add time to it, then modulo by 1. This will mean the u-coord will smoothly change, but if it hits 0 or 1 (ie the start or end of the line), it'll instantly warp to the other end of the line.
  3. Feed the sourceprim, updated u-coord, and the original line geometry to a 'primitive attribute' vop. This node will return the attribute of your choosing at the uv-coordinate you ask, on the primitive you ask. Here, we'll ask it to return P, the position on the line at our u-coord.
  4. Set that P position as the position of the point
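Those four steps translate fairly directly to a wrangle, assuming the original lines are wired to the second input (@u, @delta and @sourceprim come from the earlier steps):

// slide the u-coord along over time, at per-point speed and direction
float u = @u + @Time * @delta;
u = u % 1;
if (u < 0) u += 1;  // keep reverse-moving points in the 0-1 range
// look up the position on the source line at that u, move the point there
@P = primuv(1, 'P', i@sourceprim, set(u, 0.0, 0.0));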

In the example scene I feed this setup to a fast point instancer, to instance cubes onto the points. I also have a few other pieces of geo setup behind a switch node to show that it works for a variety of shapes.

Here's another variation based on this post, moving stuff along multiple paths in one direction.

Arrows curves.gif

Download scene: File:arrows_on_paths.hipnc

The main differences here are

  • The random delta is centered around 1 rather than 0, therefore everything moves in the one direction
  • At the very end it uses a trail sop to calculate v, which is the standard Houdini attribute for velocity. This is picked up by the copy and instance nodes, which will orient the copies in the direction of movement.

Here's yet another version:


Download scene: File:paths_attrib_interpolate.hipnc

Always nice to find a more elegant way to achieve an effect. The attribute interpolate sop is meant to help you stick scattered points onto deforming geometry. The scatter sop has options to store the primitive each point was scattered on, and the uv of that location. The attribute interpolate uses these to stick the point to the matching location on the geometry you give it.

Here I use it for a different purpose; rather than have static uvs and deforming geo, I have static geo and shifting uvs. By setting each point's uv to run in a loop between 0 and 1, and having them be scattered onto closed curves, they drive around the curves. Add some buildings, a simple car shape, it's a barely passable car crowd sim. Fun!
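The core of that shifting-uv trick is one line in a point wrangle before the attribute interpolate sop; 'speed' is a channel I'd add:

// loop each point's stored uv along its curve, wrapping at 1
v@sourceprimuv.x = (v@sourceprimuv.x + @Time * ch('speed')) % 1;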

Cubes marching


Download scene: File:cubes_marching_v02.hipnc

Cubes that are marching, not [marching cubes], what do I look like, some rendering wizard?

I hope this'll be my last 'moving things along curves' distraction, but it's an effect I've had in the back of my mind for ages (I think since seeing a short film based around these sort of cubes at siggraph in 2001!), very satisfying to have finally worked it out.

Idea is same as the previous one, curves, scatter points, get uv's, animate their u attribute, interpolate on a curve to make them follow the path, copy packed boxes to the points.

To make the boxes rotate I get the point velocity, cross product to get a rotation axis 90 degrees from the velocity, and use the length of @v to determine a rotation speed. I then also scale it by 1/@pscale of each point, so big boxes rotate slowly, small ones quickly.
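A sketch of that rotation logic in a wrangle, assuming @v from a trail sop and @pscale already exist:

// rotation axis 90 degrees from the direction of travel
vector axis = normalize(cross(v@v, {0,1,0}));
// spin speed from how fast the box moves, slowed down for big boxes
float spin = (length(v@v) / @pscale) * @Time;
matrix3 m = ident();
rotate(m, spin, axis);
@orient = quaternion(m);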

A second wrangle is used to get the bounds of each packed box, and translate them up by the y-component of the lower bound. For some reason I couldn't do this all in the one wrangle. I know there's a few occasions where setting and getting related things in one hit won't work, vex will intentionally delay certain things to the end of the vex call to avoid conflicts. Seems setting transform and getting bounds might be one of those situations.

As an aside, the packed bounds is stored as a float array. How did I know which value in the array had the height offset I needed? Well, I didn't; I just started at 0, and kept going up in numbers until the boxes sat correctly. Lucky for me it was within the first 3 numbers. :)
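For reference, a primitive wrangle to do that bounds lookup might look like this (the bounds intrinsic stores xmin/xmax/ymin/ymax/zmin/zmax, so the lower y bound is index 2):

// read the packed bounds, lift each box by its lower y value
float b[] = primintrinsic(0, 'bounds', @primnum);
@P.y -= b[2];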

Per primitive rotation with vops

Prim rot vop.gifPrim rot vop network.gif

Download scene: File:per_prim_rot_vop.hipnc

Fun challenge from the forums. The trick here is to repurpose the low-level instancer vops for our own needs. The 'make instance transform' does as it says, it'll make you a matrix that has the regular instancer attributes as inputs (orient, rot, scale etc). Once you have a matrix, you can multiply any points by that matrix, and they'll be duly transformed.

So with that as our end goal, now we need to feed it the right things, namely the pivot of the transform, and the orientation/rotation. The pivot will be the center of each primitive, so using the 'get primitive attribute' vop, we query the primnum the current point belongs to, and return the P position of the center of the implicit uv's (ie 0.5/0.5).

For rotation, if we just want to spin around the normal we can just extract the normal with the 'primitive normal' vop. But here we want to spin perpendicular to the normal (the tangent or binormal). One way to get this is to define a vector from 2 points on each face. Using 2 more 'get primitive attribute' vops, we query the point at uv (0,0), and at uv (0,1), subtract one from the other to get a vector, which we can use as a rotation axis.

Now we have an axis, and a rotation amount driven by the current frame, how do we get this as the orient quaternion needed by the instance transform? I cheated here and found an answer by someone else; the 'rotate' vop is the key. I was surprised to find that although there's a plug for a matrix, it's not required. So I feed in the axis and rotation, this vop feeds to a 'matrix to quaternion' vop, and finally this is fed as the orient attribute to the instance transform vop.

Extra fun to be had by adding various multiply and add nodes with promoted parameters to control the fluttery mystique feather effect.

Unfolding polys

Folding prims.gif

Download scene: File:grow_polys_v06.hipnc

Seen it come up enough times on odforce, felt like it was worth having a go. There's a very thorough tutorial on vimeo for achieving a similar effect, was curious to see if I could do it without cheating and looking at that method.

This setup takes some geo, generates a point at the center of each prim, and connects them up into a new wireframe. The findshortestpath sop is used to generate paths from one point to all other points, and create some useful attributes. @prevpt is the id of the previous point on the path, and @cost is a measure of how many steps it takes to get from the start point to the current point. Points near the start have a low cost, points far away have a high cost.

With those attributes transferred back to the original prims, you can do the unfolding trick. Use @cost as a timer, delete all prims to start with, and undelete them sequentially over time based on @cost.
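The undelete-over-time part boils down to a couple of lines in a primitive wrangle; 'speed' is a channel I'd add to control how fast the wave travels:

// hide prims whose cost hasn't been reached yet, revealing them over time
if (@cost > @Frame * ch('speed')) {
    removeprim(0, @primnum, 1);
}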

Once unhidden, lookup the prim position, the @prevpt position, use that to work out the rotation axis on the shared edge, and the midpoint to use as a pivot. Can then use that to rotate the prim with a method similar to the previous example. I suspect it can be done without the rot matrix->quaternion->instance matrix->out steps, but each time I tried the rotations would always be aligned to worldspace x/y/z axes; fine for a plane, but got ugly with other geometry.

This hip file uses a vex wrangle to do all the heavy lifting, but I did a lot of prototyping and debugging in a point vop before moving it all over to vex.

Works pretty well for the few cases I tested, but strangely not for a torus, not sure why. Changing geo requires some adjusting to the 'connect adjacent pieces' sop, mainly to ensure all points have enough geo to calculate a path to the starting point.

Folding objects (the transformers cube effect)

Pack fold img.gif

Download scene: File:pack_fold.hipnc

A rehash of a thing I tried a while ago, using basically the same technique as the previous example. This means it's not solver based, so there's no simulation required, and by using a packed voronoi fracture, it can work on any input shape. Workflow is:

  1. Take a shape
  2. Use 'points from volume' to create an even grid of points inside it
  3. Voronoi fracture
  4. Assemble and convert to packed prims
  5. Isolate the points, use 'connect adjacent pieces' in point mode to create a gridwork of edges to connect everything
  6. Use 'find shortest path' to generate paths to the center point of the grid
  7. Create a u-coord for each path, create a point for each path, animate it down the path, delaying the timing per point either randomly, or by a ramp, or by distance
  8. Match up this new animating point with the original packed prim points, transfer the animation (bit more fiddly than expected)

The motion isn't quite as slick as I'd like, but it's a good start. I know if I needed this for an actual effect it'd be a solid base.

Create edges with vops (and vex)

Soot creatures.gif

Download scene: File:soot_vex_vs_vops.hipnc

Started as an attempt to do cutting edge plexus motion graphics stuff, ended as silly nervous soot spider things from Spirited Away.

This example calls on a few things; create primitives, point cloud lookups, and if/while subnets.

The vop network in this scene is a little hard to understand as it's nested several levels down. The thing to keep in mind here is that vop networks work in parallel on all points at once, so you need to structure your logic accordingly. An 'if' subnet seemed the neatest way to keep everything ordered. So:

  1. if the current point is in a group called 'soot', continue, otherwise skip (ie, we'll assume for the rest of the logic we're working with the bodies, otherwise we'd grow edges from all points to all other points)
  2. open a point cloud (in this case not a pre-saved cloud on disk, but the live input geo), set the search radius fairly wide, and the maximum returned points to 8
  3. start a while loop that'll iterate over each point it finds. in the loop:
    1. create a primitive. this takes an array of points, and will draw edges between them
    2. add the body point
    3. get the id of the point found in this loop iteration (if it can't find a point, return -1, meaning no point will be added to the primitive)
    4. add the foot point
    5. run the next while loop, iterate to the next point found

I've revisited this setup, and realised this is a good example of when vops gets a little unwieldy. If you've followed this far, and like me quietly resisted vex and wrangles, look how much cleaner this is in code:

float maxdist = ch('maxdist');
int numlegs = chi('legs');
int pts[];
int prim;
int i;
if (@group_soot==1) {
  pts = nearpoints(0, @P, maxdist, numlegs);
  for (i=1; i<len(pts); i++) {
     prim = addprim(0, 'polyline');
     addvertex(0, prim, @ptnum);
     addvertex(0, prim, pts[i]);
  }
}

First it sets up a few variables, and then has a for-loop inside an if-block. The if checks if a point is in the 'soot' group. If it is, it uses nearpoints() to find the closest 'numlegs' points, within 'maxdist' distance. It then iterates through the points it found, but note that it starts with 'for(i=1...' rather than 'for(i=0...' which is the norm. This is because nearpoints will find the current point as one of the nearest points. Obviously we don't want that. Luckily nearpoints() stores the results ordered by distance, with nearest first, so we can just ignore the first point (which because of how vex stores arrays, would be referred to as pts[0]), and skip straight to the second ('pts[1]').

Anyway, in the for loop, we create an empty polyline primitive, and add 2 vertices to it; the current point (@ptnum), which is the body, and pts[i], which is the foot location.

If you skip the initial group test to allow all points to link to all points, then you get the plexus style motion graphics stuff. Make all the points wiggle, add colour in bounding box mode, you're on your way to vimeo motion graphics rockstar:

Edge grow anim.gif

Download scene: File:webbing_v01.hip

Handy as all that is, found out just recently that a 'connect adjacent pieces' sop, in 'adjacent points' mode, creates edges for you. If you peek inside, you can see similar code tucked in a wrangle node. Oh well.

Voronoi cluster and fracture


Download scene: File:voroni_cluster_v06.hipnc

An odforce post led to this video which led to these experiments.

Paraphrasing Vladimir Lopatin, normally a voronoi fracture is given scattered points, which gives you that standard voronoi cell look. If you feed it points with more ordered structures, a wide variety of fracture patterns are possible. Further, using the cluster option to guide the voronoi to fuse cells together leads to even more interesting patterns.

The patterns are loosely broken into 3 categories:

  • Geo patterns are derived from how the input points are structured; a grid, a staggered brick pattern, closely spaced triplets etc.
  • Texture patterns use vop textures to output a cluster attribute to drive the voronoi cluster+fuse.
  • Cluster patterns use the 'cluster points' node to do its own semi-random collating of points, which again are used to drive the cluster+fuse of the voronoi.

Lots of fun experimenting with various settings, seeing what the resultant patterns are. Most of these examples are also animated over time, usually something being randomised per-frame to show even more variations.

Tension and wrinkles

Tension wrinkles.gif

Download scene: File:tension_and_wrinkles.hipnc

The measure sop calculates perimeter or area per primitive. If you measure the perimeter of a primitive before it's deformed, and again after it's deformed, then subtract one from the other, you can tell if it's being compressed or expanded (the result will be positive or negative).

You can map this to colour to get pleasing compression displays, or use it to drive wrinkles.
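As a sketch, with the rest geometry in the second input and a measure sop run on both branches, a primitive wrangle could do the comparison like this:

// difference in perimeter between deformed and rest shapes
float rest = prim(1, 'perimeter', @primnum);
float tension = @perimeter - rest;
// red for stretched, blue for compressed
@Cd = tension > 0 ? set(1, 0, 0) : set(0, 0, 1);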

This example uses a ripple sop to get the base warpy motion on a low-res shape, and a point-deform sop to lattice it onto a high-res shape.

Couple of things here:

1. Getting attributes from 2 connections into a vop network. The "get attribute" vop handles this; it has a drop down to choose the input (by default geo on disk, or the 1/2/3/4th input to the vop network), what the attrib name is (eg "rest_p"), the attrib type, and which point to lookup. Because the static and deformed shapes have the same point count and id's, you can use "ptnum" as the index, meaning it'll match each rest point to its deformed point.

Get attribute.png

2. Measuring tension on edges. As mentioned earlier Houdini doesn't really deal in edges, so how can we make sure this works with curves or polylines? Other similar tension examples use area, this one uses perimeter. The unroll sop near the top, if enabled, will remove the polys leaving just edges. To my surprise the measure sop still happily calculates the perimeter. I then promote this to points in average mode, which seems to work well enough. I'd be curious what'd happen if I were to just grab a single edge and scale it, but that's pretty unlikely to happen in practice. Right?

My version here is pretty low-tech, Sebkaine on the odforce forums has done a much better setup that can calculate stress in u and v, and other cool tricks:


Create curves with points

Curve create.gif

Download scene: File:curve_create_02.hip

Possible answer to this question. The add sop lets you create one or many points in one hit, and if you tap 'enter', you get a little translate gizmo in the viewport to help you place the points.

You can then group those points and use a for-each to connect the points into a line with another add node, this time in primitive mode. To convert the poly line into a smooth curve, use a convert node.

Chops to load audio and create animated waveforms

Chops waveform.gif

Download scene: File:chop_waveform.hipnc

The classic protracker/winamp oscilloscope effect. Chops isn't well documented; I found a semi-related example on odforce, which I modified into this.

  1. Create a line with lots of points, I'm using 1000 here
  2. Create a chops subnet, add a file node, point it to a .wav file. Connect a null, name it OUT
  3. Go up a level, create a per point time attribute, say 'ptime', that's small enough to capture the audio samples. Most audio is at 44000 samples per second, so to get an interesting output means you need this value to be in the thousandths-of-seconds realm.
  4. Create an attribute, say 'sample', read from the chops OUT null with the chopt expression: chopt('/path/to/chop/OUT/chan0', $PTIME)
  5. Use this to do silly things

I was warned that chops can be very slow, and got extra worried when it looked like there was no vex call to read from chops. There was no need to worry, this is plenty fast. Seems the slowdown is if you create lots of extra audio channels within chops, or do heavy audio processing. Used in this way, just loading a sample and mapping it to points, it's realtime for 2000 points, and pretty good for 20000.

To make the curve a bit smoother and useful for other operations, you could just resample it, or use the chops low-pass filter, and adjust the cutoff to remove the high frequencies.

You'll probably want to enable audio playback:

  1. Click the speaker icon in the bottom right
  2. Choose the scrub tab
  3. Either set the path to the audio on disk (I found this to work best), or to the OUT chop node
  4. Close the speaker tab, open the playback prefs (icon to the right of the speaker), and turn off 'integer frame values'

While this works, Houdini can get a little twitchy on Linux, and crashy on Windows, at least with the sample I used. Creating a flipbook, and setting the audio path in the flipbook tab options was stable, so I'd suggest that if you find crashes like I did.


Trails noise.gif

Download scene: File:trails_noisey_v01.hipnc

The trails node is basically a time echo effect like in nuke or after effects, but lets you access those echoes in interesting ways. Its simplest effect is just duplicates of the input geo in time. Its most often used effect is to calculate velocity for motion blur (get every point position at the previous frame, record the difference, feed that to mantra for blur). This example uses one of the other trail modes to generate curves.

To map colour along the length of the curves, I use a uv texture set in 'rows and columns' mode. This cheat just maps u+v along and down each contiguous primitive, exactly what we need. You can then just use a color node in ramp mode.

To do the wisping effect, a vop network drives flow noise based on the curve points, scales it based on the u-value, and adds it to the curve position. Near the sphere the noise is scaled down to 0, and its at full intensity at the trailing end.

Finally Alpha is also mapped along the length of the curves, and they're converted from linear curves to smooth nurbs curves.

There's some minor implementation details; all documented in the setup. Uv's are projected onto verts by default, and we need them on the points, so they get copied from verts to points. Also they're created as a vector (uvw I guess), which seemed to confuse the color node. I create a single float attribute, 'u', and copy just the first value from uvw to u. Finally the trails themselves aren't curves, but meshes with a single edge. Houdini wouldn't let me convert these to nurbs curves directly, so I had to explicitly convert from meshes to polylines, then polylines to curves.

The final delete attribute is just being pedantic; the little hazard lines on merge nodes when there's different attributes on the incoming geo annoy me. :)

Add sop to make lines from points

Add sop screenscap.gif

Download scene: File:add_sop_examples.hipnc

A fundamental thing you'll want to do eventually is take a bunch of points, and wire them together. There's vex methods as outlined earlier, but the non-code way is via an add sop. The most basic example runs like this:

  1. Create a grid
  2. Scatter points, set it to 100 points
  3. Append an add sop
  4. Switch to the polygons tab, then the 'by group' sub-tab
  5. The points are now all wired into a polyline, the point numbering determines the order.

As well as joining all points, you can tell it to join every 2 points into a line by setting the add mode to 'every N points', and let N be 2.

With a scatter the point order is intentionally random, which can mean adding every n points together can be a little chaotic. Putting a sort sop before an add sop can be useful, so if you sort the points along x, for example, the resultant lines will be mostly perpendicular to the x axis. Similarly using a sort in 'spatial locality' will try and order points so that close points will have similar numbers, resulting in a less messy layout.

Another handy node to use with an add sop is the cluster points sop. This is like a sort in spatial locality mode, but it lets you define how many clusters of 'close points' you want created, and will give those clusters their own @cluster attribute. You can then use another mode of the add sop, to add by attribute. Set the attribute to 'cluster', and now those clusters will be joined together.

Yet another common trick is to use @id with the add sop. Using the same scattered points, you could use a wrangle to make @id=@ptnum, then use a duplicate sop to make several copies of all the points that move up on y, and rotate on y by a given amount. Feed that to an add sop, set to look for the attribute 'id', you get a line per each original scattered point that is connected to all its transformed duplicates, as if you had extruded the points up and away.

That last one can be handy with particle systems. One thing I missed from maya was the streak particle type. That doesn't exist in Houdini, but it's easy to make yourself. Particles always have an @id attribute, so if you append a trail sop after the particles, leave it in its default mode, then append an add sop (polygons mode, by group, add by attribute, using 'id'), the trail particles will be wired together into lines. Yes, you can use the trail node to do this directly (or better, don't do anything to the particles, just turn on motion blur and each particle's @v will make streaks for you), but it's fun to know how to do it manually if required.


Now on its own page, HoudiniVolumes. All stuff about SDF, VDB etc is now there.

Attribute Wrangle and iq colour ramp


Download scene: File:iq_colour_ramp.hipnc

Came across this article by Inigo 'Shadertoy' Quilez about ways to generate interesting colour ramps with 4 input colours and a tiny expression. Attribute wrangle nodes are perfect for this sort of thing; ie, times when you don't need vops, just little bits of vex. A nice feature of wrangle nodes is lazy UI creation. Say I assign the value of a float channel called 'wavelength' to a float variable 'foo', even though the slider doesn't exist yet:

float foo = chf('wavelength');

...you can click the little plug icon to the right of the code window, and houdini will create the slider for you. Great way to quickly get interactive controls. Here's the code for this wrangle node:

vector a = chv('base');
vector b = chv('gain');
vector c = chv('wavelength');
vector d = chv('phase');
v@Cd = a + b * cos(2 * PI * (c * @uv[1] + d));

All that does is create 4 vector channels to control each parameter, then create/assign a Cd attribute and apply the expression from iq's site. Super fun to play with.
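For reference, here's the same wrangle with values hardcoded rather than exposed as channels. These numbers are roughly one of the example palettes from iq's article (treat them as a starting point, not gospel):

```vex
// One of iq's example palettes, hardcoded (values approximate).
vector a = {0.5, 0.5, 0.5};
vector b = {0.5, 0.5, 0.5};
vector c = {1.0, 1.0, 1.0};
vector d = {0.0, 0.33, 0.67};
v@Cd = a + b * cos(2 * PI * (c * @uv[1] + d));
```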

To help me understand how the expression works, I created 3 copies of the curve, split them into the r, g, b values, and mapped the colour to P.y.

Packed Primitives

Good overview at the start of this bullet masterclass if you'd rather watch than read: https://vimeo.com/80840429

Packed prims are a way to represent lots of geo with a single point. Eg, take the pig head, which is 2800 points; pack it, and now it's a single point. You could almost think of it as a shape transform in maya, or a node in the /obj network of Houdini, but down in sops.

Pack pig.gif

This is handy for many reasons:

  • It lets you manipulate loads more geo than you could otherwise; Houdini only processes the one point per pack, so while making 50,000 copies of the regular pig will slow Houdini down substantially, it barely breaks a sweat with 50,000 packed pigs.
  • Even for smaller numbers of shapes, it's easier to manipulate 20 points in sops than 20 transforms in the /obj context.
  • Mantra treats packed prims as instances, so can render more geo in less time with less memory
  • Rigid bodies can use packed prims, allowing you to push crazy amounts of geo
  • Packs can be hierarchical, so you can pack a leaf, put those on branches, pack the branch, put the branches on trees, pack the tree, scatter many trees, you get savings and ease of editing along the way
  • Packed prims can be unpacked if needed (generally frowned upon, but fine if you only need to unpack and edit 2 specific trees out of 4000 packed trees)
  • Hierarchical alembic files are treated as packed prims, so it all conforms nicely
  • There's increasing focus on tools to edit packed prims without the need to unpack them; eg you can assign materials to sub-objects of packed prims, and certain other render properties.

There's several ways to create packed prims:

  • A pack sop
  • A copy sop, on the stamp tab is the toggle 'Pack Geometry Before Copying'
  • The file sop lets you load geo as a packed prim via the 'Load' drop down
  • The object merge sop has an option to pack before merging
  • The alembic sop has several options for loading as packed, or hierarchical packed, or other
  • The assemble sop, often used to bind fractured geometry back together, has a 'Create Packed Geometry' toggle

The assemble sop can be emulated with a few steps; it's nice to know how it works under the hood. Eg, if you run a voronoi fracture, here's how to manually convert those pieces to packed primitives:

  1. Create a connectivity sop in primitive mode to identify the islands
  2. Append a prim wrangle sop, enter this code: "s@name=itoa(i@class);"
  3. Append a pack sop, enable the 'Name Attribute' toggle.

The connectivity sop creates a 'class' attribute, which will be 0 for all prims in the first island, 1 for the next, 2 for the next etc. The pack sop expects the unique identifier per-pack to be a string, so the vex wrangle casts the 'class' attribute to a string, and stores it in 'name'. Finally the pack sop looks up 'name' to do the packing.

While you'd think you could just get the pack sop to look up the 'class' attribute directly, it seems it requires a string attribute, hence the middle step to cast from int to string.

Why is an int-to-string function called itoa() and not itos()? Blame C heritage; it's short for 'integer to ascii'.

Manipulating Packed Primitives

If you think of packed prims as a point that represents a shape, then if you move the point, you move the shape. Use an edit sop for direct manipulation, a transform sop, or a vop network; translating packed prims around is easy.

Rotate and scale are a different matter. A point by definition has no rotation or scale, so rotating or scaling a packed prim has no effect. How can it be manipulated?

If you look at the primitive view in the geometry spreadsheet, there's an 'intrinsics' drop down you probably haven't noticed before. Mostly these are read-only attributes like curve length, bounding box etc., but packed prims have a few more. Of interest to us is 'transform'. Enable that from the drop-down, and you can see it's a transform matrix. It's both readable and writable, so we can use it to modify the packed prim's rotation and scale.

Put this in a primitive wrangle, and it'll rotate packed prims around their Y-axis:

matrix3 m = ident();                            // start with an identity matrix
float angle = @Time;                            // rotation in radians, animated over time
vector axis = {0,1,0};                          // spin around the y axis
rotate(m, angle, axis);                         // rotate the matrix in place
setprimintrinsic(0, "transform", @primnum, m);  // write it back to the intrinsic

Similar to examples listed earlier, we first set up a default transformation matrix 'm' (called an identity matrix, hence the call to ident()), then set up an angle and axis to rotate around. Then a trusty call to rotate(), which rotates our matrix m in place. Finally we push this matrix into the intrinsic transform attribute.

Ideally you'd just write something like '@transform = m', but intrinsics can't be set this way, hence the call to setprimintrinsic(). Like other similar vex functions, it needs to know the geo to manipulate (0, ie the first input to the wrangle), the attribute name ('transform'), the primitive to manipulate (@primnum), and the value (m).

Seems fiddly, but it scales really well; the performance gains are well worth it.
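setprimintrinsic() also has a read counterpart, primintrinsic(), so you can modify the existing transform rather than replace it outright. A small sketch (the half-scale value is arbitrary, just for illustration):

```vex
// Primitive wrangle: read the current intrinsic transform,
// scale it down, write it back.
matrix3 m = primintrinsic(0, "transform", @primnum);
m *= 0.5;   // uniform half-scale
setprimintrinsic(0, "transform", @primnum, m);
```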

Note that this is all for modifying packed prim transforms after they've been created. More often you'll be setting up points to feed into a copy sop, and have the copy create the packed prims for you. In that case, it's usually easier to define rotation and scale in the standard copy/instance way, ie, use @orient, @pscale, @rot etc as explained in earlier examples.
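As a sketch of that copy/instance style, a point wrangle like this (the specific values are arbitrary, purely for illustration) sets per-point rotation and scale that the copy sop will pick up when it creates the packed prims:

```vex
// Point wrangle before a copy sop: per-point orient and scale.
float angle = @ptnum * 0.2;               // arbitrary spin per point
p@orient = quaternion(angle, {0,1,0});    // rotation around y, as a quaternion
f@pscale = fit01(rand(@ptnum), 0.5, 1.5); // random-ish uniform scale
```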

Cycle keyframes

If there's one thing that drives me into a quivering rage, it's setting out-of-range keyframe behaviour in houdini. The ability is there, but it's so buried and hidden I always forget, and flail about for 20 minutes trying to find it. This is me, reminding me, that it's alt-E. Alt-E matt, to cycle keyframes it's alt-E. Yeah? Remember? Cycling? It's alt-E.


Or to be all menu-esque about it, r.click, channels -> Edit Displayed Channels Properties...

Cycle keyframes.gif


On its own page, HoudiniPython.


Read this far? Got a feel for the format? Think you could make some sweet mini tutorials too? Help me out!

Get LiceCap to make animated gifs, make a .hip or two, and email me to join the wiki and spread the Houdini love. Email: matt@tokeru.com