Soup

From cgwiki

Get SOuP: http://www.soup-dev.com/index.html

Discuss SOuP: http://soup-dev.websitetoolbox.com/

Get houdini (you know you want to) : http://www.sidefx.com/apprentice

Quick start

Soup is a suite of nodes to help you modify components in a procedural way. It does other things too, but that's the core of it.

If you're impatient, you need to know 5 things: the 4 most common soup nodes, and a soup shelf button.

  1. A bounding object is a soup gizmo that can be spherical, rectangular or capsule in shape. It has various ramps that can control various parameters (colour, weight, transform, normal), that by default have no effect at the gizmo border, and full effect at the gizmo center. It has an additional mode, object, so that you can use an arbitrary shape to drive values.
  2. An attribute transfer takes a shape and a bounding object, and lets you transfer attributes into the shape. Usually this will be weight, which you'll then use in nodes downstream. If you're familiar with nuke, it's kind of like using a copy or shuffle copy node to push channels into a stream to be used later.
  3. A point node lets you modify components using particle expression syntax. Eg, if you want to move points on a mesh 2 units in $Y, you can use the expression $Y = $TY+2. Like the attribute transfer node, you can modify several properties like translation, normal, radiusPP etc.
  4. A group node defines selections of components to be fed into other things. This can be based on a bounding object region, or simple expressions, or advanced poly selection style criteria (surface area, edge length, normal and so on). These selections can be passed to non-soup nodes like delete components or deformer sets.
  5. The plug button. Soup is all about connecting things to things, using maya's connection editor gets very boring very quickly. The plug button helps you avoid this. Select 2 nodes and click the plug button, a menu appears of best guess connections you want to make, saves a lot of time.


Note that when I say 'shape' in nearly all these examples, I mean anything that has components; so that can be a poly mesh, a nurbs curve, a nurbs surface, a particle system.

I also assume that you:

  • have installed soup
  • are comfortable with maya nodes (ie you've done a bit of hypershade, deformers, particles, peered under the hood a little)
  • can use hypershade or the node editor
  • can use the connection editor


Also, for my benefit and yours, I'm starting to put together the equivalent Houdini quickstart guide. Have a look!

Attribute transfer and position

At position.gif

Download maya scene: File:attributetransfer_position.ma

This isn't a very useful example, but it's the fastest way to see soup in action.

  1. Create a polymesh with 20x20 divisions
  2. Select the mesh, graph it in the node editor, name the mesh inMesh
  3. Duplicate the mesh, name it outMesh
  4. hide inMesh
  5. Select inMesh, create an attribute transfer node from the soup menu. As a convenience if you have a mesh selected when you create this node, it'll be automatically connected.
  6. Create a bounding object
  7. Select the bounding object, then the attribute transfer, use the plug button on the soup shelf to connect them
  8. Select the attributetransfer and outMeshShape, use the plug button to connect them
  9. Select the attributeTransfer node, enable the 'position' checkbox
  10. Now if you move the bounding object, you'll see the points get warped towards the center of the gizmo.


Attribute transfer and colour

At colour.gif

Download maya scene: File:attributetransfer_colour.ma

Slightly more useful example.

  1. Take the previous example, on the attributetransfer node turn off position, turn on colour
  2. To see this, select the outMesh, open the 'mesh component display' section of the attribute editor, and turn on 'Display Colors'
  3. The default bounding object colour is pure white. Find the colour ramp on the bounding object node and make it a more obvious change (I've done a simple black+white gradient)
  4. To render this, you need to use one of the color-per-vertex nodes of your chosen renderer, eg for vray you'd need the vray vertex colors node


Point node

Point basic.gif

Download scene: File:point_basic.ma

Simple example of using the point node. Here I move each point on Y based on its id and time.

  1. Again using the previous example, create a point node
  2. Connect inMesh -> point
  3. Connect point -> outMesh (you'll disconnect the attribute transfer when you do this)
  4. Select the point node, open the position section, enable 'enable', and set the expression to be $Y = sin($ID * $FF * 0.002)
  5. Hit apply, you should see a sine wave applied through the mesh. If it doesn't animate over time, select the currentTime attribute of the point node and enter '=frame'.
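Outside Maya you can sanity-check what that expression does with a little plain Python (the function and values here are stand-ins for the SOuP variables, not SOuP API):

```python
import math

def point_y(point_id, frame):
    # the point node expression: $Y = sin($ID * $FF * 0.002)
    return math.sin(point_id * frame * 0.002)

# a 20x20 division plane has 21x21 = 441 points; sample them at frame 10
heights = [point_y(i, 10) for i in range(441)]
print(heights[0])   # point 0 stays at 0.0, the wave ripples out from there
```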


Point with if statement

Point if statement.gif

Download scene: File:point_with_if_statement.ma

Simple example of a point node using if statements. Uses modulus to compare the $ID of each point against the current time ($FF). If it divides cleanly, move it up in Y by 1, otherwise leave it at 0.

Because the vertex id's of most simple meshes are laid out in a simple grid pattern, it follows that manipulating points based on their id will make pleasing geometric patterns.

if ($ID%$FF==0) {
   $Y = 1;
} else {
   $Y = 0;
}
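Simulated in plain Python (with an integer frame standing in for $FF, which is really a float in SOuP), the pattern at frame 3 looks like this:

```python
def point_y(point_id, frame):
    # the point expression: if ($ID % $FF == 0) { $Y = 1; } else { $Y = 0; }
    return 1 if point_id % frame == 0 else 0

row = [point_y(i, 3) for i in range(9)]
print(row)  # [1, 0, 0, 1, 0, 0, 1, 0, 0] -- every 3rd point pops up
```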


Point rays

Point sine rays.gif

Download scene: File:point_rays.ma

This one was testing if the radial rays expression from my nuke tutorial would work here. Adding 0.001 avoids any divide by zero errors. Who needs mathematica to do graph plots eh?

$rays = 8;
$rate = 0.1;
$Y = sin((atan($TX/($TZ+0.001))+$FF*$rate)*$rays);
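Here's the same maths in plain Python (a stand-in sketch, not SOuP syntax), mostly to show the role of the 0.001 guard:

```python
import math

def ray_y(tx, tz, frame, rays=8, rate=0.1):
    # $Y = sin((atan($TX / ($TZ + 0.001)) + $FF * $rate) * $rays)
    # the + 0.001 keeps the division finite when $TZ is exactly 0
    return math.sin((math.atan(tx / (tz + 0.001)) + frame * rate) * rays)

print(ray_y(0.0, 1.0, 0))  # on the +Z axis at frame 0 the wave is 0.0
```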

Using point to move edges

Download scene: File:weave.mb

Trying to work out if I could move edges with the point command. I couldn't work it out, but came up with a simple workaround instead. For each point, test if its $ID divides cleanly by 2; if it does, move it by sin($ID). If not, move it by sin($ID - 1), ie move it the same as the previous point, meaning I'm moving edges.

The source meshes here are planes where I'd deleted every second row, so I could concentrate on the weave/ribbon effect.

if ($ID % 2 == 0) {
   $Y = sin($ID * $FF);
} else {
   $Y = sin(($ID - 1) * $FF);
}

With a second copy of the mesh rotated 90 degrees you get these soothing patterns.
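The even/odd pairing in the expression above can be checked in plain Python (again a stand-in, not point node syntax): odd points borrow the id of the even point before them, so both ends of an edge always land at the same height:

```python
import math

def weave_y(point_id, frame):
    # odd ids are collapsed onto the previous even id before the sine,
    # so points 0 and 1 move together, 2 and 3 move together, etc
    pid = point_id if point_id % 2 == 0 else point_id - 1
    return math.sin(pid * frame)

print(weave_y(4, 1.5) == weave_y(5, 1.5))  # True, the pair shares an edge
```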


Point and attribute transfer together

Point and weight.gif

Download scene: File:point_and_weight_example.ma

Time to combine what we know about the point node with the attribute transfer node. This will limit the point node to the region of a bounding object.

  1. Create a source mesh, target mesh, point node, wire them together as explained earlier
  2. create a bounding object and attribute transfer, wire them together as explained earlier (remember inMesh -> attribute transfer.inGeometry, boundingObject -> attributeTransfer)
  3. Connect attributeTransfer.outWeightPP >> point.inWeightPP
  4. Modify the expression so that it also multiplies against $WEIGHT. This will be 1 at the center of the bounding object, and 0 at the edges, so you get a falloff of the point node.
  5. In this example I modified the expression so it was easier to see, changed the bounding object type to capsule, and did a more showy animation.


If you're too lazy to open the scene, the expression here is

$Y=$TY+(sin($ID*0.005*$WEIGHT)*0.4);
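A quick plain-Python check (stand-in names, not SOuP API) of how $WEIGHT gates the displacement: at weight 0 the point keeps its original height, at weight 1 it gets the full sine offset:

```python
import math

def point_y(ty, point_id, weight):
    # the expression: $Y = $TY + (sin($ID * 0.005 * $WEIGHT) * 0.4)
    return ty + math.sin(point_id * 0.005 * weight) * 0.4

print(point_y(0.5, 200, 0.0))  # 0.5 -- outside the bounding object, no change
```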

Point and attribute transfer with lag via array data container

Point lag.gif

Download scene: File:point_array_data_container.ma

The arrayDataContainer node can be inserted between the attribute transfer and the point node for trails and persistent effects. The node has attributes to define how much time it should buffer; used in this way it's the classic 'footprints in snow' effect. If you set the sink parameter to non-zero, data will fade over time, leading to disappearing trails, like this example scene and animated gif demonstrate. It's quite simple to use:

  1. Create an arrayDataContainer
  2. Connect attributeTransfer.outWeightPP >> arrayDataContainer.inArray
  3. Connect arrayDataContainer.outArray >> point.inWeightPP
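Conceptually the container behaves something like this toy Python model (my mental model only, not the node's actual implementation): it remembers the strongest weight each point has seen, and the sink parameter drains it a little every frame:

```python
def step(buffered, incoming, sink):
    # keep the strongest weight seen so far, then bleed off 'sink' per frame
    return [max(0.0, max(b, w) - sink) for b, w in zip(buffered, incoming)]

trail = [0.0, 0.0, 0.0]
trail = step(trail, [1.0, 0.0, 0.0], 0.1)  # bounding object sits over point 0
trail = step(trail, [0.0, 1.0, 0.0], 0.1)  # it moves on, point 0 starts fading
print(trail)
```

With sink at 0 the weights never drain, which is the permanent footprints-in-snow behaviour.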


Attribute transfer via object

Attribue transfer by object.gif

Download scene: File:attribute_transfer_by_object.ma

As well as using a sphere, cube or capsule, you can connect another mesh to the bounding object, and each point on the mesh becomes a spherical bounding object.

Note that because the effect is driven directly from a mesh, the mesh's transform is ignored. If you want to transform the geo, cluster the entire shape and transform the cluster instead. Yet again we'll continue from the previous example.

  1. Create a polycone, use the polyCone attributes to lie it on its side and give it a highish number of divisions
  2. Connect cone.outMesh >> boundingObject.inMesh
  3. Set boundingObject type to 'input geometry'. Note that as soon as you do this any transforms you've applied to the bounding object are ignored, only the input mesh is used.
  4. The default radius per vertex is usually too small, go down to the 'Initialize point attributes' section on the bounding object and increase the point radius so that the spheres overlap as you want. By now you should start to see the cone outline affecting the plane.
  5. To move and rotate the cone, apply a cluster, and transform the cluster.


As the name of the 'point cloud' section implies, this isn't limited to mesh inputs. You can drive it with curves, scatter nodes, particle systems, most things really, albeit with some extra nodes in-between to help.

Bounding object with geo input looks messy

If you use geo to drive a bounding object, you'll get a bounding sphere displayed for every vertex. To hide this, disable 'display bounding volumes' under the point cloud section of the attribute editor.

Bounding object with geo falloff radius

On the boundingObject, scroll down to 'initialize point attributes', and change the point radius. You'll probably want to turn on display bounding volumes again to see what's going on.

Bounding object radius.gif

Group node

A bounding object defines a region, an attribute transfer applies properties to points within those regions, and a point node modifies points. Next thing is a group node, which is used to select components to feed to other nodes.

Group and displayComponents

Group components.gif

Download scene: File:group_components.ma

The group node selects components, but doesn't give you a way to visualise that selection.

The displayComponents node can help here. It's designed to help you debug soup setups: it can display point data that you generate with attribute transfer or point nodes, and it can also display component selections that you create with group nodes.

It needs 2 things, an inMesh (or curve or particleSystem) which it can hijack to show points over, and then whatever data input you want to visualise, eg your weights, or component list, or rgbPP. I was confused initially when I would only connect my data and see nothing. Don't make the same mistake!

Here's a walk-through of visualising components selected with a bounding object and group node.

  1. Create a poly plane with 20x20 divisions, name the shape inMesh, graph it in the node editor
  2. Select the mesh shape, create a displayComponents node. This will automatically connect it as the base for displaying data, and show blue points all over your mesh (you can manually override the input mesh via the .inputGeometry attribute)
  3. Create a boundingObject, intersect it with the mesh
  4. Create a group node
  5. Connect boundingObject.outData >> group1.boundingObjects
  6. Connect inMesh.outMesh >> group1.inGeometry
  7. Connect group1.outComponents >> displayComponent.inComponents
  8. On the displayComponents node, change 'use components' to 'input component list'. You should now see points highlighted in blue that represent the group selection.


Later when you want to display weights, or rgbPP, or any other value of the points, you connect the attribute you want to inDataRgbPP, enable colour display, and turn OFF the 'constant color' mode. This is explained in the 'driving deformer weight with bounding object' example further down.

Group node to delete components

Group deletecomponent.gif

Download scene: File:group_delete.ma

The simplest (semi useful) example of a group node is to drive a standard 'deleteComponent' maya node. Usually deleteComponent is based on user selection, here we'll drive it procedurally.

  1. Usual starting steps: Create a plane, name it inMesh, duplicate, name the duplicate outMesh, hide inMesh
  2. Create a bounding object, intersect it with outMesh
  3. Create a group node
  4. Create a deleteComponent node; in a mel window, run 'createNode deleteComponent'
  5. Connect inMesh.outGeometry >> group1.inGeometry
  6. Connect bounding object >> group1
  7. Connect group1.outGeometry >> deleteComponent.inputGeometry
  8. Connect group1.outComponents >> deleteComponent.deleteComponents
  9. Connect deleteComponent.outputGeometry >> outMesh.inMesh
  10. There should be no change. Turns out you can't delete verts or edges, and expect to get valid results. It works best on faces, so we'll fix that.
  11. On the group node, change 'component type' to 'mesh faces'
  12. Success! You can scroll down on the group node and flip the 'invert' toggle to invert the delete if you want.
  13. Move the boundingObject around, feel clever.

Group node and animation

Group animated delete.gif

Download scene: File:group_animated_deletion.ma

By default the group node only tags components within the bounding object on the current frame. It also has the ability to remember previous frames, so if we were to continue with the previous example, faces can stay deleted after the bounding object moves somewhere else.

  1. Make sure invert is disabled on the group node
  2. Move the bounding object away from the plane, and animate it passing from the top right corner to the bottom left corner
  3. On the group node, scroll down to the 'Animation Control' section
  4. Enter '=frame' into current time so that it's connected to time
  5. Set start frame and reset frame to 1
  6. Play the animation, you should now see the bounding object leave a trail of deleted faces behind it.

The soup examples use this for a foot-steps-in-mud effect; group nodes attached to the feet of a character drive a transform component node, which pushes faces down in Y by a certain amount. By using the animation feature of the group node, once the faces are pushed down, they stay down.

Group range

Group range and delete.gif

Download scene: File:group_delete_with_range.ma

A trick that I always liked in houdini. To make this clearer you might want to move the boundingObject away from the plane, or disconnect it from the group node.

  1. On group1, change its 'operation' mode to 'range'. You should get some form of pattern.
  2. The range values define the start and end, so '3 -4' would start 3 from the start, and end 4 from the end.
  3. The step values define gaps in the range, so '1 5' would skip 1, include 5, skip 1, include 5 etc.

Animating these values yields fun 'shapes building from nothing' effects.
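To make the range and step parameters concrete, here's my reading of them as a plain Python sketch (a hypothetical helper; the real node works on component lists and its exact edge behaviour may differ):

```python
def group_range(count, start, end, skip, keep):
    # range 'start end': drop 'start' components from the front and
    # 'end' from the back; step 'skip keep': of what's left, repeatedly
    # skip a few then keep a few
    kept = []
    for i, idx in enumerate(range(start, count - end)):
        if i % (skip + keep) >= skip:
            kept.append(idx)
    return kept

# 20 components, range '3 4', step '1 5'
print(group_range(20, 3, 4, 1, 5))
```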

Point and group range

Point1.gif

Download scene: File:point_and_range.ma

Here I have a group feeding into a point node. I've animated the range values so that at first there are no components given to the point node at all, then the components 'grow' on by animating the end value, then using step to select rows, then animating the start value to make the components reset back to none selected again.

You can see the connection setup is simple in the example scene, but for completeness, here's the workflow:

  1. Create a poly plane
  2. Graph it in the node editor, name the shape 'inMesh' to make it clear what it is
  3. Duplicate the shape, name it 'outMesh'
  4. Create a point node
  5. Select inMesh, then the point node
  6. Click the plug button on the soup shelf, this tries to guess what kind of connection you need. Choose inMesh.worldmesh -> point.inGeometry
  7. Select the point node, then outMesh
  8. Soup's plug shelf button, point.outMesh -> outMesh.inputMesh
  9. Select the point node, find the 'transform' section
  10. Click 'enable'
  11. Type in the following expression:


$X=sin($TX+$FF*4+$ID*0.01)+$TX;
$Y=0.4*sin($FF*4+$ID*0.01)+1;
$Z=cos($TZ+$FF*4+$ID*0.01)+$TZ;
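In plain Python (stand-ins for the SOuP variables, not SOuP syntax) the three channels look like this; note $Y only depends on time and id, so the whole wave bobs between 0.6 and 1.4:

```python
import math

def warp(tx, tz, point_id, frame):
    # $X = sin($TX + $FF*4 + $ID*0.01) + $TX
    # $Y = 0.4 * sin($FF*4 + $ID*0.01) + 1
    # $Z = cos($TZ + $FF*4 + $ID*0.01) + $TZ
    x = math.sin(tx + frame * 4 + point_id * 0.01) + tx
    y = 0.4 * math.sin(frame * 4 + point_id * 0.01) + 1
    z = math.cos(tz + frame * 4 + point_id * 0.01) + tz
    return x, y, z

x, y, z = warp(1.0, -1.0, 50, 12)
print(0.6 <= y <= 1.4)  # True for any point, any frame
```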


If you know Nuke, this is kind of equivalent to the Expression node (which I wrote a tutorial about).

Attribute transfer colour to particles

At particle capsule256cols.gif

Download scene: File:attribute_transfer_particle_capsule.ma

Enough meshes, let's try particles. This is basically the same trick as the attribute transfer and colour earlier on this page, but with a few extra things we need to do for particles. There are 3 things which make this different from meshes.

  • Particles can't be fed directly to an attribute transfer, they need to be converted to a geometry type. The pointCloudToCurve node does this.
  • Because this is maya particles, you need to use particle expressions to get the colour from soup into the particles
  • The attribute transfer node outputs rgba, maya particles prefer rgb, so we need to split that out. The rgbaToColourAndAlpha node does this.


Ok, lets go:

  1. Create a particle grid (particles menu -> particle tool, enable grid, set the spacing to 1, click at -10,-10, then 10,10, hit enter)
  2. Create a bounding object, attribute transfer, connect those two together
  3. Create a pointCloudToCurve node, connect particle.position >> pointCloudToCurve.inArray
  4. Connect pointCloudToCurve.outCurve >> attributeTransfer.inGeometry
  5. Set attribute transfer mode to color
  6. Set a nice colour ramp on the bounding object
  7. Create an rgbaToColorAndAlpha node
  8. Connect attributeTransfer.outRgbaPP >> rgbaToColorAndAlpha.inRgbaPP
  9. Rename rgbaToColorAndAlpha to 'rgb' (it's easier to type in the particle expression)
  10. On the particle shape, create a per particle rgb colour attribute
  11. Set the creation and runtime expression to be rgbPP = rgb.outRgbPP


If you switch to a solid shaded mode, you should see the colours transferred to your particles, and they should update if you translate/rotate/scale your bounding object.

If you're curious, create a nurbs curve (create -> nurbs -> circle is the easiest way), and connect pointCloudToCurve.outCurve to nurbsCircleShape.create. You'll see that the node has created a curve that passes through every particle, in order of particleId. This is the most efficient way to create geometry for soup to work with, which becomes important if you have thousands of particles being emitted.

Project colour from texture into a particle grid via textureToArray

...assuming your particle grid isn't too dynamic. I explain this caveat below.

Tex to rgbPP.png

Download scene: File:tex_to_rgbPPscene.ma

The previous example used the built-in colour ramp on a bounding object. What if you want to transfer colour from a texture? Ages ago I used the solidTexSampler plugin; more recently you'd use colorAtPoint, which always felt a bit hacky. Here's how you do it in soup.

This example doesn't use attribute transfer; it's a mesh with a texture, feeding textureToArray, which is connected to rgbPP through a particle expression. When I first tried this, using Peter Shipkov's examples, mine never worked while his clearly did. Mine looked like a corrupt texture; you could sort of see the texture being transferred, but it was misaligned and in black and white, like a raw image that had been debayered wrong. Why?

What I didn't understand is that textureToArray doesn't do any magical position-based transfer of data; I assumed it did, because the mesh and the particles were overlapping. Instead, it does what it says: it stores the texture into an array. But what defines this array? The points of the input mesh. That is, it uses the vertex id's, and at each id, stores a colour. For this to work with particles, they'd need exactly the same particleId's as the vertex id's. Because my grid and mesh didn't match, the colours were misaligned.

There's a way to guarantee that a mesh and a particle setup align exactly; make the mesh into a softbody, and unparent the particles from the shape.

That said, my colours still looked weird. This was due to the problem outlined in the previous example; textureToArray outputs rgba, particles expect rgb. Using an rgbaToRgbAndAlpha node to convert the data fixed this.
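A toy Python stand-in for why the ids have to line up (none of this is SOuP API, it's just the indexing logic):

```python
# textureToArray effectively bakes one colour per vertex id
mesh_colors = ["red", "green", "blue", "white"]

# particles then read that array by particleId, not by world position,
# so particle 0 gets vertex 0's colour no matter where it actually sits
def particle_color(particle_id):
    return mesh_colors[particle_id]

print(particle_color(0))  # 'red' -- only correct if particle 0 is at vertex 0
```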

Project colour from texture into moving particles with attribute transfer

Benny particles.gif

Download scene: File:attribute_transfer_tex_to_particles.ma

So here we have exactly what the previous example said we couldn't have; dynamic particles, getting their colour from a nearby mesh. Hopefully if you've followed all the previous examples you'll guess part of the solution. Jimmy explains this in great detail on his blog:

http://mayaspiral.blogspot.com.au/2012/06/soup-attributetransfer-color-basic.html

There's 2 things happening here:

  • Transfer colours from a texture into vertex colours of a mesh with textureToArray and arrayToPointColor
  • Use that mesh as a bounding object, and transfer its vertex colours into the particles via the pointCloudToCurves trick explained earlier.


Quite a few connections and stuff going on in this setup. I found it best to concentrate on the first part: make sure I ended up with a mesh that had the texture converted to vertex colours. Once I knew that was working, I concentrated on transferring those colours into the particles.

In the setup both the textureToArray and arrayToPointColor take dual inputs: geometry and a texture for the first, geometry and point-colour info for the second. I was curious if this was really needed, and found I could disconnect geometry from one or the other. However, as soon as I changed the divisions on the input plane, it'd break the effect. This is because the arrays are defined from the vertex-id's. If either node doesn't keep its array length matching its inputs, it stops working.

This also links to the quality of the texture transfer. Because its driven by vertex colour, if you want more accurate colours, you need more polys in your input mesh. Note that in a production shot bumping this number up (I used 200x200 from memory) massively reduces performance.

Also, Lego spacemen are cool.

Attribute transfer and particles

Just to be clear, remember to insert a pointcloudToCurve node!

Replicating one of the demos, I wanted a bounding object to affect particle radius as particles travel through it. The main trick to remember is that an attribute transfer needs 2 things: a bounding object, and geometry to transfer attributes onto (the clue is in the name I guess). With particles, you achieve this by using the pointCloudToCurve node. If you don't, nothing gets transferred.

At particles.png

TextureToArray and Accurate Sampling

If you're driving soup nodes with textures, and it doesn't appear to be reacting to your changes, try enabling 'Accurate Sampling' on the textureToArray node. I've found several times that more complicated texture setups won't work as I'd expect (eg modifying the colour gain, or using layered texture nodes, or image sequences etc), and changing that toggle fixes it. And despite what the name implies, it doesn't seem to affect performance as much as I'd expect.

Transfer fluid data to particles

Fluid to particles 4x.gif Fluid to particles network.gif

Download scene: File:fluid_attrs_to_particles.ma

The fluidAttributeToArray node does what it says: it lets you transfer fluid attributes into arrays that you can then push into whatever you want. I think I have this wired up correctly; the minor trick was to wire the fluid in via 2 things, the fluidAttributeToArray and a bounding object.

In this example I'm transferring the fluid density to both rgbPP and radiusPP. Particles are being emitted from an omni, but particles that don't exist where there's fluid density get coloured black and scaled to 0.

Soup vs deformers

At this stage it's worth a little mention of how soup nodes work compared to maya deformers, as it can be a little confusing to start with. Interestingly, after some digging I found soup makes more sense than maya deformers. Anyway.

To draw analogies to nuke (I hope you know nuke), nearly all nuke nodes can be viewed and have output, but in practice you explicitly tell nuke what you want to see; either by connecting it to the viewer node, or by appending a write node. Houdini is kind of similar: there's an implicit final output node which is the end of your sop network, so it's clear that that's the final result.

Maya has no such concept. Any nodes both upstream or downstream could potentially be making visible shapes in the scene, and conversely, other nodes might never make visible shapes themselves. As such, when using soup you need to make sure you hide all upstream shapes that might make things confusing, and often at the end, you need to append a mesh, which will take the result of the (mostly) non visible soup nodes, and give you something to look at.

At its most basic, you'll usually have a soup graph like this:

inMesh(hidden by user) -> soupnode1(invisible) -> soupnode2(invisible) -> soupnode3(invisible) ->  outMesh(visible)

The outMesh can be any shape type you need, all you do is take its input type (inMesh for polys, create for nurbs), and connect the output of the soup graph.

Compare this behaviour to deformers. Maya deformers by default appear to be magically working in-place, so you have your mesh, assign a sine deformer, and you now have that mesh with a sine deformer applied. If you look under the hood however, maya is doing quite a few things:

  • A copy of the mesh is made named yourMeshOrig, and it's tagged as an intermediate object to hide it from both the outliner and viewport
  • That feeds into a group node to define an array of components
  • That feeds into a deformer set
  • That feeds into your deformer
  • The deformer might also have a deformer-handle for you to manipulate in the viewport
  • That feeds into your mesh
  • Probably several other nodes are created depending on what type of deformer you've created.

As such, Soup nodes appear to be more labour intensive to start with, but the data flow is much clearer. When you want soup nodes to start affecting deformers, it can get a little messy. Best to leave that stuff until later when you're more comfortable with soup.

Finally, to make things even more confusing, some soup nodes are deformers, like the peak node and the morph node. Confused? Good. :)

Arrays vs components vs dynArrays etc

Last bit of theory before more examples. Soup has to work within the maya API, which represents components in various ways.

Components are sub-objects like verts, faces, edges, CVs. If you want to drive deformer membership, or a delete component node, or things that feed back into the maya domain, you're talking components.

Arrays are a soup specific thing. They're like per-particle attributes, but soupy.

DynArrays are maya dynamics, per-particle arrays. If you're fiddling with stuff to feed back into a particle system or nucleus, you'll be using one of these at the end of your soup network.

Multis are arrays of components used by deformers. If you want to limit the effect of a deformer with soup, one of the last steps is to use an arrayToMulti to feed to a deformer weight list.

Where it gets confusing is starting with one thing, converting it to something else, then converting it back again.

Eg, I wanted to drive a copier node, but only make copies onto a few vertices, not the entire mesh. The copier input expects arrays. However to do selections you use a group, and groups use components. What to do?

A pointAttributeToArray node lets you take components and convert it to an array. So in this case the flow was

( mesh + boundingObject ) -> group -> pointAttributeToArray -> copier

When you're starting out with soup it's not easy to know what type of data you'll be manipulating for a certain effect; your best option is to try and find an example in the huge examples pack that comes with soup, or these examples here, or Jimmy Gunawan's blog, or the soup forum.

Right, back to examples!

Driving deformer weight with bounding object

Deformer weight.gif

Download scene: File:weight_deformer.mb

Network deformer weight.gif

This is covered in several places on the soup forum, the soup examples, and Jimmy's blog, but I figured it can't hurt to have one more. As I warned earlier, this gets a little more complex as we're dealing with maya's roundabout way of using deformers, but it's not too bad.

  1. Create a polysphere
  2. Add a sine deformer, make it wobble
  3. Create a bounding object node, intersect it to one side of the sphere.
  4. Create an attribute transfer node, set its mode to weight
  5. Create an arrayToMulti node
  6. Connect boundingObject.outData to attributeTransfer.boundingObjects
  7. Connect sphereShapeOrig.outMesh to attributeTransfer.inputGeometry. This is the undeformed copy of the sphere created when you made the sine deformer, we'll intersect the bounding object against this shape rather than the output shape
  8. Connect attributeTransfer.outWeightPP to arrayToMulti.inWeightPP
  9. Connect arrayToMulti.outWeightList[0] to sine1.weightList[0]. The deformer should now be limited to where the bounding object intersects the sphere.


I struggled for ages to display the weights; turns out it's pretty simple.

  1. Select sphereShapeOrig
  2. Create a displayComponents node. You should get blue points in the viewport.
  3. Connect attributeTransfer1.outWeightPP to displayComponentsShape.inDataPP
  4. On the displayComponents node, turn off 'Constant Color'. You should now see the weights.


Rayproject

Is good fun, and way faster than it has any right to be. Contain a mesh in another mesh, and you get a slider to project one mesh to the other. You get a few options to control what drives the projection vector, but it's very easy to get quick morph effects without tedious hand modelling. The example shows that you can drive the weight into negative values to get an inverse projection; projecting a sphere to a cube with a negative weight makes what looks like a spherical harmonics graph, kinda nifty.

Timeoffset

Works like a nuke timeoffset node, but on mesh data. Feed a mesh in, pipe it back out to another mesh. The time attribute on the node itself isn't automatically connected to time1 (what'd be the point?), so you have to either keyframe it, or if you just want a straight offset, use an expression like '=frame+3'.

A nice trick is to connect multiple meshes and timeoffsets together, then have each timeoffset be shifted by an ever increasing amount for motion graphicsy things. It can get tedious to setup more than 5, beyond that point you might want to look at the copier node instead, which has the ability to timeoffset each copy, and drive it with point attributes.

Copier

Copier scale.gif Copier network.gif

Download scene: File:copier.ma

You might have heard of houdini's copy/stamp operation, this is the same thing. It's like instancing geometry onto particles with one major difference; the final output is a single mesh so you can apply deformers and whatnot onto the end result. This also means the end result can make a very heavy mesh very quickly, so it's a trade-off; more control, but heavier setup.

In this example I've copied a bending cone onto each point in a poly grid, offset their time by their point $ID, and driven their scale with a bounding object.

I was just about to type 'the setup is self explanatory', but that's cheating and lazy, so here are the steps I used. There's probably tidier and quicker ways to do this, but this worked for me at 1:30am. :)

  1. Create a grid, name it inMesh, create a duplicate, name it outMesh, hide inMesh
  2. Create a poly cone, apply a bend deformer, keyframe it swaying back and forth, set its post curve type to cycle, hide the cone
  3. Create a meshToArrays node, connect inMesh.outMesh >> meshToArrays.inMesh. This will be used to create positions for the copies.
  4. Create a copier node
  5. Connect polyConeShape.outMesh >> copier.inputMesh[0]
  6. Connect meshToArrays.positionArray >> copier.posArray
  7. Connect copier.outputMesh >> outMesh.inMesh. You should now see the cone copied over the grid.

At this stage, adjust the cone height/radius and grid size/divisions so it's easier to see what's going on. Also, the copier softens all the vertex normals by default; if you don't want this, toggle 'toggle soft/hard' on the copier node.
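What the copier is doing can be sketched in a few lines (a hypothetical Python stand-in for illustration; the real node works on Maya mesh data, not point lists):

```python
# The copier in a nutshell: for every target position, stamp a translated
# copy of the template points, and merge everything into ONE output list
# (one mesh) rather than keeping separate instances.
def copy_to_points(template_pts, positions):
    out = []
    for px, py, pz in positions:
        for tx, ty, tz in template_pts:
            out.append((tx + px, ty + py, tz + pz))
    return out

cone = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # toy two-point template
grid = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]   # toy position array
print(len(copy_to_points(cone, grid)))      # 4 points, all in one "mesh"
```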

Controlling time offset per copy

  1. Create a point node, enable the weight section, and set the expression to "$W = $ID;". This will copy the vertex IDs to a point weight attribute to drive a time offset.
  2. Connect inMesh.outMesh >> point.inGeometry
  3. Connect point.outWeightPP >> copier.timeShift
  4. Enable "Toggle time shift" on the copier node. Now each copy is shifted in time by its $ID (in frames).
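The per-copy shift above can be sketched like so (a hypothetical 1D stand-in, not SOuP itself; whether a copy leads or lags depends on the node's sign convention, here I assume each copy lags):

```python
# One animated value per frame, standing in for the bending cone.
template = {f: float(f) for f in range(0, 20)}

# With $W = $ID driving timeShift, copy i evaluates the template at
# (frame - i), so each copy lags one frame more than the last.
def copy_value(frame, copy_id):
    return template[frame - copy_id]

print([copy_value(10, i) for i in range(4)])  # [10.0, 9.0, 8.0, 7.0]
```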

Controlling scale of copies using a bounding object

  1. Create a bounding object and attribute transfer, connect them together
  2. Connect the inMesh to the attribute transfer
  3. Enable color mode on the attribute transfer. We'll drive the scale of the copies with the bounding object, using colour.
  4. Set the colour ramp on the bounding object to be white on the left, black on the right
  5. Create a rgbaToColorAndAlpha node. As in previous examples, the attribute transfer outputs RGBA but we only need RGB, hence this conversion node.
  6. Connect attributeTransfer.outRgbaPP >> rgbaToColorAndAlpha.inRgbaPP
  7. Connect rgbaToColorAndAlpha.outRgbPP >> copier.scaleArray.


This works, but we can't see any copies outside of the bounding object. This is because they have their scale set to 0, making them invisible. Let's fix this.
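The zero-scale problem falls straight out of the falloff. A hypothetical sketch of a spherical bounding object (the real gizmo's ramps can shape this curve however you like):

```python
import math

# A radial falloff: 1 at the gizmo centre, 0 at (and beyond) its border.
# Used directly as the per-copy scale, anything outside the gizmo gets
# scale 0 and disappears.
def falloff(point, center, radius):
    d = math.dist(point, center)
    return max(0.0, 1.0 - d / radius)

center, radius = (0.0, 0.0, 0.0), 2.0
print(falloff((0.0, 0.0, 0.0), center, radius))  # 1.0 at the centre
print(falloff((3.0, 0.0, 0.0), center, radius))  # 0.0 outside -> invisible copy
```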

Fix copier scale outside of a bounding object

  1. Create another bounding object, attach it to the attribute transfer node
  2. Scale it so that it covers the grid, you should see all the cones reappear
  3. You can either set its colour value lower so that the minimum scale is, say, 0.2, or go the other way: leave it at 1, and on the other bounding object set the first point of its ramp quite high, say 4. That's what I did in my example.
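The idea behind the second bounding object, sketched hypothetically (I'm assuming the two contributions combine to something like the larger of the two values; the exact combine behaviour is up to the attribute transfer):

```python
# A big bounding object covering the whole grid contributes a base weight,
# so copies outside the small gizmo keep a minimum scale instead of
# collapsing to 0.
def combined_scale(small_w, base_w=0.2):
    return max(small_w, base_w)

print(combined_scale(0.0))  # outside the small gizmo -> 0.2, still visible
print(combined_scale(1.0))  # inside -> full scale
```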

Add random rotation to the copier

Bonus example scene! File:copier_rand_rotation.ma

This uses a feature of the meshToArrays node to create random values, which should be more efficient than using rand() in a point node to do the same thing.

  1. Create another meshToArrays node
  2. Connect inMesh.outMesh >> meshToArrays2.inputMesh
  3. On meshToArrays2, set type to 'random', and set count to as many copies as you have (or just type in a big number, I chose 400)
  4. Set the min random vector to (0,0,0) and the max random vector to (0,360,0), ie a random rotation around the Y axis
  5. Connect meshToArrays2.randomVector >> copier.rotArray
  6. On the copier, set orient to 'orient by Rot'. You might need to nudge the timeline to see the copies update.
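The random mode boils down to this (a hypothetical sketch, not the node's actual implementation): one random vector per copy, each component drawn between the min and max vectors.

```python
import random

# With min (0,0,0) and max (0,360,0), the x and z components are pinned
# to 0 and only the y rotation varies, i.e. a random spin around Y.
def random_rot_array(count, vmin=(0, 0, 0), vmax=(0, 360, 0), seed=1):
    rng = random.Random(seed)
    return [tuple(rng.uniform(a, b) for a, b in zip(vmin, vmax))
            for _ in range(count)]

rots = random_rot_array(4)
print(all(r[0] == 0 and r[2] == 0 and 0 <= r[1] <= 360 for r in rots))
```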

Wave deformer

Wave soup.gif

Download scene: File:wave_point_and_peak.mb

I've become mildly obsessed with these wave deformers shown on vimeo here and here. After I asked about it on the soup forums, Mr.Soup himself, Peter Shipkov, created a great example scene using a softmod.

Meanwhile my friend and co-worker Josh Nunn went and learned soup over the weekend, and came up with a pure soup solution. This is a variation on his method.

The creator of the SeExprMesh version explains his setup on his blog (it's in Japanese but google translate does a pretty good job); it seems he and Josh came up with basically the same technique. The SeExprMesh version is a lot faster, but still, seeing it in pure soup, controlled by one node, is kinda nice.

My version linked above isn't as good as any of the others; it's hard-coded to travel in one direction. I'll keep playing with it, see if I can make it any better once I learn some vector maths...

Rather than a step-by-step, I'll just explain what the point node is doing.

In the start block:

  1. Use xform to get the translation of the bounding object, store it as $origin
  2. Use xform to get rotate.y of the bounding object, store it as $angle
  3. Create vector $rotationAxis as <<90,0,0>>, which hard-codes the wave to roll around the x-axis


For the position block, here's the important points:

  • We want to rotate points by $angle around $rotationAxis. There's an existing mel function called 'rot' that does exactly this.
  • We also want the rotation to be scaled by the $WEIGHT given by the bounding object. A simple way to do this would be to multiply $angle by $WEIGHT, but this looks a bit too soft and fat.
  • To get a nice crisp wave we need to apply a curve to $WEIGHT so it's much steeper. The SeExprMesh method applies an exponential curve; we can do the same using 'pow($WEIGHT, exp)', where 'exp' is whatever value we want to raise it to. I found 6 looked good.
  • The 'rot' command assumes you're rotating points around an axis centered at the origin, but we need that rotation centered on the bounding object. To correct this, we move the point as if the axis really were at the origin ( $P - $origin), rotate it, then put it back to its original position ( + $origin).


With all that in mind, here's the position block:

// copy the point values into a temporary vector
vector $P = <<$X,$Y,$Z>>;
 
// multiply the angle by the exponentially rescaled weight
float $weightedAngle = $angle*pow($WEIGHT,6);
 
// rotate!
vector $FINAL = rot($P-$origin, $rotationAxis, $weightedAngle) + $origin;
 
// copy the result back into the points
$X=$FINAL.x;
$Y=$FINAL.y;
$Z=$FINAL.z;
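For reference, here's the same position-block math as a standalone Python sketch. This is an illustrative re-implementation of MEL's rot() via Rodrigues' rotation formula, assuming angles in radians; it's not SOuP itself:

```python
import math

def rot(p, axis, angle):
    # Rodrigues' rotation of point p about a normalised axis through the origin:
    # v' = v*cos(a) + (k x v)*sin(a) + k*(k.v)*(1 - cos(a))
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    ax, ay, az = ax / n, ay / n, az / n
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    dot = ax * x + ay * y + az * z
    return (x * c + (ay * z - az * y) * s + ax * dot * (1 - c),
            y * c + (az * x - ax * z) * s + ay * dot * (1 - c),
            z * c + (ax * y - ay * x) * s + az * dot * (1 - c))

def wave_point(p, origin, axis, angle, weight):
    # angle scaled by pow(weight, 6) to sharpen the curl, then the
    # move-rotate-move-back trick to centre the rotation on the gizmo
    weighted = angle * weight ** 6
    local = tuple(pi - oi for pi, oi in zip(p, origin))
    rotated = rot(local, axis, weighted)
    return tuple(ri + oi for ri, oi in zip(rotated, origin))

# a fully-weighted point curls 90 degrees around the x axis
print(wave_point((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                 math.pi / 2, 1.0))
```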


Finally a peak deformer is put after the point node to help sharpen the tip of the wave even more.

To make things really cool, you can grab Nico Rehberg's hotOceanDeformer, apply this as the first deformer on a mesh, and stick this wave setup after it. Here's a playblast of it in action:

File:wave_with_hotOceanDeformer.gif

Next steps

You read this far? Wow. I'd say next steps would be to download the massive amount of example scenes from the soup website, search the soup forums, get playing. If there's still stuff that is a mystery let me know (the soup forums would probably be the best way).

At some point I might try and tackle the pyexpressions node and maybe some fluids stuff, but in terms of a soup-101 guide, I think this page covers that. If you feel differently, get in touch. :)

Also, learn houdini, it's worth the effort. Really.