HoudiniKinefx
MattEstela (talk | contribs)
Revision as of 16:01, 16 March 2021
Kinefx
Kinefx covers a lot of ground, but there's a pleasing DNA share with a lot of established houdini workflows. Basically if you've used packed prims and wrangles before, you get the core of kinefx.
Rig vs deformation
Or the alt title "Why doesn't my character move when I load it?"
Unlike most other 3d apps, kinefx defines a clear split between rigging and deformation. What does that mean? If you grab an fbx from Mixamo and load it into any other app, you'll see the character moving.
In Houdini, you'll see this:
60fps gif of complex dance. Damnit Paul, move!
If you connect a null to the last output you'll see the bones dancing away, but to actually see Paul shake his booty, you need to connect a bone deform sop, which, as implied, deforms the skin using the skinweight attributes on the points and the bone animation.
Oh wait a sec, there's no anim defined in this. Silly me. If I animate the bones with say a rig pose (similar to a transform sop, but designed to work with kinefx joints and respects parent/child relationships), he'll move:
Or better, swap for some mocap I saved in another fbx:
'Yes yes, but why all this hassle?' Well quite often you don't need to see the skin deformation. Eg if exporting to a game engine, you save the skin weights and the joint animation to fbx, but you don't need to pay the cost of the expensive skin deformation. The updated fbx export rop has you covered:
Or if you're gonna do a crowd, then you export the skin+rig+animation through a different agent process. The separation of deformation from rig seems needless at first, but it's actually quite powerful, and opens the door to whole new ways of processing and applying animation procedurally.
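Conceptually, the bone deform step above is linear blend skinning. Here's a rough point wrangle sketch of the idea; the `i[]@joints` and `f[]@weights` arrays and the per-joint `transform` matrix attribute are assumptions for illustration, not the real boneCapture layout that bonedeform actually reads:

```vex
// Point wrangle sketch of linear blend skinning (not the real bonedeform).
// Input 0: rest skin, input 1: rest skeleton, input 2: posed skeleton.
// i[]@joints / f[]@weights are hypothetical per-point arrays.
vector skinned = {0, 0, 0};
foreach (int i; int joint; i[]@joints) {
    matrix rest = point(1, "transform", joint);  // hypothetical matrix attrib
    matrix anim = point(2, "transform", joint);
    // move the point into the joint's rest space, back out via the posed joint
    skinned += f[]@weights[i] * (@P * invert(rest) * anim);
}
@P = skinned;
```

This is why the skin weights and joint animation are all you need to ship to a game engine; the deform itself is just this weighted sum, recomputed wherever the asset lands.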
Motion clips
Use Houdini for long enough and you get very attuned to time dependency; do your nodes need to recalculate every frame, or can they just cook once? Knowing how to control this becomes an important trick for efficient setups.
But here we are on the Kinefx wiki page, surely 'kine' implies motion, there's nothing to be done right?
WRONG!
Kinefx steals a trick from chops, and lets you freeze all the frames of an animation into a static moment, called a motion clip.
Now that this is done, we can do whatever silly modelling operations we want, treating it like a big bunch of curves (cos it's a big bunch of curves). When done, we can convert it back to animation with a motion clip evaluate.
What's cool is that you can do quite drastic modelling operations to the motionclip. Delete frames. Delete entire limbs. Apply smooth operations. Swap from each-frame-as-a-skeleton to each-bone-as-a-motion-path, do stuff, swap back. Super powerful stuff. Chops always had the promise of 'treat time as a modelling operation', but never really delivered; this does. It's also worth pointing out it's not limited to kinefx rigs, lots of potential here!
Some ideas to think about:
- Transporting animation in hip files has always been tricky unless you take bgeo sequences or alembics along with the hip. Now you can convert to a motionclip, and stash it in the hip.
- Fix ground intersections with a modelling operation; convert stuff to a motion clip, ray stuff to the ground, convert back
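The ground fix can be as blunt as a single point wrangle between the motionclip and the motionclip evaluate. This is a sketch assuming the ground is the y=0 plane; a ray sop against real ground geo would be the proper version:

```vex
// Point wrangle on a motionclip: every frame of the clip is just
// points in one static geo, so one wrangle clamps the whole
// animation at once. Assumes ground is the y=0 plane.
@P.y = max(@P.y, 0.0);
```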
Localtransform and kinefx wrangle
For me one of the most exciting things is the core ability to treat curves as joint chains. I've experimented with doing this in the past (see CurveUnrollTutorial ), but it's quite a lot of work. Now all that stuff comes for free!
When you get into kinefx, the sections of a line are treated as joint chains. Each point gets a @localtransform matrix4 attribute. If you rotate it, it is treated as an FK rotation in a joint chain, ie, rotate the elbow, and you'll have the wrist, hand, fingers all come along too in an FK style.
This means if you animate all the rotations of all the joints, you get easy wiggly waggly setups. So:
- Make a line with 10 segments
- Append a skeleton sop which will create @localtransform for you
- Append a rig wrangle
- Try something like this:
rotate(4@localtransform, @Time, {1,0,0});
When you scrub the timeline, you'll see the line curl up as each 'joint' is rotated over time. What if you increase the amount of rotation from the start to the end of the curve?
rotate(4@localtransform, @Time*@ptnum*0.1, {1,0,0});
or drive with a sine wave and tweak the values a bit?
rotate(4@localtransform, .2*sin(-@Time*3+@ptnum*0.2), {1,0,0});
or just be really silly, do this, and copy some lines to a sphere:
rotate(4@localtransform, .4*sin(rand(@primnum)-@Time*2+@ptnum*.05), vector(curlnoise(@P+rand(@primnum)+@Time*0.2)));
Download hip: File:kinefx_starfish.hip
FBIK
Download hip: File:kinefx_fbik_hips.hip
Not sure if this is the right way to use it, but it's fun.
Bring in a rig; here I've loaded mocapbiped3, chosen a walk, and imported it with the scenecharacterimport node. Split off the feet and hips, move the hips, feed those to the second input of the fbik sop, and the original rig to the first input.
FBIK will do its best to push the rig to match the positions of the bones you specify.
It's not perfect; things like knees will wobble everywhere, but like I said, it's fun.
The two things everyone notices when first doing full body ik are that the ankles don't lock, and the hips often don't track well with the target hips. To fix:
- Add a configure multiparm, select the ankles, give them a higher weighting, say 10.
- The default damping of 0.5 is often too high for the hips, so the system is trying to blur/soften the overall solve, which means the hips don't hit the pose you want. Try lowering damping until the hips track better.
Also for you young folk, Shynola did a fantastic take on this vector skeleton style for Beck 17 years ago! SEVENTEEN YEARS, OH GOD I'M OLD: https://www.youtube.com/watch?v=RIrG6xBW5Wk
Proximity skinning
Download hip: File:kinefx_skin_simple.hip
The launch docs gloss over this a little bit, but you can work it out by reverse engineering some of the later examples.
Captureproximity sop is what you want. Geo to the left, rig to the right, feed that and your animated skeleton to a bone deform.
Play with the weights tab on the capture proximity to boost the number of influences and smooth out the results.
If you want to refine the weights, use a capturelayerpaint.
Biharmonic skinning
Download hip: File:kinefx_biharmonic_skin.hip
Several people asked 'Why did you do proximity skinning? Why not biharmonic?' The answer was 'I didn't know how'.
Luckily several nice people shared their setups, so here it is, and the results are impressive. No wonder everyone thought I was silly.
I'm trying to commit these steps to memory:
- Make your skeleton and a reasonably high res skin
- BoneCaptureLine to define capture regions for the joints
- Tetembed, skin to first input, capturelines to second; this creates the tet mesh and weightings
- BoneCaptureBiharmonic, skin to first input, tetembed to second, this transfers the weight to the skin geo
- BoneDeform, previous node to first, skeleton to second, skeleton via rigpose to third
Manually drawing a skeleton
The skeleton above was drawn directly in the skeleton sop. It has lots of options to help you draw joints, mirror them, constrain to planes or inside geo etc; details are in the SideFX docs.
Take 2 mins to go over the shortcuts and various modes, it's pretty good fun. In the mp4 I'm doing the following:
- Mostly in 'freehand' mode to click-click-click the core hips-to-head joint chain
- Hit enter to swap to edit mode, tweak on, child compensate on, drag on joints to fix placement
- Select a joint, hit enter again to draw arm, leg, tail
- r.click, split to create elbow
- select shoulder/elbow/wrist, r.click, mirror and duplicate to create opposite side.
Rig from labs straight skeleton
Download hip: File:kinefx_straight_skeleton.hip
Takes a bit of cleanup, but it works. The key thing is for the curves to have their orientation correct, ie if you were to follow the vertex ordering, the joints must flow like joints. No child joints pointing back up to the root or backwards joints, most of the errors I had were due to this.
A fix here after chatting with Henry Dean is to select the 'hips', use edge transport to calculate distance to the hips, sort by that distance attribute, and polypath to force a rebuild of the vertex order based on point/prim order.
Rig doctor to help debug curve direction
When you get warnings of cycle errors, that implies some of your curves are backwards. Append a rig doctor, turn on 'show parent to child', and you'll see a little arrowhead to show how the curves are flowing. Red is bad. In the gif the red arrows appear if I take the resampled straight skeleton. The good one is using the edge transport, sort, polypath trick outlined above.
Procedural weights
Download hip: File:kinefx_procedural_weights.hip
Skinning sounds like an artist thing yeah? All that falloff stuff and painting weights? Ew. You left all that behind when you joined the houdini party.
If you know ahead of time exactly what your weights are, you can 'fake' what the capture proximity sop does through vex. It's a little tricky, Edward Lam from SideFx gave some great pointers here.
The weight attributes made by capture proximity are awkward to manipulate with vex, so there's 2 helper sops, capture pack and capture unpack to convert stuff into vex friendly attributes and back again.
This setup fakes what the capture unpack node does, creating arrays of the joints and their weights per point. It also creates a tricksy detail attribute used by some skinning related nodes.
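As a sketch of the idea, a point wrangle can build those per-point arrays directly. The attribute names below are assumptions for illustration; match them to whatever your capture pack sop expects in your scene:

```vex
// Point wrangle sketch: hard-bind each skin point to its nearest joint.
// The array attribute names are hypothetical, to be repacked by a
// capture pack sop downstream.
int joint = nearpoint(1, @P);      // nearest joint on the skeleton input
i[]@capture_index = array(joint);  // joints influencing this point
f[]@capture_weight = array(1.0);   // their weights, should sum to 1
```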
There's 2 setups in here, the first takes some circles, makes the same number of joints, and procedurally weights the skinned circles to the joints and wiggles them.
The second setup could potentially be more interesting (but ultimately does what the labs rigid bodies converter does if you need a full solution). It takes packed animation, creates a joint for each, transfers the packed animation to joint animation, and skins the unpacked shapes back to the joints. It's slightly off, you can see the cubes are rotated 45 degrees compared to the packed source, something I'll need to work on.
Noël Froger noticed the misalignment that happens in the conversion, and kindly offered a fix. He adds:
In the RigDoctor sop enable 'Convert Instance Attribute' as well as 'Initialise Transform'. And before that you have to use the packed intrinsic transform to generate orient.
Thanks Noel!
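A sketch of that orient step in a point wrangle, run on the packed prims before converting them to joints. The `transform` intrinsic is real; the rest is my guess at the setup:

```vex
// Point wrangle on packed prims: pull each prim's intrinsic 3x3
// transform and store it as an orient quaternion for the rig doctor.
// Assumes one packed prim per point, so @ptnum doubles as the prim number.
matrix3 xform = primintrinsic(0, "transform", @ptnum);
p@orient = quaternion(xform);
```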
Kinefx to agents to lops to arkit
Download hip: File:kinefx_to_agents.hip
First, a series of yay/boo points to build tension:
- ARKit on iOS uses USDZ.
- Houdini can export USDZ!
- ARKit doesn't support arbitrary shape animation.
- ARKit does support skeletal animation!
- USDZ doesn't support arbitrary standalone skeletal animation.
- USDZ does support USDSkel, which was originally designed to handle crowds!
- Houdini can export crowd agents to USDSkel via Lops!!
Exciting right?
This hip takes the previous animation, defines an agent from the static rig joints, imports the skinned geo to the agent, creates a motion clip from the rig animation, attaches that to the agent, and exports that to lops.
Like the previous example there's a few rough edges I want to sort out, but it's almost there. Again thanks to Edward Lam at SideFx for helping with some of the tricky details. In particular the line earlier about 'tricksy detail attributes'? This is where it's used; the agent sop requires that attribute.
Rig vop
So this is a new way of working, only just got my head around it after watching a few videos.
Where kinefx is concerned, forget what you know about vops. Parallelism, magic of all-things-at-once, stop it. The default mode of rig vops is closer to maya rigging with the network editor, or hypershade if you're old like me... remember hypershade?
The biggest clue here is that it's set to detail mode by default. Why? Well in a 'proper' character rig context, you're doing specific things to specific parts of a character. Eg:
- Make a 2 bone ik solver.
- Make a reverse foot setup.
- Make a clavicle/shoulder correction rig.
Rig Vops are designed for these kinds of operations; joint specific stuff that's more finicky than sops, but doesn't involve running on every joint in parallel.
Easier to explain with an example.
Floating parent constraint
Download hip: File:Kinefx rig vop parent.hip
Fancier rigs in Maya or Houdini might have bones parented to other bones, but not via a straight parent/child bone link. Maybe it's a null in between, or a group, or a parent constraint. All fine if you're dealing with obj style transforms, but how can you replicate this in kinefx if ultimately it's all about points and lines and how they're connected?
In this example I've parent constrained some bunny ears to Paul. Animating Paul is not interesting, nor is modelling the ears or their lag animation (though I quite like it); what's interesting is the workflow to set up the constraint animation.
First, let's get the rig vop and viewport ready:
- Make a rig vop
- Connect the ears to the first input, Paul's skeleton to the second
- Set the display flag to the vop, hit enter in the viewport to activate its state (the joints should get dots), dive inside.
Now imagine you're back in Maya, about to make a ribbon ik or something. You'd drag joints from the viewport or outliner into the network, add some other nodes, wire it all up right? It's the same here!
- Drag the head joint into the vop network. Yes really. It will make you a 'get_transform' vop for the head.
- Click the root of one bunny ear. It splits in 2. Drag the upper dot into the network, above the head transform vop. This will make a 'get transform' for the ear.
- Click the root of the ear again, it splits, this time choose the lower dot. Drag it in and put it over on the right, this will create a 'set transform' for the ear.
- Make a 'parent constraint' vop, put it in the middle.
- Setup the inputs; Connect ear xform to xform, head xform to newparent
- Setup the outputs; outxform to the xform of the setTransform on the right, ptnum from the ear gettransform to pt of the settransform.
- Click 'update offset' to set the offset of the ear to the head
Done! See? Feels very 'I'm rigging in Maya'. Repeat for the other ear, hey presto, parent constraints.
Why did selecting and dragging the head just work, while the ear gave an option? Well, the head is from the second input. If you know vex and vops, that means it's read only, so it'll just give you the option to read its transform. The ear is connected to the first input, meaning you could set or get attributes, hence you get the choice.
Quick gif summary:
Misuse of a rig vop
So if you want to manipulate rigs in a more traditional vex/vop way, make sure you set the mode to 'point' at the top of the vop network; it's set to detail by default.
Also, when you're inside, use the getpointtransform and setpointtransform vop nodes. They're convenience vops to help you get and set what you need quickly instead of needing a lot of bind and bind export vops. Also note they have a passthrough for the point number, to help make the networks a little tidier. Nice one kinefx team.