HoudiniCrowd
Revision as of 20:49, 9 November 2020
- 1 Intro
- 2 Agent setup
- 3 Locomotion
- 4 Locomotion over bumpy ground
- 5 From a single agent to a crowd
- 6 Add foot locking to terrain adaptation
- 7 Bonus feature: Agents aim along @v
- 8 Bonus bonus: mixamo agents with variable clip speeds and offsets
- 9 Crowd simulation
- 10 Add variation and forces to crowd simulation
- 11 Crowd sim to run around a circle
- 12 Stop agent intersections
- 13 Ragdolls
- 14 Ragdolls and simulation
- 15 Make clips loop cleanly
- 16 On disk agent
- 17 Bonus lazy tip: use the default mocapbiped names
- 18 Multiple clips
- 19 Smoothing on disk clip loops
- 20 Clip transitions
- 21 Locomotion with rotation
- 22 Locomotion from cycles locked to the origin
- 23 Locomotion from cycles locked to the origin, a better way
- 24 Kinefx to crowds
- 25 Tips
- 26 Todo
Intro

Crowds covers the most ground of any Houdini system:
- (Agent) Packed primitives
- bones+rigging (but not much)
- Packed rigid body dynamics
Considering how complex it is, it's amazing SideFX could make a one-click shelf preset that mostly works. But as is my style, I wanted to understand these at a more fundamental level, and I assume if you're reading this, you do too. You can dive in with the shelves, but I'd suggest having a reasonable understanding of the above topics before getting into crowds.
Agent setup

Download scene: File:agent_setup.hip
The building block of crowds is a packed agent. Similar to a packed primitive, it's a self-contained node that can be treated as a single point, but it includes geo, a skeleton, animation clips, animation metadata, and a few other things.
For performance when working with big crowds, it's assumed all those sub-elements are baked to disk. For initial simple testing we'll keep it all in memory and 'live', and swap to an on-disk workflow later.
Locomotion

Here's a simple example to make a walking agent:
- Create a mocapbiped3, set animation type to 'walks and turns', rename the obj node 'walk'. Disable in-place animation, make note of the number of frames (it's 25 frames), turn off its display flag.
- Make a new geo container, dive inside, make an agent sop. Set the input type to 'Character Rig', set the Character Rig parameter to /obj/walk. Set clip name and current clip to 'walk'.
- Locomotion section, enable 'convert to in place animation', set locomotion node to point to the hips of the mocapbiped3 (so the path will be /obj/walk/Hips)
- Set frame range to 'use specific', the start and end to be 1 and 25, enable 'apply clip locomotion', reload button to bake.
If you hit play now, you'll see the biped happily walk away. Look in the geo spreadsheet and note that it's a single point (ie, it's a packed prim); if you swap to the primitives tab, look at the intrinsics and find agentclipnames, you'll see it has a clip named 'walk'.
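You can also read that metadata with VEX rather than the spreadsheet. A small prim wrangle sketch; agentclipnames and agentcliplength are real VEX functions, the attributes I write them into are just for inspection:

```vex
// Prim wrangle on the agent: pull clip metadata out of the packed prim.
string clips[] = agentclipnames(0, @primnum);     // e.g. { "walk" }
s[]@clips = clips;                                // stash for the spreadsheet
f@walklen = agentcliplength(0, @primnum, "walk"); // clip length in seconds
```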
What was all that stuff about locomotion and in place animation? A few things to expand on there.
The mocap clips provided with Houdini aren't locked to the origin; they move in worldspace. As a convenience, the mocapbiped HDAs cancel out this worldspace motion by default.
Turning off 'in place animation' restores the worldspace motion. Why do that? The agent node expects clips to have worldspace motion. When the locomotion features are enabled, the worldspace motion is removed from the clip, and stored in extra locomotion channels within the agent.
If you turn off 'apply clip locomotion' at the top of the agent sop, you can see that the agent does indeed now have its forward motion cancelled, and it's stuck at the origin. Enabling that toggle is now applying forward translation from the locomotion channels, and it keeps the feet locked correctly.
You can set a constant speed if you want (that's an option when you get into crowd simulation later), but motion capture clips often slightly speed up and slow down through the cycle, causing feet to slip. When you get into clips with substantial speed changes like 'walk_startle' (select that walk from the mocapbiped, set the new frame range, rebake), that really falls apart if you give it a constant speed.
Locomotion over bumpy ground
Download scene: File:agent_terrain_adapt_simple.hip
- Create a noisy heightfield for our agent to walk on. Heightfield node, size 5x20, division mode 'by axis', grid samples 50. Append a heightfield noise, amplitude 10, element size 20.
- Append an agent terrain adaptation to the agent, enable terrain adaptation, disable simulation. Connect the heightfield to the second input; the agent will track the ground (the feet will slip a little, we'll fix this later)
Again, the forward speed is being set by the locomotion channels. Now how do we go from one agent to a crowd? Pretty easily actually:
From a single agent to a crowd
- insert a crowdsource before the terrain adaptation, give it the same heightfield for the second input
- display the final node, hit play, now have a crowd walking. hooray!
- try swapping from random layout to formation, and try the randomisation parameters on the second tab. Some work, some don't, will fix this later.
If you peek inside the crowd source hda it looks a little complicated, but all it's really doing is scattering agents over a ground and setting a few attributes. Easy.
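To get a feel for that, here's the kind of point wrangle you could drop after the crowd source to tweak those attributes yourself. A sketch only; s@state and @pscale are the usual crowd attribute names, but verify them against your geometry spreadsheet:

```vex
// Point wrangle after the crowd source (one point per packed agent).
s@state = "walk";                          // initial crowd state
@pscale = fit01(rand(@ptnum), 0.9, 1.1);   // slight per-agent size variation
```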
If you look closely, you'll see the feet are sliding over the bumpy ground. Ew. How to fix that?
Add foot locking to terrain adaptation
Download scene: File:agent_foot_locking.hip
Slidey feet, no-one likes that. The terrain adaptation sop (and its equivalent feature in dops later) can lock feet to the ground when they're meant to stay still, which can help crowds look more natural in most cases (and look really funny when terrain gets too uneven, all goes very Monty Python Silly Walks...)
It does this by enabling a simple IK setup for the legs when the feet are touching the ground. How does it know when the feet are touching the ground? Via another chop network that looks at the walk cycle, determines when the feet are still (ie, not moving in worldspace), and creates a graph that is 1 when the feet are planted, and 0 when they're in the air. The graph can be generated manually, but it's easier to use an agent prep sop to do this for you.
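The per-foot test boils down to a velocity threshold. Sketched as a point wrangle (an illustration of the idea, not the CHOP's actual internals), for a point sitting on a toe joint with @v already computed:

```vex
// Assumes @v exists (e.g. from a trail SOP). The threshold is a
// tuning value, not an official default.
float speed_threshold = 0.1;               // worldspace speed cutoff
i@planted = length(v@v) < speed_threshold; // 1 when planted, 0 in the air
```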
This process is very character rig specific of course, and will rely on the names of your foot bones. Lucky for us there's parameter presets for mocapbiped1/2/3, we'll use that.
- Append an agent prep node after the agent.
- Use the parameter preset to fill out the values for mocapbiped3.
- Go to the 'additional channels' tab, click 'create foot plant chop network'. This does as it says. Dive inside the chopnet and you can see it's using 4 foot plant chops, each pointing to the ankle and toe of a foot, using a velocity threshold to guess when the feet are locked or not. These are merged together into 4 true/false channels of 'Am I locked?', and added to the existing agent graph.
- This chops graph is then referenced back on the first tab of the agent prep sop; scroll down to the 'lower limbs' section and you can see there's an 'additional channels' section, these now point to channels in the chops graph.
- On the terrain adaptation sop, turn on 'enable simulation', and double check that 'enable foot locking' is on. Agents should now lock feet to terrain (and do crazy silly leg stretches if the terrain is too high).
- On the guides tab, turn on 'show guide geometry' to see the feet turn red when they're locked, and green when they're free.
Bonus feature: Agents aim along @v
Download scene: File:agents_with_v.hip
Keyframe the agent with a transform sop, or wiggle it around with noise, use a trail sop to calculate @v. Like particles or copy-to-points, the agent will aim itself along its velocity, neat.
This means you can set up lazy background crowd shots with (almost) no simulation. You can use a copy to points sop to put agents on points scattered on a path. If you animate their uv and attrib interpolate the points, then trail for v, the agents will face in the direction of their movement. Adding the tiny bit of simulation in the terrain adaptation sop with foot lock enabled, you get pretty good results for little effort. Note that it won't retime the cycles, so if you push the agents too fast or too slow for the clip, the rest of the body won't be affected, but the feet will do weird things. Fun though.
Also note that in this example hip, the bulk of the work is modelling nice procedural paths; the agent setup is a handful of nodes at the end.
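For reference, @v is just the frame-to-frame position delta over time. A minimal point wrangle sketch that replaces the trail sop, assuming the same points shifted back one frame (say via a time shift sop) are wired into input 1:

```vex
// Point wrangle: derive velocity by hand from the previous frame.
vector prevP = point(1, "P", @ptnum); // position one frame ago (input 1)
v@v = (v@P - prevP) / @TimeInc;       // units per second
```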
Bonus bonus: mixamo agents with variable clip speeds and offsets
Download hip: File:mixamo_agents.hip
Question on reddit got me thinking, revisited this non-simulated agent stuff for the first time in a while. I grabbed a walk fbx from mixamo, loaded it with the agent sop. It worked pretty much as expected, just have to choose the clip (which mixamo unhelpfully names 'mixamo.com'), and set the locomotion stuff to target 'mixamorig_Hips'.
Once defined I used a stash sop to make a self-contained hip I could upload here. The disconnected agent sop is still in there if you want to see how I set it up.
The rest of the setup is similar to the previous example, moving points on a path, then copying agents to the points. The difference here was using pscale to change the size of each agent, then needing to work out how to alter the cycle speed so the feet don't appear to slip.
To work this out I had a look at the geo spreadsheet, figuring there'd be some attribute changing over time. I found it at the prim level, an intrinsic called agentcliptimes. I was surprised that I couldn't set this directly with a setprimintrinsic call, but found a vex function to do this, setagentcliptimes. It takes a time value wrapped in an array for some reason. Anyway, this meant I could set my own clip time per agent, also allowing me to define a starting offset to help randomize the clips a little more. The prim wrangle looks like this (I've already set a @startoffset attribute using an attrib randomize sop earlier):
float t = @startoffset + @Time / @pscale * 2;
setagentcliptimes(0, @primnum, array(t));
The speeding up based on pscale is not scientific at all, just eyeballed. Multiplying by 2 seemed to do the right thing, if I had time/interest I'm sure there's a proper way to fix that. Had a go at making the foot lock stuff work too, but didn't quite behave. A problem for another day...
Crowd simulation

Download scene: File:crowd_sim.hip
Just letting agents walk in straight lines or along paths is cool, but at some point you need more than that, at which time we need to enter dops. At its core crowd simulation is a fancy pop sim; look inside the crowd solver, crowd object, crowd source, they're pops. The crowd source node doesn't even try to hide it, it just comes in as a pop source with some of the default parameters changed to suit crowd. So, starting from the crowd source node:
- Append a dopnet to crowdsource node
- Dive inside, and setup the usual dops/pops style workflow but this time crowd nodes; make a crowd object, crowd solver, crowd source, connect them together. Note the hints on the solver inputs; input0 takes the crowd object, input1 takes the crowd source.
- set the crowd source to use the first context geo. That's it, you're done!
Oh, agents have exploded? Whoops. Ok, crowd sims need at least one more node, a crowd state dop, that tells it information about the motion clip:
- Create a crowd state dop, name it 'walk', connect it to the input2 of the solver. That's it, you're done! It uses the $OS convention to use the name of the node as the clip name.
- Well, almost. To be truly correct, you have to explicitly tell the state node that this state has locomotion channels available, so it won't try and make the agents walk too fast or slow for this clip. On the walk state dop, clip playback, set type to 'locomotive', and enable variance if you want this clip to be able to be re-timed within a percentage you choose.
Add variation and forces to crowd simulation
So after all that effort, we have something that added a subnet with 4 more nodes, but doesn't give any extra functionality vs the sops terrain adaptation. Boring.
But that's not true! Like pops (cos it is pops, remember that), we can now start throwing forces into the mix, as well as crowd specific attributes, get some more interesting stuff happening:
Download scene: File:crowd_sim_steer.hip
- All agents walking at the same speed? Boring right? Go to the walk state node, enable speed variation, dial in the amount you want.
- Terrain adaptation is built into the crowd solver. Enable it, magic.
- Crowd forces (and pop forces) can be inserted after the crowd source like you'd do in pops, or after the state if you want specific forces to only act on certain states. Some forces, like agent avoidance, are meant to be built into the crowd solver, but it doesn't want to work for me. I disabled it on the node, and used a pop steer avoid, pop steer wander, pop steer seek etc. Make sure to set all their modes to crowd steer rather than pop force.
- Whats that? Agents still walking through each other? Yeah, that sucks. There's meant to be a steer solver built into the crowd solver to integrate all those forces and do magic, but it also doesn't work for me. If I stick a steer solver into the forces stream, it starts working. ¯\_(ツ)_/¯
- The defaults for the wander node aren't great. Make sure to set it to 2d mode for this sort of crowds-walking-on-ground sim, and importantly, change the plane to XZ. Otherwise you'll get agents starting to wander into the air or under the ground!
- Also try adjusting the orientation updates section on the particle motion tab of the solver; the defaults feel a little low, and the agents are very slow to turn, causing collisions and weirdness.
- If you don't need to see the full skinned geo, drop back to wireframe mode, and you'll just see the agent skeletons, much faster to display.
Crowd sim to run around a circle
Download scene: File:crowd_sim_steer_circle.hip
Rather than have the agents walk on paths like rails in the earlier example, this time the path will be used as a force.
- Back up in sops level, swap the walk biped clip to a run (make sure to set correct frame ranges for the agent and chop inputs)
- Make a circle, polygon, open arc, 40 divisions, scale it to be the size of a car park relative to the agents, connect to dopnet
- Put down a pop steer path, opinputpath it to the circle, set mode to crowd steer, sim.
- By default the agents will barely register the curve, and run away to the horizon making only tiny attempts to turn. Hmm.
- 'Max turn rate' on the crowd solver is what's limiting this. 90 is too slow, try 400 for this run cycle.
- Now they turn, but get really confused following the path. Why?
- It's 'anticipation' on the steer path node. The default of 1 makes them take into account too much of the path, and turn too early, or even turn the wrong way. Set it to 0 and they follow correctly, but move like erratic robots. 0.01 seems to balance well.
Stop agent intersections
That's all well and good, but if you only use the pop steer path, the agents run through each other like ghosts. We have to add some forces to make the agents detect and avoid each other. This is where we get into standard simulation territory; now you have to plan time to balance forces, run lots of tests, swear at physics, and think about if you could approach this procedurally instead. ;)
Pop steer avoid and pop steer separate are the extra forces used in the above example to fix agent intersections. As I mentioned before they're built into the solver, but I prefer to disable the built in one, and add my own, cos I'm a luddite who fears easy-to-use things.
- Steer Avoid is a repulsion, like a pop interact, or another way to think of it is agent personal space. Turn it up too high and it behaves like pop grains; agents separate too quickly and too uniform, it loses the natural chaos of a crowd. It's required of course, but at small values.
- Steer Separate is a more subtle effect, and allows agents to speed up or brake to avoid collisions. Not as much as I'd like though. It also includes a sense of FOV for each agent to determine how aware they are.
- Getting good behavior is a balancing act; steer forces are normalised, so playing with the weights is key. In my tests so far, path is 0.6, avoid is 0.2, separate is 0.5.
- Avoid does motion prediction, which I think is probably just projecting along current @v, seeing what's nearby, and calculating accordingly. The default 'anticipation time' of 2 had a mild effect, so I boosted it to 6 thinking it would do super prediction. In fact it got worse, probably because my circle-running chars would project current @v 6 seconds ahead, which is miles away from the circle; no agents there, it doesn't care. When I reduced the time to 0.5, the sim got noticeably slower, but the results were much better.
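My mental model of that prediction, sketched as VEX (definitely not the node's actual code):

```vex
// Point wrangle sketch: project each agent along @v, then look for
// neighbours near the predicted position. With a long anticipation time
// the projected point lands far from everyone, so avoidance sees nothing.
float anticipation = 0.5;                // seconds of lookahead
vector future = v@P + v@v * anticipation;
int near[] = nearpoints(0, future, 1.0); // agent points within 1 unit
i@near_count = len(near);
```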
Ragdolls

The agent configure joints sop will look at the joints in your agent, and create a matching RBD setup using capsule shapes as collision geo. For this setup to ragdoll in a realistic way, you have to configure the rotation limits per joint, so things like the knees can't bend the wrong way.
That process can take a bit of time; luckily there are presets for the mocap bipeds to save time. SideFX also provide a handy instant-ragdoll-sim sop to make sure it's all behaving properly.
- After the agentprep node (we're back in sops btw), append an agent configure joints. Use the preset for mocapbiped3 (or whichever one you're using)
- Branch off a test simulation: ragdoll sop, watch, laugh
- The legs and upper body will probably fall apart in a horrible way. Go to the constraints tab of the ragdoll sop, enable 'Pin Root Collision Shapes'. That binds the two parts together.
- On the ragdoll tab of the ragdoll sop, enable 'Display Collision Layer' to see the capsules that are used.
Ragdolls and simulation
Download scene: File:crowd_sim_ragdoll.hip
I stole all this from looking at the shelf 'ragdoll run example'.
Needs another pink node for a ragdoll state, and more crowd specific nodes; a trigger to tell agents when to change, and a transition to tell them what to change from and to (run to ragdoll in this case).
Crowd specific ragdoll nodes
- make a new state node, name it ragdoll, use a merge node so both walk and ragdoll can feed to the solver
- make a trigger, append a transition, connect to the last input of the solver
- duration node, set type to 'time (current)', units to seconds, time 1, comparison greater-than (>), random offset 0
- transition node, set input state to 'walk', output state to 'ragdoll', duration 0
- play, agents will run for half a second and... freeze?
More stuff required! Just setting the state name to 'ragdoll' isn't enough. Surprisingly while there's a bunch of stuff built into the crowd solver, ragdoll support isn't one of those things. As such we have to delve into the slightly scary world of multisolvers. A crowd solver and a rigid body solver will be combined, and the state nodes need to define what their rbd behavior should be:
- pink nodes, set the walk rbd ragdoll mode to 'animated static' (so they'll impart their velocity when rbd takes over)
- set the ragdoll rbd ragdoll mode to 'active', so the agent limbs become all ragdolly when required
- disconnect the crowd object from the crowd solver
- create a multisolver, connect the crowd object there instead
- connect the crowd solver to the multisolver (to the purple bar)
- create a rigid body solver, also connect it to the multisolver
- play the sim, witness zero gravity cronenberg body horror. Agent joints can fly apart from each other, there's no gravity, no ground plane. Let's fix the easy things first.
- append a gravity node after the multisolver
- create a groundplane (for now), merge after gravity
- sim, agents now fall to the ground, but they're still falling apart into a stretchy mess.
Last thing needed is some constraints! A sop will make these for you, but you have to pull them into dops manually as objects alongside the crowd object. No wonder there are so many shelf setup tools.
- In sops, append an 'agent constraint network' after the crowd source
- it has 2 outputs; the left is the agents as we've already been using. Append a null to the right output and look in the geo spreadsheet, you can see it's a bunch of zero-length polys like the ConstraintNetworks tutorials, with constraint_name values of Pin and ConeTwist. Remember that.
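As a side note, because those constraints are just prims with attributes, you can edit them in a prim wrangle before they hit dops. A hedged sketch; constraint_name is what the DOP relationships match on, while the strength override is only an example attribute, check the constraint network docs before relying on it:

```vex
// Prim wrangle on the constraint output of the agent constraint network.
// s@constraint_name decides which relationship data a prim binds to.
if (s@constraint_name == "ConeTwist")
    f@strength = 1e5;   // hypothetical per-constraint strength tweak
```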
- Connect this to the 4th input of the dopnet (or tidy up if you're keen, vs lazy like me)
- Inside the dopnet, append a constraint network to the crowd object, set its geo source to 'fourth context geometry'
- Create a hard constraint relationship and a cone twist constraint relationship, merge them, connect to the 2nd input of the constraint network
- On the hard constraint, at the bottom of its parameters set its data name to Pin
- On the cone twist, set its data name to ConeTwist
- Be amazed at the sea of constraints that have just been made, now on the constraint network go to guide options and disable 'show guide geometry'
- Yay, the bodies fall to the ground and stay connected!
- Boo, wait, the feet are getting left behind, why?
- Onnnnee more thing to do, up in sops again, on the agent constraint network, enable 'pin root collision shapes'
- Sim again, should now behave
Finally! Now to refine this up a little:
- On the transition (blue) node, enable 'max random delay', the default of 1 second means the agents now randomly transition to ragdoll within a 1 second period
- In sops add some heightfield noise (or re-enable your noise if you made one earlier), swap the ground plane dop for a static object, set collision type to volume, heightfield. mmm.
- joints going a bit crazy when they ragdoll? the cone twist defaults are too wide. on the cone twist set the limits down from 180 to 1 (if they work like most dops attributes, it'll multiply that number with the incoming values, which i assume are already set by the agent constraint node)
Make clips loop cleanly
Download scene: File:crowd_clean_loop.hipnc
The walk I've been using so far has an annoying click in the head and back at the loop point. Fixing anim loops sounds like a job for chops, but first we'd need to get the clip from the agent into chops. Amazingly, there's a chop node for doing this, called an agent chop. A chop clip can be smoothly looped with a cycle chop. To read the modified clip back onto the agent, you use an agent clip sop.
Doing this live, I can see it starting to slow down Houdini, so it's probably worth writing all this info out to on-disk agents soon.
- The foot plant chopnet already has an agent chop to load in the walk cycle, may as well use it. Dive in there.
- Append a cycle chop to the agent chop. Go to the blend tab, start playing with the blend parameter. I found 0.3 blended the start and end well.
- Get back to sops, append an 'agent clip' sop after the agent prep.
- In the clips multiparm section at the bottom, set the name to 'walk', the source combobox to 'CHOP', and set the path for the chop, in my case dragging the chopnet onto the parameter worked, using the path '../foot_plant_channels'.
- at the top of the parameters make sure the current clip is 'walk', let the timeline play, should see it smoothed. Nice.
- enable 'apply clip locomotion', let it play, uh oh. The agent warps back to the origin on each loop. This is because the cycle chop cycles everything, including the locomotion channels, which we don't want. We'll split the chop flow so that we can branch off the locomotion channels, cycle the rest, then merge them back together afterwards.
- go back inside the chop network, append a delete chop after the agent chop, set its channel name parm to *locomotion*
- duplicate the delete chop, change its delete mode to 'non scoped channels', so this one keeps the locomotion channels and deletes the rest.
- connect the first delete to the cycle chop, append a merge chop after the cycle, and merge back in the locomotion channels.
On disk agent
Live, non-disk-baked agents are fun for quick experiments, but it can feel like fighting how the agent pipe wants to work. It's designed around writing things to disk and reading them back, so it's probably time to conform.
Based on what's covered so far it makes sense; the main thing to be aware of is that it doesn't save a single 'agent' file to disk, but a collection of subcomponents that define the agent. That means the anim clips are saved as standalone .clip files, and there's a naming structure based around variables so it's all fairly procedural.
I started using the shelf tools to help (gasp! I know!!), but to my surprise they're not as automated and helpful as I'd expect. You'll see why:
- Fresh scene, mocapbiped3, choose a walk, note the cycle length, turn off in-place animation, rename the biped 'walk'
- Save scene, crowds shelf, bake agent
- Select the walk biped, hit enter in the viewport
- It will ask for the agent name, I used bp3
- It asks for the clip name, type 'walk' (why doesn't it copy this from the geo name?)
- Now it helpfully brings up a dialog for the foot plant stuff, choose leftfoot, lefttoebase, rightfoot, righttoebase
- Thats the shelf tool done.
It's made 2 extra nodes, a ropnet and a setup geo container, and it's turned off the display flag for the original biped. Let's look inside.
- In the ropnet is an agent rop and a chopnet. The agent rop is similar to the agent sop, so it requires the same modifications to set the loop range, enable locomotion, tell it to use the hips to measure distance travelled etc. The main difference to the agent sop as we've used it so far is the output paths it sets up; lots of pre-configured variable-based paths for agent this, clip that etc.
- the chopnet contains the foot plant channels. which is nice.
- looking at the setup geo container, it's an agent sop and agent prep sop like we've used before, but now it's set to read from the on-disk definition.
- the agent prep is empty, use the parameter preset to fill it out for mocapbiped3.
- on the top of the ropnetwork are 2 handy buttons to run the rops internally. also nice.
so that's the overview; how do we get back to a looping locomotion based agent like we had before?
- on the agent rop, set the frame range to be 1-25. Despite the biped clip saying it's 26, it looks like there's an extra frame, and it makes the loop misbehave.
- turn on 'convert to in place animation', and set the locomotion node to the hips
- jump up, hit the bake buttons
- dive into the sop network
- display the agent sop, click the 'reload' button at the bottom of the parameters, enable 'apply clip locomotion', hit play, should see the agent walk away.
- display the agent prep sop, use the mocapbiped3 preset, fill out the 'additional channels' for the ankle and toe channels
Bonus lazy tip: use the default mocapbiped names
If you rename the agent to 'bp3', you have to manually select the ankle and toe joints for foot planting. If you leave it as the default 'mocapbiped3' (or whatever mocapbiped you're using), the relevant joints will be pre-selected when you get to that dialog. Lazy FTW!
Multiple clips

This is where I'm surprised it's so clicky. I must be missing something.
The idea is you create multiple bipeds, use an agent rop for each to bake them to disk. As long as you use the same agent name for all of them (so following from the steps I wrote down earlier, 'bp3'), they all get written to the same location on disk, meaning that when the agent sop looks in that folder, it magically finds all your anim clips. Cool right?
So to add to the above, duplicate the walk biped, swap it to a run, rename, click the bake shelf button, go through the steps. You'd think that if you share the same agent name, it would pre-fill out a lot of the shared steps. Unfortunately it doesn't, so you click the hips again, choose the foot plant nodes again. It's not hard, just boring.
But then I thought maybe if I make a bunch of clips first, shift select them all, THEN run the bake tool, it will add them all in one hit right? No, it just does the first selection.
Hence, I feel I'm missing something.
I've heard this has been addressed in H18, must find time to take a look...
Smoothing on disk clip loops
Same as before, but with the added steps of reading a clip from disk, and writing it to disk again.
- Lazy way to pull all the clips into chops is via the agent prep node again, second tab, click the 'create foot plant chop channels' button
- dive inside the chopnet, find the agent chop for the clip you want to modify, walk for example
- append a cycle as before, blend of 0.3
- In a ropnet (I put this within the chopnet, committing the cardinal sin of too many nested networks), create a channel rop
- point it to the cycle chop, and set the path to the original walk clip. I cheated by mmb'ing on the agent chop that reads the clip to get an explicit path, and copy/pasted that
- write it out, go back to the agent, reload the clips, it should use the smooth loop now
Except.. it doesn't? The agent moonwalks back to the origin at the end of the loop. Ugh. Same problem as before: you don't want to cycle the locomotion channels. Let's do that dance again:
- Go back to the agent rop network, re-bake so we get the original glitchy walk again
- go to the agent sop, click reload
- go to the chopnet, create a delete chop, make the channel name be *locomotion*, and delete non scoped channels (so it keeps these, deletes the rest)
- r.click on the delete chop, actions -> create reference copy
- on the copy, delete the channel reference on the delete combo box, and change it to scoped channels. So now this delete chop has deleted all the locomotion channels and kept the rest.
- cycle chop, blend on that second delete chop
- append a merge, merge the first delete and the cycle
- go into the rop subnet, change the chop path to point to the merge, save to disk
- back up to the agent sop, reload, now you have a smooth cycle chop, for realz.
And now do this for every clip with a bad loop! Easy! Again, must be an easier way...
Clip transitions

Download scene: File:crowd_walk_jog_hop_transition.hipnc
Once you have multiple clips defined, the crowd tools can work out how to cleanly blend from one clip to another. This is done using the agent clip transition graph sop, which tries to work out where foot positions and locked points of anim clips match, and for how long, so they can be blended. To visualise the matching of these clips, the transition graph sop will generate a polywire network at the origin with the vertices representing clips, showing how clips can blend from one to another.
You can then connect your agent and the graph to a test sim crowd transition sop, which will let you define the start and end clips, and show you how they blend together in a groovy colour coded fashion.
To use this in a sim requires adding a reference to the crowd transition graph on the crowd object, and telling the transition dops to use the graph.
Locomotion with rotation
Download scene: File:crowd_locomotion_turn.hipnc
Mocapbiped3 has lots of clips, quite a few are walk-turn-45-left, walk-turn-90-right. The locomotion stuff can handle changes of direction, but needs more information than just where the hips are. There's a second locomotion parameter on the agent sop for orient, but I couldn't understand why it didn't work when I assigned it the hips again. After chatting with sidefx (and re-reading the docs), this field isn't to read the rotation of the hips. You give it another joint, it constructs a vector from the hips to that joint, and uses that vector to cancel out rotation. In this case I used one of the upper leg joints (LeftUpLeg_To_LeftLeg).
Locomotion from cycles locked to the origin
Download hip: File:crowd_derive_locomotion_v03.hipnc
Been bugging me for a while; there's lots of fbx clips around that don't move the character forward in worldspace, but are locked at the origin. To make these clips work with locomotion you have to manually keyframe them moving forward, generate locomotion from that. I was sure chops should be able to calculate this procedurally, here's an attempt.
The idea is to look at just the toes of the character, and isolate their ty (up-down) and tz (forward-back) motion. You could try to detect when the toes are on the ground, and only at those times read in the tz motion. Do this for each toe and add them together, then invert the result (if the toes are moving 'backwards' on the ground, the character must be moving forward), and use that for the locomotion tz.
I've done this in a roundabout way using chops; I'm sure there's a cleaner way. I found I couldn't get the end of the foot bone (ie the toe) with chops, so I cheated and object merged the last bone in, which strangely/conveniently gave me exactly what I need: a point at the end of the toe.
This point is read into chops with a geometry chop, and split into its ty and tz channels. Ty is fed to a logic chop, which will return 0 or 1 based on a condition; here the condition is 'off when zero or less'. I can then invert it so that it's 1 when the toe is on the ground, and 0 when it's in the air.
But, issues. The toe ty never goes below 0, so before the logic chop I add a manual offset to force the lowest point to be just below zero. This is a fudge factor so that, ideally, with other characters with other crappy foot placement, I can still kindasorta identify when the foot is on the ground.
Meanwhile, the tz channel is inverted so the foot-moving-backwards motion becomes body-moving-forwards motion. But this has both the foot moving forward and the foot moving back; ideally we only want the foot-moving-forward bit.
We've identified this with the logic chop though! So we multiply them together: when the foot is in the air, the logic chop is 0, so the tz motion at those times is cancelled, leaving only the forward motion.
Do the same with the other toe, and add the results. If we're lucky, when one foot's motion is cancelled because it's in the air, the other foot is on the ground and will take over.
That's the theory. In practice the channel has all sorts of noticeable kicks and pops in it, such is the imprecise nature of chops motion editing. However we can do more fudging. By adding an offset to the tz per foot, we can help smooth out the curve so it looks more like a 'real' locomotion channel.
Finally we have to push this new locomotion tz channel back onto the agent. There are 2 parts to this, which I covered earlier: an agent chop reads in all the channel data; I remove the existing locomotion tz, rename my channel to the right thing, and merge. This output is then applied back onto the agent with an agent clip sop.
Well, almost finally. When 'apply locomotion' is enabled, the motion looks pretty good for the first cycle, then snaps back to the origin. Looking closer, the locomotion tz I generated stops short, and resets back to 0. Ugh. To fix, I got really cheaty; I used a trim chop to cut off the bad data, then a cycle chop to extend the end of it juuust enough to match the original walk cycle.
Like I said, hacky, not perfect, but intriguing. It won't work with a run (when both feet are off the ground it won't know what to do), when both feet are on the ground it'll get doubled up translates (fix with a max chop maybe?), and no doubt many other little details. But still. Fun.
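The gate-and-invert logic described above can be sketched in a few lines of plain Python (toy sine/cosine signals stand in for the toe channels here; this is the idea of the chops network, not Houdini code):

```python
import math

CYCLE = 24  # frames per walk cycle (made-up number for this sketch)

def toe_ty(f, phase):
    # Toe height, clamped at the ground; the two feet are half a cycle apart.
    return max(0.0, math.sin(f / CYCLE * 2 * math.pi + phase))

def toe_tz(f, phase):
    # Forward/back slide of the toe while the character walks in place.
    return math.cos(f / CYCLE * 2 * math.pi + phase)

locomotion_tz = []
for f in range(2 * CYCLE):
    total = 0.0
    for phase in (0.0, math.pi):                # left and right toes
        on_ground = toe_ty(f, phase) <= 0.01    # the (inverted) logic chop
        if on_ground:
            # Foot sliding backwards on the ground = body moving forwards.
            total += -toe_tz(f, phase)
    locomotion_tz.append(total)
```

The `0.01` threshold plays the role of the manual offset fudge: it decides how close to the ground counts as 'planted'.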
=== Locomotion from cycles locked to the origin, a better way ===
Someone found a better way! Elovikov from the sidefx forums posted a much better answer than me. The summary:
- Use the foot plant chop to detect when the foot is on the ground. Last I checked it only worked based on whether the foot was stationary(ish) in worldspace; it looks like it has an extra option to detect based on proximity to the ground. Silly me.
- Use a slope chop to calculate velocity rather than just accumulated distance
- Use a max math operation rather than adding the two feet, so it won't get double-speed kicks when both feet are on the ground
- Use an envelope chop to help fill the gaps where the velocity can't be calculated
I had a hunch about some of these fixes, but it would've taken me weeks to sort all that out. Very glad that Elovikov shared his solution!
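A minimal sketch of those improvements in plain Python (the channel values are invented; the slope chop becomes a finite difference, the envelope becomes a hold-last-value, and max() combines the feet):

```python
def derive_velocity(tz, on_ground):
    """Per-foot body velocity: finite-difference the toe tz while planted,
    hold the last valid value through gaps (a crude envelope)."""
    vel = []
    last = 0.0
    for i in range(len(tz)):
        if on_ground[i] and i > 0 and on_ground[i - 1]:
            # Foot sliding backwards on the ground = body moving forwards.
            last = -(tz[i] - tz[i - 1])
        vel.append(last)
    return vel

# Made-up toe tz samples and foot plant flags for two feet.
left  = derive_velocity([0.0, -0.1, -0.2, -0.3], [True, True, True, False])
right = derive_velocity([0.3, 0.2, 0.1, 0.0],    [False, False, True, True])

# Max, not addition, so double-plant frames don't get double speed.
body_vel = [max(l, r) for l, r in zip(left, right)]
```

Integrating `body_vel` over time would give the locomotion tz channel.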
=== Kinefx to crowds ===
Download hip: File:kinefx_agent_v03.hip
It's less straightforward than you'd think, but makes sense in a roundabout way.
Agents are assumed to be skeleton based, the rest rides on top of that. So the workflow is
- Make a skeleton
- Convert to an agent with agentFromRig
- Add your animation with a motionClip and agentClip
- Add your skin with agentLayer
In the network you can see that ordering; while for a single kinefx setup you go joints, skin, animation, deform, with an agent you almost work backwards, building it up in layers.
The end result is pretty fun though: using a crowdsource with no simulation, the entire waggly thing can run and play back 2000 agents in realtime on a little macbook pro.
=== Tips ===
- I've never used this full crowd setup in production (we used bits and pieces of it for Lego Batman, but with a lot of custom stuff), so I can't vouch for its reliability for an end-to-end delivery. Others seem to have gotten nice renders out of it though, so it seems production ready.
- Finding that agents can do things without simulation was an eye opener. A lot of the crowds work I've seen in production has mainly been pushing crowds along paths, so if you can get away with that, do it.
- Dropping into wireframe will hide the skin and just show the skeleton. This can speed up the viewport substantially.
- I tried to pull a WoW character that Lorne had exported into an agent, and oh god the pain. The main issue is the agent workflow expects a straightforward fk chain, no funny stuff, while the WoW rig was a chain of nulls, with joints parented off to the side, aim constrained to the nearest child null. The agent sops get hopelessly confused, so the rig needs to be cleaned up. It's a work in progress, but something to watch for.
- If you use locomotion in a sim, that becomes the strongest force in the entire setup; agents will be super reluctant to change their speed. A simple thing I've been wanting to work out is to have agents run towards each other, sense they're going to collide, and stop (ideally blending from a run clip to a run-to-stop clip). This has eluded me, largely because I want the triggers to be driven by a speed threshold, but the agents refuse to slow down, so the trigger never fires. Any advice appreciated!
- What I've explained so far is a bare bones overview just of the sim side, and doesn't cover any of the important things like rendering, surfacing variations, all that jazz. Andreas Giesen gave a great talk at FMX, worth watching: https://vimeo.com/275663484
- Andreas also has a course on pluralsight, if I end up doing crowds on a job, that's the first thing I'm watching! https://www.pluralsight.com/courses/crowds-houdini-15-2387
- This one by Mikael Pettersen looks great: https://www.cgcircuit.com/tutorial/crowds-for-feature-film-in-houdini
- Great presentation by Dan Yargici and Adam Droy at The Mill: https://vimeo.com/313875722
=== Todo ===
- record some clips with the studio mocap suit, integrate
- non bipeds
- crowds on complex geo, walking up walls, ceiling etc
- standard busy city block bipeds setup
- crowds for car traffic? spinning wheels, steering front wheels, indicating/braking, all that?
- try some example motions inspired by that 'witch doctor' music video.. swirling vortex, cheer on a cue, avoid each other, stagger etc