HoudiniLops

From cgwiki
Primitive selector click vs control click

The parameter field that looks like the group field in sops is there to let you enter primitive names. If you click the arrow selector button you can select in the viewport, or select from the Scene Graph Tree. There are a few cases though where neither of those is right, and you'd prefer something like the pop-up node lister you get with, say, an object merge sop.

If you control-click, you get exactly that, a mini pop-up scene graph selector. Neat.

Revision as of 15:46, 28 June 2020



The simplest take on Lops is that it's a procedural hierarchy editor. At the school where I teach we'll be using Lops this year for doing layout, creating sets, all that stuff, so this quickstart is heavily focused on that. From that perspective there'll likely be a lot of 'ahhhh, is that all there is to this?' moments, as this side of Lops is relatively straightforward. Lops and USD are capable of lots of other things, and I'll cover those when I get to them!

If you want a video intro, Ben Skinner and I recorded one; it's 24 minutes, and is a good overview of USD and Lops:


Background context

In broad strokes, comparing Houdini and USD on hierarchies, you could say:

Houdini:

  • PRO: Great node editor, great procedural workflows
  • CON: Bad at editing hierarchies and scene manipulation

USD:

  • PRO: Great at editing hierarchies and scene manipulation
  • CON: Needs high-level Python coding skills to use effectively

Lops/Solaris attempts to combine the positive aspects of both these things:

  • PRO: USD's ability to manipulate hierarchies and scenes
  • PRO: Houdini's great node editor and procedural workflows

A bonus unfortunate con of USD is terminology; read the official docs and you're rapidly drowning in tech jargon. Once you get familiar with Lops and USD it's not too scary. I've written these notes for people who are familiar with Houdini but have never touched USD, and I only introduce the jargon as needed.

To get started, make sure to set your desktop to 'Solaris', so you can look at the scene graph tree and see what's going on with your object hierarchy. This should drop you into a new context, so in addition to obj, shop, mat etc, you have a new one: stage.


Credit where it's due, Ben Skinner did most of the work here, I just wrote it down. Ben developed a lot of the USD stuff for our pipeline at UTSALA in 2018, then was first to dive in and play with Lops and PDG in 2019, so many thanks to him. He has his own website of more coder-focused tips at http://vochsel.com/wiki/ , and is now in Toronto working at Tangent Animation. If you see him at a Toronto Houdini user group, make sure to buy him a beer.

Mark Tucker has also been very patient with my idiot questions, and has offered valuable advice and edits, thanks Mark.

Also Lewis Taylor has been a great sounding board through all this, offering great feedback and advice. Huzzah.

Right, let's go!

Lops basics

Define a top level folder

Selection 132.png

Create a primitive lop. Look in the scene graph tree (SGT), you can see you have a tree with 2 things, HoudiniLayerInfo and primitive1. The parameters for the primitive lop set its primitive path to /$OS. In other words it's at the top of the hierarchy, and $OS means it's named after the node itself. Rename the node from 'primitive1' to 'set', and you'll see in the SGT it's been renamed to /set.

For anyone familiar with Maya, this is the equivalent of making an empty group, naming it, and putting it at the top of your Outliner.

Worth pointing out early on what 'primitive' means in USD vs in regular Houdini. In sops a primitive is a renderable thing, like a polygon, a curve, a volume. In USD a primitive means a thing in the Scene Graph Tree. So in that outliner style way of thinking, a folder is a primitive, a shape is a primitive, a transform is a primitive. Pretty much any element you see in the Scene Graph Tree is a primitive. Remember, hierarchy editor, we're thinking in those terms....
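To make that concrete, here's a hand-written usda sketch of a tiny scene graph (names invented for illustration). Every `def` line declares a primitive, whether it's a transform acting like a folder or an actual shape:

```
#usda 1.0

def Xform "set"
{
    def Xform "props"
    {
        def Sphere "ball"
        {
            double radius = 1
        }
    }
}
```

In the Scene Graph Tree this reads as /set, /set/props and /set/props/ball, all three of them primitives.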

Add a sphere to the scene

Create a sphere lop. View it, you can see it's made you a sphere, and its location in the SGT is /sphere1.

Merge the sphere and set

You can do the houdini thing, put down a merge node, and connect the set lop and the sphere1 lop to it. Look in the SGT, they're now both in the hierarchy.

Merge the sphere and the set, Lops style

Selection 133.png

Merging is fine, but you can also connect nodes inline, like Katana. Delete the merge, wire the sphere after the primitive. Look in the SGT, you've done the same as the merge but with 1 less node.

Remember, lops aren't sops! Sops is about manipulating geometry, lops are about manipulating hierarchies. Lops nodes can carry through what's in the previous node, and add their own stuff. Takes some getting used to, but you quickly get the hang of it.

Merge the sphere and make it a child of the set

Sphere parent.gif

The set primitive and the sphere are sitting side-by-side in the SGT, but we probably want the sphere to be a child of the set. A manual way for the moment is just to set the path for sphere1 to where we want it to go. Select the sphere lop, change its primitive path from /$OS to /set/$OS. Look in the SGT, it's now a child of /set.

Bring in a pig from sops

It's unlikely you'll just have a scene full of spheres and nulls. Jump over to sops and make a pig, and append a null named OUT_PIG so you can find it easily. Get back to the /stage network. Append a sop import, set the sop path to find your pig. Look at the SGT, ugh, ugly name, it's called sopimport1 over there. Rename your lop to 'pig'. Now it has a nice name, but a bad location, it should be under /set. Change the Import Path Prefix to /set/$OS.

Move the pig

Can't just have the pig at origin, that's silly. Select the pig in the SGT, and choose the translate tool from the viewport left-hand options. Drag it away so it's no longer blocked by the sphere. Now look in the node view, see that it's created an edit node for you. This works like an edit node in sops, so you can select the sphere, move that, back to the pig, move that, etc, all these general changes are stored on the single node. Works, but sometimes you'll want more explicit control. I wonder if lops has something like the transform sop?

Move the pig with a transform lop

Of course it does. Append a transform lop, and at the top where you'd expect to find a group, there's a field expecting you to give it a name of a SGT location. Clear the expression and start typing /set/pig, you'll see it has tab completion like usual groups. You can now move stuff more explicitly. That's nice. Also note you can move /set, and the children move as expected. That's a trick you can't easily do in vanilla houdini.

Edit lots of things with a stage manager

Say you have lots of usd files on disk, and you need to do lots of folder-making, parenting stuff, and getting initial layouts correct. This is easy in Maya with its Outliner cos you can just directly grab groups, rename, do things, but the SGT is view only. You don't wanna go use Maya do you?

No you don't. Append a stage manager instead. The parameter pane now looks like a simple version of the SGT, but this one is fully editable. Right-click to make folders, double-click stuff to rename things, shift-click and drag and drop stuff, go craaaaazy. Further, click the little folder icon and it brings up a file browser, so you can find all those usd files on disk and drag them into the graph view, or even into the viewport. Click the little transform icon next to things to move them directly from this one node. It's amazing.

Fancier combining with graft and reference

Say you had a castle set, and had gone through with the stage manager and defined locations for moat, drawbridge, castle, courtyard etc. Meanwhile you had another chain of lops nodes to make a bedroom. Once you have that whole chain, how would you insert that bedroom scene graph into the correct location of the bigger castle scene graph?

A graft is the simple way. It takes 2 inputs, and reparents the right input to a SGT location in the left input. By default it has an expression to find the last defined primitive from the left input, and parents all the stuff from the right input under that primitive. You can override that and put it wherever you want, but that's the base idea.

A reference is a fancy graft. As well as 'parent all the right inputs to somewhere on the left input', it can also directly load usd files from disk, and parent them to a location (this is its default behavior).

(Mark Tucker points out flaws in my simplification here, and rightly so, but we'll get to those later)

Reference vs payload

The reference lop has a few modes, which alternate between 'reference' and 'payload'. What's the difference?

When something is referenced, it HAS to be loaded.

When something is a payload, it CAN be loaded, or not.

You can imagine this being handy for big heavy scenes; eg say you're merging a huge forest full of detailed tree models. Lighters will require the full forest for final lighting, but if an animator loads the shot, they can 'unload' the forest payload; it's never brought into memory, the animator gets a nice fast load time, and it doesn't affect the needs of lighting downstream.

Wherever possible (and wherever it makes sense), use payload.
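In the underlying usda, reference and payload are near-identical composition arcs in a prim's metadata. A minimal hedged sketch, with the file path invented for illustration:

```
#usda 1.0

def Xform "forest_ref" (
    prepend references = @./forest.usd@
)
{
}

def Xform "forest_payload" (
    prepend payload = @./forest.usd@
)
{
}
```

Structurally they look almost the same on disk; the difference is entirely in the loading behavior described above.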

Lops level 2


In regular rendering setups you need to create a material, name it, assign it. It's the same with USD, with the extra step of saying where the material will live in the scene hierarchy.

A material library node does all this work. Append one, by default it looks for materials inside itself. Dive inside, you're now in a mat context.

  • Create a few principled materials, name them nice, jump up again.
  • Click the 'auto-fill materials' button, look at what it's done; it's made a /materials folder in the SGT, and put all the materials under it. From the parameters pane it will have made a multilister for each material, each has a 'geometry path' parameter.
  • You can drag geometry from the SGT into this parameter, or use the tab completion stuff, or use wildcards.

The material assignment will appear in the viewport if the viewport understands your material. The binding of a material to geometry is tagged on the primitive. Select a primitive in the SGT that has a material assigned, and look in the scene graph details pane. There's now an attribute for the material binding, linking to the chosen material.
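That binding is a relationship on the geometry prim pointing at the material prim. A hedged usda sketch, with the prim and material names invented for illustration:

```
def Mesh "pig" (
    prepend apiSchemas = ["MaterialBindingAPI"]
)
{
    rel material:binding = </materials/principledshader1>
}
```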


For our current project we'll be in a forest fire. Some trees will be on fire, others won't. I remembered a siggraph talk by MPC on The Jungle Book fire sequence, where layout had fire assets and props they could put in the set, seemed like a good thing to try in Lops.

To be specific, I would like a tree asset, and have an option to have the tree on fire or not. Variants are the USD mechanism for this.

The Lops skin on top of variants is kind of a fancy merge, kind of a fancy switch.

First get your geo ready. I've sop imported a tree, assigned a material, and used a graft to put it all under a nice top level SGT transform '/testTree01':

Tree variant prep1.gif

I did a quick pyro sim in sops, made it loop (the sidefx labs loop sop is awesome), and wrote a vdb sequence to disk. I imported that with a volume lop, assigned a material, and grafted that under /testTree01 as well:

Tree variant prep flame.gif

But we don't want to choose between tree or flame, we want to choose between tree, and tree+flame. No big deal, let's just merge the tree and the flame to create our tree+flame, ready to feed to our variant setup:

Tree and fire merge.gif

Now the variant magic. We have a tree and a tree+flame, and connect them to a variant lop. I create an 'add variants to new primitive' lop, and connect the tree and tree+flame to the second input.

When this is all done, variants are presented as a drop-down selection, so we need to define a name for the drop-down, names for each of the options within it, and what thing in the SGT this is all applying to. Here I'm telling it the thing the variants apply to (the primitive) is /testTree01, and the name of the drop-down will be 'fire_toggle'. To name the options within the drop-down, double-click and rename in the second column of the multilister:

Variant setup.gif

Now we can select which one to use with a 'set variant' lop. Append, choose the variant option ('fire_toggle'), choose a variant, see the SGT and the viewport update to flip between fire and no fire. Neat!

Variant set.gif

Oh wait, the thing is called something silly (or was when I set it up); the variant node uses /$OS as the name for the new variant primitive. Silly node. Change that to /testTree01, and it all works as expected.

This can now be duplicated (try a duplicate lop, the equivalent of a copy-and-transform sop), and you can set variants on a subset of the trees. It's pretty cool.

Why do all this when we could've just used a switch? Remember, when we save this USD asset out to disk, all that variant magic is now inside it. So we could choose variants here in Houdini, or in usdview, or in Maya, or in Katana, or in any package that supports USD. If we get to final lighting and the lighter realises they need to have more of the trees on fire, they can do it, and it won't involve a kickback to fx or layout.
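For the curious, here's roughly what that variant looks like in the saved usda, using the names from above (the contents of each variant block are elided; treat this as a hedged sketch rather than exact Houdini output):

```
def Xform "testTree01" (
    prepend variantSets = "fire_toggle"
    variants = {
        string fire_toggle = "fire"
    }
)
{
    variantSet "fire_toggle" = {
        "fire" {
        }
        "no_fire" {
        }
    }
}
```

The `variants` metadata records the current selection, which is exactly what a 'set variant' lop (or usdview, or Maya) flips.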

A cleaner way to prepare those variants

Ben Skinner pointed out the merge isn't necessary, I could just chain 2 grafts. He's right of course.

Variant cleaner prep.gif

An even cleaner variant setup

Download hip: File:lop_variant_fire.hiplc

Mark Tucker pointed out it's wasteful to include the tree in both variants, when the only thing that's changing is the fire. He suggests it's better to just add a variant that only chooses between the fire and a null, which makes sense. This doesn't have the VDB nodes I should be using if this is to work outside of Houdini, but it's nice and clean.

Variant fire 0.png

Check that scene graph! So clean!

Variant sgt.gif

Instancer for trees on a groundplane

Lops instancer.png

Download hip: File:lops_forest.hipnc

The core idea is very copy-to-points, but with some extra stuff to deal with USD requirements.

An instancer will need at least 2 things, points for the locations, and shapes to copy onto those points.

A 3rd requirement is what we touched on earlier. We covered how lops nodes can be connected one after the other, and the new node appends whatever it's doing onto the existing stream. So this system has to be able to take in an existing scene, and add the copied stuff onto that.

3 inputs. A variety of ways to expose this to the user. Who will win?

The answer is 'yes'. Lops exposes several different methods for this, I'm going with the one that's most intuitive for me, you can play with the other methods when you're more comfortable.

The instancer lop has 2 inputs. I've set it up so the existing scene flows through the left, and the shapes we'll copy are on the right.

Where do the points go then? You can double-click to dive inside, and this is a sops network. Define any geo in here, and those points will be used as the instance locations.


  1. sop import a tree, connect that to the right input
  2. on instancer, 'prototypes' is the shape to be instanced. So set prototype source to 'second input'
  3. on instancer, 'target points' is the point locations. Default mode is 'internal sop', we'll use that
  4. dive inside, create a groundplane (or object merge in something), append a scatter
  5. done!

There's some tidying up to do here though, names and stuff should be better. The USD convention is for the objects you're instancing to go in a prototypes folder, which the instancer does for you. Generally name things as nicely as possible. Starting this from scratch I've made a primitive lop named set, to get a /set at the top of my hierarchy. I've named the instancer 'forest', and put it under /set/$OS. The tree via the sop import is named 'tree' and its primpath is /$OS. When the instancer grabs it, it gets moved underneath to be at /set/forest/prototypes/tree.

As I said this is one of many ways to set it up, start typing 'instanc' in the tab menu, you'll see a few different options, but most are just the same instancer lop in different configurations. You can use the inputs for different things, refer to external geo for the locations, it's pretty flexible.
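Under the hood that instancer is a UsdGeomPointInstancer prim. A stripped-down hedged sketch, with made-up point data, using the paths from above:

```
def PointInstancer "forest"
{
    rel prototypes = [</set/forest/prototypes/tree>]
    int[] protoIndices = [0, 0, 0]
    point3f[] positions = [(0, 0, 0), (4, 0, 2), (-3, 0, 5)]
}
```

Each entry in protoIndices picks which prototype lands on the matching position, which is why one tree can be copied to thousands of points so cheaply.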

Instancing and variants

Instance with variants.PNG

Download hip: File:lops_point_instance_variants.hiplc

So having learned about variants, and having learned about instancing, it seems obvious that we should be able to combine the two, right?

If you think about what's being requested, it doesn't really work like that. Instancing is about taking one primitive and linking it to lots of transforms, so they all share the one primitive. A variant is a change to a primitive. If you want each instance to use a different variant, then fundamentally you can't; change one, and you change them all.

A possible fix is to take your primitive, duplicate it as many times as you have variants, set each duplicate to a different variant, then instance those.

This setup provided by Mark Tucker does exactly that. Here's the core of what's going on:

  • A duplicate lop is like a copy-and-transform sop, so we duplicate the shape 4 times, because we have 4 variants.
  • A set variant lop comes next, which uses the @prim lops attribute so that each prim gets a variant matching its prim number, ie 0, 1, 2, 3.
  • Now that we have 4 prims which each have the variants we need, these can be fed to an instancer in 'random' mode, giving us our shape variations.
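A plain-Python sketch of that prim-number-to-variant mapping (the variant names are invented for illustration; in Lops you'd express this with an expression on @prim):

```python
# Hypothetical variant names; the real ones come from your variant lop.
variants = ["fire", "no_fire", "ember", "smoke"]

def variant_for_prim(prim_index, variants):
    # Prim 0 gets variant 0, prim 1 gets variant 1, and so on.
    # The modulo means the mapping still works if you duplicate
    # more prims than you have variants.
    return variants[prim_index % len(variants)]

# 4 duplicates, 4 variants: a one-to-one mapping.
for i in range(4):
    print(i, variant_for_prim(i, variants))
```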

Lops level 3

RBD to lops as a point instancer

rbd export version 1

(skip down to v2 for the 'correct' answer, I'm leaving this here as reference as to why v2 is better!)

Lops rbd instance.gif


I chatted with Lewis Taylor aka tinyhawkus about USD and RBD. He had a workflow pre-lops that involved writing out one huge static USD file containing the heavy RBD geometry, and a super light animated USD file that would contain only animated points and a point instancer referencing the big file.

Here's an attempt to recreate that workflow in lops.

The sops network is a simple RBD setup. The static rbd pieces go to a null called OUT_CHUNKS. The RBD solver in sops has multiple outputs, handily one of them is the points that represent the sim, they're connected to a null called OUT_POINTS.

In the lop network, the chunks are brought into lops with a sopimport and run to a point instancer. The source of the points internally is just an object merge to the points mentioned above. To get the correct chunk put onto the correct point requires 2 things: the list of shapes to copy, and a way to match shapes to points. That's done by setting the following parameters on the point instancer:

  • prototype primitives: /chunks/Prototypes/*
  • prototype index : name attribute

With that done, we can just write out the usd file.

This setup creates a single usd file, which isn't quite what I want. I've set the Layer Save Path parameter on the sopimport, which from what I understand should then be implicitly written to disk when the final rop is run, but it's not. I got stuck here, but Mark Tucker provided a solution:

rbd export version 2

Mark Tucker read the above, and offered the following handy improvements:

To write the chunks to a separate file from the animation, you just need to turn on the "Load As Reference" option at the top of the chunks node.

Sopimport load as reference.gif

Because each chunk is unique, he suggests changing the chunks node's Primitive Definition -> Packed Primitives option to "Create Xforms".

Sopimport packprim type.gif

Watch the SGT in the above gif, before the change there's a prototypes folder and all the pieces are blue, implying they can be instanced, after the change the prototype folder is removed, and the pieces are no longer blue, ie they're regular geometry.

Why make this change? Now we have a cleaner hierarchy, and the pieces don't need to be instanceable, as they're going to be used as unique prototypes in the point instancer.

Because we removed that unnecessary 'prototype' location, the instancer will need its prototype primitive path updated; change the instancer1 LOP's Prototype Primitives to "/chunks/*".

Here's the old scene graph tree vs the new:

Rbd lops v2 compare.gif

Layers and references and save paths

Why did enabling 'load as reference' in the above section fix the file save stuff? How does the final ROP know to save that file? What are layers? What magic is this?

There's a few bits here that tie together; I'll state them as 3 high-level axioms, and explain more afterwards:

  • Layers are containers.
  • References always exist in their own layer.
  • Layers define their own location to save or load a USD file on disk.

Layers are a core idea in USD, and so far I've managed to avoid them. Layers are like groups in Photoshop: a container for stuff to live in, and layers can be composed and combined.

Layers also define how things will be saved. Generally each layer will be saved to its own file, which can be handy when you're trying to save your USD scene in a modular way.
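As a tiny hedged example, a layer that composes two other layers from disk looks like this in usda (paths invented); each sublayer remains its own file:

```
#usda 1.0
(
    subLayers = [
        @./set_layout.usd@,
        @./set_anim.usd@
    ]
)
```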

References implicitly live in their own layer, meaning they need their own location on disk.

When reading USD with a reference, that's self-evident: obviously you're reading from that unique file location.

When writing USD with a reference, and that reference is generated within Houdini from a sopimport, it's implied that you've made a new layer, and that you'll want that reference saved in its own location. In the previous example, by swapping the sopimport to a reference and setting the layer save path, we've explicitly said we want the chunks saved in their own file.

Cleverly the USD ROP is aware of this. So even though you only define the path for the 'final' USD file on the rop, it will detect if you have other layers defined, look at their save path, and write those layers to those locations.

If you don't specify a save path, you'll get a warning, and Lops will just vomit all the incoming geo into the output usd file. Sometimes you want this, eg when delivering standalone fx; it's likely that your output USD is your delivery, and you don't benefit from modularizing that.

Earlier I mentioned a fundamental difference between grafts and references which I glossed over; this is the key difference. A graft merges geometry into a single layer, while a reference implicitly creates a new layer, and expects that you'll define a save location.

The Scene Graph Layers panel can be useful here, you can see the layers being made, references sitting within them, and the implied save type and save location per layer.

Usd layer editor.gif

Incidentally, this is what the glowing coloured borders in the node network represent, a different random colour is chosen for each layer.

Usd layer colours.gif

Thanks to Ben and Mark for helping me understand all this, and offering handy analogies!

Here's the end result:

Rbd chunks 0.png

Looping clips and vdb sequences

Lops clip loop.gif

Download hip: File:lops_vdb_loop_valueclip.hip

This can't be right. And yet, it works so... ?

  1. Write out a 'myvolume.usd' to disk which is the length of your loop, say 42 frames.
  2. Read it back in with a file lop (which is really a sublayer lop)
  3. Append a value clip lop, create 2 clips, point both clips to the same 'myvolume.usd'
  4. Make the first clip be 'length-1' frames (so 41 frames in this case), the second clip be 1 frame
  5. Set 'loop end time' to the total length you want the loop to run for, say 2000 frames.

Bullet point notes about that process:

  • Volumes are just wrappers around vdb files on disk.
  • The contents of that wrapper are very lowbrow; internally it's just a list of strings, one for each frame of the vdb sequence.
  • Because its ultimately just a list of files, Mark Tucker suggested an expression to loop over the files you have.
  • I'm stubborn, that seemed too simple; I had heard USD had clever support for clips and looping and stuff.
  • The way to do this is a value clip in lops.
  • You define clip files, which point to usd files on disk. No, it can't do timewarp style tricks on existing upstream stuff, it has to be on disk (I'm guessing for performance).
  • You set the global 'loop end time', give it clips, tell it how long each clip runs for, and it'll loop them until it hits the loop end time.
  • If you inspect the written out usd, it's basically an even longer list of frames on disk, in the loop order you specified. Basically the same as Mark's suggestion to just use an expression on the filename. :)
  • The base behavior doesn't make sense. I have a 42 frame vdb sequence on disk:
    • If I use single clip, I don't get any animation.
    • If I used 2 of the same clip, both marked as 42 frames long, the animation would play forwards for 42 frames, backwards for 42 frames, forward again.. weird!
    • If I used 2 of the same clip, the first marked as 41 frames long, the second as 1 frame long, it works.
  • The valueclip requires an input, which must EXACTLY match the hierarchy of the value clips. Any variation, like names not matching or node types mismatching, and the valueclip can't overlay itself properly, so no animation. I found the easiest way to ensure it all matches up is to just load the usd first with a file lop, then valueclip after that.
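As an aside, Mark's simpler expression idea from the bullets above is easy to sketch in plain Python (the filenames and loop length are invented for illustration):

```python
# A 42 frame vdb loop: map any playback frame back into the 1..42 range.
LOOP_START = 1
LOOP_LENGTH = 42

def loop_frame(frame):
    # Wrap the playback frame into the source range, so frame 43 reads
    # the file for frame 1, frame 84 reads frame 42, and so on.
    return ((frame - LOOP_START) % LOOP_LENGTH) + LOOP_START

def loop_filename(frame):
    # Hypothetical naming convention for the vdb sequence on disk.
    return f"myvolume.{loop_frame(frame):04d}.vdb"

print(loop_filename(1))    # myvolume.0001.vdb
print(loop_filename(43))   # myvolume.0001.vdb
print(loop_filename(84))   # myvolume.0042.vdb
```

Which is effectively what the stitched-out value clip file encodes, just as a longer explicit list.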

Value clip parms.PNG

USD and big files

Download hip: File:stitching.hip


'USD is amazing!' you're no doubt saying by now, 'is it good at everything?'

Well, it's very good at making big files. Really good. So good it's actually bad. USD is like alembic in that it prefers a single file as output. If you have a 100 frame animation, USD expects you to write out a single 'animation.usd' file. Fine in most cases, but it'll happily let you cache out a 500mb-per-frame hair cache for a 1000 frame shot until you run out of disk space, or crash your machine, or both.

We faced exactly this issue on the UTSALA short film last year. Our main character had 2.5 million hairs, about 170mb per frame. Most shots were about 4 seconds long and our fur caches were ok, but a couple of 15 second shots would consume all the memory we had and crash Houdini. Yuck.

File per frame

One way to fix this is to simply write out a usd file per frame. On the USD rop, enable the separate file per frame option, easy.

Usdrop sfpf.PNG

But now you have 100 usd files on disk. If you're wanting to pass this downstream to other departments, they'll need to know how to read that sequence. Maybe they will, maybe they won't. And what about motion blur which expects to be able to read previous and next frames? It's a mess. What to do?

Usd Stitch

There's a USD command line tool that can take a sequence of usd files and stitch them back into one megafile, so you've avoided the export issues with memory, and you can pass a single file downstream. Solaris has a wrapper for this, the usdstitch rop. Create one, point it at your USD sequence using $F4 in place of the frame number, give it an output file location, and you now have a single file again. Great!

Oh hang on, but that file is huge! Man this problem is tricky.

Usd Stitch Clips

Ideally we want the best of both; a single file that lighters and other departments can point at, but also a single file-per-frame so that we don't have a monolithic single file. Can we achieve this impossible dream?


That's what the usdstitchclips rop is for. Similar to how USD volumes are really a thin wrapper around a VDB sequence, you can make a thin wrapper around a USD sequence.

Create a usdstitchclips rop, point it at a USD sequence, give it an output name, do some fiddly work with the extra parameters, and you get a USD wrapper around your USD sequence. The fiddly bits are the same as the value clips in the previous tip: it needs a primpath and a name for the stitched clip. The problem with this being in a ropnet is that it can't inspect the file to give you a name, so make sure you get the names exactly right!


Note that I've heard of issues with this and motion blur; if that'll be an issue for you, you'd better do some testing!

Inline USD lop and channels

Inline usd lop2.gif

You can directly create and manipulate USD with the inline USD lop. Eg, drop this code in, you get a primitive sphere:

# Simple ball
def Xform "geo"
{
    def Sphere "Sphere"
    {
        double radius = 1
    }
}
What's fun is to exploit Houdini's behavior of evaluating hscript expressions in the UI before sending the result to the rest of the system. So here we could create a slider to set the radius. Change the code to look like this, so we have a ch call in a backtick expression:

# Simple ball
def Xform "geo"
{
    def Sphere "Sphere"
    {
        double radius = `ch('radius')`
    }
}

There's no button to create the channel slider yet, so make that manually via the gear menu, and add a float slider called radius. Finally toggle the bypass flag, and now the radius is controlled by the channel.

Usdz and iOS

Ian desktop crop sm.PNG

A friend (hi Ian!) got a 3d scan with a texture, and asked if I could help him reduce it. I figured this would be an interesting challenge, and a chance to follow in the path of Ben Skinner, who had done some fun AR tests with USD and iOS.

Basic import convert and export

First step was to import the obj. It was 900mb, more than Houdini could handle, but I could load it into Blender and immediately export as alembic. Obj is a super old format; alembic is more recent and designed to handle high polycounts, and once converted, Houdini could load it happily.

Once that was in Houdini, I could run a polyreduce and bring it down to about 20,000 polys.

I used a sopimport to bring it into Solaris, and a usd rop to export a usd. Once that was on disk I used the command line tool 'usdzip' which is part of the USD package to convert it to a usdz file.

Upload that to google drive, download from google drive to my phone, click it, and it opens automatically in AR view and.... it's enormous. Like Ian's head is the size of Mount Everest. And it's got an ugly pink and purple preview material. But it works!

Fix scale and material

Scale and rotate usd.PNG

Back in sops I appended a transform sop after the polyreduce, and set uniform scale to 0.01.

To fix the pink+purple look, Ben told me I had to add a usd preview material. In Lops I put a material library lop after the import and dove inside. I created a usdpreviewsurface material, set the basic parameters, jumped up a level, assigned it to the head, and exported. Run the usdzip -> gdrive -> phone process, and it's now the right size with a uniform gray material, but facing the wrong way. Rotating the transform sop 180 degrees fixed that.

Add a texture

Lops arkit matnet.PNG

The head scan came with a diffuse texture, time to add that too. It was massive (16k x 16k), so I used cops to reduce it to 2k and save it as a PNG, as Apple only supports PNG textures.

In the material library subnet I added a usduvtexture and filled in the path to the PNG. I thought I'd see the texture in the viewport, but nothing. Ben pointed out the network needs to bind the @uv attribute, which in Lops is done with a usdprimvarreader. Create it, set the signature to float2, set the var name to 'st', and connect its result to the st input of the usduvtexture node. Again, no result.

The last thing to do is to tell the sopimport to convert @uv to @st. Jump up, select the sopimport node, expand the 'import data' section, scroll to the bottom, and enable 'translate UV attribute to ST'. The texture now appears in the realtime viewport! (This checkbox is now enabled by default, it wasn't when I wrote this little guide.)
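The resulting preview material network looks roughly like this in usda (prim names and the texture path are invented for illustration; treat it as a hedged sketch of the UsdPreviewSurface pattern, not exact Houdini output):

```
def Material "headMat"
{
    token outputs:surface.connect = </headMat/surface.outputs:surface>

    def Shader "surface"
    {
        uniform token info:id = "UsdPreviewSurface"
        color3f inputs:diffuseColor.connect = </headMat/diffuseTex.outputs:rgb>
        token outputs:surface
    }

    def Shader "diffuseTex"
    {
        uniform token info:id = "UsdUVTexture"
        asset inputs:file = @./head_diffuse_2k.png@
        float2 inputs:st.connect = </headMat/stReader.outputs:result>
        float3 outputs:rgb
    }

    def Shader "stReader"
    {
        uniform token info:id = "UsdPrimvarReader_float2"
        token inputs:varname = "st"
        float2 outputs:result
    }
}
```

The primvar reader fetches @st, feeds it to the texture's st input, and the texture's rgb output drives the surface's diffuse colour, which is exactly the chain of nodes built above.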

Export that USD, and convert to usdz again. This time usdzip needs to be told to pack both the model and the texture, which you do with the --arkitAsset flag:

usdzip --arkitAsset ianhighrestexhead.usd ianhighrestexhead.usdz

Again send that to gdrive, then to the phone, and hey presto, textured usd model on iOS!

Bonus fun trick that I used for a twitter post was again thanks to Ben. He pointed out that Apple have a free app called Reality Composer, which lets you quickly prototype AR setups and bind USDZ assets. I loaded up the face tracking template, pulled in Ian's head and moved it off to the side, job done.


Stage manager UI toggle

Stagemanager parms.gif

I think Mark Tucker mentioned this in passing in a forum post, it's a game changer. The stage manager lop lets you bring in lots of stuff and lay it out, but it felt a little hands-off in that you couldn't see explicit transforms or names of objects.

Turns out it's there, just hidden. Click the little slider thing in the top right and you swap to a tab view where each operation you've done is displayed like the paint sop; you see explicit paths to usd files you're importing, transforms you've done, renames, super handy.

Enable realtime subdiv

Lops subdiv fix.gif

The viewport can handle realtime subdiv very nicely, but needs a bit of a kick to make it work sometimes.

  • Hover over the viewport
  • Hit d
  • Go to the geometry tab
  • Set Level of Detail to 2
  • Swap to the Karma renderer
  • Swap back, mmmm, smooth.

Inspect usd source

At any time you can right click on a lops node and choose Lop Actions -> Inspect active layer. You'll now see the usd code under the hood, a great way to get context on what's being constructed by Solaris.


A nice bonus of lops is that it comes with a lot of the pixar usd utilities. They should be implicitly available on linux and osx, while on windows it's easiest to access them from a bash prompt.

I use git bash; as long as you source the bash script to import the houdini environment, you can run usdview and all the others. At some point I'll incorporate this into my bash profile, but for now I:

  • find my houdini bin directory in windows explorer
  • right click on that bin folder, 'open git bash here'
  • source houdini_setup_bash

That's it! Now you can open a usd file with

usdview myfile.usd

Well, sort of. Usdview is called via a python wrapper, and it seems windows git bash gets confused and needs the first wrapper to exit before usdview can run. Lazy fix is to just run it in background mode with

usdview myfile.usd &

Primitive selector click vs control click

The parameter field that looks like the group field in sops lets you enter primitive names. If you click the arrow selector button you can select in the viewport, or select from the Scene Graph Tree. There are a few cases though where neither of those is right, and you'd prefer something like the pop up node lister you get with, say, an object merge sop.

If you control-click, you get a mini pop up scene graph selector as you want, neat.

Todo

  • load shotgun metadata? shot start/end? handles?
  • lops and the farm/tractor/pdg
  • what vex wrangle tricks can we do in lops?
  • scene import, pitfalls
  • cameras and lights
  • controlling render settings
  • usdskel stuff for crowds
  • usdshade, loading shader networks that exist in usd files, make overrides


Why is USD interesting if I'm not a big studio?

I'll link to this in a few places. You can read this wall of text, or watch this 24 minute summary, which has practical examples and stuff, probably better:


A rant I did on discord, in the pub, to my family, copied here and tidied up for your benefit. Nice images, practical examples etc will come later.

Short version: It lets small studios punch well above their weight.

Long version:

Big studios have lots of big things. Big farm, big teams of artists, big IT and infrastructure. All of those things are important to get big shows done, but a key factor is allowing people to solve systemic problems that aren't purely tech and aren't purely art. Pipeline TDs, department TDs, RnD, there's enough people hired and they're given enough space and time to allow a big studio to function more efficiently. Small studios generally can't afford this.

6 years ago

Take a film I worked on about 6 years ago, wall to wall photoreal cg, crowds, environments, the works. At the start of a show like that it feels like a small studio; a small team of people who each have a specialty, just experimenting and sorting things out. As the show progresses more artists are hired, and the work expands.

At a certain point the scale of the project starts to have an effect. The quantifiable stuff is fine; number of shots, number of assets per shot, get a metric of how long it takes an artist to make a certain asset or finish a shot, multiply that out to get x thousand days for a single artist, look at how much time you have left, divide one by the other, that's the number of artists you need. Oversimplifying, but that's the idea.
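
As a toy example of that back-of-envelope maths (every number below is invented purely for illustration, not from any real show):

```python
# Toy crew-size estimate; every number here is invented for illustration.
shots = 30
assets_per_shot = 50
days_per_asset = 2          # artist-days to build one asset
days_per_shot = 10          # artist-days to light/comp one shot

total_artist_days = shots * (assets_per_shot * days_per_asset + days_per_shot)
schedule_days = 120         # working days left on the calendar

artists_needed = -(-total_artist_days // schedule_days)  # ceiling division
print(total_artist_days, artists_needed)  # 3300 artist-days -> 28 artists
```

Real bidding is far messier of course (ramp-up, overlap between departments, revisions), but that's the basic shape of the calculation.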

What happened as more assets were completed and more stuff got shoved into shots was that the tools got slower and slower. So slow that it began to affect artist productivity. In a smaller studio you shrug, maybe panic, but you can't do much more than that; everyone is busy doing the 'arty' work assigned to them.

In a big studio, the TDs and RnD folk kick in. They can analyse the tools, identify bottlenecks, rewrite slow things, adjust stuff to reduce time-to-first-pixel and get to final comps faster. One of the things that really slowed us down was assembling big shots; tools 6 years ago could handle 1 asset fine, probably 10, maybe 100. But 1000, 10000, things get slow, and that's when you want an army of TDs with you to solve stuff.


Jump to now, what's changed? Machines are faster, renderers are better, cloud computing is a thing. Some tools are better, some have made things incredibly efficient. Megascans, Substance, Houdini improvements mean making individual assets and fx is much faster.

Big assembly still sucks. Maya is still miserable at handling lots of transforms, and Houdini when working in /obj is clunky and lame. I just finished watching The Lion King, and was blown away by how good the completely digital environments were. Looking at them critically, you could probably make a single element of those environments easily enough (a tuft of grass, a rock, a tree), but to assemble thousands of them into a set? Ugh, a nightmare.

A big sequence in a small studio without USD

Say you were crazy enough to do that with a team of 7: 2 modellers, 2 surfacers, a layout artist, a lighter, a comper. You have 30 shots in a Savannah to do. Run a quick breakdown: that's 7 grass models, 8 rocks, 4 trees, ground, 4 mountains, twigs, pebbles, 5 bushes. Each has 3 surfacing variations. Modellers and surfacers work away on that as fast as they can, save it all to disk. Layout artist gets started, pulls all these models into maya via references, lays them out, animates a camera. Lighter gets started, uh-oh, there's a non manifold edge somewhere that causes the render to crash.

The lighter flags it, can't tell exactly which model it is, but it's in the lower right corner of the shot. Layout artist tries to identify the asset, it's rock07v01. Modeller fixes it, saves as v02. Now what? The layout artist has to find every instance of rock07 and update it from v01 to v02. Meanwhile the lighter finds the texture used for grass03 is too high res, while the tree04 roughness map is too low res. They get kicked back to surfacing, version up, and again the layout person has to find those materials and update them in the layout file. Then director notes, more changes. Also in shot 30 the tree needs to be moved for a nicer composition. Oh, and this all now has to be moved to katana, cos maya just can't handle this anymore.

All of those things are distressingly common, and are maybe 10% of the daily churn of shots and assets. All those changes need to be updated and rolled into shots. If you're working across multiple DCCs, how do you handle this? Alembic is ok for geo, but doesn't store material definitions. It still requires a hard bake at some point; if assets get updated, someone has to open the maya scene, update, republish. Maybe you can write a python script to automate it, or a batch job on the farm. But then how do you ensure lighters are using the right file? And now the alembic is blowing out into bigger and bigger filesizes, so big that maya and katana are having problems loading it...

And so it goes. At this point you'd be wondering why you ever bothered, and surely if we're suffering through all this, others are too, and why are we all solving it alone?

Enter USD

Well it's not just you, and not just the small studios; big places have the same issues. Even Pixar. Luckily Pixar have lots of smart people, and are keen on open source, so have shared their solution, and it's USD. USD solves lots of things; let's run through how it handles the issues outlined above:

  • A usd file can point to other usd files, which can point to other usd files, which can point to other usd files. Like maya references this means you could have a file, say shot050.usd, which is actually only 5 lines of text, as internally it points to camera.usd, fx.usd, props.usd, char.usd, set.usd. If you went and looked at set.usd, it might refer to hundreds of usd files within it, so grass01.usd, tree03.usd etc, with their transforms defined for the set. Dive into grass01.usd, that'll have the polygons that define the model, but it can also have a shader graph defined in it. So usd at this point can be thought of as fancy alembic, which can reference other alembics.
  • Those references to other usd files can be hard paths on disk, like /assets/setpieces/grass/grass01_v002.usd, but they can also be abstracted paths. In our case at UTSALA we use an abstraction that points to shotgun queries, so the path looks like type='setpiece'&name='grass01'&version='latest'. When the file is loaded, the usd libraries know 'ah, I'd better ask shotgun what this means so I can get a path on disk', get that path, and load the thing. THIS IS HUGELY POWERFUL. No more relying on the poor layout artist to update versions of things. No double checking lighting renders to ensure assets are correct. No writing of code in your DCC to compare disk paths to what shotgun says should be in your scene. This awareness of versioning and disk path abstraction is built into the usd file format itself.
  • usd works cross platform. Originally for maya, then katana, then houdini, and now rapidly spreading into most DCCs. This means a lot of the difficult interop and translation between DCCs is gone.
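
The nested referencing in the first bullet can be sketched with plain python data. Every file name below is hypothetical, and real USD composition is far richer than this, but the recursive "files pointing at files pointing at files" idea looks like:

```python
# Toy model of USD-style nested referencing; every file name here is made up.
# Each 'file' is reduced to just the list of files it references.
refs = {
    "shot050.usd": ["camera.usd", "fx.usd", "props.usd", "char.usd", "set.usd"],
    "set.usd":     ["grass01.usd", "tree03.usd"],
    "camera.usd": [], "fx.usd": [], "props.usd": [], "char.usd": [],
    "grass01.usd": [],  # leaf: would hold actual geometry (and maybe shaders)
    "tree03.usd":  [],
}

def flatten(name):
    """Recursively collect every file pulled in by loading 'name'."""
    out = [name]
    for child in refs[name]:
        out.extend(flatten(child))
    return out

print(flatten("shot050.usd"))
```

So opening the 5-line shot050.usd transparently pulls in the whole tree beneath it, however deep it goes.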

But also...

  • usd comes with a standalone viewing tool, usdview. Think of it as mplay or rv, but for 3d scenes. Want to check what's in a shot? Load it in usdview; it's blazingly fast, plays buttery smooth, it's a great QC tool
  • referencing lots of usd files that in turn reference more usd files isn't just appending files together like maya referencing; you can be really clever and have 'stronger' usd files selectively update, tweak, modify 'weaker' usd files. So you could have char.usd for base character anim caches, but a later charfx.usd file could insert fur, hair, cloth caches into the hierarchy of the character setup, so lighting don't even have to know they're coming from another department.
  • usd has its own fast method to talk to render engines. Almost all the major players have announced support for this (called 'hydra delegates'), meaning you don't even need a DCC app to send usd files to a renderer; they can function like .rib, .ass, .ifd, and be rendered directly.
  • USD has built in support for lots of things you need but don't realise you need until you think about it. High performance instancing, LOD, variations, volumes, lights, geo attributes, curves, particles, crowds, motion blur, cameras, it's all in there. Pixar have been using USD for their films for a good 5 years, and USD's predecessor for many years before that, they've thought of and solved most of the use cases!

Enter Lops

This was all amazing, but still required knowledge of USD's python api to tie it all together. What didn't exist was an artist friendly interface to all this incredible power. That's what Lops/Solaris is. Sidefx have made a new network type that represents USD primitives as nodes, USD operations as nodes, and lets you do all the stuff you'd want to do.

We've been using USD and Lops for a couple of years now at UTSALA, recently we made a video covering a lot of the points above, have a look:


Compare to Clarisse and Katana

One could ask how this compares with the two apps known for solving big shots and big numbers of lighters, namely Katana and Clarisse.

Katana set out to solve the question of big shots and lots of them many years ago, and does it broadly by letting lighters import huge amounts of geometry from multiple locations, merge them together, create and modify lights, material assignments, object properties etc, then send all that to a render engine. By design it won't actually try and load geo unless you explicitly ask it, meaning lighters can work in katana quite quickly. It has good high level support for switching chunks of a network based on rules, so you might turn on a bunch of lights for shots facing characterA, vs a bunch of different lights for shots facing charB.

Katana's pro is also its con; it can feel very hands off. You're generally operating on your scene in a kind of abstracted state, making rube goldberg machines trying to catch names of objects in your shot, doing things if found, hoping that it all falls out the bottom as you'd expect. It's also a little beyond the reach of small studios, both being quite expensive and needing substantial TD work before it can even run effectively.

Clarisse tries to solve similar problems to Katana, but by being tightly coupled to its renderer is much more direct and hands on. It's faster to get going with less technical expertise, and was quickly adopted by matte painters as a way to generate massive shots with thousands of trees, buildings, stuff.

Its cons are that it's developed a reputation for being unstable, and that it isn't really designed to talk to existing renderers; you're buying into a unified lighting tool+renderer.

Both Katana and Clarisse work on the core idea that they're the final stop; 3d assets are brought into them, images come out. Also the ability to edit the scene is limited to what lighters require, and in Clarisse's case what matte painters want; you can create lights, cameras, modify materials, some object attributes, but that's it. You can't really model geometry, or do fx, or do uvs, or animate characters; anything that you'd traditionally do in maya, you do in Maya.

Compare to Lops

Lops by itself should cover most of what Clarisse and Katana do. Import big shots, create and modify lights, material assignments, object properties, send to a renderer. But being built around USD, you get all the I-can-see-all-the-geo from Clarisse, combined with the I-can-render-to-whatever-renderer from Katana.

But Lops isn't by itself, it's in Houdini! There's nodes to allow you to send stuff from Lops to other contexts in Houdini, and to go back the other way. So select a building in lops, take it over to be destroyed by an RBD solver, bring it back in. Create a standalone volume, pull that into your shot. Realise a certain model needs better uvs? Fine, bring it into sops, uv away, bring it back.

PLUS, it's not just to and from Houdini. Save your final setup as USD, send it to katana if you need to. Or back to animation. This is the U in usd, it's universal; you can bounce this back to any usd compliant dcc, and it should be able to use it.

Proceduralism for scenes

A final sell for existing Houdini folk is the difference between sops and /obj. Once you've used sops for a bit, you get comfortable with the idea of copy-to-points'ing a ton of pig heads onto points, or creating volumes from scratch, or selectively deleting faces whose normal faces directly along +Y. Yet we jump up to /obj and it's the same old manual cameras, manual lights, manual visibility states, manually dragging and dropping objects into rop object lists.

USD and Lops bring that concept of sops proceduralism to the /obj context. Bringing in caches from disk can be rule based. Make lights based on procedural things. Scatter and lay out stuff as manually or as automatically as you want. Have all that change instantly based on the shot name. Save an entire lightrig to a cache on disk, bring it back later via a HDA menu, or through shotgun. Proceduralise all the things!

State of USD and Lops late 2019

The above is the sell. What's the reality? What should you be aware of? Bullet points for me to fill in later:

  • USD is rapidly evolving. Base geometry, layering of geometry is solid. Storing shading networks in USD is relatively new, as are volumes. USD crowd support is bleeding edge. USD for realtime and mobile is very bleeding edge and changing all the time.
  • Lops as a node UI on USD is very very new. So some parts are a new thing sitting on a new thing; expect some stuff to not be fully working. Some things don't update when you expect, and need a bit of a kick to work.
  • USD terminology can be confusing. To me it feels like synonym bingo, lots of renamed stuff to avoid maya-specific or katana-specific terms; takes a little getting used to.
  • Hydra render delegate support is very new. PRman has probably the best support (it's a pixar product, go figure); the rest are all at v1 or even v0.99 support for hydra. Karma is still in beta, other stuff is in a state of flux. That said, everyone seems to agree that USD is the obvious choice moving forward, and everyone is investing heavily in supporting it.
  • USD to generate final frames is pretty new. Up until recently USD was sort of used like alembic++, in that it was ultimately brought into Maya or Katana as a file format, but then Maya or Katana's native behaviour was used to send it to the renderer. This idea of pushing USD right through to the render engine itself is pretty recent; even stuff as seemingly fundamental as defining AOVs or render quality options is very new and still being actively discussed, expect changes.
  • Lops as a katana replacement is still WIP. To be explicit about the last 2 points, if 'proper' support for renderers via Hydra is new, and support for generating final frames is new, then using Lops as a Katana replacement, whose entire reason for being is to talk to renderers and generate final frames, is pretty bleeding edge. Ironically USD and Lops are probably more foreign to Houdini users than they are to Katana users. Katana folk will find a lot of the concepts and workflows familiar, even a lot of the terminology is kind of the same, while Houdini folk will do some head scratching, with questions raised as to why this is more complicated than Rops. My take on it all is that H18 is v1; they've done the heavy lifting of getting most of the USD concepts translated to nodes, the framework is in place. The next step for both Sidefx and the community is to wrap up these nodes into HDAs and streamline the workflow, so that it's easy for both veterans and new users.
  • Lops as a tool for layout artists and pipeline folk is awesome. All the stuff that used to require loads of python, asset wrangling, runtime procedurals, effort and pain... bah, it's all gone. Just go slap some nodes down, do a happy dance.
  • USD support in realtime engines is super new. Unity got support in the last 6 months, UE4 got proper support in the last 6 days. Expect changes.
  • USD is largely cache based, not rendertime procedural based. This requires a change of thinking if you're used to render-time procedurals to grow fur, generate crowds, do things. A core principle of USD is speed, and render time procedurals screw that. USD now supports render procedurals, but Pixar are strongly advising folk to be careful if going down that path.
  • No version control out of the box. When you specify a path to a file in USD, it isn't loaded directly, but gets handled by a module called the asset resolver. This is a plugin architecture that allows you to specify file paths in different ways. USD ships with a single asset resolver, which is basically just a pass-through for files on disk; if it recognises that a path you give USD is a 'real' path on disk, it will load it. But what you really want is an asset resolver that talks to your asset management system, like shotgun. This gives you the ability mentioned earlier, to just use shotgun queries like asset name and version number, and the asset resolver will ask shotgun for the path on disk. As mentioned before this is really powerful, giving version control at the filesystem level rather than in the DCC. Unfortunately, you don't get any of this from the USD distribution or Pixar; you have to write this yourself. But hey, are you using Shotgun? Well you're in luck! The clever folk I work with at UTSALA wrote an asset resolver for shotgun, it's called Turret, it's open source, go get it! https://github.com/UTS-AnimalLogicAcademy/turret

Which renderers have Hydra delegates

Hydra is the USD module that handles sending the scene description to a renderer. The end goal is that render developers don't have to write separate translators and importers for Maya, Katana, Houdini, Mac, Windows, Linux etc, they just write a single Hydra plugin, and it will work everywhere. Similarly for any new fangled geometry formats that USD hasn't covered yet, as long as they write the plugin for that correctly, render engines should support it directly.

When renderers develop support for Hydra, that's called a Hydra delegate. Delegates can be offline renderers or realtime, GPU or CPU, support some features or all features of USD. It's handy that when you have it all running, you can swap between different renderers as easily as swapping between wireframe and solid shaded mode in Houdini. Here's a quick list of names, what they are, what they support:

  • Storm - Pixar's "fast interactive viewport-style renderer" in their words. Think of this as the realtime preview, it's the default in usdview, and good for checking animation, fur, camera layouts. Doesn't support volumes, doesn't support complex shading. Storm used to be called Hydra, which caused confusion with the Hydra module itself, hence the rename.
  • HoudiniGL - Sidefx's realtime viewport delegate, used by Lops by default. Supports volumes (as I understand it, it's the default Houdini viewport renderer ported to Hydra), most of what you're used to in houdini viewports.
  • Karma - Sidefx's offline renderer, in beta, early days. More or less an update to mantra, so think of it in those terms (vex based shading, principled materials, volumes fur etc, but ingests usd rather than ifd). Good breakdown of where Karma is relative to Mantra: https://www.sidefx.com/faq/karma/
  • Embree - Example fast raytracer from Intel. It's basically an ambient occlusion renderer with basic surface colour support and motion blur, not much else. Lops doesn't support it internally as it's more of an example implementation, and it doesn't do volumes, but it can be handy in usdview in a pinch. https://www.embree.org/
  • OSPRay - Intel's interactive (but not realtime) CPU raytracer. The followup to embree; supports more things, bigger, better, newer, but they state clearly it's not competing with Arnold and Renderman and the like, it's an intermediate quality renderer. https://www.ospray.org/
  • Renderman - Pixar's CPU renderer, supports all the things: full materials, motion blur, volumes, several different integrators. Really handy in usdview to see things exactly in context, and the debug integrators let you see wireframes on objects, or shader complexity, or a ton of other things, really useful. Amusingly there's not much info or screenshots of the renderman usd hydra delegate in action, even though we use it daily at UTSALA. Will fix this...
  • Redshift - GPU offline renderer, early days, but already pretty fully featured: https://redshiftrender.cgrecord.net/2019/11/houdini-18-redshift-hydra-delegate.html
  • 3delight - CPU offline renderer, early days, again remarkably fully featured, incredible time to first pixel, remarkable ability to load massive datasets quickly: https://gitlab.com/3Delight/HydraNSI/-/wikis/Videos
  • Arnold - CPU offline renderer, early days, but seems to be supporting most of what you'd want already: https://github.com/Autodesk/arnold-usd
  • Prorender - AMD's GPU offline renderer, early days: https://twitter.com/bhsavery/status/1028318614003232768
  • Octane - GPU offline renderer, beta: https://twitter.com/otoy/status/1123053790024716288?lang=en
  • Vray - CPU offline renderer, rumoured Hydra delegate, but no proof online that I could find.