HoudiniLops

Lops quickstart

The simplest take on Lops is that it's a procedural hierarchy editor. At the school where I teach we'll be using Lops this year for doing layout, creating sets, all that stuff, so this quickstart is heavily focused on that. From that perspective there'll likely be a lot of 'ahhhh, is that all there is to this?' moments, as this side of Lops is relatively straightforward. Lops and USD are capable of lots of other things, will cover those when I get to them!

Some context: Houdini is 99% flat files; wherever possible Houdini artists are used to taking carefully constructed hierarchies for characters, sets etc, throwing them all away, and just treating the geo as a big garbage bag of polygons. Fine for the most part, but at some point you have to start dealing with hierarchies again.

You can usually get away with it by making sure any @name and @path attribs you have on the way in still exist on the way out, but if you have to actually manipulate hierarchies, move stuff to live under different parents, translate a parent and have the children follow, that's traditionally been hard.

Lops is the answer to these issues (and a few other issues too). It's a way to bring a Houdini procedural mindset to manipulating scene hierarchies. Naming of top level folders, putting things in right subfolders of a hierarchy, reparenting this to that, editing specific transforms etc, Lops has you covered. Hooray!

To get started, make sure to set your desktop to 'Solaris', so you can look at the scene graph tree and see what's going on with your object hierarchy. This should drop you to a new context, so in addition to obj, shop, mat etc, you have a new one, stage.

Credit where it's due, Ben Skinner did most of the work here, I just wrote it down. Ben developed a lot of the USD stuff for our pipeline at UTSALA in 2018, then was first to dive in and play with Lops and PDG in 2019, so many thanks to him. He has his own website of more coder focused tips at http://vochsel.com/wiki/ , and is now in Toronto working at Tangent Animation. If you see him at a Toronto Houdini user group, make sure to buy him a beer.

Right, let's go!

Define a top level folder

Selection 132.png

Create a primitive lop. Look in the scene graph tree (SGT), you can see you have a tree with 2 things, HoudiniLayerInfo and primitive1. The parameters for the primitive lop set its primitive path to /$OS. In other words it's at the top of the hierarchy, and $OS means it's named after the node itself. Rename the node from 'primitive1' to 'set', and you'll see in the SGT it's been renamed to /set.

Add a sphere to the scene

Create a sphere lop. View it, you can see it's made you a sphere, and its location in the SGT is /sphere1.

Merge the sphere and set

You can do the houdini thing, put down a merge node, and connect the set lop and the sphere1 lop to it. Look in the SGT, they're now both in the hierarchy.

Merge the sphere and the set, Lops style

Selection 133.png

Merging is fine, but you can also connect nodes inline, like Katana. Delete the merge, wire the sphere after the primitive. Look in the SGT, you've done the same as the merge but with one less node.

Remember, lops aren't sops! Sops are about manipulating geometry, lops are about manipulating hierarchies. Lops nodes can carry through what's in the previous node, and add their own stuff. Takes some getting used to, but you quickly get the hang of it.

Merge the sphere and make it a child of the set

Sphere parent.gif

The set primitive and the sphere are sitting side-by-side in the SGT, but we probably want the sphere to be a child of the set. A manual way for the moment is just to set the path for sphere1 to where we want it to go. Select the sphere lop, change its primitive path from /$OS to /set/$OS. Look in the SGT, it's now a child of /set.
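
For the curious, here's roughly what those two lops are authoring, sketched with the raw USD python API (the pxr module that ships with Houdini). This isn't literally what the nodes run, just the equivalent result, assuming an Xform type for the set (one of the primitive lop's type options):

  from pxr import Usd, UsdGeom

  stage = Usd.Stage.CreateInMemory()

  # the primitive lop renamed to 'set': a prim at the top of the hierarchy
  UsdGeom.Xform.Define(stage, '/set')

  # the sphere lop with its primitive path set to /set/$OS
  UsdGeom.Sphere.Define(stage, '/set/sphere1')

  # print the resulting usd layer as text
  print(stage.GetRootLayer().ExportToString())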

Bring in a pig from sops

It's unlikely you'll just have a scene full of spheres and nulls. Jump over to sops and make a pig, and append a null named OUT_PIG so you can find it easily. Get back to the /stage network. Append a sop import, set the sop path to find your pig. Look at the SGT, ugh, ugly name, it's called sopimport1 over there. Rename your lop to 'pig'. Now it has a nice name, but a bad location, it should be under /set. Change the Import Path Prefix to /set/$OS.
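
If you're scripting this, the same setup in hou python might look like the sketch below. Treat the node and parameter names ('sopimport', 'soppath', 'pathprefix') as assumptions from an H18 build; double check them with opname/parm lists on your version:

  import hou

  stage = hou.node('/stage')

  # create a sop import lop and point it at the pig sop (path hypothetical)
  pig = stage.createNode('sopimport', 'pig')
  pig.parm('soppath').set('/obj/geo1/OUT_PIG')

  # put it under /set, named after the node
  pig.parm('pathprefix').set('/set/$OS')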

Move the pig

Can't just have the pig at origin, that's silly. Select the pig in the SGT, and choose the translate tool from the viewport left-hand options. Drag it away so it's no longer blocked by the sphere. Now look in the node view, and see that it's created an edit node for you. This works like an edit node in sops, so you can select the sphere, move that, back to the pig, move that, etc; all these general changes are stored on the single node. Works, but sometimes you'll want more explicit control. I wonder if lops has something like the transform sop?

Move the pig with a transform lop

Of course it does. Append a transform lop, and at the top where you'd expect to find a group, there's a field expecting the name of an SGT location. Clear the expression and start typing /set/pig, you'll see it has tab completion like usual groups. You can now move stuff more explicitly. That's nice. Also note you can move /set, and the children move as expected. That's a trick you can't easily do in vanilla houdini.
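
That parent-moves-children behaviour is just USD's inherited transform stack doing its thing. A tiny pxr sketch of the same idea (file name hypothetical):

  from pxr import Usd, UsdGeom, Gf

  stage = Usd.Stage.Open('set.usd')

  # translate /set; /set/pig and every other child inherits the move
  xf = UsdGeom.XformCommonAPI(stage.GetPrimAtPath('/set'))
  xf.SetTranslate(Gf.Vec3d(5, 0, 0))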

Edit lots of things with a stage manager

Say you have lots of usd files on disk, and you need to do lots of making folders, parenting stuff, getting initial layouts correct. This is easy in Maya with its Outliner cos you can just directly grab groups, rename, do things, but the SGT is view only. You don't wanna go use Maya do you?

No you don't. Append a stage manager instead. The parameter pane now looks like a simple version of the SGT, but this is fully editable. R.click, make folders, double click stuff to rename things, shift click and drag and drop stuff, go craaaaazy. Further, click the little folder icon, it brings up a file browser, so you can find all those usd files on disk, drag them into the graph view, or even into the viewport. Click the little transform icon next to things to move them directly from this one node. It's amazing.

Fancier combining with graft and reference

Say you had a castle set, and had gone through with the stage manager and defined locations for moat, drawbridge, castle, courtyard etc. Meanwhile you had another chain of lops nodes to make a bedroom. Once you have that whole chain, how would you insert that bedroom scene graph into the correct location of the bigger castle scene graph?

A graft is the simple way. It takes 2 inputs, and reparents the right input to an SGT location in the left input. By default it has an expression to find the last defined primitive from the left input, and parents all the stuff from the right input under that primitive. You can override that and put it wherever you want, but that's the base idea.

A reference is a fancy graft. As well as 'parent all the right inputs to somewhere on the left input', it can also directly load usd files from disk, and parent them to a location (this is its default behavior).

Reference vs payload

The reference lop has a few modes, which alternate between 'reference' and 'payload'. A reference is just loaded, and that's that. A payload gives you options similar to a file sop; you can have it in a delayed load mode, or just bounding box, or full geo. Wherever possible (and wherever it makes sense), use payload.
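
In raw USD python the two are nearly identical to author; the difference shows up on load, where payloads can be deferred. A minimal sketch, file names hypothetical:

  from pxr import Usd

  stage = Usd.Stage.CreateInMemory()

  # reference: always composed when a consumer opens the stage
  bedroom = stage.DefinePrim('/castle/bedroom')
  bedroom.GetReferences().AddReference('bedroom.usd')

  # payload: same idea, but a consumer can open the stage with
  # Usd.Stage.LoadNone and pull prims in on demand (the delayed load behaviour)
  moat = stage.DefinePrim('/castle/moat')
  moat.GetPayloads().AddPayload('moat.usd')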

Materials

Think about what needs to be done here, and this becomes more intuitive. We need to define materials, pull those materials into the SGT, and finally assign those materials to some geometry.

A material library does all this. Append one, by default it looks for materials inside itself. Dive inside, you're now in a mat context.

  • Create a few principled materials, give them nice names, jump up again.
  • Click the 'auto-fill materials' button, look at what it's done; it's made a /materials folder in the SGT, and put all the materials under it. In the parameters pane it will have made a multilister entry for each material, each with a 'geometry path' parameter.
  • You can drag geometry from the SGT into this parameter, or use the tab completion stuff, or use wildcards.


The material assignment will appear in the viewport if the viewport understands your material. The binding of a material to geometry is tagged on the primitive. Select a primitive in the SGT that has a material assigned, and look in the scene graph details pane. There's a material:binding relationship linking it to the chosen material.
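
That binding is easy to see (or author) in python too; it's just a relationship on the prim. A sketch, paths hypothetical:

  from pxr import Usd, UsdShade

  stage = Usd.Stage.Open('shot.usd')
  mat = UsdShade.Material(stage.GetPrimAtPath('/materials/shiny'))

  # authors the material:binding relationship you see in the details pane
  prim = stage.GetPrimAtPath('/set/sphere1')
  UsdShade.MaterialBindingAPI(prim).Bind(mat)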

Variants

For our current project we'll be in a forest fire. Some trees will be on fire, others won't. I remembered a siggraph talk by MPC on The Jungle Book fire sequence, where layout had fire assets and props they could put in the set; it seemed like a good thing to try in Lops.

To be specific, I would like a tree asset with an option to have the tree on fire or not. Variants are the USD mechanism for this.

The Lops skin on top of variants is kind of a fancy merge, kind of a fancy switch.

First get your geo ready. I've sop imported a tree, assigned a material, and used a graft to put it all under a nice top level SGT transform '/testTree01':

Tree variant prep1.gif

I did a quick pyro sim in sops, made it loop (the sidefx labs loop sop is awesome), wrote a vdb sequence to disk. I imported that with a volume lop, assigned a material, grafted that under /testTree01 as well:

Tree variant prep flame.gif

But we don't want to choose between tree or flame, we want to choose between tree, and tree+flame. No big deal, let's just merge the tree and the flame to create our tree+flame, ready to feed to our variant setup:

Tree and fire merge.gif

Now the variant magic. We have a tree and a tree+flame, and connect them to a variant lop. I create an 'add variants to new primitive' lop, and connect the tree and tree+flame to the second input.

When this is all done, variants are presented as a drop-down selection, so we need to define a name for the drop-down itself, names for each of the options within it, and what thing in the SGT this is all applying to. Here I'm telling it the thing getting variants (the primitive) is /testTree01, and the name of the drop-down will be 'fire_toggle'. To name the options within the drop-down, double click and rename in the second column of the multilister:

Variant setup.gif

Now we can select which one to use with a 'set variant' lop. Append, choose the variant option ('fire_toggle'), choose a variant, see the SGT and the viewport update to flip between fire and no fire. Neat!

Variant set.gif

Oh wait, the thing is called something silly (or was when I set it up); the variant lop uses /$OS as the path for the new primitive. Silly node. Change that to /testTree01, and it all works as expected.

This can now be duplicated (try a duplicate lop, the equivalent of a copy and transform sop), and you can set variants on a subset of the trees. It's pretty cool.

Why do all this when we could've just used a switch? Remember, when we save this USD asset out to disk, all that variant magic is now inside it. So we could choose variants here in Houdini, or in USDview, or in Maya, or in Katana, or in any package that supports USD. If we get to final lighting and the lighter realises they need more trees on fire, they can do it, and it won't involve a kickback to fx or layout.
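
Because the variant set is baked into the file, any USD-aware tool or script can flip it later. Here's a minimal pxr sketch of authoring and selecting the same variant set (names match the setup above, file name hypothetical):

  from pxr import Usd

  stage = Usd.Stage.Open('tree.usd')
  tree = stage.GetPrimAtPath('/testTree01')

  # author the drop-down ('fire_toggle') and its options
  vset = tree.GetVariantSets().AddVariantSet('fire_toggle')
  vset.AddVariant('no_fire')
  vset.AddVariant('fire')

  # what the set variant lop (or a lighter in another dcc) does later
  vset.SetVariantSelection('fire')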

A cleaner way to prepare those variants

Ben Skinner pointed out the merge isn't necessary, I could just chain 2 grafts. He's right of course.

Variant cleaner prep.gif

Instancer for trees on a groundplane

Lops instancer.png

Download hip: File:lops_forest.hipnc

  • instancer lop
  • left input is for the scene stream that'll be passed through
  • right input is for the tree
  • internal sops is where you define the scatter locations

So:

  1. sop import a tree, connect that to the right input
  2. on instancer, 'prototypes' is the things to be instanced. So set prototype source to 'second input'
  3. on instancer, 'target points' is the point locations. Default mode is 'internal sop', we'll use that
  4. dive inside, create a groundplane (or object merge in something), append a scatter
  5. done!


It's not a copy sop with left input-instanced-onto-right. We're in katana/usd style magical land now, we might have already set up a bunch of stuff for the set, characters, fx, and then adding onto this stream will be our forest.

There's some tidying up to do here though, names and stuff should be better. The USD convention is for the objects you're instancing to go in a prototypes folder, which the instancer does for you. Generally name things as nicely as possible. Starting this from scratch I've made a primitive lop named set, to get a /set at the top of my hierarchy. I've named the instancer 'forest', and put it under /set/$OS. The tree via the sop import is named 'tree' and its primpath is /$OS. When the instancer grabs it, it gets moved underneath to be at /set/forest/prototypes/tree.
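
Under the hood the instancer lop is authoring a UsdGeomPointInstancer, which is roughly this in pxr python (a sketch; hardcoded positions standing in for the scattered points):

  from pxr import Usd, UsdGeom, Gf

  stage = Usd.Stage.CreateInMemory()
  forest = UsdGeom.PointInstancer.Define(stage, '/set/forest')

  # the prototypes folder holds the things being instanced
  tree = UsdGeom.Xform.Define(stage, '/set/forest/prototypes/tree')
  forest.CreatePrototypesRel().AddTarget(tree.GetPath())

  # one entry per instance: which prototype it uses, and where it sits
  forest.CreateProtoIndicesAttr([0, 0, 0])
  forest.CreatePositionsAttr([Gf.Vec3f(0, 0, 0), Gf.Vec3f(2, 0, 1), Gf.Vec3f(-3, 0, 2)])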

Usdz and iOS

Ian desktop crop sm.PNG

A friend (hi Ian!) got a 3d scan with a texture, asked if I could help him reduce it. I figured this would be an interesting challenge, and a chance to follow in the path of Ben Skinner who had done some fun AR tests with USD and iOS.

Basic import convert and export

First step was to import the obj. It was 900mb, more than Houdini could handle, but I could load it into Blender and immediately export as alembic. Obj is a super old format, alembic is more recent and designed to handle high polycounts; once converted, Houdini could load it happily.

Once that was in Houdini, I could run a polyreduce and bring it down to about 20,000 polys.

I used a sopimport to bring it into Solaris, and a usd rop to export a usd. Once that was on disk I used the command line tool 'usdzip' which is part of the USD package to convert it to a usdz file.
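
If memory serves, the basic usdzip invocation is the output archive first, then the files to pack (file names made up here; check usdzip --help):

usdzip ianhead.usdz ianhead.usd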

Upload that to google drive, download from google drive to my phone, click it, and it opens automatically in AR view and... it's enormous. Like Ian's head is the size of Mount Everest. And it's got an ugly pink and purple preview material. But it works!

Fix scale and material

Scale and rotate usd.PNG

Back in sops I appended a transform sop after the polyreduce, and set uniform scale to 0.01.

To fix the pink+purple look Ben told me I had to add a usd preview material. In Lops I put a material library lop after the import and dove inside. I created a usdpreviewsurface material, set the basic parameters, jumped up a level, assigned it to the head, exported. Run the usdzip -> gdrive -> phone process: it's now the right size and a uniform gray material, but facing the wrong way. Rotating the transform sop 180 degrees fixed it.

Add a texture

Lops arkit matnet.PNG

The head scan came with a diffuse texture, time to add that too. It was massive (16k x 16k), so I used cops to reduce it to 2k, and save as a PNG, as Apple only supports PNG textures.

In the material library subnet I added a usduvtexture and filled in the path to the PNG. I thought I'd see the texture in the viewport, but nothing. Ben pointed out the network needs to bind the @uv attributes; in Lops that's done with a usdprimvarreader. Create it, set signature to float2, var name 'st', connect result to the st input of the usduvtexture node. Again, no result.

Last thing to do is to tell the sopimport to convert @uv to @st. Jump up, select the sopimport node, expand the 'import data' section, scroll to the bottom, enable 'translate UV attribute to ST'. The texture now appeared in the realtime viewport!
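
For reference, the same preview material network authored with the pxr API. The shader ids and the 'st'/'result'/'rgb'/'surface' port names are the documented UsdPreviewSurface/UsdUVTexture ones; everything else (prim paths, file names) is made up:

  from pxr import Usd, UsdShade, Sdf

  stage = Usd.Stage.Open('head.usd')
  mat = UsdShade.Material.Define(stage, '/materials/headMat')

  # the surface shader
  surf = UsdShade.Shader.Define(stage, '/materials/headMat/surface')
  surf.CreateIdAttr('UsdPreviewSurface')

  # the texture, pointing at the 2k png
  tex = UsdShade.Shader.Define(stage, '/materials/headMat/diffuseTex')
  tex.CreateIdAttr('UsdUVTexture')
  tex.CreateInput('file', Sdf.ValueTypeNames.Asset).Set('head_diffuse_2k.png')

  # the primvar reader pulls @st off the geo and feeds it to the texture
  st = UsdShade.Shader.Define(stage, '/materials/headMat/stReader')
  st.CreateIdAttr('UsdPrimvarReader_float2')
  st.CreateInput('varname', Sdf.ValueTypeNames.Token).Set('st')
  tex.CreateInput('st', Sdf.ValueTypeNames.Float2).ConnectToSource(st.CreateOutput('result', Sdf.ValueTypeNames.Float2))

  # wire texture -> surface -> material output
  surf.CreateInput('diffuseColor', Sdf.ValueTypeNames.Color3f).ConnectToSource(tex.CreateOutput('rgb', Sdf.ValueTypeNames.Float3))
  mat.CreateSurfaceOutput().ConnectToSource(surf.CreateOutput('surface', Sdf.ValueTypeNames.Token))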

Export that USD, and convert to usdz again. This time usdzip needs to be told to pack both the model and the texture, which you do with the --arkitAsset flag:

usdzip --arkitAsset ianhighrestexhead.usd ianhighrestexhead.usdz

Again send that to gdrive, that to the phone, hey presto, textured usd model on iOS!

Bonus fun trick that I used for a twitter post was again thanks to Ben. He pointed out that Apple have a free app called Reality Composer, which lets you quickly prototype AR setups and bind USDZ assets. Loading up the face tracking template, pulled in Ian's head and moved it off to the side, job done.

Todo

  • How to use usd clips to loop that 40 frame vdb sequence -- got an answer, need to write this up
  • load shotgun metadata? shot start/end? handles?
  • lops and the farm/tractor/pdg
  • Confirm rop usd outputs are going to the right places
  • what vex wrangle tricks can we do in lops?
  • scene import, pitfalls
  • cameras and lights
  • controlling render settings
  • usdskel stuff for crowds
  • usdshade, loading shader networks that exist in usd files, make overrides

Musings

Why is USD interesting if I'm not a big studio?

A rant I did on discord, in the pub, to my family, copied here and tidied up for your benefit. Nice images, practical examples etc will come later.

Short version: It lets small studios punch well above their weight.

Long version:

Big studios have lots of big things. Big farm, big teams of artists, big IT and infrastructure. All of those things are important to get big shows done, but a key factor is allowing people to solve systemic problems that aren't purely tech and aren't purely art. Pipeline TDs, department TDs, RnD, there's enough people hired and they're given enough space and time to allow a big studio to function more efficiently. Small studios generally can't afford this.

6 years ago

Take a film I worked on about 6 years ago, wall to wall photoreal cg, crowds, environments, the works. In the start of a show like that it feels like a small studio, a small team of people who each have a specialty, just experimenting and sorting things out. As the show progresses more artists are hired, the work expands.

At a certain point the scale of the project starts to have an effect. The quantifiable stuff is fine; number of shots, number of assets in shot, get a metric of how long it takes an artist to make a certain asset or finish a shot, multiply that out to get x thousand days for a single artist, look at how much time you have left, divide one by the other, that's the number of artists you need. Oversimplifying, but that's the idea.

What happened as more assets were completed, more stuff got shoved into shots, was the tools got slower and slower. So slow that it began to affect artist productivity. In a smaller studio you shrug, maybe panic, but you can't do much more than that, everyone is busy doing the 'arty' work assigned to them.

In a big studio, the TDs and RnD folk kick in. They can analyse the tools, identify bottlenecks, rewrite slow things, adjust stuff to get to first pixel more quickly, to final comps faster. One of the things that really slowed us down was assembling big shots; tools 6 years ago could handle 1 asset fine, probably 10, maybe 100. But 1000, 10000, things get slow, and that's when you want an army of TDs with you to solve stuff.

Now

Jump to now, what's changed? Machines are faster, renderers are better, cloud computing is a thing. Some tools are better, some have made things incredibly efficient. Megascans, Substance, Houdini improvements, means making individual assets and fx is much faster.

Big assembly still sucks. Maya is still miserable at handling lots of transforms, Houdini when working in /obj is clunky and lame. I just finished watching The Lion King, and was blown away by how good the completely digital environments were. Looking at them critically, you could probably make a single element of those environments easily enough (a tuft of grass, a rock, a tree), but to assemble thousands of them into a set, ugh, a nightmare.

A big sequence in a small studio without USD

Say you were crazy enough to do that with a team of 7: 2 modellers, 2 surfacers, a layout artist, a lighter, a comper. You have 30 shots in a Savannah to do. Run a quick breakdown, that's 7 grass models, 8 rocks, 4 trees, ground, 4 mountains, twigs, pebbles, 5 bushes. Each has 3 surfacing variations. Modellers and surfacers work away on that as fast as they can, save it all on disk. Layout artist gets started, pulls all these models into maya via references, lays them out, animates a camera. Lighter gets started, uh-oh, there's a non-manifold edge somewhere that causes the render to crash.

The lighter flags it, can't tell exactly which model it is, but it's in the lower right corner of the shot. Layout artist tries to identify the asset, it's rock07v01. Modeller fixes it, saves as v02. Now what? The layout artist has to find every instance of rock07, and update it from v01 to v02. Meanwhile the lighter finds the texture used for grass03 is too high res, while tree04 roughnessmap is too low res. They get kicked back to surfacing, version up, again layout person has to find those materials and update in the layout file. Then director notes, more changes. Also in shot 30 the tree needs to be moved for a nicer composition. Oh, and this all now has to be moved to katana, cos maya just can't handle this anymore.

All of those things are distressingly common, and are maybe 10% of the daily churn of shots and assets. All those changes need to be updated, rolled into shots. If you're working across multiple DCCs, how do you handle this? Alembic is ok for geo, but doesn't store material definitions. It still requires a hard bake at some point; if assets get updated, someone has to open the maya scene, update, republish. Maybe you can write a python script to automate it, or a batch job on the farm. But then how do you ensure lighters are using the right file? And now the alembic is blowing out into bigger and bigger filesizes, so big that maya and katana are having problems loading it...

And so it goes. At this point you'd be wondering why you ever bothered, and surely if we're suffering through all this, others are too, and why are we all solving it alone?

Enter USD

Well it's not just you, and not just the small studios, big places have the same issues. Even Pixar. Luckily Pixar have lots of smart people, and are keen on open source, so have shared their solution, and it's USD. USD solves lots of things, let's run through how it handles the issues outlined above:

  • A usd file can point to other usd files, which can point to other usd files, which can point to other usd files. Like maya references, this means you could have a file, say shot050.usd, which is actually only 5 lines of text, as internally it points to camera.usd, fx.usd, props.usd, char.usd, set.usd. If you went and looked at set.usd, it might refer to hundreds of usd files within it, so grass01.usd, tree03.usd etc, with their transforms defined for the set. Dive into grass01.usd, that'll have the polygons that define the model, but it can also have a shader graph defined in it. So usd at this point can be thought of as fancy alembic, which can reference other alembics (there's a small python sketch of this after this list).
  • Those references to other usd files, they can be hard paths on disk, so /assets/setpieces/grass/grass01_v002.usd, but they can also be abstracted paths. In our case at UTSALA we use an abstraction that points to shotgun queries, so the path looks like type='setpiece'&name='grass01'&version='latest'. When the file is loaded, the usd libraries know 'ah, I'd better ask shotgun what this means so I can get a path on disk', get that path, load the thing. THIS IS HUGELY POWERFUL. No more relying on the poor layout artist to update versions of things. No double checking lighting renders to ensure assets are correct. No writing of code in your DCC to compare disk paths to what shotgun says should be in your scene. This awareness of version and disk path abstraction is built into the usd file format itself.
  • usd works across DCCs: originally maya, then katana, then houdini, and now it's rapidly spreading into most others. This means a lot of the difficult interop and translation between DCCs is gone.
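
That first bullet is easy to demo; here's a pxr sketch that writes the 5-line shot050.usd described above (all file names hypothetical):

  from pxr import Usd

  # build the shot file: a handful of prims, each referencing another usd file
  stage = Usd.Stage.CreateNew('shot050.usd')
  for name in ['camera', 'fx', 'props', 'char', 'set']:
      prim = stage.DefinePrim('/' + name)
      prim.GetReferences().AddReference(name + '.usd')
  stage.Save()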

But also...

  • usd comes with a standalone viewing tool, usdview. Think of it as mplay or rv, but for 3d scenes. Want to check what's in a shot? Load it in usdview, it's blazingly fast, plays buttery smooth, it's a great QC tool
  • referencing lots of usd files that in turn reference more usd files isn't just appending files together like maya referencing; you can be really clever and have 'stronger' usd files selectively update, tweak, modify 'weaker' usd files. So you could have char.usd for base character anim caches, but a later charfx.usd file could insert fur, hair, cloth caches into the hierarchy of the character setup, so lighting don't even have to know they're coming from another department.
  • usd has its own fast method to talk to render engines. Almost all the major players have announced support for this (called 'hydra delegates'), meaning you don't even need a DCC app to send usd files to a renderer; they can function like .rib, .ass, .ifd, and be rendered directly.
  • USD has built in support for lots of things you need but don't realise you need until you think about it. High performance instancing, LOD, variations, volumes, lights, geo attributes, curves, particles, crowds, motion blur, cameras, it's all in there. Pixar have been using USD for their films for a good 5 years, and USD's predecessor for many years before that, they've thought of and solved most of the use cases!

Enter Lops

This was all amazing, but still required knowledge of USD's python api to tie it all together. What didn't exist was an artist friendly interface to all this incredible power. That's what Lops/Solaris is. Sidefx have made a new network type that represents USD primitives as nodes, USD operations as nodes, and lets you do all the stuff you'd want to do.

Compare to Clarisse and Katana

One could question how this compares with the two apps known for solving big shots and big numbers of lighters, namely Katana and Clarisse.

Katana set out to solve the question of big shots and lots of them many years ago, and does it broadly by letting lighters import huge amounts of geometry from multiple locations, merge them together, create and modify lights, material assignments, object properties etc, then send all that to a render engine. By design it won't actually try and load geo unless you explicitly ask it, meaning lighters can work in katana quite quickly. It has good high level support for switching chunks of a network based on rules, so you might turn on a bunch of lights for shots facing characterA, vs a bunch of different lights for shots facing charB.

Katana's pro is also its con; it can feel very hands off, you're generally operating on your scene in a kind of abstracted state, making rube goldberg machines trying to catch names of objects in your shot, do things if found, hope that it all falls out the bottom as you'd expect. It's also a little beyond the reach of small studios, both being quite expensive, and needing substantial TD work before it can even run effectively.

Clarisse tries to solve similar problems to Katana, but by being tightly coupled to its renderer is much more direct and hands on. It's faster to get going with less technical expertise, and was quickly adopted by matte painters as a way to generate massive shots with thousands of trees, buildings, stuff.

Its cons are that it's developed a reputation for being unstable, and that it isn't really designed to talk to existing renderers; you're buying into a unified lighting tool+renderer.

Both Katana and Clarisse work on the core idea that they're the final stop; 3d assets are brought into them, images come out. Also the ability to edit the scene is limited to what lighters require, and in Clarisse's case what matte painters want; you can create lights, cameras, modify materials, some object attributes, but that's it. You can't really model geometry, or do fx, or do uvs, or animate characters; anything that you'd traditionally do in Maya, you still do in Maya.

Compare to Lops

Lops by itself should cover most of what Clarisse and Katana do. Import big shots, create and modify lights, material assignments, object properties, send to a renderer. But being built around USD, you get all the I-can-see-all-the-geo from Clarisse, combined with the I-can-render-to-whatever-renderer from Katana.

But Lops isn't by itself, it's in Houdini! There's nodes to allow you to send stuff from Lops to other contexts in Houdini, and to go back the other way. So select a building in lops, take it over to be destroyed by an RBD solver, bring it back in. Create a standalone volume, pull that into your shot. Realise a certain model needs better uvs, fine, bring it into sops, uv away, bring it back.

PLUS, it's not just to and from Houdini. Save your final setup as USD, send it to katana if you need to. Or back to animation. This is the U in usd, it's universal, you can bounce this back to any usd compliant dcc, it should be able to use it.

Proceduralism for scenes

A final sell for existing Houdini folk is the difference between sops and /obj. Once you've used sops for a bit, you get comfortable with the idea of copying a ton of pig heads to points, or creating volumes from scratch, or selectively deleting faces whose normal points directly along +Y. Yet we jump up to /obj and it's the same old manual cameras, manual lights, manual visibility states, manually dragging and dropping objects into rop object lists.

USD and Lops brings that concept of sops proceduralism to /obj contexts. Bringing in caches from disk can be rule based. Make lights based on procedural things. Scatter and layout stuff as manually or as automatically as you want. Have all that stuff change instantly based on the shot name. Save an entire lightrig to a cache on disk, bring it back later via a HDA menu, or through shotgun. Proceduralise all the things!

State of USD and Lops late 2019

The above is the sell. What's the reality? What should you be aware of? Bullet points for me to fill in later:

  • USD is rapidly evolving. Base geometry, layering of geometry is solid. Storing shading networks in USD is relatively new, as are volumes. USD crowd support is bleeding edge. USD for realtime and mobile is very bleeding edge and changing all the time.
  • Lops as a node UI on USD is very very new. So some parts are a new thing sitting on a new thing, expect some stuff to not be fully working. Some things don't update when you expect, need a bit of a kick to work.
  • USD terminology can be confusing. To me it feels like synonym bingo, lots of stuff to avoid maya specific or katana specific things, takes a little getting used to.
  • Hydra render delegate support is very new. PRman has probably the best support (it's a pixar product, go figure), the rest are all at v1 or even v0.99 support for hydra. Karma is still in beta, other stuff is in a state of flux. That said, everyone seems to agree that USD is the obvious choice moving forward, and are investing heavily in supporting it.
  • USD to generate final frames is pretty new. Up until recently USD was sort of used like alembic++, in that it was ultimately brought into Maya or Katana as a file format, but sending it to the renderer would use Maya or Katana native behavior. This idea of pushing USD right through to the render engine itself is pretty recent; even stuff as seemingly fundamental as defining AOVs or render quality options is very new and still being actively discussed, expect changes.
  • Lops as a katana replacement is still WIP. To be explicit about the last 2 points, if 'proper' support for renderers via Hydra is new, and support for generating final frames is new, then using Lops as a Katana replacement, whose entire reason for being is to talk to renderers and generate final frames, is pretty bleeding edge. Ironically USD and Lops is probably more foreign to Houdini users than it is to Katana users. Katana folk will find a lot of the concepts and workflows familiar, even a lot of the terminology is kind of the same, while Houdini folk will have some head scratching, questions raised as to why this is more complicated than Rops. My take on it all is that H18 is v1, they've done the heavy lifting of getting most of the USD concepts translated to nodes, the framework is in place. Next steps from both Sidefx and the community is to wrap up these nodes into HDAs, streamline the workflow, so that it's easy for both veterans and new users.
  • Lops as a tool for layout artists and pipeline folk is awesome. All the stuff that used to require loads of python, asset wrangling, runtime procedurals, effort and pain, bah, its all gone. Just go slap some nodes down, do a happy dance.
  • USD support in realtime engines is super new. Unity got support in the last 6 months, UE4 got proper support in the last 6 days. Expect changes.
  • USD is largely cache based, not rendertime procedural based. Requires some changes of thinking; if you're used to render-time procedurals to grow fur, generate crowds, do things, you'll need to adjust. A core principle of USD is speed, and render time procedurals screw that. USD now supports render procedurals, but Pixar are strongly advising folk to be careful if going down that path.
  • No version control out of the box. When you specify a path to a file in USD, it isn't loaded directly, but gets handled by a module called the asset resolver. This is a plugin architecture to allow you to specify file paths in different ways. USD ships with a single asset resolver, which is basically just a pass-through for files on disk; if it recognises a path you give USD is a 'real' path on disk, it will load it. But what you really want is an asset resolver that talks to your asset management system, like shotgun. This gives you the ability mentioned earlier, to just use shotgun queries like asset name and version number, and the asset resolver will ask shotgun for the path on disk. As mentioned before this is really powerful, giving version control at the filesystem level rather than in the DCC. Unfortunately, you don't get any of this from the USD distribution or Pixar, you have to write this yourself. But hey, are you using Shotgun? Well you're in luck! The clever folk I work with at UTSALA wrote an asset resolver for shotgun, it's called Turret, it's open source, go get it! https://github.com/UTS-AnimalLogicAcademy/turret

Which renderers have Hydra delegates

Hydra is the USD module that handles sending the scene description to a renderer. The end goal is that render developers don't have to write separate translators and importers for Maya, Katana, Houdini, Mac, Windows, Linux etc; they just write a single Hydra plugin, and it will work everywhere. Similarly for any newfangled geometry formats that USD hasn't covered yet: as long as the plugin handles them correctly, render engines should support them directly.

When renderers develop support for Hydra, that's called a Hydra delegate. Delegates can be offline renderers or realtime, GPU or CPU, support some features or all features of USD. It's handy that when you have it all running, you can swap between different renderers as easily as swapping between wireframe and solid shaded mode in Houdini. Here's a quick list of names, what they are, what they support:

  • Storm - Pixar's "fast interactive viewport-style renderer". Think of this as the realtime preview, it's the default in usdview, and good for checking animation, fur, camera layouts. Doesn't support volumes, doesn't support complex shading. Storm used to be called Hydra, which caused confusion with the Hydra module itself, hence the rename.
  • HoudiniGL - Sidefx's realtime viewport delegate, used by Lops by default. Supports volumes (as I understand it, it's the default Houdini viewport renderer ported to Hydra), most of what you're used to in houdini viewports.
  • Karma - Sidefx's offline renderer, in beta, early days. More or less an update to mantra, so think of it in those terms (vex based shading, principled materials, volumes, fur etc, but ingests usd rather than ifd). Good breakdown of where Karma is relative to Mantra: https://www.sidefx.com/faq/karma/
  • Embree - Example fast raytracer from Intel. It's basically an ambient occlusion renderer with basic surface colour support, motion blur, not much else. Lops doesn't support it internally as it's more of an example implementation, doesn't do volumes, but can be handy in usdview in a pinch. https://www.embree.org/
  • OSPRay - Intel interactive (but not realtime) CPU raytracer. The followup to embree, supports more things, bigger, better, newer, but they state clearly it's not competing with Arnold and Renderman and the like, it's an intermediate quality renderer. https://www.ospray.org/
  • Renderman - Pixar's CPU renderer, supports all the things: full materials, motion blur, volumes, several different integrators. Really handy in USDview to see things exactly in context; the debug integrators let you see wireframes on objects, or shader complexity, or a ton of other things, really useful. Amusingly there's not much info or screenshots of the renderman usd hydra delegate in action, even though we use it daily at UTSALA. Will fix this...
  • Redshift - GPU offline renderer, early days, but already pretty fully featured: https://redshiftrender.cgrecord.net/2019/11/houdini-18-redshift-hydra-delegate.html
  • 3delight - CPU offline renderer, early days, again remarkably fully featured, incredible time to first pixel, remarkable ability to load massive datasets quickly: https://gitlab.com/3Delight/HydraNSI/-/wikis/Videos
  • Arnold - CPU offline renderer, early days, but seems to be supporting most of what you'd want already: https://github.com/Autodesk/arnold-usd
  • Prorender - AMD's GPU offline renderer, early days: https://twitter.com/bhsavery/status/1028318614003232768
  • Octane - GPU offline renderer, beta: https://twitter.com/otoy/status/1123053790024716288?lang=en
  • Vray - CPU offline renderer, rumoured Hydra delegate, but no proof online that I could find.