A few people from SideFX, Epic, Foundry, Tangent, ALA got together to have a chat about USD. Specifically, that there's a bit of a knowledge gap between loading kitchen.usd and having a poke, vs actually implementing USD for a studio.
The clever folk at SideFX did a fantastic job of taking a brief of 'what is an asset', and implementing it in Solaris and USD. This is my attempt to explain in non-USD and non-Houdini tech terms.
The plan is to start with a simple asset definition, and gradually scale it up in features and complexity. Once this is done, we might move onto how this then moves into layout and larger tasks.
The simplest definition of an asset would be a 3D model. Let's assume we want something more complete than that, and want an asset that contains a model and a material.
As such we need a few things:
- A model
- A material
- A way to assign that material to a model.
On top of these, we'll likely need a few other things:
- Names for the mesh and the material (self evident but worth pointing out)
- Locations in a scene hierarchy for these to go; imagine the Outliner in Maya or Unreal, the Scene Graph in Katana, the Scene Explorer in Max. We should be able to say 'the mesh will be found at /Assets/myAsset/geometry/mesh and the material at /Assets/myAsset/materials/mymaterial' for example. Those scene hierarchies are up to you, but they need to be defined.
- Location(s) to save this asset.
Why the optional 's'? Production pipeline diagrams will show a nice straight line between modelling and surfacing, but the reality is both departments are pushing work down the pipe at different rates at different times. In the simplistic model that would mean every time modelling published an asset, surfacing would HAVE to publish, otherwise no updates would travel downstream. Unfortunately a lot of older pipelines are stuck in this workflow.
But what if you could break that? What if you could specify a location on disk for modelling to save their work, another for surfacing, and have a third location that combines the two? That way modelling could save as often as they want, surfacing could even start before modelling is finished, and the asset will always get the combo of both.
In USD, it's possible to break an asset into layers, and define different save locations for each layer. In this case we'll define the geometry in one layer which is our final output, save the material into a different layer with its own save location, and import it to the final asset.
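To make that concrete, the asset's top-level USD file can be little more than a list of sublayers. A minimal sketch in USD's ASCII format (the file paths here are invented, your pipeline would define its own):

```usda
#usda 1.0
(
    subLayers = [
        @./surfacing/bucket_materials.usda@,
        @./modelling/bucket_geometry.usda@
    ]
)
```

Surfacing and modelling each own their own file and publish on their own schedule; anything that opens the top-level file always sees the latest combination of both. Note that sublayer order matters: earlier entries in the list are stronger, so opinions in the surfacing layer would win over modelling's if they overlap.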
Let's go through these steps again in more detail:
- Create a 'main' layer, and store the mesh in it
- Create a layer, define a material in it and a save location
- Reference the material layer into the main layer
- Assign the material
- Write a USD file
Here's how that looks in Houdini's Solaris:
It's worth keeping in mind that those nodes represent basic USD functionality. The same steps should be possible in any 3d app that talks USD, maybe via scripting, or even outside a 3d app entirely in pure Python. And just as it should be possible to write that same workflow in any USD compatible app, it should also be possible to read it back in any USD compatible app.
YMMV of course. :)
Adding Proxy Purpose
Most studios want an asset available in various formats: a final quality one for rendering, and a low-res one for layout, for example.
USD allows this by tagging meshes with a 'purpose'. USD allows 3 kinds of purpose out of the box: render, proxy, and guide.
As the names imply, render is the final hero geometry, proxy is a lightweight standin, and guide might be used for things like anim controls.
So to extend this setup, we'll define another layer and load the proxy mesh into it. We then combine the hero and low-res meshes together into a single hierarchy, assign the purpose tag to the hero and low-res meshes, and assign the material as before:
Here's the Solaris overview of that:
A sublayer is one of several ways of combining things. It does a 'dumb' merge: it takes all the stuff from one stream (the /BucketAsset) and combines it with the stuff in the other stream (/BucketAssetProxy).
The rest is the same as before: create a layer to bring in the materials, assign, write out a file.
Create an asset level variant
It's pretty common in a production to find assets like car01, car02, car03, tree01, tree02, tree03 etc. Most of the time these are defined as unique assets, and everyone assumes this is how it has to be.
With USD however, you could have a single 'car' asset, or a single 'tree' asset, and have the ability to choose from a range of cars or trees. These are called variants.
Variants go beyond just pointing to different models on disk though. You could also have variants that only swap between materials (eg clean car vs a dirty car), or get really tricky and selectively hide and show parts of a mesh within a variant.
'So what' you might say, 'we can do that in [insert 3d app] fine'. You might also say 'separate asset definitions are good enough'. The big win here is once again this is available to all 3d apps that can talk USD. So while this asset might be authored in Houdini, the layout department can be loading cars and trees in Maya, and swapping between different variants. And then further downstream again in Katana, lighters can access and swap between variants if there's a last minute change, without having to go all the way back to layout, or modelling, or a callsheet for the assets belonging to a shot or set.
Here's the Solaris method for adding a geometry variant:
In this example Mike has taken the bucket asset, hidden the handle, and made that a variant. What's cool about this method is USD won't save the entire bucket to disk with the handle, then the entire bucket again without a handle; it's smart enough to just store the difference between the models. Not impressive with a bucket, but that could save huge amounts of memory and disk space for massive assets with relatively small changes between variants.
Creating Render and Proxy LOD variants
What if you need more than just a highres ('render') and lowres ('proxy') version of your asset? Variants can be used for this too. While the previous example was a swap between a handle and no-handle version of the bucket (and we might name that variant 'handle'), we could have a variant set called 'lod', and put high/medium/low res meshes into those variants.
The Solaris network below is showing off by procedurally creating those mesh reductions, but the steps involved would be the same for any other app or pipeline:
- take your highres mesh
- create medium and low res reductions (as many as you feel you require)
- define a variant called 'lod' (or whatever name you want)
- attach those reductions to the variant
This network goes the extra mile and also defines LODs for both the render and proxy purposes. You could argue you don't need those, but hey, choice is good right?
Adding material variants to LOD variants
And here we go all the way, setting material variations as well as model variations. You can see the network is getting pretty baroque, but the base idea is the same:
- bring in the mesh
- create the LODs
- create several materials
- assign materials to LODs
- create a variant set that lets you choose between materials and lods