UsdAssetGuide

From cgwiki

Revision as of 20:18, 21 July 2020

Introduction

A few people from SideFX, Epic, Foundry, Tangent, and ALA got together to have a chat about USD. We noticed a substantial knowledge gap between having a poke around kitchen.usd and actually implementing USD for a studio.

After some back and forth, we felt it'd be handy to give a field guide of how to get started with USD, trying to stay as platform agnostic and jargon free as possible.

Step 1 was to look at assets.

Shawn Dunn from Epic wrote an asset brief, and Mike Lydon from SideFX had a go at following the brief in Solaris, eventually roping in Rob Stauffer and others to help. This is a WIP document to explain those concepts in plain English.

Basic asset

The simplest definition of an asset would be a 3D mesh. Let's assume we want something slightly more complex than that, and want an asset that contains a mesh and a material.

As such we need a few things:

  • A mesh
  • A material
  • A way to assign that material to a mesh.

On top of these, we'll likely need a few other things:

  • Names for the mesh and the material
  • Scene hierarchy locations for the mesh and material
  • On-disk location(s) to save this asset.

The first is self-evident, the others need a little explaining.

Scene hierarchy location

Treeviews.png
Example scene hierarchy in Houdini (Solaris), Maya Outliner, Katana, USDView

Scene hierarchy simply refers to where in a tree view of your asset the various bits live. This tree view goes by several names in different packages, but should be immediately recognizable for your preferred app:

  • Maya Outliner, or...
  • Unreal World Outliner
  • Katana Scene Graph
  • Unity Hierarchy window
  • Max Scene Explorer
  • C4D Objects Tab
  • Blender Outliner/Scene Collection
  • Houdini Tree View


Some of those applications will only have 3d objects in the hierarchy, things like materials and material nodes will live somewhere else. In USD nothing is hidden, so meshes, materials, render nodes are all visible in this hierarchy, called the Scene Graph.

We should be able to say 'the mesh will be found at...

/Assets/myAsset/geometry/mesh 

...and the material at...

/Assets/myAsset/materials/mymaterial

...for example. Those locations and names are up to you, but they need to be defined.
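As a concrete sketch, here's roughly what that hierarchy could look like written out as usda (the prim names here are just the example paths above, not a recommendation):

```usda
#usda 1.0

def Xform "Assets"
{
    def Xform "myAsset"
    {
        def Xform "geometry"
        {
            def Mesh "mesh"
            {
            }
        }

        def Xform "materials"
        {
            def Material "mymaterial"
            {
            }
        }
    }
}
```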

On disk location(s)

We need to say where the file is saved, that's intuitive enough. But why would we need to define multiple save locations?

The short answer: USD allows you to say up front, 'the bucket asset lives at BucketAsset.usd', but under the hood BucketAsset.usd might point to 2 other USD files, BucketMesh.usd and BucketSurfacing.usd.

Modelling and surfacing save to their respective files, downstream departments just load the high level asset, and it all works.
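A minimal sketch of what that top-level BucketAsset.usd could contain, assuming the two department files sit next to it on disk (layers listed earlier are stronger, so surfacing's opinions would win over modelling's where they overlap):

```usda
#usda 1.0
(
    subLayers = [
        @./BucketSurfacing.usd@,
        @./BucketMesh.usd@
    ]
)
```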

The longer answer:

Production pipeline diagrams will show a nice straight line between modelling and surfacing. It implies modelling saves a file, it flows down to surfacing, they save a material on top of the file, and it flows down to the next department.

The reality is that departments work in parallel, publishing at different rates. Surfacing might start work on a material weeks before a model is ready; in other cases modelling might have many assets published, and need to see them in shots for context. Using the simplistic pipeline, surfacing couldn't start anything until models were ready, and modelling could never see their work until surfacing published a material for every mesh.

But what if you could break that? What if you could specify a location on disk for modelling to save their work, another for surfacing, and have a third location that combines the two? That way modelling could save as often as they want, surfacing could even start before modelling, and the asset will always get the combo of both.


In USD, it's possible to break an asset into layers, and define different save locations for each layer. In this case we'll define the geometry in one layer, which is our final output, save the material into a different layer with its own save location, and import it into the final asset.

Refined asset steps

Let's go through these steps again in more detail:

  • Create a 'main' layer, and store the mesh in it at a hierarchy location (called a primitive path, or prim path, in USD)
  • Create a layer, define a material in it and a save location
  • Reference the material layer into the main layer and set its prim path
  • Assign the material
  • Write a USD file

Implementation and output

Here's how that looks in Houdini's Solaris. Solaris represents USD operations as nodes, and you can see the network follows the flowchart of steps we just defined pretty closely:

Selection 209.png

It creates a layer to load a model, defines another layer for materials and sets a save location, combines the layers, assigns materials, saves a file.

The end result of this is a few USD files on disk. USD can be saved as binary (the default) or plain text; here's the plain text result with some highlighting for the important bits:

Usd01 overlay.png

Here's the same in a collapsed text box if you want to copy/paste into an editor and have a play:

BucketAsset.usda

#usda 1.0
(
    endTimeCode = 1
    framesPerSecond = 24
    metersPerUnit = 1
    startTimeCode = 1
    subLayers = [
        @./Bucket.usdc@
    ]
    timeCodesPerSecond = 24
    upAxis = "Y"
)

over "BucketAsset"
{
    def "Materials" (
        append references = @./Materials/BucketMaterials.usda@</BucketAsset/Materials>
    )
    {
    }

    over "Render"
    {
        over "Bucket"
        {
            rel material:binding = </BucketAsset/Materials/BucketMat>
        }

        over "Rope"
        {
            rel material:binding = </BucketAsset/Materials/BucketMat>
        }
    }
}


That file refers to BucketMaterials.usda, which you'll find below. You can see it stores all the nodes that make up the shading network, and the connections between the nodes.

BucketMaterials.usda

#usda 1.0
(
    metersPerUnit = 1
    upAxis = "Y"
)

def Xform "BucketAsset"
{
    def Xform "Materials"
    {
        def Material "BucketMat"
        {
            token outputs:displacement.connect = </BucketAsset/Materials/BucketMat/BucketUSDPreview.outputs:displacement>
            token outputs:karma:displacement.connect = </BucketAsset/Materials/BucketMat/BucketMat_displace.outputs:displacement>
            token outputs:karma:surface.connect = </BucketAsset/Materials/BucketMat/BucketMat_surface.outputs:surface>
            token outputs:surface.connect = </BucketAsset/Materials/BucketMat/BucketUSDPreview.outputs:surface>

            def Shader "BucketMat_surface"
            {
                uniform token info:implementationSource = "sourceAsset"
                uniform asset info:sourceAsset = @opdef:/Vop/principledshader::2.0?SurfaceVexCode@
                int inputs:baseBumpAndNormal_enable = 1
                vector3f inputs:basecolor = (1, 1, 1)
                asset inputs:basecolor_texture = @../Textures/repacked_BucketWithRopeHandle/repacked_BucketWithRopeHandle_UV_%(UDIM)d_BaseColor.png@
                int inputs:basecolor_useTexture = 1
                asset inputs:baseNormal_texture = @../Textures/repacked_BucketWithRopeHandle/repacked_BucketWithRopeHandle_UV_%(UDIM)d_Normal.png@
                float inputs:metallic = 1
                asset inputs:metallic_texture = @../Textures/repacked_BucketWithRopeHandle/repacked_BucketWithRopeHandle_UV_%(UDIM)d_Metallic.png@
                int inputs:metallic_useTexture = 1
                float inputs:rough = 1
                asset inputs:rough_texture = @../Textures/repacked_BucketWithRopeHandle/repacked_BucketWithRopeHandle_UV_%(UDIM)d_Roughness.png@
                int inputs:rough_useTexture = 1
                token outputs:surface
            }

            def Shader "BucketMat_displace"
            {
                uniform token info:implementationSource = "sourceAsset"
                uniform asset info:sourceAsset = @opdef:/Vop/principledshader::2.0?DisplacementVexCode@
                int inputs:dispTex_enable = 1
                float inputs:dispTex_scale = 0.005
                asset inputs:dispTex_texture = @../Textures/repacked_BucketWithRopeHandle/repacked_BucketWithRopeHandle_UV_%(UDIM)d_Height.png@
                token outputs:displacement
            }

            def Shader "BucketUSDPreview"
            {
                uniform token info:id = "UsdPreviewSurface"
                color3f inputs:diffuseColor.connect = </BucketAsset/Materials/BucketMat/usduvtextureBaseColor.outputs:rgb>
                float inputs:metallic = 1
                float inputs:metallic.connect = </BucketAsset/Materials/BucketMat/usduvtextureMetallic.outputs:r>
                normal3f inputs:normal.connect = </BucketAsset/Materials/BucketMat/usduvtextureNormal.outputs:rgb>
                float inputs:roughness.connect = </BucketAsset/Materials/BucketMat/usduvtextureRoughness.outputs:r>
                token outputs:displacement
                token outputs:surface
            }

            def Shader "usduvtextureBaseColor"
            {
                uniform token info:id = "UsdUVTexture"
                asset inputs:file = @../Textures/repacked_BucketWithRopeHandle/repacked_BucketWithRopeHandle_UV_1001_BaseColorA.png@
                float2 inputs:st.connect = </BucketAsset/Materials/BucketMat/usdprimvarreader_st.outputs:result>
                vector3f outputs:rgb
            }

            def Shader "usdprimvarreader_st"
            {
                uniform token info:id = "UsdPrimvarReader_float2"
                token inputs:varname = "st"
                float2 outputs:result
            }

            def Shader "usduvtextureMetallic"
            {
                uniform token info:id = "UsdUVTexture"
                asset inputs:file = @../Textures/repacked_BucketWithRopeHandle/repacked_BucketWithRopeHandle_UV_1001_Metallic.png@
                float2 inputs:st.connect = </BucketAsset/Materials/BucketMat/usdprimvarreader_st.outputs:result>
                float outputs:r
            }

            def Shader "usduvtextureRoughness"
            {
                uniform token info:id = "UsdUVTexture"
                asset inputs:file = @../Textures/repacked_BucketWithRopeHandle/repacked_BucketWithRopeHandle_UV_1001_Roughness.png@
                float2 inputs:st.connect = </BucketAsset/Materials/BucketMat/usdprimvarreader_st.outputs:result>
                float outputs:r
            }

            def Shader "usduvtextureNormal"
            {
                uniform token info:id = "UsdUVTexture"
                asset inputs:file = @../Textures/repacked_BucketWithRopeHandle/repacked_BucketWithRopeHandle_UV_1001_Normal.png@
                float2 inputs:st.connect = </BucketAsset/Materials/BucketMat/usdprimvarreader_st.outputs:result>
                vector3f outputs:rgb
            }
        }
    }
}


The mesh geometry is stored in Bucket.usdc, a binary format; here's a download link:

File:Bucket.usdc

While this could be stored in plain text as well, there's often no need for it, as it's unlikely to be looked at by humans. Binary is better for size on disk and loading speed.

Adding Render and Proxy meshes with Purpose

Most studios want an asset available in various formats: a final-quality one for rendering, and a low-res one for layout, for example.

USD allows this by tagging meshes with a 'purpose', and provides 3 kinds of purpose out of the box:

  • Render
  • Proxy
  • Guide

As implied, render is the final hero geometry, proxy is a lightweight stand-in, and guide might be for anim controls, for example.
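In usda, purpose is just a token attribute authored on a prim; a minimal sketch, assuming a Proxy branch already exists in the hierarchy:

```usda
over "BucketAsset"
{
    over "Proxy"
    {
        uniform token purpose = "proxy"
    }
}
```

The complete file at the end of this section sets purpose on both the Proxy and Render branches in exactly this way.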

So to extend this setup, we'll define another layer and load the proxy mesh into it. We then combine the hero and low-res meshes into a single hierarchy, assign the purpose tag to the hero and low-res meshes, and assign the material as before:

Here's the Solaris overview of that:

Selection 001.png

A sublayer is one of several ways of combining hierarchies of things. It does a 'dumb' merge, so it just takes all the stuff from each of the inputs and throws them together.

In this gif you can see one node has a hierarchy under /BucketAsset/Render, the other under /BucketAsset/Proxy, and the sublayer combines them so that we end up with a tree with both Proxy and Render under /BucketAsset:

Sublayer combine.gif

With the render and proxy models sitting together in the hierarchy, they get their purpose tags set.

The rest is the same as before: create a layer to bring in the materials, assign, and write out a file.

In terms of the USD file itself, the changes are pretty straightforward; it now loads in 2 meshes via sublayers, and sets the purpose tags:

Render and proxy usd.jpg

Here's the full usda file:

BucketAssetWithProxy.usda

#usda 1.0
(
    endTimeCode = 1
    framesPerSecond = 24
    metersPerUnit = 1
    startTimeCode = 1
    subLayers = [
        @./BucketProxy.usdc@,
        @./Bucket.usdc@
    ]
    timeCodesPerSecond = 24
    upAxis = "Y"
)

over "BucketAsset"
{
    over "Proxy"
    {
        uniform token purpose = "proxy"

        over "Bucket"
        {
            rel material:binding = </BucketAsset/Materials/BucketMat>
        }

        over "Rope"
        {
            rel material:binding = </BucketAsset/Materials/BucketMat>
        }
    }

    over "Render"
    {
        uniform token purpose = "render"

        over "Bucket"
        {
            rel material:binding = </BucketAsset/Materials/BucketMat>
            rel proxyPrim = </BucketAsset/Proxy/Bucket>
            uniform token purpose = "render"
        }

        over "Rope"
        {
            rel material:binding = </BucketAsset/Materials/BucketMat>
            rel proxyPrim = </BucketAsset/Proxy/Rope>
            uniform token purpose = "render"
        }
    }

    def "Materials" (
        append references = @./Materials/BucketMaterials.usda@</BucketAsset/Materials>
    )
    {
    }
}

Adding mesh variations

It's pretty common in a production to find assets like car01, car02, car03, tree01, tree02, tree03 etc. Most of the time these are defined as unique assets, and everyone assumes this is how it has to be.

With USD however, you could have a single 'car' asset, or a single 'tree' asset, and have the ability to choose from a range of cars or trees. These are called variants.

Variants go beyond just pointing to different models on disk though. You could also have variants that only swap between materials (e.g. a clean car vs a dirty car), or get really tricky and selectively hide and show parts of a mesh within a variant.
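Here's a minimal, hypothetical sketch of what a variant looks like in usda: a 'model' variant set on the bucket, where the 'noHandle' variant just hides a (hypothetical) Handle prim rather than storing a second copy of the mesh:

```usda
def Xform "BucketAsset" (
    variants = {
        string model = "handle"
    }
    prepend variantSets = "model"
)
{
    variantSet "model" = {
        "handle" {
        }
        "noHandle" {
            over "Render"
            {
                over "Handle"
                {
                    token visibility = "invisible"
                }
            }
        }
    }
}
```

The variants metadata at the top records the current selection; any app downstream can change that selection without touching the file that defines the variants.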

'So what' you might say, 'we can do that in [insert 3d app] fine'. You might also say 'separate asset definitions are good enough'. The big win here is once again that this is available to all 3d apps that can talk USD. So while this asset might be authored in Houdini, the layout department can be loading cars and trees in Maya, and swapping between different variants. And further downstream again in Katana, lighters can access and swap between variants if there's a last minute change, without having to go all the way back to layout, or modelling, or a callsheet for the assets belonging to a shot or set.

Here's the Solaris method for adding a geometry variant:

Selection 002.png

In this example Mike has taken the bucket asset, hidden the handle, and made that a variant. What's cool about this method is that USD won't save the entire bucket to disk with the handle and then the entire bucket again without a handle; it's smart enough to just store the difference between the models. Not impressive with a bucket, but that could save huge amounts of memory and disk space for massive assets with relatively small changes between variants.

Adding level of detail

What if you need more than just a highres ('render') and lowres ('proxy') version of your asset? Variants can be used for this too. While the previous example was a swap between a handle and no-handle version of the bucket (and we might name that variant 'handle'), we could have a variant set called 'lod', and put high/medium/low res meshes into those variants.

The Solaris network below is showing off by procedurally creating those mesh reductions, but the steps involved would be the same for any other app or pipeline:

  • take your highres mesh
  • create medium and low res reductions (as many as you feel you require)
  • define a variant called 'lod' (or whatever name you want)
  • attach those reductions to the variant

This network goes the extra mile and also defines LODs for both the render and proxy purposes. You could argue you don't need those, but hey, choice is good right?
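Sketched in usda, an 'lod' variant set like this swaps which mesh file the render branch references (the filenames here are hypothetical):

```usda
def Xform "BucketAsset" (
    variants = {
        string lod = "high"
    }
    prepend variantSets = "lod"
)
{
    variantSet "lod" = {
        "high" {
            over "Render" (
                prepend references = @./BucketHigh.usdc@</BucketAsset/Render>
            )
            {
            }
        }
        "medium" {
            over "Render" (
                prepend references = @./BucketMedium.usdc@</BucketAsset/Render>
            )
            {
            }
        }
        "low" {
            over "Render" (
                prepend references = @./BucketLow.usdc@</BucketAsset/Render>
            )
            {
            }
        }
    }
}
```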

Selection 003.png

Adding material variation

And here we go all the way, setting material variations as well as model variations. You can see the network is getting pretty baroque, but the base idea is the same:

  • bring in the mesh
  • create the LODs
  • create several materials
  • assign materials to lods
  • create a variant set that lets you choose between materials and lods

Selection 004.png
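The material side of that network could be sketched in usda as a second variant set, 'mtl', that only changes which material the render branch binds to (the material names here are hypothetical):

```usda
def Xform "BucketAsset" (
    variants = {
        string mtl = "clean"
    }
    prepend variantSets = "mtl"
)
{
    variantSet "mtl" = {
        "clean" {
            over "Render"
            {
                rel material:binding = </BucketAsset/Materials/BucketMatClean>
            }
        }
        "dirty" {
            over "Render"
            {
                rel material:binding = </BucketAsset/Materials/BucketMatDirty>
            }
        }
    }
}
```

Because variant sets compose independently, a downstream app can pick the 'lod' and 'mtl' selections separately, giving every combination without authoring them all by hand.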