- 1 Instancer for trees on a groundplane
- 2 Musings
- 2.1 Why is USD interesting if I'm not a big studio?
- 2.2 State of USD and Lops late 2019
- 2.3 Which renderers have Hydra delegates
Instancer for trees on a groundplane
- instancer lop
- left input is for the scene stream that'll be passed through
- right input is for the tree
- internal sops is where you define the scatter locations
- sop import a tree, connect that to the right input
- on the instancer, 'prototypes' are the things to be instanced, so set 'prototype source' to 'second input'
- on the instancer, 'target points' are the point locations. The default mode is 'internal sop', we'll use that
- dive inside, create a groundplane (or object merge in something), append a scatter
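Under the hood the Instancer LOP authors a USD PointInstancer prim. A minimal hand-written sketch of roughly what that looks like (prim names, point positions and the tree.usd path are all invented; 'protoIndices' says which prototype each point uses):

```usda
#usda 1.0

def PointInstancer "forest"
{
    # One prototype: the tree fed into the right input
    rel prototypes = </forest/Prototypes/tree>

    # Three scatter points, all using prototype 0
    int[] protoIndices = [0, 0, 0]
    point3f[] positions = [(0, 0, 0), (5, 0, 2), (-3, 0, 7)]

    def Scope "Prototypes"
    {
        def "tree" (
            references = @./tree.usd@
        )
        {
        }
    }
}
```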
Remember it's not a copy sop with left-input-instanced-onto-right. We're in katana/usd style magical land now; we might have already set up a bunch of stuff for the set, characters, fx, and adding onto this stream will be our forest.
Why is USD interesting if I'm not a big studio?
A rant I did on discord, in the pub, to my family, copied here and tidied up for your benefit. Nice images, practical examples etc will come later.
Short version: It lets small studios punch well above their weight.
Big studios have lots of big things. Big farm, big teams of artists, big IT and infrastructure. All of those things are important to get big shows done, but a key factor is allowing people to solve systemic problems that aren't purely tech and aren't purely art. Pipeline TDs, department TDs, RnD, there's enough people hired and they're given enough space and time to allow a big studio to function more efficiently. Small studios generally can't afford this.
6 years ago
Take a film I worked on about 6 years ago: wall to wall photoreal cg, crowds, environments, the works. At the start of a show like that it feels like a small studio, a small team of people who each have a specialty, just experimenting and sorting things out. As the show progresses more artists are hired, the work expands.
At a certain point the scale of the project starts to have an effect. The quantifiable stuff is fine: number of shots, number of assets per shot, get a metric of how long it takes an artist to make a certain asset or finish a shot, multiply that out to get x thousand days for a single artist, look at how much time you have left, divide one by the other; that's the number of artists you need. Oversimplifying, but that's the idea.
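The crew-sizing arithmetic above as a toy calculation (every number here is invented for illustration):

```python
# Back-of-envelope crew sizing, with entirely made-up numbers.
shots = 30
days_per_shot = 40      # artist-days to finish one shot, from your metrics
days_left = 120         # calendar days until delivery

total_artist_days = shots * days_per_shot            # 1200 days for one artist
artists_needed = -(-total_artist_days // days_left)  # ceiling division -> 10

print(artists_needed)
```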
What happened as more assets were completed and more stuff got shoved into shots was that the tools got slower and slower. So slow that it began to affect artist productivity. In a smaller studio you shrug, maybe panic, but you can't do much more than that; everyone is busy doing the 'arty' work assigned to them.
In a big studio, the TDs and RnD folk kick in. They can analyse the tools, identify bottlenecks, rewrite slow things, adjust stuff to improve time-to-first-pixel and time-to-final-comp. One of the things that really slowed us down was assembling big shots; tools 6 years ago could handle 1 asset fine, probably 10, maybe 100. But at 1000 or 10000 things get slow, and that's when you want an army of TDs with you to solve stuff.
Jump to now, what's changed? Machines are faster, renderers are better, cloud computing is a thing. Some tools are better, some have made things incredibly efficient. Megascans, Substance, Houdini improvements mean making individual assets and fx is much faster.
Big assembly still sucks. Maya is still miserable at handling lots of transforms, Houdini when working in /obj is clunky and lame. I just finished watching The Lion King, and was blown away by how good the completely digital environments were. Looking at them critically, you could probably make a single element of those environments easily enough (a tuft of grass, a rock, a tree), but to assemble thousands of them into a set, ugh, a nightmare.
A big sequence in a small studio without USD
Say you were crazy enough to do that with a team of 7: 2 modellers, 2 surfacers, a layout artist, a lighter, a comper. You have 30 shots in a savannah to do. Run a quick breakdown: that's 7 grass models, 8 rocks, 4 trees, ground, 4 mountains, twigs, pebbles, 5 bushes. Each has 3 surfacing variations. Modellers and surfacers work away on that as fast as they can, save it all to disk. The layout artist gets started, pulls all these models into maya via references, lays them out, animates a camera. The lighter gets started; uh-oh, there's a non-manifold edge somewhere that causes the render to crash.
The lighter flags it; they can't tell exactly which model it is, but it's in the lower right corner of the shot. The layout artist tries to identify the asset, it's rock07 v01. The modeller fixes it, saves as v02. Now what? The layout artist has to find every instance of rock07 and update it from v01 to v02. Meanwhile the lighter finds the texture used for grass03 is too high res, while the tree04 roughness map is too low res. They get kicked back to surfacing, version up, and again the layout person has to find those materials and update the layout file. Then director notes, more changes. Also in shot 30 the tree needs to be moved for a nicer composition. Oh, and this all now has to be moved to katana, cos maya just can't handle it anymore.
All of those things are distressingly common, and they're maybe 10% of the daily churn of shots and assets. All those changes need to be updated and rolled into shots. If you're working across multiple DCCs, how do you handle this? Alembic is ok for geo, but doesn't store material definitions. It still requires a hard bake at some point; if assets get updated, someone has to open the maya scene, update, republish. Maybe you can write a python script to automate it, or a batch job on the farm. But then how do you ensure lighters are using the right file? And now the alembic is blowing out into bigger and bigger filesizes, so big that maya and katana are having problems loading it...
And so it goes. At this point you'd be wondering why you ever bothered, and surely if we're suffering through all this, others are too, and why are we all solving it alone?
Well it's not just you, and not just the small studios; big places have the same issues. Even Pixar. Luckily Pixar have lots of smart people and are keen on open source, so they've shared their solution, and it's USD. USD solves lots of things; let's run through how it handles the issues outlined above:
- A usd file can point to other usd files, which can point to other usd files, which can point to other usd files. Like maya references, this means you could have a file, say shot050.usd, which is actually only 5 lines of text, as internally it points to camera.usd, fx.usd, props.usd, char.usd, set.usd. If you went and looked at set.usd, it might refer to hundreds of usd files within it, grass01.usd, tree03.usd etc, with their transforms defined for the set. Dive into grass01.usd: that'll have the polygons that define the model, but it can also have a shader graph defined in it. So usd at this point can be thought of as fancy alembic, which can reference other alembics.
- Those references to other usd files can be hard paths on disk, like /assets/setpieces/grass/grass01_v002.usd, but they can also be abstracted paths. In our case at UTSALA we use an abstraction that points to shotgun queries, so the path looks like type='setpiece'&name='grass01'&version='latest'. When the file is loaded, the usd libraries know 'ah, I better ask shotgun what this means so I can get a path on disk', get that path, load the thing. THIS IS HUGELY POWERFUL. No more relying on the poor layout artist to update versions of things. No double checking lighting renders to ensure assets are correct. No writing of code in your DCC to compare disk paths to what shotgun says should be in your scene. This awareness of versions and disk path abstraction is built into the usd file format itself.
- usd works cross platform and cross DCC. Originally maya, then katana, then houdini, and now rapidly spreading into most DCCs. This means a lot of the difficult interop and translation between DCCs is gone.
- usd comes with a standalone viewing tool, usdview. Think of it as mplay or rv, but for 3d scenes. Want to check what's in a shot? Load it in usdview; it's blazingly fast, plays buttery smooth, it's a great QC tool.
- referencing lots of usd files that in turn reference more usd files isn't just appending files together like maya referencing; you can be really clever and have 'stronger' usd files selectively update, tweak, and modify 'weaker' usd files. So you could have char.usd for base character anim caches, but a later charfx.usd file could insert fur, hair, and cloth caches into the hierarchy of the character setup, so lighting doesn't even have to know they're coming from another department.
- usd has its own fast method to talk to render engines. Almost all the major players have announced support for this (called 'hydra delegates'), meaning you don't even need a DCC app to send usd files to a renderer; they can function like .rib, .ass, .ifd, and be rendered directly.
- USD has built in support for lots of things you need but don't realise you need until you think about it. High performance instancing, LOD, variations, volumes, lights, geo attributes, curves, particles, crowds, motion blur, cameras, it's all in there. Pixar have been using USD for their films for a good 5 years, and USD's predecessor for many years before that, they've thought of and solved most of the use cases!
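The layering described above can be sketched as two tiny hand-written .usda files (filenames taken from the examples above, everything else invented). Sublayers are listed strongest first, and a stronger layer can use an `over` to splice into a hierarchy a weaker layer defined:

```usda
# --- shot050.usda: a handful of pointers, nothing more ---
#usda 1.0
(
    subLayers = [
        @./camera.usd@,
        @./fx.usd@,
        @./charfx.usd@,
        @./char.usd@,
        @./set.usd@
    ]
)

# --- charfx.usda: adds fur under a character that char.usd defined ---
#usda 1.0

over "pig"
{
    def "fur" (
        references = @./caches/pig_fur.usd@
    )
    {
    }
}
```

Because charfx.usd sits above char.usd in the sublayer stack, its opinions win, and lighting just sees one character hierarchy with fur in it.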
This was all amazing, but still required knowledge of USD's python api to tie it all together. What didn't exist was an artist friendly interface to all this incredible power. That's what Lops/Solaris is. Sidefx have made a new network type that represents USD primitives as nodes, USD operations as nodes, and lets you do all the stuff you'd want to do.
Compare to Clarisse and Katana
A fair question is how this compares with the two apps known for solving big shots and big teams of lighters, namely Katana and Clarisse.
Katana set out to solve the question of big shots, and lots of them, many years ago. It does so broadly by letting lighters import huge amounts of geometry from multiple locations, merge them together, create and modify lights, material assignments, object properties etc, then send all that to a render engine. By design it won't actually try and load geo unless you explicitly ask it to, meaning lighters can work in katana quite quickly. It has good high level support for switching chunks of a network based on rules, so you might turn on a bunch of lights for shots facing characterA, vs a different bunch of lights for shots facing characterB.
Katana's pro is also its con; it can feel very hands off. You're generally operating on your scene in an abstracted state, building Rube Goldberg machines that try to catch names of objects in your shot, do things if found, and hope it all falls out the bottom as you'd expect. It's also a little beyond the reach of small studios, being both quite expensive and needing substantial TD work before it can run effectively.
Clarisse tries to solve similar problems to Katana, but by being tightly coupled to its renderer is much more direct and hands on. It's faster to get going with less technical expertise, and was quickly adopted by matte painters as a way to generate massive shots with thousands of trees, buildings, stuff.
Its cons are that it's developed a reputation for being unstable, and that it isn't really designed to talk to existing renderers; you're buying into a unified lighting tool + renderer.
Both Katana and Clarisse work on the core idea that they're the final stop; 3d assets are brought into them, images come out. Also, the ability to edit the scene is limited to what lighters require, and in Clarisse's case what matte painters want; you can create lights and cameras, modify materials and some object attributes, but that's it. You can't really model geometry, do fx, do uvs, or animate characters; anything you'd traditionally do in maya, you do in maya.
Compare to Lops
Lops by itself should cover most of what Clarisse and Katana do. Import big shots, create and modify lights, material assignments, object properties, send to a renderer. But being built around USD, you get all the I-can-see-all-the-geo from Clarisse, combined with the I-can-render-to-whatever-renderer from Katana.
But Lops isn't by itself, it's in Houdini! There are nodes to send stuff from Lops to other contexts in Houdini, and to go back the other way. So select a building in lops, take it over to be destroyed by an RBD solver, bring it back in. Create a standalone volume, pull that into your shot. Realise a certain model needs better UVs? Fine, bring it into sops, uv away, bring it back.
PLUS, it's not just to and from Houdini. Save your final setup as USD, send it to katana if you need to. Or back to animation. This is the U in usd: it's universal, you can bounce this to any usd-compliant DCC and it should be able to use it.
Proceduralism for scenes
A final sell for existing Houdini folk is the difference between sops and /obj. Once you've used sops for a bit, you get comfortable with the idea of copy-to-points-ing a ton of pig heads, or creating volumes from scratch, or selectively deleting faces whose normal points directly along +Y. Yet we jump up to /obj and it's the same old manual cameras, manual lights, manual visibility states, manually dragging and dropping objects into rop object lists.
USD and Lops bring that sops proceduralism to the /obj side of things. Bringing in caches from disk can be rule based. Make lights procedurally. Scatter and lay out stuff as manually or as automatically as you want. Have all that change instantly based on the shot name. Save an entire lightrig to a cache on disk, bring it back later via a HDA menu, or through shotgun. Proceduralise all the things!
State of USD and Lops late 2019
The above is the sell. What's the reality? What should you be aware of? Bullet points for me to fill in later:
- USD is rapidly evolving. Base geometry, layering of geometry is solid. Storing shading networks in USD is relatively new, as are volumes. USD crowd support is bleeding edge. USD for realtime and mobile is very bleeding edge and changing all the time.
- Lops as a node UI on USD is very very new. So some parts are a new thing sitting on a new thing; expect some stuff to not be fully working. Some things don't update when you expect, and need a bit of a kick to work.
- USD terminology can be confusing. To me it feels like synonym bingo, lots of stuff to avoid maya specific or katana specific things, takes a little getting used to.
- Hydra render delegate support is very new. PRman probably has the best support (it's a pixar product, go figure); the rest are all at v1 or even v0.99 support for hydra. Karma is still in beta, other stuff is in a state of flux. That said, everyone seems to agree that USD is the obvious choice moving forward, and they're investing heavily in supporting it.
- USD to generate final frames is pretty new. Up until recently USD was used a bit like alembic++: it was brought into Maya or Katana as a file format, but then sent to the renderer using Maya or Katana's native behaviour. This idea of pushing USD right through to the render engine itself is pretty recent; even stuff as seemingly fundamental as defining AOVs or render quality options is very new and still being actively discussed, expect changes.
- Lops as a katana replacement is still WIP. To be explicit about the last 2 points: if 'proper' support for renderers via Hydra is new, and support for generating final frames is new, then using Lops as a Katana replacement, whose entire reason for being is to talk to renderers and generate final frames, is pretty bleeding edge. Ironically USD and Lops are probably more foreign to Houdini users than to Katana users. Katana folk will find a lot of the concepts and workflows familiar, even a lot of the terminology is kind of the same, while Houdini folk will have some head scratching, and questions raised as to why this is more complicated than Rops. My take on it all is that H18 is v1; they've done the heavy lifting of getting most of the USD concepts translated to nodes, the framework is in place. The next step for both Sidefx and the community is to wrap these nodes into HDAs and streamline the workflow, so that it's easy for both veterans and new users.
- Lops as a tool for layout artists and pipeline folk is awesome. All the stuff that used to require loads of python, asset wrangling, runtime procedurals, effort and pain... bah, it's all gone. Just go slap some nodes down, do a happy dance.
- USD support in realtime engines is super new. Unity got support in the last 6 months, UE4 got proper support in the last 6 days. Expect changes.
- USD is largely cache based, not rendertime-procedural based. This requires a change of thinking; if you're used to render-time procedurals to grow fur, generate crowds, do things, that habit has to change. A core principle of USD is speed, and render-time procedurals screw that. USD now supports render procedurals, but Pixar are strongly advising folk to be careful if going down that path.
- No version control out of the box. When you specify a path to a file in USD, it isn't loaded directly, but gets handled by a module called the asset resolver. This is a plugin architecture that lets you specify file paths in different ways. USD ships with a single asset resolver, which is basically just a pass-through for files on disk; if it recognises that a path you give USD is a 'real' path on disk, it will load it. But what you really want is an asset resolver that talks to your asset management system, like shotgun. This gives you the ability mentioned earlier, to just use shotgun queries like asset name and version number, and the asset resolver will ask shotgun for the path on disk. As mentioned before this is really powerful, giving version control at the filesystem level rather than in the DCC. Unfortunately, you don't get any of this from the USD distribution or Pixar; you have to write it yourself. But hey, are you using Shotgun? Well you're in luck! The clever folk I work with at UTSALA wrote an asset resolver for shotgun. It's called Turret, it's open source, go get it! https://github.com/UTS-AnimalLogicAcademy/turret
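A toy Python sketch of that resolver idea, using the query format from the earlier example. Everything here is invented for illustration: a plain dict stands in for the Shotgun database, and the real implementation is Turret, not this.

```python
# A toy sketch of a query-based asset resolver. The real thing (Turret)
# asks Shotgun; here a plain dict stands in for the database, and every
# name and path is invented.

def parse_query(path):
    """Split "type='setpiece'&name='grass01'&version='latest'" into a dict."""
    fields = {}
    for pair in path.split("&"):
        key, _, value = pair.partition("=")
        fields[key] = value.strip("'")
    return fields

# Hypothetical publish database: (type, name) -> versioned paths, oldest first.
FAKE_DB = {
    ("setpiece", "grass01"): [
        "/assets/setpieces/grass/grass01_v001.usd",
        "/assets/setpieces/grass/grass01_v002.usd",
    ],
}

def resolve(path):
    """Turn a query path into a concrete file path on disk."""
    q = parse_query(path)
    versions = FAKE_DB[(q["type"], q["name"])]
    if q["version"] == "latest":
        return versions[-1]
    # e.g. version='v001' picks that exact publish
    return next(v for v in versions if q["version"] in v)

print(resolve("type='setpiece'&name='grass01'&version='latest'"))
# prints /assets/setpieces/grass/grass01_v002.usd
```

A production resolver plugs into USD's asset resolution (Ar) plugin API rather than being called by hand, but the lookup logic is the same shape: parse the abstract path, ask the database, hand back a real file.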
Which renderers have Hydra delegates
Hydra is the USD module that handles sending the scene description to a renderer. The end goal is that render developers don't have to write separate translators and importers for Maya, Katana, Houdini, Mac, Windows, Linux etc; they just write a single Hydra plugin, and it works everywhere. Similarly for any new-fangled geometry formats that USD hasn't covered yet: as long as the plugin for that format is written correctly, render engines should support it directly.
When renderers develop support for Hydra, that's called a Hydra delegate. Delegates can be offline renderers or realtime, GPU or CPU, support some features or all features of USD. It's handy that when you have it all running, you can swap between different renderers as easily as swapping between wireframe and solid shaded mode in Houdini. Here's a quick list of names, what they are, what they support:
- Storm - Pixar's "fast interactive viewport-style renderer". Think of this as the realtime preview; it's the default in usdview, and good for checking animation, fur, camera layouts. Doesn't support volumes, doesn't support complex shading. Storm used to be called Hydra, which caused confusion with the Hydra module itself, hence the rename.
- HoudiniGL - Sidefx's realtime viewport delegate, used by Lops by default. Supports volumes and most of what you're used to in houdini viewports (as I understand it, it's the default Houdini viewport renderer ported to Hydra).
- Karma - Sidefx's offline renderer, in beta, early days. More or less an update to mantra, so think of it in those terms (vex based shading, principled materials, volumes, fur etc, but it ingests usd rather than ifd). Good breakdown of where Karma sits relative to Mantra: https://www.sidefx.com/faq/karma/
- Embree - Example fast raytracer from Intel. It's basically an ambient occlusion renderer with basic surface colour support and motion blur, not much else. Lops doesn't support it internally as it's more of an example implementation, and it doesn't do volumes, but it can be handy in usdview in a pinch. https://www.embree.org/
- OSPRay - Intel's interactive (but not realtime) CPU raytracer. The followup to embree; supports more things, bigger, better, newer, but they state clearly it's not competing with Arnold and Renderman and the like, it's an intermediate quality renderer. https://www.ospray.org/
- Renderman - Pixar's CPU renderer, supports all the things: full materials, motion blur, volumes, several different integrators. Really handy in usdview to see things exactly in context, and the debug integrators let you see wireframes on objects, or shader complexity, or a ton of other things, really useful. Amusingly there's not much info or screenshots of the renderman usd hydra delegate in action, even though we use it daily at UTSALA. Will fix this...
- Redshift - GPU offline renderer, early days, but already pretty fully featured: https://redshiftrender.cgrecord.net/2019/11/houdini-18-redshift-hydra-delegate.html
- 3delight - CPU offline renderer, early days, again remarkably fully featured, incredible time to first pixel, remarkable ability to load massive datasets quickly: https://gitlab.com/3Delight/HydraNSI/-/wikis/Videos
- Arnold - CPU offline renderer, early days, but seems to be supporting most of what you'd want already: https://github.com/Autodesk/arnold-usd
- Prorender - AMD's GPU offline renderer, early days: https://twitter.com/bhsavery/status/1028318614003232768
- Octane - GPU offline renderer, beta: https://twitter.com/otoy/status/1123053790024716288?lang=en
- Vray - CPU offline renderer, rumoured Hydra delegate, but no proof online that I could find.