r6 - 05 Jul 2012 - 16:17:04 - MattEstela

Maya To Houdini

Most houdini users are FX people. Audio waveforms driving particle sims driving fluid sims driving cloth sims driving shaders driving particles etc.

The lighting and shading team at Dr.D were most definitely NOT FX people. A lot were former Animal Logic lighters used to their maya/mayaman/prman pipeline, others came from SPI and katana, others from... other things. The interesting thing was that everyone was keen to learn how houdini compared as a lookdev tool to other solutions; this page will deal mainly with that.

Caveats before we launch in; this is a re-edit of a page I made almost a year ago, and I've not touched houdini much since leaving Dr.D. Memories are hazy and, frankly, coloured; Happy Feet 2 was one of the most stressful projects any of us had worked on, so it's difficult to view the tools objectively. But I'll try. Also, we used a heavily customised pipeline: 3delight rather than mantra, super custom SOHO scripts, a proprietary geo format, custom lights, heavily ptc based... it got about as far away from native houdini and mantra as is possible. All that said, I think we all became reasonably conversant in houdini and what it can offer to lighters. Let's go!

learning houdini, houdini apprentice

Houdini traditionally had several criticisms levelled at it:

  • difficult to get
  • difficult to find documentation
  • difficult interface

All 3 points are definitely addressed now.

  • difficult to get - you can download the apprentice version directly from sidefx. It's an untimed, save-enabled version of houdini master (the equivalent of maya unlimited). For $99 you can add HD rendering and remove watermarks, but for learning the free version is fine.

  • difficult to find documentation - The built in documentation is better than maya's docs in most cases, and there's now a wealth of video tutorials available on vimeo and youtube. For a few years this content was created by a vocal minority (especially Peter Quint on Vimeo), but Sidefx have really stepped up lately and made an incredible amount of content available. Impressively, they've tried to cater to new users with the 'first steps' series of videos, pdfs and scenes, as well as to advanced users with the masterclass series that explains how the new pyro solvers work from first principles.

For an absolute bare basics 'I have an hour free, lets try this' lesson, the go procedural pdfs are a good place to start.

If all that isn't enough, Sidefx often run free workshops worldwide. They're pushing hard to make sure everyone knows about houdini; keep an eye open.

  • difficult interface - Houdini v9, released in 2007, overhauled the interface into a modern, Qt-like style. The old interface was brittle and unintuitive; the current system is drag-n-drop everywhere, shelves and buttons where you expect them, handy tooltips... it feels like they had a look at maya and went 'ok, we'll do that as a bare minimum, then add some extra niceness'. It's much less scary than when I had a peek at houdini v2 many years ago!

basics: maya transforms and shapes vs houdini /obj and sops

Hopefully you get the difference between a shape and a transform in maya. A shape can be a simple poly shape, or it can be the result of many operations in the construction history of a shape, which ultimately outputs a shape. The transform takes your shape, and moves/rotates/scales it in the world.

Something you might have noticed is the way maya displays the transform -> shape -> construction history is messy and inconsistent. The outliner is very transform centric, and by default specifically hides shape and construction history nodes. If you enable them, shapes appear parented under transforms, but history nodes just get thrown into a huge list after the transform nodes. The hypergraph lets you toggle between the transform node view and the DG node view, but it resets its layout each time you switch, and the connections at the DG level can get VERY hard to follow... the general vibe is 'this is complicated. Look, don't touch'.

Houdini has the same basics, but the methods for displaying and interacting are different in a few important ways. Transforms are treated as containers. Inside those containers are the nodes that are wired together to make a final shape. The final node is the shape, and that is what is moved by the transform. In essence that's the same as maya, but while maya implies it via the outliner and hypergraph, Houdini makes it explicit, and it does this via how things are displayed in its node graph. By default the node graph displays all the transforms, similar to hypergraph. Double click on a transform and you dive inside it, revealing the nodes of its construction history.

Once inside the node network for an object, the difference to maya becomes more clear, in fact becoming more like nuke. While maya uses nodes, and has a few kind-of-ok tools for modifying them, houdini and nuke assume the user experience is ALL about nodes, so both offer similar important features:

  • the node layout is saved, so when you return, it's as you left it
  • you get post-it notes, backdrops, node colour tools to help you tidy and navigate your node graph
  • data flow is kept as thin and clean as possible between nodes, so it should be easy to follow how data moves
  • you can branch node networks, disable many nodes at once, rewire this to that... you can experiment with nodes in a way maya makes difficult, almost impossible

It's that last point that's quite interesting; whenever you have to dive into the construction history of maya, it's with a grimace and the script editor close by. Nothing is easy, lots of stuff requires scripts to be wired together properly, it's effort. Houdini actively encourages you to dive in: try adding another node, disable that node, see what happens, add a note saying 'this could be made better...'. It encourages a playground environment for node operations that maya lacks.

Also, similar to how it's easy in nuke to take a few nodes, add a few parameters, then collapse it into a tool or gizmo you can re-use, houdini lets you do exactly the same. The clean, self-contained node networks and the core concept of collapsing/expanding containers mean you can create a little node network to achieve a certain effect, drag it to your shelf, and know you can re-use it later without writing a line of code. The barrier for developing tools and features in houdini is MUCH lower than in maya, where you really need a reasonable level of mel or python to be productive as a TD.

Rops vs render globals (esp render dependencies)

The 'nodes are important' motto extends to rendering and render layers. Unlike maya, which (until recently anyway) had a single render globals window, Houdini uses another node view, where each node represents a render process. Each node contains what you'd expect a render engine to need: a list of objects, a list of lights, a camera, frame range, render settings etc.

So if you have a render represented as a node, what does wiring them together do? It creates a dependency: if you ask the last node to render, all the previous nodes in the graph will be rendered first, one after the other. A typical render node chain might involve first baking shadows, then an indirect point cloud, then branching off into separate bg/mg/fg render layers. You can even insert controller nodes into these trees to further dictate how the nodes render; frame by frame for one node, or make an entire branch of render nodes do the entire frame range first, and so on. It's very powerful.
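The dependency behaviour described above can be sketched as a toy model in plain Python (this is not houdini's API, and the node names are made up; it just illustrates the 'render my inputs before me' traversal):

```python
# Toy model of ROP dependency chaining: asking the last node to render
# walks its inputs first, so upstream passes (shadows, point clouds)
# complete before the final node renders. Names are hypothetical.

class RopNode:
    def __init__(self, name, inputs=None):
        self.name = name
        self.inputs = inputs or []

    def render(self, done=None):
        """Render all upstream nodes depth-first, then this node."""
        if done is None:
            done = []
        for node in self.inputs:
            node.render(done)
        if self.name not in done:  # shared dependencies render only once
            done.append(self.name)
        return done

shadows = RopNode("bake_shadows")
ptc     = RopNode("indirect_ptc", [shadows])
bg      = RopNode("bg_layer", [ptc])
fg      = RopNode("fg_layer", [ptc])
final   = RopNode("comp_all", [bg, fg])

print(final.render())
# → ['bake_shadows', 'indirect_ptc', 'bg_layer', 'fg_layer', 'comp_all']
```

Note that the shared point cloud node only renders once even though two layers depend on it, which matches the one-after-the-other behaviour you'd want from a real render graph.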

takes vs render layer overrides

So if each ROP node takes the place of a render layer, how do render layer overrides work? Houdini splits this into a separate system called 'takes'. Functionally it's almost identical to render layer overrides; you create a new take, then associate object attributes with that take, and change them to your bidding. Switching to another take will change those values back to their default. In terms of implementation, maya watches for attribute changes automatically, and makes their title orange to show they've changed in that renderLayer, ie, watching for changes is always implicitly 'on'. Houdini goes the other way; by default, when you switch into a take, all parameters become grayed out. The idea is that if they're grayed out, they're at their default state, and aren't affected by this take. You need to right-click on a parm and choose 'include in take', and it becomes active and able to be modified. Alternatively you can turn on 'auto takes', which will immediately link parameters to a take.

A few niceties of houdini takes vs maya render layer overrides: you can open the take list panel, which shows you all the parameters associated with each take. You can parent takes to takes within the takelist panel, so that you can have one set of basic overrides, with many other takes inheriting from that take. Takes are intentionally separated from renders because they're useful for general work; hence the name, 'takes'. You can set up a scene, then, movie set style, go for 'take1' and edit the scene non-destructively, then create take2, do another run of changes, take3, take4...

Lastly, to associate these takes with a render, there's a 'render with take' drop-down on each render node, where you specify which take to use.
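The opt-in override and take-parenting behaviour can be sketched as a small Python model (again, not houdini's actual API; class and parameter names are invented for illustration):

```python
# Toy sketch of takes: each take stores only the parameters explicitly
# 'included in the take'; anything else falls through to the parent
# take, and ultimately to the scene defaults.

class Take:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.overrides = {}

    def include(self, parm, value):
        """Opt a parameter into this take (like right-click 'include in take')."""
        self.overrides[parm] = value

    def value(self, parm, defaults):
        if parm in self.overrides:
            return self.overrides[parm]
        if self.parent is not None:
            return self.parent.value(parm, defaults)
        return defaults[parm]

defaults = {"light1/intensity": 1.0, "ball/visible": True}

base = Take("base_overrides")
base.include("light1/intensity", 2.5)

bg = Take("bg_take", parent=base)   # inherits light1/intensity from base
bg.include("ball/visible", False)

print(bg.value("light1/intensity", defaults))  # → 2.5 (inherited from parent)
print(bg.value("ball/visible", defaults))      # → False (own override)
```

The point of the sketch is the lookup order: own overrides first, then the parent chain, then defaults, which is why one 'basic overrides' take can feed many child takes.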

mantra vs mentalray vs prman, ipr, that sorta thing

While it's easy to complain about maya's mentalray integration, one thing it's always been good at is speed of translation; it's very quick from the time you hit render to the time you see the viewport start to update. My limited understanding of the maya-to-mentalray engine is that it's fast compiled code, uses in-memory translation, and is heavily optimised.

Houdini uses a system called SOHO (Scripted Output of Houdini Objects) to do its houdini-to-renderer translation, which is a python based system. It's designed to be generic enough that support for any renderer can be easily written, and to be easily modified by TDs. However it /is/ in python, and therefore it /is/ noticeably slower than maya-to-mr. People who have only used maya and mentalray are often surprised at how slow other renderers are; houdini and mantra can be an order of magnitude slower again.

That said, houdini/mantra IPR is very good, about as good as, if not better than, maya/mentalray, and AFAIK there's no equivalent for mayaman/mtor. It approaches the arnold demos in terms of how interactive and fluid the process is; move geo, lights, cameras, modify shader parameters, and it all happens in near realtime with progressive refinement. Also note that mantra has 2 modes: micropoly, a very prman-style mode, and pbr, a very arnold-style mode. Both appear quite powerful.

Also, because of SOHO's generic nature, it ships with built-in support for prman and mentalray, albeit not many people use the mentalray binding. Rumours abound of v-ray and arnold connections in the works.

houdini viewport shading vs maya viewport shading

This is a strange one; I've never been a big fan of fully textured, lit viewports, but it's nice to know maya can handle that if required; at the very least it can make a reasonable attempt at translating even a complex shader network. Houdini right now doesn't offer this. Shaders are built in shops (shader operators), using a node based visual programming system closer to slim/ICE/shaderman than hypershade. This is both good and bad. Hypershade can be thought of as a hard-to-use compositor, letting you link and layer colours together, and is relatively forgiving. Shops, and its even more techy sibling vops, are creating mini programs, and as such they're much more strict about data types and what can be connected to what.

For example, procedural textures, single colours, and images on disk are NOT the same thing, and can't be used interchangeably, at least not directly. You can dive deep into the shaders and mess around, but if you're used to loading a maya blinn, setting a pure diffuse colour, no wait, I'll replace that with a ramp, no wait, I'll replace that with a layered texture, THEN a ramp, no wait, noise feeding the ramp feeding the layered texture etc... you can't easily do that stuff in houdini.

What you CAN do, though, is design shaders from the ground up. Vex, mantra's shading language, is VERY similar to renderman's shading language SL, so it's much easier to steal bits of shader code from siggraph papers and whatnot.

But back to the viewport question; again from my limited understanding, maya uses the internal renderer to bake out chunks of your hypershade network, which are fed into the realtime viewport. Houdini doesn't offer this. Instead, houdini uses GLSL directly, again a language very similar to renderman SL, but designed for realtime viewports. Thing is, houdini currently doesn't do a live re-creation of your vex shader as a glsl shader; that's up to you. You can have a kinda-sorta similar glsl shader, and have it refer to the image paths of your real shader, but you can't get it to preview procedural textures in the viewport, for example. Maybe in the future, but not now. That kinda sucks.

So the default houdini shaders come bundled with a generic GLSL shader that looks up diffuse/spec/refl/opacity maps if they exist, or just displays simple user-chosen colours if they don't. You could probably try and write your own GLSL shader to match your amazing VEX shader, but really, is it THAT important? Still, maya wins here, and max craps over both maya and houdini. smile

interface; panel tear off copies, pinned panels, maximize viewports

I always forget maya has that 'copy tab' button at the bottom of the attribute editor; houdini has it too, keyboard shortcut ctrl-shift-c. Because houdini has you jumping between different networks a lot more than maya (scene network, SOPS, ROPS, SHOPS, and if you're getting fancy CHOPS, COPS, DOPS etc), you often want to keep a render parameters window handy while doing other things. Most panels in houdini are context aware, so they'll keep changing to match the current selection. There's a pushpin icon in the top corner of every panel; having this pressed in will keep it 'pinned', so it won't change when you change selection. This also holds true for network views, so you can have a separate network editor that always stays on ROPs, while the others follow your selection.

alt-' or ctrl-b will maximize a viewport; it takes a while to stop hitting spacebar...

python, drag anything into the console

Something that irked me for a while is that there's nothing like maya's mel output window in houdini. Ie, everyone seems to learn mel the same way: you do a sequence of actions, look at the mel editor to see the commands that have been echoed, copy them, and start working up a melscript to do your bidding. This works because 99% of the maya UI is written in mel; it's what makes it so flexible (and also so slow...). Houdini's python interface doesn't let you interactively see commands as you run them, so it takes some digging around at first to understand what's going on.

That said, once you get your head around it, python in houdini is quite nice. That should be a full page by itself really, but a nice trick is that you can drag anything into the houdini python console, and it'll appear as a proper python object path. That's not just objects and lights, but bits of the interface too.

project structure, relative lack thereof

There's no builtin project setting a la maya; it's up to you to define it. Houdini again shows its unixy heritage here, and expects you to define variables; the names and paths are up to you. They can be defined either from the shell if you have pipeline tools in place, or within houdini itself. Eg, you might define $TEX for your textures, $SHD for shadow maps, $BTY for beauty renders, $SHOT for the master shot folder etc... houdini will define $HIP for you, which is the folder where the current .hip file is saved.
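As a minimal sketch of how such variables behave, here's the expansion modelled in Python with `os.path.expandvars` (the variable names and paths are examples I've made up, not a houdini convention; in production they'd typically be exported in the launch shell so houdini inherits them):

```python
import os

# Hypothetical project variables; in a real pipeline these would be
# exported in the shell before launching houdini. $HIP is set by
# houdini itself to the folder containing the current .hip file.
os.environ["SHOT"] = "/jobs/myfilm/sq010/sh020"
os.environ["TEX"] = os.environ["SHOT"] + "/textures"
os.environ["SHD"] = os.environ["SHOT"] + "/shadowmaps"

# A texture parameter like '$TEX/wood_diffuse.exr' would then expand to:
print(os.path.expandvars("$TEX/wood_diffuse.exr"))
# → /jobs/myfilm/sq010/sh020/textures/wood_diffuse.exr
```

The win is portability: move the shot folder, redefine $SHOT, and every path in the hip file still resolves.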

use http paths for textures, neat trick

A cute trick of houdini: when defining paths to textures, they can be relative paths, or explicit paths, or paths with variables, or, if you're feeling saucy, a direct http link. Nice for portable demos when sharing hip files on forums and in email.

mplay vs fcheck/renderview (note it doesn't save images by default!)

mplay is houdini's standalone image viewer and flipbook tool, and it craps all over fcheck, sorry autodesk. LUT support, floating point, AOVs, gain/gamma/brightness... it's remarkably good. It behaves similarly to renderman-style tools in that you tell your mantra/renderman rop to display to mplay (rather than to an exr or other image file), and it communicates with the renderer via a network port, directly displaying the render result. This has an important implication: maya and the renderview will always save an image to disk as well as display it in the UI; mplay never saves images to disk unless you explicitly go 'file -> export images' or similar. If you're like me, and like to use the renderview's saved images as a log of your work, mplay is working without a safety net. If it or houdini crashes, your images are lost.

implicit linear workflow everywhere

Well, it is. Mplay defaults to a viewer gamma of 2.2, internally mantra is all linear light. That's nice.

other interesting houdini features for lighter/lookdev types

some of these are probably mentioned above, but to summarise:

  • buy a single seat of houdini, and you get essentially unlimited mantra rendering on your farm. That's pretty sweet. (You still need to export your houdini scene into a .ifd per frame, and doing this requires a license, so you'd need a few of these if you run lots of jobs on your farm.)
  • houdini ships with a simple farm manager for free called hqueue. hard to find info on people using it in production though.
  • houdini has a built in node based compositor. its not nuke, but fine in a pinch.
  • similarly, buy a single seat of houdini, you can use mplay on any machine on your network, as many as you want.
  • sidefx offer mantra cloud rendering via amazon.
  • sidefx do daily builds of their software, and respond very quickly to bug reports.
  • the way of interfacing with houdini via the command line (gradually being phased out) treats your scene like a unix shell. This is fun and intuitive: cd /obj/ball1; ls; rm null etc.

ok ok houdini is awesome. anything bad?

Yeah, a few things.

  • nodes are great/nodes are awful - the 'nodes are awesome' motto it shares with nuke means it can suffer the same cry of horror as nuke: 'oh god, it's full of nodes!' Picking up a shot from someone else can be daunting at first, as you have to get your headspace into how they've set up their node network. Even with annotations, coloured nodes, groups etc, other people's work can be tricky to decipher. Only made worse by...
  • nodes within nodes within nodes - at least nuke comps, for the most part, are big flat graphs; if there's a show-wide comp template, you have a fighting chance of being able to follow the node flow. Houdini on the other hand forces shape node graphs inside transform containers, shader graphs on their own page, and render graphs somewhere else again. If you have a lot of interdependency, you find yourself diving in and out of networks like a dolphin on crystal meth, or opening several node graphs and trying to follow connections between them. Shader graphs are the worst; if left unchecked you can get 10 levels deep into shader graphs, like playing a node version of nethack.
  • no unified object/render graph - same as above, but mentioned for emphasis. This is katana's big win; everything a lighter would want is exposed in a single graph. In houdini you're always jumping between the /obj graph, where your cameras/objects/lights are, and /rop, where your render nodes are. You get limited gui tools to help confirm whether your object render list really exists in the scene, or if your light has been renamed, or if this shader override will work. Katana (from what I've been told anyway) makes all this mostly transparent.
  • lighting is not a focus - sidefx have made it clear with their latest release that they're focused on serving the needs of fx rather than the needs of lighters, and it's totally the right decision. Their market is almost entirely fx; there's no benefit for them right now in devoting a lot of resources to lookdev and lighting. That's not to say it's neglected, and what is available in houdini now is very good, but going forward it seems maya and katana can push more features for lighters in the short term. Whether maya and katana can do everything else houdini can do... probably not.
  • mantra is slow - I've not used it in production, but reports from everyone who's put it head to head against other renderers say it's slow. Still, hey, it's free.
  • heavy scenes in maya are heavy scenes in houdini - no fancy deferred loading/rendering tricks like in katana; if you try to assemble a monster shot of 50,000 chars, terrain, fireworks, fluid sim, you'll be waiting a while for houdini to load it all.

-- MattEstela - 09 Sep 2011
