Quick blat of ideas while this is fresh in my mind...
I've avoided HDAs for a long time, I'm still avoiding them actually, but latest job requires I get my head into it.
Like a lot of Houdini things it's a good idea hidden behind a slightly obscure interface; this'll try and clear it all up.
What is an HDA anyway?
A Houdini Digital Asset is kind of like a maya reference. You can export a chunk of a hip to an hda, now those nodes live somewhere else on disk, and you can load them into your hip like a reference.
They have a few features that make them a little more complicated than maya references:
- You can't just go File -> Export on a bunch of nodes, you need to use the HDA workflows to export.
- Saving and updating is kind of magical and in-place, it needs a mild leap of faith to trust it's working.
- As well as dumping a bunch of nodes to an external location, you can also put parameters on the top of the hda to control nodes within it, which is a trick you can't do with maya references.
- There's no HDA editor in the style of the maya reference editor per se; it's all done per node.
- HDAs are supposed to be run from a central location if you're in a big studio, meaning pipeline environment variables and all that. There's workarounds and tricks, but that's the ideal scenario.
- HDAs, once defined, will appear in your tab menu rather than expecting you to go File -> Load reference, which is neat.
Creating an HDA
Again there's several ways, but here's how it's most often done:
- Select the nodes you want to HDA-ify
- Put them in a subnet
- R.click on the subnet, Create Digital Asset
- Set the name and location to save it. The default is to prefix the name with your username; I usually remove that. Also be deliberate about the save location: if it's just for you, fine, put it in your home folder, but you probably want somewhere more public if it's to be shared with others.
- Houdini shows you the 'edit operator type' dialog. This is for you to customise the interface. The simple trick here is to put the dialog to one side, dive into your hda subnet, select nodes, and drag and drop parameters you want to control directly into the 'existing parameters' region. They'll be automatically channel referenced onto the top of the hda, super handy.
- Hit accept, you're done.
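The same workflow can be scripted if you ever need to. Here's a minimal sketch using the hou module (which only exists inside a Houdini session); the function name is my own, and the nodes/path are whatever you'd pick yourself:

```python
# Minimal sketch of the subnet -> HDA workflow in Python.
# Only runs inside Houdini, where the hou module is importable.

def collapse_and_create_hda(nodes, asset_name, hda_path):
    """Collapse a list of nodes into a subnet, then save it as an HDA."""
    import hou  # only importable inside a Houdini session
    parent = nodes[0].parent()
    subnet = parent.collapseIntoSubnet(nodes, subnet_name=asset_name)
    # createDigitalAsset writes the definition to hda_path and swaps
    # the subnet for an instance of the new asset type.
    return subnet.createDigitalAsset(
        name=asset_name,
        hda_file_name=hda_path,
        description=asset_name,
    )
```

The 'edit operator type' dialog step (dragging parameters up to the top of the hda) doesn't have a neat one-liner equivalent, so that part is still easiest in the UI.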
Lock states, editing and saving and reverting hdas
Now you have an HDA, you can do a few things with it:
- make modifications to the hda in the hip (the equivalent of maya reference edits)
- push those changes out of your hip and into the hda on disk (the equivalent of opening a second maya, opening the reference, editing and saving it)
- reverting any changes (the maya equivalent of deleting all reference edits).
Next to the hda is a padlock icon. Red means it's unlocked and you might've made local changes; gray means it's locked and matches the definition on disk.
In maya reference terms, red means you might've made reference edits, gray means you've made no changes. If you dive inside the hda when the padlock is gray you'll see the nodes inside are dimmed; you're literally locked out of making edits.
If you want to make changes you can go back to the top of the hda, r.click and choose 'allow editing of contents'. The icon goes red; dive inside and it's no longer dimmed, you can edit away. Any changes you make are stored in the hip, the equivalent of making reference edits in maya. Like maya, the original hda (the reference) is untouched. If you were to bring in another copy of the hda, it wouldn't have those edits (the equivalent of referencing the same file twice; the second won't have the reference edits of the first).
To save your changes to the hda on disk, right click on the hda and choose 'save node type'. That's it, those changes are now in the hda on disk. That's the equivalent in maya of opening a second maya, loading the reference, copying your changes into there and saving.
That said your hda will still be in the red and unlocked state, meaning it's still also going to save local changes into the hip.
To do the equivalent of a 'delete reference edits', ie make it match exactly the state of the hda on disk, r.click again and choose 'match current definition'. The padlock goes gray and locked; if you go inside you can see it's all dimmed, but it should now show that the changes you saved are part of the hda rather than hip-level edits.
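Those three right-click actions map directly onto the Python API if you ever want to script them. A sketch, where asset_node is assumed to be an instance of your hda:

```python
# The three r.click operations above, as Python calls on an hda instance.
# Only meaningful inside Houdini; asset_node is a hou.Node.

def allow_edits(asset_node):
    # equivalent of r.click -> 'allow editing of contents' (padlock goes red)
    asset_node.allowEditingOfContents()

def save_to_disk(asset_node):
    # equivalent of r.click -> 'save node type': push the node's current
    # state into its definition on disk
    asset_node.type().definition().updateFromNode(asset_node)

def revert_to_disk(asset_node):
    # equivalent of r.click -> 'match current definition' (padlock goes gray)
    asset_node.matchCurrentDefinition()
```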
Load $HIP/hda on hip load via hou.session
Download hip: File:hda_from_hip.zip
Hacky. But I like hacky.
My current job has a couple of interesting constraints. Hip files need to be run In The Cloud, and ideally run as standalone as possible. That means minimal environment variables, minimal fixed locations for libraries or assets, just the hip and what the hip might need in subfolders.
We also need multiple people working on a single file at once, using github to manage our files. So that means having a single monolithic hip is an issue, because really only one person can work on it at once (we've tried using the diff tools, but it falls apart on big files, especially if you have things like stash nodes and frozen nodes).
Splitting work up into HDAs would be ideal, but the assumption is you load all your hdas at Houdini startup, and use environment variables to say where those HDAs will be found, which breaks our first requirement.
Enter this trick.
The amazing Chris Gardner shared a python script he wrote to hot-load hdas. Give it a folder, it'll scan the folder and load any hdas it finds. That script lives here:
But how can we call that from a hip if we're trying to not load any external scripts or dependencies? hou.session, that's how!
Hou.session is a python module that is saved as part of a hip, and is loaded (and therefore executed) on hip load. It can be accessed via Window -> Python Source Editor.
As such I copied Chris's script into hou.session, and call it at the bottom of the script. Now I can make a hda folder next to the hip, save hdas in there, and they'll be loaded when the hip is loaded.
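For the record, here's a cut-down sketch of the idea (not Chris's actual script): scan a folder next to the hip for hda files and install each one. The hda subfolder name and both function names are my own choices:

```python
# Sketch: find hda files next to the hip and install them on hip load.
import glob
import os

def find_hda_files(folder):
    """Return a sorted list of .hda/.otl files sitting in folder."""
    files = []
    for pattern in ("*.hda", "*.otl"):
        files.extend(glob.glob(os.path.join(folder, pattern)))
    return sorted(files)

def load_local_hdas():
    """Install every hda found in $HIP/hda into the current session."""
    import hou  # only importable inside a Houdini session
    hda_dir = os.path.join(hou.getenv("HIP"), "hda")
    for path in find_hda_files(hda_dir):
        hou.hda.installFile(path)
```

Paste something like this into the Python Source Editor and call load_local_hdas() at the bottom; since hou.session is executed when the hip loads, the hdas come along for free.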
Obviously you'd want to move stuff to a more solid pipeline infrastructure over time, but this'll do in the short term.