TG Jay, Chinese TD
Some great info over at his site, lighting tutorials, melscripts, bugfixes... worth a bookmark!
Light Linking Optimization
Light linking is important when lighting and rendering. In very large scenes with many surfaces and lights, managing light links can bog Maya down, causing sluggish scene interaction, long render prep times, long file open times and long save times.
Light links live as actual attribute connections between a light, the surface it lights, and the lightLinker node. This results in 2 connections made per light per surface. Only surfaceShape nodes are linked, so if you link to a hierarchy of objects, every shape in the hierarchy makes its own links, not just the top node of the hierarchy.
There is a way around having to link to each individual object thanks to Object Sets.
Lights can be linked to an object set (Create -> Quick Select Set, or Create -> Sets) and all objects within the set will be illuminated by the light.
Take the following example:
A scene is made up of a set (totaling about 2000 separate surfaces), a chair (totaling about 30 surfaces) and a character (totaling about 25 surfaces).
You have 5 lights: 2 illuminate only the set, 2 illuminate the character, chair and set, and 1 illuminates only the chair.
Take note of the following comparisons:
standard light linking
1 light linked to 30 chair surfaces * 2 = 60 links
2 lights linked to 25 character surfaces * 2 = 100 links
2 lights linked to 2000 set surfaces * 2 = 8000 links
Grand total: 8160 links made
(imagine bloating your .ma file with 8160 extra connection lines)
Light Linking to Sets
Same comparison as above except, you have 1 object set containing the set(environment) surfaces, 1 object set containing the character surfaces, and 1 object set containing the chair surfaces:
2 lights linked to 1 set(environment geometry) * 2 = 4 links made
2 lights linked to 1 set(character geometry) * 2 = 4 links made
1 light linked to 1 set(chair geometry) * 2 = 2 links made
Grand Total: 10 links
Holy crap, Batman. That's right, old chum.
As you can see linking to object sets for very large amounts of geometry is far more efficient. Not only will it make scenes save, open, and evaluate faster; it will keep file sizes down, lower memory requirements and, if rendering with mental ray, dramatically reduce translation time.
This is a simple example; imagine if you had 3 characters in the shot, and 50-100 more lights to manage. Your scene will bloat exponentially. Also note that the first example is the most efficient scenario of light linking directly to geometry, making use of illuminates-by-default on and off so that the least number of links could be made. Far less careful linking can create 4 to 5 times the number of links.
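The arithmetic in the two comparisons above can be sketched in a few lines. This is just a model of the counting rule, not Maya API code: every light linked to a target (shape or set) costs 2 connections.

```python
# Sketch of the link-count arithmetic above: each light linked to a
# target makes 2 connections, whether the target is a shape node or
# an object set.
def link_count(targets_per_light):
    """targets_per_light: one entry per light, giving how many
    targets that light is linked to."""
    return sum(2 * n for n in targets_per_light)

# Linking directly to geometry: 1 light -> 30 chair surfaces,
# 2 lights -> 25 character surfaces each, 2 lights -> 2000 set surfaces each.
direct = link_count([30, 25, 25, 2000, 2000])   # 8160

# Linking to object sets: each of the 5 lights targets a single set node.
via_sets = link_count([1, 1, 1, 1, 1])          # 10
```

The ratio only gets worse as lights and geometry are added, since direct linking scales with lights * surfaces while set linking scales with lights * sets.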
All of this applies equally when disconnecting surfaces from a light. Disconnecting a light from objects makes the same number of connection links as linking, because maya has what are called lightIgnored links, which force the light not to illuminate the lightIgnored-linked objects. These links are created when breaking light links between a light and an object, or, in the light linking mode of the relationship editor, if you select a light which has its Illuminate by Default attribute on and unlink geometry.
Oct 17, 2005
- A lightfog object needs receive shadows on, otherwise it'll never have shadows cast into the fog.
- With respect to Spotlight based fog texture/shadow rays quality. Select the fog enabled spotlight (fog)cone, go to the Attribute Editor and switch on Volume Samples (down the bottom somewhere), you can raise this default figure of 1 up to around 3 or more if you're a value abuser. This process will make any textures mapped into the fog appear in more detail.
-- Dean Ervik, 29 May 2003
Raytraced transparency leaves slight shadow
Great tip from Bill Spradlin: Check to make sure your Shadow Attenuation is set to 0 under the raytrace section of the shader. This is a setting that was intended to fake caustics, but really it should default to 0.
Poor man's caustics
Tip from Soren Jacobsen ( www.kurgan.dk ). Create a simple blinn shader and set it up. Connect the facingRatio attribute of a samplerInfo node to the vCoord of a black & white ramp. Connect the outColor attribute of this ramp to the transparency of the blinn shader. Make sure the b&w ramp makes the blinn more transparent as it faces the camera. Then set the ramp's white colour gain to 1.5 or more. Place a light with raytraced shadows (e.g. radius: 8, rays: 7).
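The network above can be modelled in plain Python to see why it reads as caustic-ish transparency. This is a conceptual sketch of the samplerInfo -> ramp -> transparency chain, not Maya code; the linear ramp lookup is an assumption about the default ramp interpolation.

```python
# Conceptual model of the poor man's caustics network: facingRatio
# (1.0 facing the camera, 0.0 edge-on) drives a black-to-white ramp,
# whose output becomes the blinn's transparency.
def ramp(v, points):
    """Linear ramp lookup; points is a list of (position, value)."""
    pts = sorted(points)
    if v <= pts[0][0]:
        return pts[0][1]
    for (p0, c0), (p1, c1) in zip(pts, pts[1:]):
        if v <= p1:
            t = (v - p0) / (p1 - p0)
            return c0 + t * (c1 - c0)
    return pts[-1][1]

def transparency(facing_ratio, white_gain=1.5):
    # black (opaque) at glancing angles, white (transparent) facing
    # the camera; the >1 gain is what lets extra light punch through
    return ramp(facing_ratio, [(0.0, 0.0), (1.0, white_gain)])
```

With the gain above 1, the camera-facing part of the surface passes more than 100% of the light, which is what gives the raytraced shadow its bright caustic-like hot spot.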
- 06 May 2005
Uv space tricks to share single material across multiple objects and multiple filetextures
Say you have a big robot. Each major section (arm, leg, chest) requires its own 2048x2048 texture, yet you want to use a single material. The usual choice is to use switch nodes (unwieldy), or a massive 20,000x20,000 texture to get enough texture space. Neither is ideal.
Very clever piece of lateral thinking by Yirosh at The Mill; take each major section and tile them along in uv-space, so legs are in the 0,1 space, arms in 1,2, chest in 2,3 etc.
'But wait' you say, 'won't they just wrap around as if they're all in the 0,1 space anyway?'. Normally yes, however if you uncheck the 'wrap U' and 'wrap V' options on the 2dplacement node, they only appear in the uv quadrant you specify. So all you do is set the 'translate frame' attrs to the matching uv space, and textures only appear on their matching uv shell.
The remaining tech detail is how to connect all your file textures together to then connect to your material. We're using a plusMinusAverage node, which works well. Just make sure you set the default colour of your file nodes to black (it's 50% gray by default), otherwise that colour will taint whatever is outside each file's uv shell. Will attach a screenshot soon.
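The tiling logic can be sketched outside Maya to make the trick concrete. This is an illustrative model, not the actual node evaluation: with wrap off, a texture only contributes inside its own uv tile, and summing the per-tile results mimics the plusMinusAverage hookup.

```python
# Sketch of the uv-tiling trick: with 'wrap U/V' off and 'translate
# frame' set to (tile_u, tile_v), a file texture only appears inside
# its own uv tile; everywhere else it returns the file node's default
# colour (which is why the default must be set to black).
def sample_tiled(u, v, tile_u, tile_v, texture, default=0.0):
    if tile_u <= u < tile_u + 1 and tile_v <= v < tile_v + 1:
        return texture(u - tile_u, v - tile_v)   # local 0-1 lookup
    return default

def sample_combined(u, v, textures):
    """textures: {(tile_u, tile_v): texture_fn}, e.g. legs at (0,0),
    arms at (1,0), chest at (2,0). Summing = plusMinusAverage."""
    return sum(sample_tiled(u, v, tu, tv, tex)
               for (tu, tv), tex in textures.items())
```

Because each shell's uvs live in a different integer tile, at any given uv at most one texture is non-black, so the sum is just "whichever texture owns this shell".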
Watch out with templated objects
Even though it's listed as a display option, it actually affects render visibility. Template an object, and it's not rendered. Objects set to reference mode DO render, however. VERY ANNOYING.
Thanks to Laura at framestore for pointing this out; Mark Davies, he of Raydiffuse (what we old timers used before you kids got your fancy mentalray and final gathering), has another handy shader, curvature. Laura pointed out it can be used as a cavity shader, or as a very quick, very tweakable ambocc shader where you can easily map into cracks and pores. It's neat.
I get a simple warm glow from working this stuff out... takes me back to doing LOGO in primary school...
Translucent leaf shading
Duncan via highend3d:
The translucence attribute on maya shaders should roughly handle the back side of the leaf. Make the translucence depth 1 and adjust the translucence focus to the point where the back side of the leaf is about the right brightness relative to the front. The translucence will allow you to pick up shadows on the back side of the leaf. If you have Maya 7 you can preview the translucence focus effect in hardware with high quality shading enabled. The focus is basically how directional the scattering of the light is by the material. For thin translucent objects, like a leaf, this is higher than for thicker materials (like candles). At any rate, the focus attribute allows you to have the underside of a leaf look glowy, while at the same time the light-facing side will look more like typical lambert shading. A high focus value would make sense for something like nearly transparent onionskin, where one will actually see a "glow" around the light source when looking at it through the skin.
Incandescent materials vs hardware texturing
Say you're setting up a diffuse pass, where you want a full-bright, unshaded copy of your colour texture. I've seen several ways of doing this; using a surface shader, piping the texture into incandescence, or ambient etc... all render fine, but have the problem of poor texture resolution in the viewport. A neat workaround someone showed me is to leave the colour plugged into the colour swatch, turn diffuse to 0, and ambient to full white. Viewport texture quality stays fine, and you get your full-bright render.
Surface shaders for speed
If you're doing matte passes and whatnot, you don't need proper lighting from blinns/lamberts, and speed is paramount, replace your shaders with surface shaders. They don't do any lighting calculations at all, saving you precious CPU cycles.
Tips on organic (medical) shading
Courtesy of Andy Wagener and Stewart Pomeroy:
Late last year I was brought on to a Med Vis. job. The existing team there had been working together for years and had some great tricks.
Out of those tricks a few things distilled themselves:
- Facing forward ratio on just about every aspect of shaders. Mainly, though, adding a very slight hint of colour in the Ambient Value of most shaders really kept everything from muddying up, and this was important for the art direction of all their work. Basically they avoided the colour/hue of black everywhere.
- Blend Colours with a ramp piping into the Surface Luminance of a shader. This also helped keep the colours from muddying up.
- Facing forward ratio in the incandescence on things like molecules. This gave the appearance of the surface having depth.
- Don't underestimate the value of using animated texture maps.
Shading via surface luminance nodes
Modified from a post by Chad Briggs on highend3d:
Plug your surface luminance into a clamp with min 0 max 1, that into the v-coord of a ramp, then that into your material. You'll need the clamp otherwise you get strange effects. Easier still is to use a ramp shader, but this'll give you very fine control with more complex setups.
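The chain above can be modelled in plain Python to see why the clamp matters. This is a conceptual sketch, not Maya code; the two-tone ramp is just an illustrative example.

```python
# Conceptual model of the chain: surface luminance -> clamp(0,1) ->
# ramp v-coord -> material colour.
def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def shaded_colour(surface_luminance, ramp_fn):
    # without the clamp, luminance above 1 (bright lights) or below 0
    # would sample outside the ramp, giving the "strange effects"
    return ramp_fn(clamp(surface_luminance))

# a hard two-tone ramp: shadow colour below 0.5, lit colour above
two_tone = lambda v: (0.2, 0.2, 0.4) if v < 0.5 else (1.0, 0.9, 0.7)
```

Swapping in smoother ramp functions (or stacking several) is where this beats a ramp shader for complex setups.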
Joseph Francis has an even cooler (although more complex) tip on his blog, where you use ramps to control spotlight colour falloff: http://www.digitalartform.com/archives/2005/08/hue_falloff_in.html
- http://www.imageafter.com/ - excellent hi-res images across a wide range of subjects. They also maintain their own section of texture links.
- http://www.flickr.com - ok there's about 3 billion cat and baby photos, but if you dig deeper there's lots of good stuff to be found. If you register you get access to multiple resolutions of most photos, some can be remarkably high quality.
Disable textures so they pass their default color instead
Nice tip by jason brummet from the highend3d list:
given a shading node blinn1:
// turn the connection state on, so the texture renders
shadingConnection -e -cs on "blinn1.color";
// turn the connection state off, so the default color renders
shadingConnection -e -cs off "blinn1.color";
You'll get a visual indicator of this in the blinn1 attribute editor; the connection button will turn red.
You can also ignore links in the attribute editor of a material via the right click menu. Open the AE for your material and right click the name of the attribute (as if you were going to Break Connections) and choose Ignore While Rendering. This is an interface for the mel commands that Jason Brummet posted.
On an additional note, the Ignore While Rendering will ignore every
input connection to that attribute, meaning expressions, keyframe animation, set driven key, as well as texture connections.
-- Edit by: Sean Fennell, Oct 17, 2005
Dragging nodes onto the AE breaks under linux
I found I couldn't drag nodes from hypershade into the attribute editor swatches to link textures together. Quickly switching to the channel box and back sometimes fixes this, but failing that, if you tear off a copy of the AE with the 'copy tab' button, it should work.
Camera projection, matching image planes
Why does Alias make this so difficult? It seems every time you do something involving camera projection you have to relearn it from scratch. For a process that's so often used in CG, they could make the whole thing a lot easier. Anyways... [rant off]
- Set filter for your file texture off
- Set filter on projection to zero
- Set projection to perspective, link to camera
- fit type: 'match camera resolution'
- fit fill: 'fill'
Seems you have to use square pixels for this to align correctly (others have found differently), and if you're using film gate vs resolution gate, you have to use different settings. Also make sure your camera's overscan is 1 and its xy offsets are 0,0, otherwise it won't line up. Furthermore, if your image dimensions are substantially different from the film gate, it won't line up, even if you're using resolution gate all the way through every node in your damn setup. Still gotta sit down and work out exactly what maya expects, so I don't keep fighting this every time.
Here's Andy Boyd's great tutorial on camera projection: http://www.a3d.co.uk/maya/camera_projection.html , and another by Mike Breymann: http://www.mikebreymann.com/images/cmap_tut.pdf
Creating alphas for camera projections/camera mapping/photogrammetry
Had a brainwave the other day that's probably obvious to others; the problem with camera projections in maya is that they project completely over your object, so it's difficult to combine multiple projections. Normally you have to bake the projection, then analyze the resulting map in UV space and try to work out what's correct and what's unwanted stretchy texture. I realised if you parent a spotlight to the camera and bake in shadows+lighting, you get a matte of only what's visible to the camera.
The next step is to somehow convert this into a shader network that does this all in a single step. Something like a facing ratio, but whereas facing ratio is always relative to the renderable camera, this would need to be linked to the projecting camera (or light). I have a hunch this'll involve lightInfo nodes, samplerInfo nodes, and god help us, some vector math nodes. There's a mentalray photogrammetry shader by jeremy pronk that i'm sure works great, but I'm curious to work this out for myself as a learning exercise. Besides, I keep getting problems with mentalray bakes, must try and figure out why...
Facing ratio normal camera vs ray direction
Here's a breakdown of the differences between them by Robert Rusick:
normalCamera
Returns a vector from the current sample point. This vector is a "surface normal", which means it is perpendicular to the surface at that point. Additionally, this vector is in the camera coordinate system, which is why (I'm assuming) it is called "normalCamera".
normalCamera and bump maps
The normalCamera is used to determine things like shading and reflections on a material. Bump maps work by "perturbing" the surface normals; deflecting them as they would be deflected if the bump texture were part of the geometry, which in turn affects how shading and reflections are calculated.
rayDirection
Returns a vector from the current sample point, pointing away from the camera (as if following a ray projected out from the camera). Or it could equally be considered to be a vector from the camera, pointing toward the current sample point.
Which direction does the ray direction vector point?
I've found the documentation on this attribute a little confusing (i.e., wrong). It suggests that rayDirection gives the direction toward the camera from each point. That it actually points away from the camera was backed up with some experiments with a texture test rig (which would be too tedious to describe for this post).
-- Robert Rusick - 06 May 2004
Creating a concentric volume ramp
Use a fluidTexture3d. Example scene: concentricRamp.ma
On the fluid texture make the density and velocity both OFF to avoid creating any grids or dynamics. Make the colorInput "Center Gradient". The color ramp now is applied based on the distance to the center of the cube. Now make the dropoff shape "sphere".. having a dropoff shape will keep the color ramp from wrapping. Now set your default color (under color balance) as desired.
Note that the default color only appears outside the cube, not outside a sphere. To fix this, make the first entry in the color ramp the default color and set its interpolation to none. Create another entry in the color ramp at a position of about 0.33. Now you should have the desired ramp. Note that if your default color attribute is textured, you will also need to connect the same texture to the first color ramp index.
If desired you also can modify the opacity ramp in a similar fashion to the color ramp. The outTransparency and outAlpha attributes on the fluid texture show the effects of the opacity ramp.
-- Duncan Brinsmead - 9 June 2004
Note that the shaded viewport won't update if you transform the placement node; you'll have to nudge a value in the attribute editor, or view it through IPR.
Convert envCube into spherical map
Create a samplerInfo node and connect its normalCamera to the rayDirection on the envCube. Thus it thinks the view is always looking down the normal, so instead of behaving like an environment map it is more like a projected texture.
Create a default sphere and then do hyperShade -> edit -> convertToFileTexture.
The resulting file texture when used as a spherical env should exactly match the envCube(before you connected to its rayDirection).
-- Courtesy Duncan Brinsmead, 30 Mar 2004
Ok, I didn't connect it to an envSphere, but you get the idea.
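For reference, the mapping a spherical environment texture ends up using is the standard latitude/longitude layout: a direction vector becomes a uv coordinate. This is a sketch of that general convention, not necessarily the exact axis convention Maya's envSphere uses.

```python
import math

# Sketch of a direction-vector -> lat-long uv mapping, the kind of
# lookup a baked spherical environment texture is sampled with.
# Axis convention (z forward, y up) is an assumption for illustration.
def direction_to_latlong_uv(x, y, z):
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)   # longitude
    v = 0.5 + math.asin(y) / math.pi               # latitude
    return u, v
```

So looking straight down +z lands in the centre of the map, and straight up lands on its top edge; the convertToFileTexture bake effectively fills this grid in one pass.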
How to connect video to a shader?
The movie node seems to give a lot of people problems, the simplest way around is to use an image sequence instead. Get your comping app to create a sequence in the form name.####.ext (tif or tga seem to work best), then run through the following steps:
- create a regular file node, point it at the first image in your sequence
- turn on the 'use frame extension' toggle
- set a key of 1 at frame 1, and a matching end key for the last frame in your sequence
- link it into your shader network as you normally would.
I prefer to key the frame extension rather than use an expression. Often you'll have to slip your footage backwards or forwards, time-ramp it, play backwards, hold a frame etc... this is much easier to control from the graph editor than with an expression.
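What the 'use frame extension' toggle does is substitute the keyed frame number into the name.####.ext pattern. A quick sketch of that substitution (plain Python, illustrative pattern name):

```python
# Sketch of the frame-extension substitution: the keyed frameExtension
# value is zero-padded and dropped into the name.####.ext pattern.
def frame_filename(pattern, frame):
    """pattern like 'shot01.####.tif' -> 'shot01.0012.tif' for frame 12."""
    pad = pattern.count('#')
    return pattern.replace('#' * pad, str(frame).zfill(pad))
```

Since the frame value is just an animated attribute, anything the graph editor can do to a curve (slip, hold, reverse, time-ramp) remaps your footage.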
Realtime playback of sequences
You can force a sequence to be cached in video memory, very handy. Here's how:
- on the file node, expand 'hardware texture cycling'
- set the first, last, and by frame values. If you have limited texture memory, you might have to use every 2nd frame, and limit the range.
- set playback options to 'every frame', make sure you're in shaded texture mode, and hit play. It'll play slowly first as it caches into ram, then play realtime or faster after that. You can change your playback speed back to normal at this point.
Occasionally that cache will be lost if you tweak anything in the scene (even if it's not related to the texture). Restarting maya can sometimes help, sometimes not.
Limiting projected textures using wrap
In the projection itself, under 'effects' is a 'wrap' toggle. Enabling this will limit the projected texture to the volume of the texture placement node. This is handy for applying decals and logos, as you can now scale the projection node so that it cuts one side of your object, and the logo won't appear on the other side. Of course, your logo probably has alpha you need to use, so next you'd probably ask...
How do I link alpha to a projection?
- On the projection node, expand 'color balance' and set alpha gain to zero
- Connect your texture's alpha to the projection alpha offset
- Create a reverse node, connect the projection out alpha to the 3 reverse inputs
- Connect the reverse out to your material's transparency
Here's a pic, because we all like visual aids:
How do I layer a projection over another texture?
Example scene, relink the file texture to something with alpha on your system. projectionWithAlpha.ma
The most editable way is to use a layered texture (same holds for layering regular textures):
- Create a layered texture node, set the default green swatch to black. It's important to leave a 'base coat' like this; you'll get weird results otherwise.
- Create a new swatch (click in a blank area of the swatch bar), set its blending mode to 'none', connect your base texture colour to the swatch colour.
- Create another swatch, connect the projection colour to the swatch colour, and the out alpha to the swatch alpha, and set the blend mode to 'over'.
- Connect out colour of the layered texture to the colour of your shader. You can reorder swatches by middle mouse dragging.
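The blending the layered texture performs can be sketched as a standard 'over' composite. This is a conceptual model of the steps above, not Maya code:

```python
# Sketch of the layered texture stack above: the projection layer
# composites 'over' the base texture by its alpha, and the base sits
# on the black base coat with blend mode 'none'.
def over(fg, fg_alpha, bg):
    return tuple(f * fg_alpha + b * (1.0 - fg_alpha) for f, b in zip(fg, bg))

def layered(base_colour, proj_colour, proj_alpha):
    base_coat = (0.0, 0.0, 0.0)                  # the black default swatch
    base = over(base_colour, 1.0, base_coat)     # blend mode 'none'
    return over(proj_colour, proj_alpha, base)   # blend mode 'over'
```

Where the projection's alpha is 0 the base texture shows through untouched, which is exactly the decal behaviour you're after.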
Connecting multiple 3D attributes to plusMinusAverage
In the connection editor, use the right mouse button when you go to connect to the input. An option comes up that says "connect next available" and it makes multiple connections! You can also use the -na flag in connectAttr.
From memory this has changed in maya 4.5, so now you can only do this via the mel command.
Polysmoothed objects don't have correctly smoothed UV's, what do I do?
Move to maya 5. If you're stuck on an earlier version, read this:
Bumpmap and facing ratio
To get a bump map to work with facing ratio, connect the outNormalZ from the bump node to the vcoord of your ramp
Toggle hypershade updates
- renderThumbnailUpdate true; // Turns on thumbnail updates.
- renderThumbnailUpdate false; // Turns off thumbnail updates.
Setting resolution of hardware textures
You can override the 512x512 limit by adding a int attribute to your texture named 'resolution'. You can now go as low as 1, or as high as 4096 (assuming your video card doesn't explode under the strain).
Sometimes you'll find maya will do this for you, or place it on the shader rather than the texture. Don't be alarmed...
There's a handy plugin for max that allows HSV manipulation of images, and while maya has an HSV node, it takes some effort to use it in a shader. I wrote a melscript to do the work for you, download it from highend3d: http://www.highend3d.com/maya/mel/?group=melscripts&section=rendering#2209 hsvControl.mel
Maya 6.5 added a few colour adjust nodes that use the new fluid ramp widgets, thus avoiding the need for this script. It's meant to work similarly to the photoshop curves dialog, which is great, but the widget can't be resized, which is not so great. Maya 8 seems to have fixed this though, with full-screenable ramp widgets. Woo!
The box reflect node is a great way to get ultra-cheese chrome reflections without raytracing, and it handles procedural inputs better than the standard ball environment reflect. I created an example shader, lozenge, looks a little like this:
- 20 Feb 2004