Mental Ray

There is a new section for 3rd party mental ray shaders. The default Mental Ray in combination with Maya is such a huge topic that it needs a clear, separate structure. Post information about mental ray shaders and 3rd party shaders in the new separate MentalRayShader section.

-- AndreasMartin? - 22 Mar 2006



I've checked them and removed some 404s as of July 2012. I'll check the SSS links later, as it's not really useful anymore (ie it works out of the box) -- MattBernadat

Understanding Mentalray and optimising performance

  • Production shaders PDF - Master Zap's documentation of the MIP production shaders, which include motion vectors, mirrorball and gray-ball lighting, and a few other cunning tricks.
  • DJX - David Johnson's blog. Lots of good practical mentalray advice, taking up some of the fiddly specifics of maya's mentalray implementation where mr.zap leaves off. Save oodles of time flailing about for broken or undocumented features, odds are David has got there first, written it up, and created 5 melscripts so that you never have to worry.
  • MR Zap's blog - clever guy who writes lots of cool shaders over at mental images (now at Autodesk?): fast-sss, skin shaders, the new architecture materials, all his work. Lots of great tips on his site for getting the most out of his shaders. AFAIK it's been pretty much dead since 2011.
  • Gnomon Freebies - including fast-sss based skin shaders. Be sure to read mr.zap's notes as well (link above).
  • Finally, a proper, well supported, user driven mentalray site! This will be the new home for all things mentalray, looks like everyone is getting behind the site. A lot of stuff that was otherwise difficult to find (ctrl.shaders, interior lighting tutorial on that italian site, etc) are all being hosted at mymentalray now. Most of the cool links below you'll find there, so head on over and browse around.
  • LAMRUG - LA mental ray users group, lots of great (old) articles to be found. Make sure to read the Sampling tips. If you are curious about render times, read this other article by Horvátth Szabolc
  • XSI_rendering_lajoie.pdf - softimage XSI paper on optimising mr sample settings, bsp etc. Interesting read that applies to Maya+MR

SSS/Displacement/Final gathering walkthrough links

3rd party MentalRayShader/plugin links [updated]

  • mix8layer and bumpcombiner tutorial - the original spanish version is here, but once again babelfish does a pretty good translation. Mix8layer is a mentalray native version of layered texture; the tutorial explains its use as well as other native mr nodes, depth of field, and other handy things. Great intro to using hypershade with mentalray.
  • deathfall mentalray forums in spanish and english - be sure to look at both, the screencaptures tend to make things very clear, and you can always fall back to babelfish if need be.
  • - looks very xsi centric, but from my limited understanding it's possible to get such things to work in maya. The forums on this chap's website might have clues...
  • - liquid for mentalray? Eh? Looks fairly new (dec 20), I've heard a few complaints about the bundled maya2mr translation by alias when used in production, maybe this is a fix?
  • - russian programmer/td with notes, plugins, render gui's, shading nodes etc for mentalray
  • - some c++ classes for mentalray, puppet has a compiled set here.
  • ctrl_buffers, a mentalray shader that does arb-outs. Woo! Read the thread to see some neat hypershade screengrabs and renders. After being out in the world for a few years it definitely showed it was all possible, but suffered from patchy (ie nonexistent) documentation and random release dates. Fantastic that released this stuff, but I always got nervous that it was closed source and unsupported. I'd suggest using simplepasses instead, explained below.
  • A displacement shader that lets you control the direction of displacement. Pretty pictures are in that thread if you scroll down a bit.
  • - a motion vector exporter to use with revision fx realsmart motion blur. You'll want to go to this page by revision fx that has installation and usage instructions, as well as links to maya builds by Horvatth Szabolcs for windows and Tom Cowland for OSX. This functionality is now builtin via the mip_motion_vector shader, explained below.
  • A CGTalk thread by Mike Eheler on creating custom shader folders. Very handy, my default folder was getting pretty messy...

Tutorials & Tips'n Tricks


If your hypershade freezes and you experience a lot of lag it's probably because maya is updating thumbnails (mostly on mia materials). It's really frustrating when working on big networks. Use:

renderThumbnailUpdate 0; // turn thumbnail updates off
renderThumbnailUpdate 1; // turn them back on

David Johnson (djx) wrote a MEL script to integrate this into the hypershade for versions 2008-2012:



A proxy based forest; each tree is bursting out of the ground with deformation blur, the foreground tree has transform as well as deformation blur, and separate shaders are assigned to the trees in different passes. Even though the final poly count is something like 20 or 30 million (yes really!), the maya scene is at most 500kb.

New in maya 2009 are mentalray proxies. These allow you to export geometry to a native mentalray .mi format (File->export selection->options, mentalray, render proxy), then associate it with a simple shape in your scene, eg a poly cube. When mentalray renders the cube, it will be replaced by the contents of the .mi. Mi files themselves are mostly plain text, making them ripe for editing and manipulation. Here are some tricks.

Changing materials

Proxies store both geometry and materials. Assigning materials to the lo-res object will have no effect. This makes it a pain for setting up passes and whatnot. If you peek inside a proxy you'll find that the link from material to object is a single line:

material ["myAwesomeMaterialSG"]

This will refer to a shading group (SG) within the proxy. If you edit the name to start with a double-colon:

material ["::myAwesomeMaterialSG"]

mentalray will look for that SG within your maya scene instead. So, as long as you duplicate your materials within your render scene, and make sure the SG names match up, you can then do whatever you want. My usual tactic is to select the SG, assign a layer override on the 'surface material' slot, and plug in whatever other material I require.

There's no way to have maya create .mi's this way, but you can use perl to edit your .mi's, and store a backup:

perl -p -i.bak -e 's/(material \["\w*:|material \[")/material \["::/' myProxy.mi

That ridiculous bit of code finds any occurrence of material [" (with or without a namespace prefix) and replaces it with material ["::, stripping the namespace as it goes.

This is surprisingly fast; running it on 3GB of proxies finished in under a minute.
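For the perl-averse, the same substitution can be sketched in Python (illustrative only, not a drop-in replacement for the one-liner above):

```python
import re

# Match 'material ["' plus any optional namespace prefix like 'ns:',
# and rewrite it to the scene-lookup form 'material ["::'.
pattern = re.compile(r'material \["(?:\w*:)?')

def globalise_materials(mi_text):
    """Prefix every material name in the .mi text with '::'."""
    return pattern.sub('material ["::', mi_text)

print(globalise_materials('material ["myAwesomeMaterialSG"]'))
# material ["::myAwesomeMaterialSG"]
```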

It's not enough to just have the materials and SG's in your scene though; by default mentalray will assume no objects are assigned to your materials, so they'll be ignored. Instead you need to create a dummy cube for each material, assign it, hide them, then turn off 'optimize non-animated display visibility' in the render globals->mentalray options to force them to be exported when rendering.

Animated objects

Proxies are currently designed to only work with static objects. While there are options within the export dialog for frame numbers and whatnot, they're ignored when you export a proxy. Instead, you can get maya to export raw mentalray bits of your scene (called scene fragments), then wrap them in the few lines of text required to make mentalray recognize them as a proxy. The melscript 'meWriteProxy 0' does all of this for you.

Rendering animated proxies

Because the fileProxy property on a shape is a text string, and maya doesn't let you keyframe text strings, you need to find another way to animate that value. The simplest way is to use an expression that does a getAttr and setAttr. At its simplest you create another string attribute called say 'proxyPrefix', then create an expression that looks something like

string $prefix = `getAttr myShape.proxyPrefix`;
int $frame = `currentTime -q`;
setAttr -type "string" myShape.fileProxy ($prefix + "." + $frame + ".mi");
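The filename the expression builds is just prefix, frame number, extension; as a Python sketch (the names are placeholders from the example above):

```python
def proxy_filename(prefix, frame):
    # Mirrors the MEL expression: "<prefix>.<frame>.mi"
    return "%s.%d.mi" % (prefix, frame)

print(proxy_filename("myProxy", 12))  # myProxy.12.mi
```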

Rendering animated proxies on the farm

Annoyingly, while the above trick works from within the renderview, it will be ignored if you batch render or render on the farm. The brute force trick is to save out a copy of your full scene per-frame, then render each of these on the farm. This is a pain to manage! A slightly less brute-force trick is to then reference one of those per-frame scenes into a new scene, then create an expression to update the reference per-frame. Very hacky, but it works. The procedures 'meSaveSeq' and 'meRefExp' will save out a sequence and setup the reference expression respectively.

Note that you should only really save out the proxy stuff; I found that referencing the entire scene (especially with the camera) and trying to update it per-frame will crash mentalray.

Proxies and motion blur

If you animate the lo-res proxy shape, it'll motion blur happily. Slightly trickier is if you want stuff inside the proxy to also motion blur. As long as you turn on motion blur before running meWriteProxy, the motion information will be stored within the proxy.

Rendering it is another matter. Deformation blur works without hassle. If you have transform blur though, it won't. The trick is to keyframe your lo-res shape to animate a tiny amount.

Textures and proxies

Similar to the above trick, keyframe your lo-res object a tiny amount and turn on motion blur to have textures render correctly. If you don't there's a weird bug in the rasterizer that will cause textures to swim.

Rendering proxies in general

If you're using proxies it's likely that you're looking at lots of geometry. As such, you'll want to optimise the renders as much as possible. This means use the rasterizer, and if possible disable raytracing entirely. Doing this allows mentalray to churn through loads more polys than it could otherwise deal with.

Proxies generate loads of warnings and errors, yet frames will still render correctly. If the farm keeps error logs this can cause problems (a 97 frame sequence I rendered created 24GB of error logs!), so if possible disable logging completely. Within qube this is only possible by using the maya batchrender style of job.

Particle instances are exported to proxies in a slightly odd way; everything is parented under an 'instancerGrpInst' node. meWriteProxy will include this by default. Depending on your error warning level this will cause mentalray to generate loads of errors if you don't have an instancer in your proxy. They can be ignored, but again watch the size of your error logs; mentalray will complain about this a LOT if you have loads of proxies!

Maya instances which are then saved out with meWriteProxy are very memory efficient; use these if possible.

At the moment meWriteProxy takes a single parameter, '0' or '1'. 0 is the default; 1 will export the materials to a separate .mi if you wish to edit them by hand. The perl trick mentioned above negates the need for a lot of hand-editing however.

Self occlusion, ambient occlusion (ambocc) sets etc

Uses miLabel as helpfully explained on djx's blog. Note that it goes on the transform, not the shape. Yet proxies go on the shape, not the transform. Keeps things interesting I guess.

Summary: Add attribute 'miLabel', int, give it a non-zero value. Using a matching value on the occlusion texture will enable/disable that transform from occlusion. Using a negative value in id inclexcl will also turn that shape off. That's used more if you're doing many shapes all with the same occlusion, but some should occlude and others shouldn't.
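My understanding of that include/exclude logic, sketched in Python (a guess at the semantics, not the shader's actual code):

```python
def occludes(mi_label, id_inclexcl):
    """Does an object with this miLabel contribute to occlusion,
    given the occlusion texture's id inclexcl value?"""
    if id_inclexcl == 0:
        return True                      # no filtering, everything occludes
    if id_inclexcl > 0:
        return mi_label == id_inclexcl   # include only matching labels
    return mi_label != -id_inclexcl      # negative value: exclude matches

print(occludes(5, -5))  # False: label 5 is excluded from occlusion
```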

Enabling the mip shaders in 2009

They can now be enabled/disabled via an optionVar, no mel hacking required. In the MEL command line, type

optionVar -iv "MIP_SHD_EXPOSE" 1

Save prefs, restart maya, and the shaders are all there, even placed in the correct shader categories.

Final gather speckling - fg cast/fg receive are your friends

FG speckling? Beyond the obvious (misused IBL, low FG settings), make sure objects which shouldn't contribute to FG are disabled: on their shape node, under mentalray, turn off final gather cast and final gather receive.

An example. Simple lamp, area light inside lamp. Light casts shadows, but lampshade geo has cast/receive shadows disabled. Horrible speckling ensues. Thinking about it, wherever the area light intersected the lampshade it was setting those fg sample points to crazy high values, basically the same as including a sun in an IBL image. Hiding the lampshade from FG fixed the speckling.


Mip motion vectors

Easier than I remembered, create the mip_motion_vector node (it lives in the mr materials section), apply it as an output shader to your camera. Then in the camera secondary outputs turn on colour, depth and motion vectors. Also make sure you turn on motion blur in the globals! To speed things up it helps to assign a dummy shader, like a surface shader. Of course the right way is to have it be calculated at the same time as your beauty via a framebuffer, but anyway.

Here's a highend3d post about getting these motion vectors to work with nuke. I'm still getting odd artifacts, will have to do some further investigating:

To make this work in 2009, connect mip_motion_vector.message to cameraShape.miOutputShader (it's in a subgroup called miControls or something). The next time you look at the camerashape in the attribute editor it will have exposed a new section, 'legacy output shaders'. Turn on depth and motion vectors, all good. This assumes you want to render directly rather than through framebuffers.

Make sure the camera is at scale 1,1,1, and any parent groups are also at a scale of 1,1,1. Motion vectors won't render correctly if you use odd values. In fact lots of little errors sneak in if you use a scaled camera; it's good practice in general to avoid this at all costs.

Framebuffers in 2009, no thanks

So maya 2009 has built-in framebuffers now. And they don't work. Useless in fact. That's the last straw for me, so over the next year I'll be moving to 3delight.

Disappointing that maya and mentalray is still such a half-baked solution after all these years. No doubt the guys at mental images are a talented bunch, as are the maya team, but it seems to me that with mentalray-for-maya neither team is taking full responsibility, so it's in a limbo where neither has tried using it in production and realised how broken it is. At least 3delight both develop the renderer and the maya integration (and soho vfx, the co-developers of the maya side, use it in production), so the buck stops with them.

Until either mental images or autodesk take ownership, mrfm will stay broken. < / rant off >

Framebuffers in maya 2008 quick summary

You create buffers and passes. Buffers store the shader data. To get this data into an image file it needs to be connected to a pass, which contains info about the image name and image type. That in turn connects to your render camera. A buffer and pass are needed for each framebuffer you want, so you'd have a reflectionbuffer and reflectionpass, a specularbuffer and specularpass, a diffusebuffer and diffusepass etc. Buffers are created in the render globals, passes are created in the camera. So that's the output side covered.

To pipe shader info to these outputs currently requires a custom shader, called a buffer store. There's a few around, ctrl.buffers being the most well known one, but my current preference is simplepasses, available here. It's the only one available on all platforms, and source code is available, so it's somewhat future proof.

Anyways, the buffer store shader you use contains several colour swatches. Whatever is connected to these will get piped to the matching output pass. Simple. You then assign this store shader as a material to your object, and render away.


Framebuffers, what works, what doesn't

You'd expect you could take your standard maya blinn and separate out its diffuse/spec/refl right? Nope. Materials need to be buffer-ready; they'll have several outputs that you connect to your buffer-store shader. At the moment the only internal material that offers this is mia_material_x. 3rd party wise there's the puppet shader, and I believe 1 or 2 others, but apart from that you're expected to create your own.

So if you have a pre-existing material, how are you supposed to break it into its separate components? Brute force. Duplicate your material, turn off everything but the diffuse properties, connect that to the diffuse slot on your buffer-store shader. Duplicate again, turn off everything but specular, connect that to the specular slot. And so on. As you'd expect it's not efficient (you're essentially calculating 5 separate materials rather than 1 material split into its components), and it makes for big hypershade networks. That said, you get a considerable advantage if you're using dense or heavily displaced geometry, as mentalray only has to tessellate once and shade 5 times, vs tessellate 5 times and shade 5 times. It's probably an incentive to use the new container nodes in hypershade to keep things tidy. Or to use mia_material_x wherever possible. I suspect the next version of maya will make all the built-in standard materials buffer-friendly, as this is a key feature of both renderman for maya and 3delight.

Another problem is alpha. This is related to a key difference between maya and mentalray; maya's shader networks all use RGB connections, and transparency is treated as a separate RGB value too. Mentalray internally uses RGBA everywhere. This makes for a lot of guesswork for the maya->mentalray translator, mystery extra alpha connections here and there, all a bit messy. You've probably experienced occasional mentalray renders where the alpha is all black; that problem rears its head many times over with framebuffers. The beauty pass often drops its alpha (as your primary maya-blinn material has no obvious alpha output), so you'll want to specifically make a matte framebuffer to make sure you're covered in comp.

You can't do separate light or shadow passes with framebuffers, at least not without 3rd party shaders again. I vaguely recall a ctrl shader for doing exactly this, but as mentioned above I'd rather not rely on closed-source solutions unless I can pay for it and get some guarantee it'll be supported.

A lesser yet still important problem is how you preview and mix material components. If you've got an existing material then you should be fine to duplicate and split it into components, but you'll probably want to keep tweaking and adjusting the mix of dif/spec/refl/amboc etc for your beauty pass. The least-resistance method so far is the mib_color_mix node, found in the mentalray data conversion section. It's basically the same as a layered texture or layered shader, but allows you to connect its inputs to a buffer-store, keeping things somewhat neat.

You can see the ideal method is to use a buffer-aware shader like mia_material_x, and avoid a lot of the hand wiring stuff. Again I hope it's easier in future versions.

So, that's the overall rundown. I'll give a step-by-step in MayaFrameBufferTutorial, heavily influenced by some great research done for us by Aaron Grove and from the render wiki.

Zbrush, maya, mentalray, executive summary

This is intentionally brief; if you're doing this for the first time read Scott Spencer's guide for a very detailed tutorial.

  • Install the multi displacement 2 and ADE plugin:
  • Under zplugins->multi displacement 2 (MD2), set your map size and sub-pixel accuracy.
  • Click 'export options', create a single preset of 3-channels, 32bit, 'full range' in each channel, scale off, vertical flip enabled. The shortcut code for all that is DE-LAEK-EAEAEA-D32
  • Close, again under zplugins -> MD2 click 'create all'. It'll ask you for an image name, then go and calculate your map
  • In maya, setup displacement using a mr approx node, apply the image as your displacement map, set alpha gain to 2.2 and alpha offset to -1.1. While this works in theory, I'm finding I still have to adjust these values by eye per object. They're pretty random; the current model is on 30/-15. Hmm.
  • Render
  • Use photoshop CS3 to do touchups on the map (you can paint in 32bit now). If you've used lots of pinch and nudge turn on 'adaptive' under tools->displacement, it does a slower, but more accurate raycast when generating subsequent displacement maps.
  • If mentalray starts throwing memory errors, convert your tif to a .map using imf_copy -p foo.tif foo.map

Approximation Editor, displacement vs subdivisions

Use displacement if you don't require the model to be subdiv-smoothed (pretty rare). Otherwise use subdivisions. You used to have to apply both, this is no longer the case.

Image planes show in maya software, but not in mentalray

Check the path to your image plane; if it's a sequence it might be in the format image.@.tif. Replace the @ with the first frame number, and mentalray will be happy again.

Missing alpha, exr, photoshop, workarounds

If you use the native mentalray shaders, eventually you'll notice the alpha channels are missing. To get them back, open the mentalray render globals, and towards the bottom under 'custom entities', enable 'pass custom alpha'. In fact if you're rendering with mentalray, that should be the first thing you enable. Why isn't it on by default?

But what if you forget this, and you've set a 5 hour EXR print still rendering? If you bring the render into photoshop, you'll see photoshop brings your render in as a layer. And because it sees a blank alpha, it treats the layer as being completely empty. Which sucks, as there are no tools in photoshop to extract the colour info.

Instead, replace photoshop's exr reader with the one from Not only does it bring your render in as a standard flattened image with a black alpha, it also has a nice thumbnail preview when you first load the image, letting you set the gamma and exposure. Note that it then brings the image in as 16-bit rather than floating point, so if you plan on doing tonemapping tricks, you have to set the image type to 32-bit, then back again.

Final Gathering, Global Illumination, Ambient Occlusion

After much avoidance, sat down to learn the full FG/GI/photon/ambocc workflow. Much easier than I expected (maybe the tools have improved in 8.5, last I looked was v6 or so).

Main thing I've learned is why you need GI vs FG. Basically if it's an exterior daylight shot, FG is fine, but whenever you're getting light bouncing off multiple surfaces (ie interiors), GI. Final Gather is good for a single bounce, Global Illumination for multiple bounces. That said, final gather is designed to work with GI; they're not mutually exclusive. Ideally you get a reasonable looking render with GI that suffers from mild noise. You then enable FG, which smooths out the solution and gives a nice clean render.

Another way to think of it is that final gather is calculated from the camera view, so it's implied that it's a one-bounce solution (it won't calculate what it can't see). GI is calculated from the light's point of view, firing photons and bouncing them around the scene, so it's designed for that sort of multibounce effect.

And ambient occlusion? Yet another way of thinking about FG/GI/Ambocc is in terms of complexity. GI is most complex, bouncing photons around. FG is mid level, calculating for each ray from the camera how much light each point gets, and what colour. Ambocc is simplest, in that it only calculates how hidden a surface point is, and shades it accordingly. That said, ambient occlusion is often the main kicker in making surfaces seem real, so for a full hero render, you're likely to use all 3 effects at once.

As an aside, both FG and GI can be used to approximate an ambient occlusion pass, however they're so computationally expensive it's best avoided. You'd have to fire a crazy number of photons for GI to look like ambient occlusion, and while FG can fake ambient occlusion in a reasonable amount of time (there's mentalray shaders designed exactly for this), I'm now more inclined to use FG as a smoothing pass over a GI solution. So, to summarise:

  • Ambient Occlusion - darkening and picking out of fine details, through 'simple' calculation
  • Final Gather - Camera based single-bounce indirect lighting, good for outdoors, and excellent for smoothing and refining a Global Illumination solution
  • Global Illumination - Light centric multiple bounce lighting, excellent for indoors (wasted for outdoors), tends to be noisy and grainy if used by itself, to get smooth results needs massive numbers of photons. FG does a much better job at smoothing out GI, at a fraction of the time.

Global Illumination for interiors, quick workflow


You can either read the talky version below, or look at GlobalIlluminationIllustrated.

  • Start with the GI. Enable GI in render globals, set accuracy to 1, photon radius to 0.1, have a tiny preview window with low samples (-2,-2), and enter a name for the photon cache, turn rebuild on.
  • Pick a keylight, enable photons, start with 1000 photons, render
  • Look at the render, make a judgement on how bright the photons look. The photon intensity directly ties to this. If the photons are bleaching to white, lower the photon intensity by half. This is directly related to scene size, so if your scene is small, the intensity might come down from 8000 to as low as 30.
  • Judge how far photons are bouncing into your scene. The photon exponent determines how 'bouncy' the photons are. Higher values = less bounce. This seems to scale exponentially. So if you're getting no photons at all on the far wall, lower the exponent. If you're getting too many photons everywhere, increase the exponent.
  • Now double or triple your photon count. You want to get a reasonable coverage of photons. You'll probably have to tweak the intensity and exponent again. Again, hotspots = turn down intensity, not enough light in corners = lower exponent. If the exponent needs to go lower than 2, start going down by much smaller values; 1.9, 1.8 etc
  • Once you have a nice photon spread, you can stop calculating it. In the render globals, turn off 'rebuild photons'
  • Now it's time to smooth. In the globals set the photon radius to 1, accuracy to 100. Radius clearly changes the photon spot size; accuracy is basically a blur value. Increase both until you have a smooth(ish) render. Don't set the radius too high, or you'll have photons bleed where they're not meant to be. If you can't get it mostly smooth (think like a fractal noise turned down to 0.3), increase your light's photon count, and rebuild the cache.
  • Set a larger render preview, set samples to -2 0. If the GI still looks mostly ok (a little blotchiness is fine), it's time to use final gather to do final smoothing.
  • Enable final gather, my simple values are 100 rays, radius 0.2, point interpolation 20. Set your render globals to your final render size (if reasonable, like PAL), enable a cache file for FG too, set rebuild to 'on'.
  • If the render looks good, lock the FG cache (rebuild to 'freeze')
  • Finally, add ambient occlusion. If possible use the mia_materials and enable the ambocc options. Set the length so that corners are darkened appropriately for your scene scale, and set the 'darker' colour so that they don't get too dark, then boost the samples until it's smooth (32 seems to work well in most cases).
  • Boost samples to 0 2, colour contrast down to 0.05, render.
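The exponent behaviour in the steps above (higher = less bounce) is easier to reason about as inverse-power falloff; a sketch, assuming photon energy drops off as 1/distance^exponent (exponent 2 being the physically correct case):

```python
def photon_energy(intensity, distance, exponent=2.0):
    # Higher exponent -> energy fades faster with distance,
    # i.e. photons "bounce" less far into the scene.
    return intensity / (distance ** exponent)

near = photon_energy(8000, 1.0)             # 8000.0 right next to the light
far = photon_energy(8000, 10.0)             # 80.0 at exponent 2
far_harsh = photon_energy(8000, 10.0, 3.0)  # 8.0 at exponent 3
```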

Note this is for a still, I haven't needed a workflow yet for animation. Also, I've found if using a directional light to simulate sunlight, setting the light colour to a pale yellow with intensity of 2, but the photons to a pale blue works great to fake the sun+ambient blue look.

It should be possible to get the new physical sun/sky to cast photons, and it works great if your scene is at the origin, but I can't work out how to drive the bounding box values for the photon volume if you're away from the origin. The hacked version of sun/sky from the original 3dmax shader has a 'calculate bounding box' helper; I might have to borrow it for the official version.

Final gather cache, rebuild options

Hmm, another odd mentalray syntax that means I have to write it down in the wiki. If you decide to store a cache for final gather, you should know by now that it only calculates based on the camera view. If nothing moves in the scene, and you can't be bothered to go GI, you can render your scene from a few different angles, and mentalray will merge the FG results together. Under the 'Rebuild' option you get the rather cryptic choice of 'off/on/freeze'. What does this mean?

  • Off means 'Mentalray won't rebuild the cache from scratch, so any new results will be appended'. This is what you use when merging several solutions together.
  • On means 'Mentalray WILL rebuild from scratch', its the default, it starts a blank final gather cache each time you render.
  • Freeze means 'Don't alter the cache at all', ie you switch to freeze once your FG cache looks good, and you're about to send scenes to the renderfarm.

Turning on 'enable map visualizer' should show you the cache file as a point cloud inside your scene. This makes it easy to spot any areas that have been missed by FG. For some reason I can't get this working on OSX. Odd.

Lights, using the blackbody/cie_d textures

These allow you to enter a colour temperature as per these charts, and it creates the correct colour. Both textures use the same kelvin values. cie_d has 4000K as its lower limit, but it has a more accurate colour result.

Anyways, you can just connect these nodes directly to the colour swatch of a light, and set a value. This image is using area lights converted to mentalray cylindrical area lights, intensity 10, linear falloff, and a cie_d node connected to colour, temperature 10000K, intensity 0.5. Rendered to exr, tonemapped in photoshop. mia_material using a polished concrete preset with round corners, FG+GI.


Refraction and DOF don't work for maya materials in mentalray

Use a native mentalray material, and it'll be correct. Pretty major bug!

Anti aliasing, or what do those min/max samples and contrast settings mean anyway?

There's a few guides out there, but it still seems people treat mentalray render tweaks as something of a black art. There's some great guides on lamrug, but they're a little dry. I'll try and summarise the workflow, butchering terminology as I go, apologies to proper render td's.

When rendering, you can tell mentalray how often to calculate each pixel. Each calculation is called a sample. If you do one sample per pixel, you end up with terrible jaggy aliased edges; old games like doom and quake do one sample per pixel (or even 1 sample over 4 pixels). As you do more samples per pixel you get smoother antialiased edges, at the cost of more work, which equals longer render times.

But think about this: if you have a big area of flat colour, you can probably get away with fewer samples. You only need extra samples in areas of contrast and sharp lines. So this is what the min and max samples represent: how little vs how much work mentalray will calculate per pixel.

But how does mentalray determine when to use more samples? With the contrast threshold. By default it's set to 0.1. That means mentalray compares each sample; if they differ by more than 0.1 (10 percent), it'll calculate more samples, up to your max sample limit.
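As a toy illustration of that comparison (nothing like mental ray's actual implementation, just the idea):

```python
def needs_more_samples(sample_a, sample_b, threshold=0.1):
    """Refine when two neighbouring samples differ by more than the
    contrast threshold in any colour channel."""
    return any(abs(a - b) > threshold for a, b in zip(sample_a, sample_b))

flat = needs_more_samples((0.5, 0.5, 0.5), (0.52, 0.5, 0.5))  # False: flat area
edge = needs_more_samples((0.1, 0.1, 0.1), (0.9, 0.9, 0.9))   # True: contrasty edge
```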

In practice, for most regular renders a min/max rate of -1/1 should be fine. That is, calculate at least 1 sample every 4 pixels, and at most 4 samples per pixel. If you use these values though, you'll probably still see jagged edges, and be tempted to increase the max samples.


Instead, lower your contrast threshold. Try setting it to 0.05. If it's still jaggy, 0.01. If it's still jaggy, now set your max samples to 2, and set your threshold back to 0.1. Ie, each time you increase max samples, you should reset your contrast threshold and gradually lower it again to work out the optimal ratio for your render.

You can test this by going into the render globals, mentalray, diagnostics, diagnose samples. The render will show you white points for each sample. Ideally large smooth areas should only have a few samples, while edges and contrasty areas should be outlined in lots of samples. If your image is fairly smooth, yet your diagnostic render is nearly completely white with samples, you're clearly wasting render time.

Of course there are exceptions to this rule. If you start to do 'non-standard' renders, say for hair, depth-of-field etc, the contrast threshold test might not be good enough, and you'll get little errors and flecks here and there. In this case you'll want to increase your min samples too, to at least 4 samples per pixel. In fact for fur, because the contrast changes so much per sample and per frame, sometimes you want to disable adaptive sampling altogether, and have it all calculated at the same rate (ie fixed sampling). Note that in maya 8.5 by default the min and max samples always move together, so they're always 2 values apart. Probably a good thing.

Maya 8.5 also tells you exactly what the numbers mean, but here they are for clarity:

  • -1 = 4 pixels share a sample
  • 0 = 1 sample per pixel
  • 1 = 4 samples per pixel
  • 2 = 16 samples per pixel
  • 3 = 64 samples per pixel
  • 4 = 256 samples per pixel
  • 5 = 1024 samples per pixel

clearly you can see as those numbers get higher the render times will increase exponentially; each level quadruples the sample count. Anything above 64 samples per pixel is probably overkill, unless you have particularly fancy renders!
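The table is just powers of four, easy to check in Python:

```python
# Samples per pixel at a given sample level: each level quadruples the
# count, so samples = 4 ** level (negative levels share samples between pixels).

def samples_per_pixel(level):
    return 4 ** level

print(samples_per_pixel(-1))  # 0.25, ie 4 pixels share one sample
print(samples_per_pixel(2))   # 16
```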

Washed out renders (gamma correction)

If you pipe a texture into a diffuse slot and find it washed out, insert a gamma node after the filetexture, set it to 0.45, problem solved.

why do we need to do this then?

You wanna know why? Oh jeez, here we go...

Search cgtalk for info on mentalray, you'll soon find lots of posts about this. You'll then find scary posts by clever folk explaining how it's a gamma issue, and all standard images are wrong, and mentalray is correct, tonemappers, exposure blah blah... it's all a bit overwhelming. Here's my take.

Because mentalray is now all floating point/HDR fancy, it's using a 'pure' colour space, where nothing is corrected for how your eyes perceive colour, nor how monitors display colour etc... This means it's using a gamma of 1, that is, no gamma correction at all.

Everything else in post-production that comes from 8-bit land (so regular maya, photoshop, digital cameras, scanners.. everything really), is cheating the colour space, and comes with a built-in gamma correction of 2.2.

So when you use these images in mentalray, you have to cancel out that gamma, ie, that 2.2 has to be brought back to 1. Going back to primary school and fractions, you cancel 2.2 with its inverse, 1/2.2, which is, you guessed it, 0.45.
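The arithmetic in plain Python, just to show the exponents cancelling:

```python
# Why 0.45 works: it's 1/2.2, so the texture's built-in 2.2 encoding and
# the gamma correction cancel out, leaving linear (gamma 1.0) values
# for mentalray. Pure math, no Maya nodes involved.

encoding_gamma = 2.2
correction = 1 / encoding_gamma       # ~0.4545, the 0.45 you type in

linear = 0.5
stored = linear ** correction         # roughly what an 8-bit texture holds
recovered = stored ** encoding_gamma  # correction applied: back to linear

print(round(correction, 2))           # 0.45
print(round(recovered, 6))            # 0.5
```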

Note that you don't have to apply this correction per-texture, per-shader. Instead you can apply it right at the end, on your final rendered image. This is what people are talking about when using tonemappers and exposure controls. I dunno... seems like trouble. If you're using mentalray, chances are you're also using hdr images somewhere in your scene. Like mentalray itself, hdr images also aren't gamma corrected. If you apply the correction on the final render, then you're over-correcting your hdr source image too, a bad thing. I feel it's better to apply the fix on a per-texture basis, so you know what's going on.

Also, note that this only applies to colour space, ie diffuse and reflection textures. Don't do something silly and apply gamma nodes to bump and spec textures, as they're really controlling data, not colour.

Fast smooth final gathering in 8.5

Nice tip from cgtalk. Might not work for animation, but nice for stills. Set

  • density to 0.2
  • interp points 20
  • rays at 100



mia_material

this material added in 8.5 is really nice. some presets make it even nicer.

framebuffer outputs for mia_material

you want your diff/spec/ao/refl all split out in one renderpass right? of course you do...

you'll need to install one of the several buffer writers available. I'm using ctrl_buffers at the moment, available from mymentalray and other places:

bumpmaps and mia_material

still seems that bumpmaps and mentalray aren't easy to create natively, at least not to someone familiar with maya. To do a proper 'native' mentalray bumpmap takes quite a few nodes. But with mia_material we can cheat, and steal the bumpmap node used for the fast_sss materials. I'm guessing it's actually a phenomenon that takes care of all the fancy stuff under the hood. So create your regular maya bump setup, then under mentalray shaders create a miss_set_normal node. Connect your maya bump to the normal vector of miss_set_normal, then connect the output of the miss_set_normal to the bump slot of the mia_material shader.

found a few good forum posts for this. basically if your shader has a maya-style bump slot, use a regular maya bump. if it has a mentalray bump slot, cheat it and use the miss_set_normal trick above. if there's no bump slot at all, you have to create the full nest-of-nodes setup, or use francesca luce's bump combiner node. links:

how do i enable filtering on these bump nodes?

open the shading group of the material, and under mentalray->custom shaders, turn on 'export with shading engine'

setting a bumpmap on mia_material creates weird shadowing across the terminator

you're probably using a poly object, and the tessellation is too low. either increase it manually, or apply a mr subdiv approx node to it.

shadows/caustics aren't correct for transparent objects

seems the shader often fails to connect itself properly to the shading group. look at the SG node, open the mentalray section, and make sure the mia_material is also connected to the photon and shadow shader slots.

also note that transparent shadows and caustics are mutually exclusive with mia_material. Look in the refractions and reflections section. If you have the caustics toggle enabled, transparent shadows will be disabled. So either turn on caustics in your render globals, or turn off the caustic toggle in the shader.

transparent objects are black with mia_material

Check you've not got double-sided enabled on your shapes, mia_material assumes all surfaces are single sided. Took ages to figure this out, fool me. Worth the struggle though, blurry refractions look amaaaaazing.

fast motionblur (rapid scanline/rasterizer)

Doesn't handle more complex things like final gathering etc, but on the whole it works quite well. Usual common sense rules apply I guess; if you're final gathering fluids behind raytraced glass with fur, don't expect miracles.

In render globals, switch to the mentalray tab, open raytracing, and switch scanline to 'rapid'. Turn motion blur on, render. The visibility samples controls the quality of the blur, here's an image. This propeller rotates 600 degrees over 3 frames, I also set motion blur->rendering->motion steps to 4.


From 1:18 to 0:14, not bad! Setting the visibility samples to 9 removes the artifacts entirely, and takes about 0:21 to render.

As I understand it the rapid mode separates the shading calculation from the motion blur calculation. Standard motion blur fires samples and calculates a full shaded point several times. This is most accurate, but inefficient. The rapid mode calculates shading once, then essentially 'bakes' that onto the surfaces, then smears that sample across the image. So if you're doing fast lighting or shading effects, the render won't be accurate. That said, this is more or less how prman handles motion blur, and works fine for 99% of renders.

fast SSS material

Ice test, PAL res, displaced poly sphere, 30 secs per frame.

They ain't kiddin. Tested this on a heavily displaced sphere, production quality pal frame under 30 seconds. I'm now officially sold on mentalray. I just wish they'd document it better; it's almost as if they don't want people using it, their docs are so obscure. Anyways, here's how to get it going, heavily influenced by josvex's notes from cgtalk. This is standard in maya 6.5, but I believe it's possible to run it in 6.0 if you have the tech know-how. Thanks to james, the 2 robs, and everyone else at the mill for helping with this.

  1. create a polysphere, camera, spotlight, shine the light so its behind the sphere and a bit to the side.
  2. hypershade, create a fast_simple_sss material, assign to your sphere
  3. click the map button next to lightmap, it'll create a miTexture node
  4. select it, leave type as 'color', check writable, change to 32 bits, and point it to a blank file with no extension in your sourceimages folder. You must have your project folders correctly setup, otherwise mentalray will bitch. Under linux, the easiest way to make the blank file is to run 'touch myfile' from a command line. Under windows, click the directory button, change the file listing to all types, new->text document, rename it to just 'myfile' with no extension, select it, ok.
  5. show upstream/downstream nodes to find the shading group
  6. select the shading group, in attribute editor find the lightmap attribute
  7. map it, give it a fast_sss lightmap (you'll have to scroll down a bit)
  8. on the lightmap, set the texture to your miTexture node
  9. render

Took a bit of fiddling to see what did what, here's what I found out:

  • Change the diffuse weight to 0, the front sss colour to green, the back sss colour to red, and the spec colour to nearly black. Much easier to see what your tweaks do this way.
  • The front and back weights are multipliers for the effect, I've found driving the back high (say 1 to 5), and the front lowish (0.1) gets your typical cheezy demo sss effect.
  • Increasing samples improves quality, you don't need that much to see a big improvement. Try 128 to start with.
  • If you get banding, increase the texture dimensions of your miTexture node. Again, these don't need to be very high.
  • The scale attribute under algorithm control is the key control to getting the overall look correct. Larger values make the sss regions fall off faster, increasing the apparent scale and density of the model.
  • Adding shadows really helps the effect too
  • If you insert objects in your main shape, assigning them the same material does the proper 'bones in skin' demo.
  • Try spinning the light around to see the front and back regions working. It's very obvious once you start playing with this, but front is the colour when the light is on the same side as the camera, so it's your main skin colour, back is for when it's backlit, so the red-in-ears colour.
  • Handles displacement beautifully, have a play with this.
  • The radius determines how accurate the sss tracks fine details in your object; smaller values = finer effects, but also means higher render times. And you probably won't need this anyway, sss tends to be a general softening thing, test higher values (1 to 10), see how high you can get away with.
  • falloff determines how deep the effect goes, it in turn is scaled by the overall scale value in algorithm controls. There seems to be an upper limit that scales in turn with the radius; eg if the radius is 1, I could set the depth from 0 to 10, but values beyond that only made fractional changes, even setting values like 100 or 1000. Altering the radius changes that max depth limit.
  • Found it got too dark on the unlit side, and no amount of scale twiddling would fix it. Instead, tried setting the ambient value on the lightmap, that worked really well, and was very fast. It's a cheat, but it worked.
  • Try overdriving the spotlight intensity. 1 seemed to work fine, but I got interesting effects at intensities of 2 to 5. Not always useable, but interesting.
  • It doesn't render with alpha! Can probably be worked around, but weird.
  • To make it work with final gathering, select the lightmap and check 'include indirect lighting'.
  • Check your antialias and multipixel filter settings to see how much work you need to do to get rid of dotty renders. Again, I was surprised at how fast the renders could be with a bit of tweaking.
  • Your object normals have to be correct (ie contiguous and outward-facing), or it won't work. This caught me off-guard when I was swapping UV directions on a nurbs surface.

Fast skin sss material


Now this is fun. Been doing tests with the zbrush displacement head, works really really well.

  1. Create and setup the shader like the simple shader above
  2. Create a 3d bump node, attach to the bump slot, then add a volume noise into that, set its alpha gain really low, like 0.02
  3. Adjust the algorithm scale to get the right effect. When I scaled the head to sit about the same size as a default sphere, the algo.scale needed to be between 10 and 20.
  4. To keep the nostrils and mouth dark, create a mi ambient occlusion texture, and pipe it into the diffuse colour slot. Adjust the dark value so it doesn't go too dark.

  • For some reason this test worked with FG when the other didn't. Didn't create an ibl dome though, just a keylight and rimlight. It did the opposite of what I expected though; it lightened rather than darkened. Made sense once I thought about it, but still... That's where that ambient occlusion texture came in btw.
  • Being able to set each component's scale to zero really helps to get each element working properly. Set all to zero first, then turn the epidermis scale up to 0.5 or so first. Once you've got that working, turn on the subdermis, then the back scattering in turn.
  • You can optimise the effect by using a small radius for the top layer to pickup the fine details, then a bit wider for the subdermis, and quite wide for the back scattering.

Here's a maya 6.5 scene, you'll need to grab the displacement map and convert to rgb first. 145k.

But why not read a tutorial by someone who actually knows what they're doing? Includes tips on getting hdr/fg to work, and lots of other handy tricks.

Fast skin SSS and physical sky looks odd

Turn off the 'screen composite' option on the skin material. It's clamping the output to 1, which means when the tonemapper adjusts the final render, it gets crushed down to an ugly gray.



Displacement

Right, this is the other 30 second demo to make you go 'woo, mentalray rox'. Why can't they do basic tutorials like this?

  1. create a polysphere
  2. new material, assign it, get its shading group, attach a volume noise to its displacement channel
  3. set alpha gain to 0.1, alpha offset to -0.05, ie, we'll let this displace both in and out.
  4. windows->rendering editors->mentalray->approx editor
  5. select your sphere, click 'create' next to subdivision approx, then edit
  6. change type to length/distance/angle
  7. set max/min to 0, 3 to start with
  8. set these magic values: length 5, distance 1, angle 45, view dependent.
  9. render (you're rendering with mentalray right? set your samples to -3, 2)
  10. tweak your volume noise settings until you get a nice general shape
  11. once you got that, find your miApprox node again (turn off DAG only in the outliner, you'll find it), and set the min/max to 0, 5, render
  12. go 'woo'

This is the sorta displacement I heard mentalray was brilliant at, but could never find it. For extra woo value, render with final gathering, or even more fun, render with the sss shader above. These settings appear to work well with the zbrush test head you can pick up from here. Just make sure you convert the texture to an rgb tif in shake or photoshop (mentalray doesn't like single channel tifs), and alter the alpha offset to suit the scale of the model; values for the model scaled to 0.1 won't work when it's scaled to 10.

Here's a maya 6.5 scene, 56k: displacement.mb

Displacement and nurbs

Been doing a few tests lately with mentalray displacement, found some curious things. It seems mentalray expects nurbs surfaces to have a 'reasonable' number of spans in the shape. For example, I created a simple nurbs plane of 1 and 1 divisions. Added a mr approx node, set all settings quite high, added a displacement map, rendered. The surface didn't look quite right. Started cranking all the approx values, no change. Messed with the maya tessellation values, still nothing. Once I set the divisions to 10 and 10 however, the displacement suddenly appeared, crisp as you please. I thought mentalray did renderman style micropoly tessellation on nurbs patches, or at the very least would divide appropriately based on the displacement map. I guess not.

My settings for a nicely displaced nurbs patch are:

  • surface approx. node
  • approx method: spatial
  • approx style: fine
  • min: 0
  • max: 6
  • length: 1
  • view dependent: on
  • sharp: 0


Subdivision approximation vs displacement approximation

Horvatth Szabolcs via highend3d:

If you only use the displacement approximation the displacement alters the tessellated poly geometry. By adding a subdiv approx you displace the subdivision surface instead. It can make quite a difference if the displacement was painted / generated to the subdiv surface.

and a bit later

If you add both subdiv and displacement approximation to a mesh then only the subdiv approximation is used, the subdiv basically overrides the displacement approximation.

correction to the above

As of Maya 6.5 and later, displacement and subdivision approximation export correctly with the Mayatomr plugin. Previously, the plugin wrote out the approximation to two lines in the generated .mi, while the MI spec requires both approximations to be written to one line. This has been corrected in Maya versions 6.5 and up.

-- SunitParekh? - 23 Mar 2006

Setting up final gathering

Another 30 second demo? Madness. I also explain this in the RenderPassTutorial?, but I stick it here for completeness' sake.

  1. Create a sphere on a plane, light, assign a white lambert to the objects
  2. Open mentalray render globals, enable final gathering, use 100 rays for now.
  3. Scroll down a bit further to image based lighting, click 'create'
  4. On the ibl node in the attribute editor, change from 'file' to 'texture', set the colour slider to white
  5. Render

Of course, I was using this for ambient occlusion like the loser I am, and james pointed out the ambient occlusion miTexture in maya 6.5. I explain how to use that below.

HDR images and final gathering

My ice shader, now with more FG!

Did you know maya 6.5 can load .hdr directly? And display proper thumbnails in hypershade? Will maya's wonders never cease? This walkthrough uses paul debevec's lightprobes. So, using the setup from above:

  1. Map the IBL texture swatch to a file texture, point it at an hdr image. I'm using the uffizi probe.
  2. Debevec's hdr images are too bright for mentalray by default, so we'll insert a gamma node to knock it down.
  3. Create -> color utilities -> gamma correct
  4. Connect the filetexture.outColor to gammaCorrect.value, and gammaCorrect.outValue to mentalrayIblShape.color
  5. Set the gamma value to 2.2 for all 3 channels
  6. Render

Don't forget to check 'include indirect lighting' on your lightmap node if you're using the fast_sss shaders.

Correct FG min/max radii

Rob Pieké via highend3d:

Try 10% and 1% of the render area (ie, if your renderview is ~10 scene units wide for the object(s) which are the main focus), then make the min/max 0.1,1. Using the global scene size will often yield radii which are too large (especially if you're using a geometry sphere - instead of an IBL node - for the HDR image).
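Rob's rule of thumb boils down to one multiplication; a throwaway Python helper (hypothetical function name) to make it concrete:

```python
# FG radii per Rob's tip: max ~10% and min ~1% of the width (in scene
# units) of the area the camera is actually framing, not the whole scene.

def fg_radii(framed_width):
    """Return (min_radius, max_radius) for final gathering."""
    return 0.01 * framed_width, 0.1 * framed_width

mn, mx = fg_radii(10)  # a renderview ~10 scene units wide
print(mn, mx)
```

For the ~10-unit renderview in the quote this gives min 0.1, max 1, matching his numbers.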

Setting up ambient occlusion


This is using the new ambient occlusion texture in maya 6.5.

  1. Create an ambient occlusion texture, and a surface shader, connect one to the other, assign the shader to your scene. Render.
  2. Open the ambient occlusion texture in an attribute editor. Test different spread values to get the ambocc scale correct for you (try values between 0.2 and 1.2 to start with)
  3. Samples controls the splotchiness, higher samples = smoother result. In a quick test scene I quickly ramped up from 16 to 32 to 64 to 128, at which point the ambocc looked nice and smooth.

There's a new FG based ambocc texture in maya 7, seems to give better results faster. Eatbug has a tutorial: One thing I haven't been able to find though is a way to scale the effect. There's no 'spread' value like the AO texture. Anyone have any ideas?

Tim Lydecker explains baking ambient occlusion

Here's my settings for baking using mR batch bake and occlusion (using texture bake set override) and Jeremy Pronk's toLightSuite ambient occlusion texture piped into the color slot of a surfaceshader. Using Maya 6.01 here:

Bake to: TEXTURE

  • Normal Direction: SURFACE FRONT
  • Camera: PERSP

If also using Texture Bake Set Override:

  • BAKE ALPHA (optional)
  • FILL SCALE 1.00

Shading liquid

Duncan via highend3d:

Q: Anybody know a good shader that can do small amounts of liquid like soda and water? (i'm not looking for an ocean shader). It would be great if it worked well with blobby particles. I'm having a hard time simulating the fluids getting denser/more opaque as the light goes through thicker portions.

A: You don't need subsurface scattering for the effect you describe. The mental ray dielectric can vary the opacity based on depth on the material. It is generally a good shader for water.

Bill Spradlin on general GI workflow

The general workflow is upping the photon intensity of the GI until you get the illumination you want, at that point you'll then want to stop raising the intensity of the photons and start tuning the amount in the scene. You'll do this by adjusting the GI photon amount attr (on the light you are using to emit photons) and by adjusting the GI accuracy. That should start you off in the right direction.

Kim Aldis notes

And some notes by Kim Aldis, thanks Kim!

mental ray is a great renderer but it needs to be treated with respect. Nearly every instance of dying I've seen has been down to memory issues and while Linux certainly is more stable it won't cope any better than Windows when it runs out of memory. There's a few things worth considering:

1. as Alec says, textures can chew memory up pretty quick. It's easy to forget how much a bitmap uses when it's held in memory in its entirety. Cut down the size if you can and convert to .map. mental ray treats .map more efficiently.

2. Check your displacement map settings and be familiar with what the different settings mean. Use fine displacement, with view dependency. With the max displacement set to be good and high you can control the level of subdivision with the length parameter. Start with length at 2.0 and work it down until you get an acceptable quality. If you go below around 0.3 you've gone too low.

3. And this is the big one. Optimise your BSP settings. There's ways of doing this efficiently but essentially what you do is mess with the min and max settings until you get a minimum render time. Don't underestimate this. I've pulled frame times down from hours to 10s of minutes. If you've a Softimage maintenance account at your site there's a great white paper by Dave Lajoie on optimising mental ray BSP settings. Take some time to understand it.
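Kim's point 3 is essentially a brute-force search over two numbers. A Python sketch of that tuning loop, where `render_time` is a hypothetical stand-in for an actual timed test render at each BSP setting:

```python
from itertools import product

# Sketch of the BSP tuning loop: try bsp size/depth combinations and keep
# whichever pair renders fastest. 'render_time' is a stand-in you'd replace
# with a real timed test render; the candidate values are just examples.

def tune_bsp(render_time, sizes=(5, 10, 20, 40), depths=(24, 30, 36, 42, 48)):
    """Return the (size, depth) pair with the lowest timed render."""
    return min(product(sizes, depths), key=lambda sd: render_time(*sd))

# Fake cost surface with its minimum at size 10, depth 36, for illustration:
fake = lambda size, depth: abs(size - 10) + abs(depth - 36)
print(tune_bsp(fake))  # (10, 36)
```

In practice you'd run a cropped test render at each setting rather than a fake cost function, but the shape of the search is the same.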

I found the paper kim mentions, seems its now available for everyone to look at: XSI_rendering_lajoie.pdf

Seems to cover all the key things people want to know (max/min samples, adaptive sampling, contrast ratios etc..) but in an easy to read format. Go read!

Aliasing in MR (crawling/buzzing textures)

Paul Gunson via highend3d:

By default (no final gather or GI) there is a reasonable amount of 'crawling' on file textures with MRfM? unless you up the sampling to at least 0-2, even then it's still noticeable on file textures that have high contrast. you can avoid this by using the file textures in a mental ray custom shader network, then you can specify elliptical filtering. Additional filtering in a maya shader does absolutely nothing when rendered in MR.

Aliasing in FG

A follow on from Paul Gunson:

And on to final gather. AFAIK, there is no way to avoid jitter in animations unless you bake everything to textures or vertices before rendering. only other option... with those radii set you can try cranking the rays up to 1000 or more and see if that helps, but the render times per frame will get crazy-insane long. there's an option in Mental Ray called "Freeze" which you can enable with the following script:

select miDefaultOptions;
addAttr -ln finalGatherFreeze -at bool miDefaultOptions;

this is supposed to freeze the FG map to disk and re-use it every frame, but i've heard it doesn't save the map for areas in the scene that the camera can't see at the start frame, so as soon as the camera or any object moves you're doomed.... not much help really. note - you might want to try this out yourself though... or maybe [hopefully] someone else will jump in on the post and give a better solution... :/

and more great tips from Dennis Evdokakis:

Check your samples for anti-aliasing quality:

  • min/max samples
  • Increasing the anti-aliasing settings increases rendering times.

Under that, there is the contrast threshold (contrastRGBA). What the contrast threshold does is determine when you need more samples in your scene.

When I render, I always first lower the values of contrast threshold to a desired level, then if not satisfied I up the min/max samples. Most of the time I render with 0-2 min/max samples and a low contrast threshold. You can use a Gaussian filter with low values; high values create blurry images.

As for Final Gathering, it tends to flicker by its nature. What you can do to eliminate flickering:

In maya 6.5 you have the option to render fg in view mode, which means you calculate the fg radii in pixel size instead. Also FG radius and FG rays play a critical role. A typical suggestion (I could be wrong here) is when you use fewer rays, pump up the radius. You could also set the 'rebuild FG' to freeze. You could render your scene from a few different angles, with rebuild FG set to off. This doesn't rebuild your FG file but adds more rays to it when you, for example, render from another camera angle. Then set the rebuild from 'off' to 'freeze' and batch render.

HDR files tend to introduce flickering artifacts as well (when you use them with FG). Get yourself a copy of HDRShop, or any HDR image editor. Blur your HDR image, and use this instead. It might be better if you render a separate FG or (ambient) occlusion pass, and use this in your favorite compositing app, for more control.

Flickering can also happen because of your textures; high frequency and high contrast are some of the reasons. Are you tiling too much? Do your textures have a high black to white ratio or a lot of noise? Reduce them.

Another way to eliminate flickering in textures is to use elliptical filtering in your custom textures:

  • connect a 'mib_texture_vector' to 'mib_texture_remap'
  • connect the 'mib_texture_remap' to a 'mib_texture_filter_lookup' node
  • inside 'mib_texture_filter_lookup' click on 'tex' to create a mental ray texture node and there assign your custom texture.

See the docs for more info. Here is a typical elliptical filtering phenomenon, in case you feel more comfortable with one node instead.

$include "C:\Program Files\Alias\Maya6.0\mentalray\include\base.mi"

declare phenomenon
   color "ellipticFilter" (
      color texture "tex",
      scalar "discRadius",      #: default 0.3 min 0 softmax 1
      scalar "maxEccentricity", #: default 20 min 0 softmax 40
      boolean "bilinear",       #: default true
      vector "offset",
      vector "repeat"           #: default 1. 1. 1.
   )

   shader "coord" "mib_texture_vector" (
         "select" 0,
         "selspace" 0,
         "vertex" 0,
         "project" 0
   )

   shader "remap" "mib_texture_remap" (
         "input" = "coord",
         "transform" 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1.,
         "repeat" = interface "repeat",
         "alt_x" off,
         "alt_y" off,
         "alt_z" off,
         "torus_x" off,
         "torus_y" off,
         "torus_z" off,
         "min" 0. 0. 0.,
         "max" 0. 0. 0.,
         "offset" = interface "offset"
   )

   shader "tex" "mib_texture_filter_lookup" (
         "tex" = interface "tex",
         "coord" = "remap",
         "eccmax" = interface "maxEccentricity",
         "maxminor" 6.,
         "disc_r" = interface "discRadius",
         "bilinear" = interface "bilinear",
         "space" 0,
         "remap" = "remap"
   )

   root = "tex"
   version 1
   apply texture
   #: nodeid (someNodeIDhere)
end declare

Rendering in MR standalone vs MR for maya

Horvatth Szabolcs via highend3d:

IMHO it is not a good idea using MR for Maya for rendering when you have standalone licenses. The standalone version is much more stable, has way higher fault tolerance (does not die without error messages once in a while) and is more memory efficient, since you don't have the whole Maya UI and scene in memory, just the required mental ray data. And you have a great deal more control and possibilities by using custom text for multiple passes, delayed read archives and stuff like that.


Maya does not require a license per machine to render using Maya Software in batch mode.

Maya does not need a license to export MI files through the mrforMaya plugin either.

Maya does require a license to render using the plugin.

Mental Ray does require a license per processor, but does not care about Maya at all.

My advice is:

  • Install a full Maya on all render machines, you don't need any extra licenses.
  • Export MI files in batch mode on the farm.
  • Render the files with as many standalone licenses as you have.
  • If you have extra maya licenses that you can use, render simple scenes with them.
  • Use floating licenses if you can and control software launch limits through the render manager software.

-- AndreasMartin? - 18 Mar 2006 Just gave this section some structure for easier browsing

Topic attachments

  • Getting_the_Maxwell_look_in_Mental_Ray.pdf - 480.1 K - 29 Dec 2007 - MattEstela
  • Summary_of_-_VRay-like_interior_renders_with_mental_ray.pdf - 7724.0 K - 29 Dec 2007 - MattEstela
  • ambOcc.jpg - 8.1 K - 03 Sep 2006 - MattEstela
  • cie_d.jpg - 21.3 K - 14 May 2007 - MattEstela
  • displacement.jpg - 4.5 K - 03 Sep 2006 - MattEstela
  • displacement.mb - 56.1 K - 03 Sep 2006 - MattEstela
  • fastBlur.jpg - 16.2 K - 03 Sep 2006 - MattEstela
  • fg_cast.jpg - 31.0 K - 18 Mar 2009 - MattEstela
  • framebuffer_graph.jpg - 53.0 K - 20 Jul 2008 - MattEstela
  • iceFg01.jpg - 3.3 K - 03 Sep 2006 - MattEstela
  • iceTest.jpg - 7.8 K - 03 Sep 2006 - MattEstela
  • meRefExp.mel - 1.4 K - 26 Aug 2009 - MattEstela
  • meSaveSeq.mel - 0.9 K - 26 Aug 2009 - MattEstela
  • meWriteProxy.mel - 5.8 K - 26 Aug 2009 - MattEstela
  • nurbsMrDisplace.jpg - 9.0 K - 03 Sep 2006 - MattEstela
  • sssDispZb2.jpg - 8.5 K - 03 Sep 2006 - MattEstela
  • (zip, filename missing) - 145.6 K - 03 Sep 2006 - MattEstela