Urho3D
Materials

Material and Technique resources define how to render 3D scene geometry. On disk, they are XML or JSON files. Default and example materials exist in the bin/CoreData/Materials and bin/Data/Materials subdirectories, and techniques exist in the bin/CoreData/Techniques subdirectory.

A material defines the textures, shader parameters and culling & fill mode to use, and refers to one or several techniques. A technique defines the actual rendering passes, the shaders to use in each, and all other rendering states such as depth test, depth write, and blending.

A material definition looks like this:

<material>
    <technique name="TechniqueName" quality="q" loddistance="d" />
    <texture unit="diffuse|normal|specular|emissive|environment" name="TextureName" />
    <texture ... />
    <shader vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4" />
    <parameter name="name" value="x y z w" />
    <parameter ... />
    <cull value="cw|ccw|none" />
    <shadowcull value="cw|ccw|none" />
    <fill value="solid|wireframe|point" />
    <depthbias constant="x" slopescaled="y" />
    <alphatocoverage enable="true|false" />
    <lineantialias enable="true|false" />
    <renderorder value="x" />
    <occlusion enable="true|false" />
</material>
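
For example, a straightforward diffuse + normal mapped material could look like the following sketch. The texture names are illustrative; Techniques/DiffNormal.xml and the MatSpecColor parameter are stock resource and parameter names.

<material>
    <technique name="Techniques/DiffNormal.xml" />
    <texture unit="diffuse" name="Textures/ExampleDiffuse.dds" />
    <texture unit="normal" name="Textures/ExampleNormal.dds" />
    <!-- Specular color in RGB, specular power in the W component -->
    <parameter name="MatSpecColor" value="0.3 0.3 0.3 16" />
</material>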

Several techniques can be defined for different quality levels and LOD distances. Technique quality levels are specified from 0 (low) to 2 (high). When rendering, the highest available technique that does not exceed the Renderer's material quality setting will be chosen; see SetMaterialQuality().

The techniques for different LOD levels and quality settings must appear in a specific order:

  • Most distant & highest quality
  • ...
  • Most distant & lowest quality
  • Second most distant & highest quality
  • ...
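
For example, a material that falls back to cheaper techniques both at lower quality settings and at a distance could declare its techniques like this (a sketch; the technique names and distances are illustrative):

<material>
    <!-- Most distant LOD: highest quality first, then lower quality -->
    <technique name="Techniques/Diff.xml" quality="1" loddistance="100" />
    <technique name="Techniques/NoTexture.xml" loddistance="100" />
    <!-- Closest LOD (no loddistance attribute): highest quality first, then lower quality -->
    <technique name="Techniques/DiffNormal.xml" quality="1" />
    <technique name="Techniques/Diff.xml" />
</material>
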

Material shader parameters can be floats, vectors of up to 4 components, or matrices.

The default culling mode is counterclockwise. The shadowcull element specifies the culling mode to use in the shadow pass. Note that the material's depth bias settings do not apply in the shadow pass; during shadow rendering the light's depth bias is used instead.

Render order is an 8-bit unsigned value that can be used to affect the rendering order within a pass, overriding state or distance sorting. The default value is 128; smaller values render earlier, and larger values later. One example use of render order is to ensure that materials which use discard in the pixel shader (ALPHAMASK define) are rendered after fully opaque materials, so that the hardware depth buffer behaves optimally; in this case the render order should be increased. See the render order caveats below.
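
For example, an alpha-masked material could be given a slightly later render order so that it draws after the default-ordered opaque materials (a sketch; the texture name is illustrative):

<material>
    <technique name="Techniques/DiffAlphaMask.xml" />
    <texture unit="diffuse" name="Textures/ExampleLeaves.dds" />
    <!-- Larger than the default 128, so this renders after default-ordered opaques -->
    <renderorder value="129" />
</material>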

The occlusion flag allows disabling software occlusion rendering per material, for example if parts of a model are transparent. By default occlusion is enabled.

Materials can optionally set shader compilation defines (vsdefines & psdefines). In this case they will be added to the techniques' own compilation defines, and the techniques are cloned as necessary to ensure uniqueness.
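
For example, a material can request the packed normal map decoding described in the normal map section below by adding a pixel shader define, similar to the Stone example materials (a sketch; the texture names are illustrative):

<material>
    <technique name="Techniques/DiffNormal.xml" />
    <texture unit="diffuse" name="Textures/ExampleDiffuse.dds" />
    <texture unit="normal" name="Textures/ExampleNormalPacked.dds" />
    <!-- Added to the pixel shader compilation defines of the technique's passes -->
    <shader psdefines="PACKEDNORMAL" />
</material>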

Enabling alpha-to-coverage on the material enables it on all passes. Alternatively it can be enabled per-pass in the technique for fine-grained control.

Material textures

Diffuse maps specify the surface color in the RGB channels. Optionally they can use the alpha channel for blending and alpha testing. They should preferably be compressed to DXT1 (no alpha or 1-bit alpha) or DXT5 (smooth alpha) format.

Normal maps encode the tangent-space surface normal for normal mapping. There are two options for storing normals, which require choosing the correct material technique, as the pixel shader is different in each case:

  • Store as RGB. In this case use the DiffNormal techniques. This is the default used by AssetImporter, to ensure no conversion of normal textures needs to happen.
  • Store as xGxR, i.e. Y-component in the green channel and X-component in the alpha. Z will be reconstructed in the pixel shader. This encoding lends itself well to DXT5 compression. You need to use the pixel shader define PACKEDNORMAL in your materials; refer to the Stone example materials. To convert normal maps to this format, you can use AMD's The Compressonator utility, see https://developer.amd.com/Resources/archive/ArchivedTools/gpu/compressonator/Pages/default.aspx.

Make sure the normal map is oriented correctly: a flat surface should have the color value R 0.5 G 0.5 B 1.0.

Models using a normal-mapped material need to have tangent vectors in their vertex data; the easiest way to ensure this is to use the switch -t (generate tangents) when using either AssetImporter or OgreImporter to import models to Urho3D format. If there are no tangents, the light attenuation on the normal-mapped material will behave in a completely erratic fashion.

Specular maps encode the specular surface color as RGB. Note that deferred rendering is only able to use monochromatic specular intensity from the G channel, while forward and light pre-pass rendering use fully colored specular. DXT1 format should suit these textures well.

Textures can have an accompanying XML file which specifies load-time parameters, such as addressing, mipmapping, and number of mip levels to skip on each quality level:

<texture>
    <address coord="u|v|w" mode="wrap|mirror|clamp|border" />
    <border color="r g b a" />
    <filter mode="nearest|bilinear|trilinear|anisotropic|nearestanisotropic|default" anisotropy="x" />
    <mipmap enable="false|true" />
    <quality low="x" medium="y" high="z" />
    <srgb enable="false|true" />
</texture>

The sRGB flag controls both whether the texture should be sampled with sRGB to linear conversion, and whether, when used as a rendertarget, pixels should be converted back to sRGB when writing to it. To control whether the backbuffer should use sRGB conversion on write, call SetSRGB() on the Graphics subsystem.

Anisotropy level can be optionally specified. If omitted (or if the value 0 is specified), the default from the Renderer class will be used.
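
For example, a tiling texture that should use anisotropic filtering and drop mip levels on the lower quality settings could have an accompanying XML file like this (a sketch with illustrative values):

<texture>
    <address coord="u" mode="wrap" />
    <address coord="v" mode="wrap" />
    <filter mode="anisotropic" anisotropy="8" />
    <!-- Number of mip levels to skip at each material quality level -->
    <quality low="2" medium="1" high="0" />
</texture>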

Cube map textures

Using cube map textures requires an XML file to define the cube map face images, or a single image with a layout. The XML file is then used as the texture resource name in materials or in LoadResource() calls.

Individual face images are defined in the XML like this (see bin/Data/Textures/Skybox.xml for an example):

<cubemap>
    <face name="PositiveX_ImageName" />
    <face name="NegativeX_ImageName" />
    <face name="PositiveY_ImageName" />
    <face name="NegativeY_ImageName" />
    <face name="PositiveZ_ImageName" />
    <face name="NegativeZ_ImageName" />
</cubemap>

A single image texture with a layout is defined like this:

<cubemap>
    <image name="ImageName" layout="horizontal|horizontalnvidia|horizontalcross|verticalcross|blender" />
</cubemap>

For the layout definitions, see http://www.cgtextures.com/content.php?action=tutorial&name=cubemaps and https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Build_a_skybox
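
The cube map XML file is then referred to like any other texture. For example, a skybox material might reference it like the sketch below, which follows the stock skybox resources; the names may differ in your project.

<material>
    <technique name="Techniques/DiffSkybox.xml" />
    <texture unit="diffuse" name="Textures/Skybox.xml" />
    <cull value="none" />
</material>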

3D textures

3D textures likewise require an XML file to describe how they should be loaded. The XML should contain either a "volume" or "colorlut" element, which defines the image to load with its "name" attribute. The "volume" mode requires that the image is in a format that supports 3D (volume) textures directly, for example DDS. The "colorlut" mode allows any format, and instead spreads out the Z slices of the volume image horizontally in a 2D image from left to right: for example LUTIdentity.png has 16 slices of 16x16 pixels each, for a total image size of 256x16.

<texture3d>
    <colorlut name="LUTIdentity.png" />
</texture3d>

Using a 3D texture for color correction postprocess (see bin/Data/PostProcess/ColorCorrection.xml) requires the 3D texture to be assigned to the "volume" texture unit, so that the effect knows to load the texture as the correct type. The lookup for the corrected color happens by using the original color's red channel as the X coordinate, the green channel as the Y coordinate, and blue as the Z. Therefore the identity LUT's slices (which shouldn't transform the color at all) grow red from left to right and green from top to bottom, and finally the slices themselves turn blue from left to right.

2D array textures

2D array textures (Texture2DArray class) are available on OpenGL and Direct3D 11. They are also defined via an XML file defining the images to use on each array layer:

<texturearray>
    <layer name="Layer1_ImageName" />
    <layer name="Layer2_ImageName" />
    <layer name="Layer3_ImageName" />
</texturearray>

Techniques and passes

A technique definition looks like this:

<technique vs="VertexShaderName" ps="PixelShaderName" vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4" desktop="false|true" >
<pass name="base|litbase|light|alpha|litalpha|postopaque|refract|postalpha|prepass|material|deferred|depth|shadow" desktop="false|true" >
vs="VertexShaderName" ps="PixelShaderName" vsdefines="DEFINE1 DEFINE2" psdefines="DEFINE3 DEFINE4"
vsexcludes="EXCLUDE1 EXCLUDE2" psexcludes="EXCLUDE3 EXCLUDE4"
lighting="unlit|pervertex|perpixel"
blend="replace|add|multiply|alpha|addalpha|premulalpha|invdestalpha|subtract|subtractalpha"
[cull="cw|ccw|none"]
depthtest="always|equal|less|lessequal|greater|greaterequal"
depthwrite="true|false"
alphatocoverage="true|false" />
<pass ... />
<pass ... />
</technique>
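
As a concrete illustration, a simplified forward-lit technique in the spirit of the stock Diff technique could look like the sketch below; the stock definitions in bin/CoreData/Techniques contain additional passes and defines.

<technique vs="LitSolid" ps="LitSolid" psdefines="DIFFMAP">
    <!-- Ambient light, per-vertex lights and fog -->
    <pass name="base" />
    <!-- Optional combined ambient + first per-pixel light pass -->
    <pass name="litbase" psdefines="AMBIENT" />
    <!-- Additive per-pixel light contributions -->
    <pass name="light" depthtest="equal" depthwrite="false" blend="add" />
    <!-- Linear depth for post-processing effects -->
    <pass name="depth" vs="Depth" ps="Depth" />
    <!-- Shadow map rendering -->
    <pass name="shadow" vs="Shadow" ps="Shadow" />
</technique>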

The "desktop" attribute in either technique or pass allows to specify it requires desktop graphics hardware (exclude mobile devices.) Omitting it is the same as specifying false.

A pass should normally not define culling mode, but it can optionally specify it to override the value in the material.

Shaders are referred to by giving the name of a shader without path and file extension, for example "Basic" or "LitSolid". The engine will add the correct path and file extension (Shaders/HLSL/LitSolid.hlsl for Direct3D, and Shaders/GLSL/LitSolid.glsl for OpenGL) automatically. The same shader source file contains both the vertex and pixel shader. In addition, compilation defines can be specified, which are passed to the shader compiler. For example the define "DIFFMAP" typically enables diffuse mapping in the pixel shader.

Shaders and their compilation defines can be specified on both the technique and pass level. If a pass does not override the default shaders specified on the technique level, it still can specify additional compilation defines to be used. However, if a pass overrides the shaders, then the technique-level defines are not used.

As a material can set further shader defines, which would be applied to all passes, the "vsexcludes" and "psexcludes" mechanism allows per-pass control to prevent them from being included. This is intended for eliminating the compilation of unnecessary shader variations, for example a shadow shader attempting to read a normal map.
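
For example, if a material adds a normal mapping define via psdefines, the shadow pass of a technique could exclude it so that no redundant shadow shader variation gets compiled (a sketch; only the relevant pass is shown):

<pass name="shadow" vs="Shadow" ps="Shadow" psexcludes="NORMALMAP" />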

The technique definition does not need to enumerate shaders used for different geometry types (non-skinned, skinned, instanced, billboard) and different per-vertex and per-pixel light combinations. Instead the engine will add certain hardcoded compilation defines for these. See Shaders for details.

The purposes of the different passes are:

  • base: Renders ambient light, per-vertex lights and fog for an opaque object.
  • litbase: Renders the first per-pixel light, ambient light and fog for an opaque object. This is an optional pass for optimization.
  • light: Renders one per-pixel light's contribution additively for an opaque object.
  • alpha: Renders ambient light, per-vertex lights and fog for a transparent object.
  • litalpha: Renders one per-pixel light's contribution additively for a transparent object.
  • postopaque: Custom rendering pass after opaque geometry. Can be used to render the skybox.
  • refract: Custom rendering pass after postopaque pass. Can sample the viewport texture from the environment texture unit to render refractive objects.
  • postalpha: Custom rendering pass after transparent geometry.
  • prepass: Light pre-pass only - renders normals, specular power and depth to the G-buffer.
  • material: Light pre-pass only - renders opaque geometry final color by combining ambient light, per-vertex lights and per-pixel light accumulation.
  • deferred: Deferred rendering only - renders ambient light and per-vertex lights to the output rendertarget, and diffuse albedo, normals, specular intensity + power and depth to the G-buffer.
  • depth: Renders linear depth to a rendertarget for post-processing effects.
  • shadow: Renders to a hardware shadow map (depth only) for shadow map generation.

More custom passes can be defined and referred to in the render path definition. For the built-in passes listed above, the lighting shader permutations to load (unlit, per-vertex or per-pixel) are recognized automatically, but for custom passes they need to be explicitly specified. The default is unlit.

The optional "litbase" pass reduces draw call count by combining ambient lighting with the first per-pixel light affecting an object. However, it has intentional limitations to not require too many shader permutations: there must be no vertex lights affecting the object, and the ambient lighting can not have a gradient. In case of excessive overdraw, it is possibly better not to define it, but instead allow the base pass (which is computationally very lightweight) to run first, initializing the Z buffer for later passes.

The refract pass requires ping-ponging the scene rendertarget to a texture, but this will not be performed if there is no refractive geometry to render, so it incurs no unnecessary cost.

Render order caveats

Render order works well when you know a material will render only a single pass, for example a deferred G-buffer pass. However, when forward rendering and per-pixel lights are used, rendering of typical lit geometry can be split over the "base", "litbase" and "light" passes. If you use render order combined with depth test manipulation to force some object to render in front of others, the pass order may not be obvious. The "base" pass would be rendered first, but objects often don't use it; instead they render later as part of the forward light loop using the "litbase" pass. This may throw off the depth test manipulation and cause a different result than expected.

An easy fix is to simply disable the "litbase" optimization pass altogether, though this costs performance. This can be done globally from the forward renderpath (bin/CoreData/RenderPaths/Forward.xml) by modifying the forwardlights command to read:

<command type="forwardlights" pass="light" uselitbase="false" />