Renderbump and baking normal maps from high poly models using Blender 3D - additional information

Normal maps are essentially a clever way of 'faking', on low poly, low detailed models, the appearance of high resolution, highly detailed objects. Although the objects themselves are three dimensional (3D), the actual part that 'fakes' detail on the low resolution mesh is a two dimensional (2D) texture asset called a 'normal map'.

What are normal maps and why use them? ^

The process of producing these normal maps is usually referred to as 'baking' (or 'render to image'), whereby an application - in this instance Blender 3D - interprets the three dimensional geometrical structure of high poly objects as RGB ("Red", "Green" & "Blue") values that can then be 'written' as image data, using the UVW map of a low resolution 'game' model as a 'mask' of sorts, telling the bake process where that colour data needs to be drawn.
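
As a rough illustration of what the bake step involves when driven from a script, the following is a minimal sketch of a 'selected to active' normal bake using the Python API of current Blender releases - note this differs from the 2.48 Bake panel workflow this article describes, and the object names ("door_high", "door_low") are placeholders.

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'                    # bakes run through Cycles
scene.render.bake.use_selected_to_active = True   # high poly -> low poly transfer
scene.render.bake.margin = 3                      # edge bleed (see 'Margin' below)

high = bpy.data.objects["door_high"]              # assumed object names
low = bpy.data.objects["door_low"]

# select the high poly source and make the low poly target the active object
bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

# the low poly object needs a material with an Image Texture node selected,
# pointing at the (UVW mapped) image the normal data will be written into
bpy.ops.object.bake(type='NORMAL', normal_space='TANGENT')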

Generally speaking, there are two ways to generate these types of baked normal maps from 3D geometry:

  • renderbump

  • renderbumpflat

"Renderbump" is the rendering of high resolution geometry to a texture that is then applied as a texture map around a low resolution version of the high poly model; usually on a 1:1 basis as a unique texture (it doesn't tile). A shotgun weapon model would be a good example of how renderbump normal maps are used in practice.

"Renderbumpflat" is the rendering of high resolution objects to a flat texture that is then be used in game as a tile-able 'surface' asset, a brick wall for example, applied to buildings which tiles along the length of its walls. Renderbumpflat uses a slightly different set-up compared with renderbump.

Design note : renderbumpflat can be done in two ways, either using a similar process to what's been discussed in the tutorial above... or, 'faking' it; this requires both a special scene and the application of a special 'normalised' material to models. These can then be 'top down' rendered to image in the same way a 'still' image is created; the result is a fully rendered normal map. This method does, however, take comparatively more time and effort to set up correctly.

There is also a third way to create normal maps and that's to 'convert' 2D artwork into normal maps using software designed for the purpose - CrazyBump or GIMP's normal map filter for example. For best results this process generally requires the use of a grayscale image built around the rule that "white = height, dark = depth".
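
As a sketch of what such a conversion does under the hood (this is not the exact algorithm CrazyBump or the GIMP filter uses), the grayscale image can be treated as a heightfield whose gradients are packed into RGB; the example below assumes NumPy and Pillow are available, and the file names are placeholders.

import numpy as np
from PIL import Image

# load the grayscale 'height' image and scale values into the 0.0 - 1.0 range
height = np.asarray(Image.open("height.png").convert("L"), dtype=np.float32) / 255.0

# per-pixel slopes; 'strength' controls how pronounced the relief appears
strength = 2.0
dy, dx = np.gradient(height)
nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)

# normalise each pixel's vector and remap from -1..1 to 0..255 RGB
length = np.sqrt(nx * nx + ny * ny + nz * nz)
normal = np.stack([nx / length, ny / length, nz / length], axis=-1)
Image.fromarray(((normal * 0.5 + 0.5) * 255).astype(np.uint8)).save("normal.png")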

Wire-frame view of the required assets, in this case a 'tech(ish)' doorway

Solid view of the assets which show the geometrical detailing of the low and high resolution models more clearly

Textured view showing the baked normal map after being baked to image and applied to the low poly model

Making models for normal map baking, using a 'control cage' ^

Building assets for normal mapping isn't always as simple as subdividing a low poly mesh or using 'decimate' on the high poly version (to get the 'low' version). More often than not, the best strategy is to build something called a "control cage" and use that to increase the mesh resolution to produce the high poly version, or decrease it to produce the low one. The advantage here is that the control cage acts as a 'master' template or object, and because it is 'mid poly' in resolution, it's easier to manage the distortions that often happen when altering its general structure.

This can be further helped by constructing the control cage in 'quads' (quadrilateral, four-sided polygons), which should be happening by default - subsurface dividing or decimating a quad based mesh doesn't carry the inherent risk of the resolution or position related artifacts that occur when faces 'collapse' on decimation, or 'pimple' and 'crack' when a mesh is sub-surfaced.

The initial "control cage" used as the basis from which both the low and high poly versions of the mesh are made

The initial "control cage" used as the basis from which both the low and high poly versions of the mesh are made

Low poly version of the control cage. Leaving the structure in 'quads' permits further reduction of the mesh's resolution if required

Additional edge loop divisions placed at key strategic points, before subsurface divisions are added, to control the shape of the mesh and how 'hard' its edges are

The sub-surfaced control cage (x2 iterations for illustration purposes, x4 for rendering)

For best results the high resolution mesh should be as dense as your computer will allow; the more polygon data there is, the better the baked results. Although bakes are reliant on the amount of UVW space available on the low resolution model, relative to the size of the texture's pixel space, the quality and clarity of rendered detail is affected by how much data there is on the high poly object.

Design note : If any subsurface modifiers are still present in the modifier stack it's usually best to 'apply' them before render baking the normal maps - remember to back up the file before doing so if you want to go back and edit the mesh at a later date.
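
Where this needs to be done on several objects it can be scripted; the snippet below is a hedged sketch using the Python API of current Blender releases (the 2.4x API differs), with the object name assumed.

import bpy

obj = bpy.data.objects["door_high"]            # assumed object name
bpy.context.view_layer.objects.active = obj

# 'apply' every Subdivision Surface modifier so the baked geometry matches
# what is seen in the viewport; save the .blend first to keep an editable copy
for mod in list(obj.modifiers):
    if mod.type == 'SUBSURF':
        bpy.ops.object.modifier_apply(modifier=mod.name)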

Clean UVW maps affect render results ^

Because the render baking process relies primarily on the UVW map applied to the low resolution mesh, it's very important to make sure that it is as well laid out and 'clean' as possible in the space that has been allowed for its use. This means there must be minimal amounts (preferably none) of the following:

  • Overlapping sections

  • Distortions

  • 'Broken' faces/edges

  • 'Split' vertexes

  • Detached faces

  • 'Repeat' or 'mirrored' areas

  • Inverted faces

This is mainly because the process looks at each individual polygon and bakes the results relative to the physical layout of the UVW map, so any overlaps, splits, mirrored areas etc., cause render problems and artifacts in the final output - for example, areas of 'false' smoothing where hard edges show up due to splits in the UVW map, or texture corruption due to overlapping UVW sections; all mean re-rendering the normal map once the issues have been found and fixed.

Normal map baked to the area laid out and occupied by the low poly mesh's UVW map

Normal map baked to the area laid out and occupied by a 'clean' UVW map, with no overlaps, distortions or problem areas

Mirrored UVWs ^

There are two issues of concern when mirroring models that make use of normal maps:

  • Inverted normals

  • Incorrect 'smoothing' seams

Inverted normals result from instances where a model has been created so that half of it is fully textured, then duplicated and 'mirrored' over to the opposite side. This presents a problem because of how normals work.

Design note : Be aware that mirroring a mesh may cause problems related to both inverted normals and smoothing errors where the two halves of a mesh are joined.

When duplicating a normal mapped model in such a way, and provided no additional processes are applied, anything that's mirrored will appear back-to-front; because normals are orientation, position and direction specific, mirroring an object (flipping it left to right, for example) effectively turns it inside out, so it will appear to be facing 'backwards'.

To explain this better the image below shows the wire frame overlay of areas that could potentially be mirrored on the tutorial door model (the left side has the wire frame overlay, the right is 'clear' just so it's easier to see the normal map itself). When looking at the normal map it's noticeable there are some differences in colour between various sections of the UVW map, especially so on the door 'uprights' - 'blue' tinting for the right hand side and 'pink' for the left.

If the model is mirrored so the blue tinted right side is duplicated over to the left (discarding the original pink left side) it can result in the game engine reading the normal map as if the blue section were still on the right, causing that half of the model to appear inverted when mirrored over to the left.

There are differences between 'left' and 'right' which can become problematic on mirrored objects

The problem here is that the fix isn't 'art' related, it's 'code'; it's not actually possible to physically 'flip' the image to correct the mirrored half without then causing problems on the other side of the mesh (the mirrored half is corrected, but the original, 'correct' side gets inverted in the process and then becomes incorrect itself).

So this particular issue is usually corrected by game engines at render time, as they will normally have been written to find, read and use mirrored objects correctly, usually inverting normals along the "left/right" axis of a model. Not all game engines do this, however, as it uses up precious processing time; in those cases it's more likely that assets are generated as 'uniques', so everything is mapped on a 1:1 basis without any mirroring.

So, keep the capabilities and limitations of the game engine being used in mind when creating content with mirrored objects.

Incorrect or misplaced smoothing seams are a problem generally associated with breaks or splits in the mesh, which cause 'seams', hard lines or edges in normal maps where there shouldn't normally be any. There are generally two reasons for this:

  • Vertex, edge or face splits on the mesh

  • UVW map vertex splits

Because of the way baking works, the best results are generally achieved through the use of 'continuous surfaces'; wherever an edge is found (whether the result of hard/soft edges, smoothing or detached faces), pixels baked along that edge are treated as terminated, which in turn changes the colour orientation of the normal relative to its neighbour.

The easiest way to think about this is to consider how mesh smoothing is typically used on a model. If two neighbouring faces are left as is, smoothing will be averaged out between them and applied to both, giving the appearance of a continuous surface (most helpful on curved surfaces). However, if a hard edge or split is created between the two faces, smoothing will 'stop' at that edge, each side being treated as an individual unit, creating a 'line' or 'hard edge' between them. So, if a model were mirrored down a centre line, care would need to be taken to 'merge' the vertexes between the two halves, otherwise the model would have two smoothing groups - a left and a right side - down its centreline.
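
As a small numerical illustration of that averaging (plain Python/NumPy, nothing Blender specific), the shared edge between two smoothed faces ends up with the normalised average of both face normals, which is what lets shading blend across the join:

import numpy as np

face_a = np.array([0.0, 0.0, 1.0])            # flat face pointing straight up
face_b = np.array([0.0, 0.7071, 0.7071])      # neighbouring face tilted 45 degrees

# smoothed (merged) edge: one shared normal, averaged and re-normalised
shared = face_a + face_b
shared /= np.linalg.norm(shared)
print(shared)                                  # approx. [0.0, 0.38, 0.92]

# a hard edge (split vertexes) keeps face_a and face_b separate instead,
# which is what produces the visible 'line' between the two faces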

Normal maps work in a similar way, except that the 'edge' between both halves of a mesh isn't fixed by merging vertexes or faces on the model; it's baked into the normal map itself, so any problems that occur need to be corrected there. This means that when baking the normal map the mirrored half of the mesh needs to be present somewhere on the UVW map, so the edge normals get averaged out and calculated properly as a continuous surface instead of being terminated where the two halves split.

Shown below is an example of what this may mean when laying out the UVW map before render baking the normal maps for the Sci-Fi door that has mirrored sections; the main bulk of the UVW map is one half of the mesh, the other half is incidental as it won't be used except as part of the bake process to ensure the edge between the halves is correctly processed - it's discarded when the main bulk is duplicated and mirrored.

HOW TO : baking normal maps on mirrored models

  • UVW map the model as normal.

  • Select half the mesh or UVW map (selecting faces in the 3D view will auto select faces in the UVW/Image edit view) and either shrink that down or move it to an unoccupied area of the UVW map; essentially make sure that it is out of the way so it doesn't interfere with render baking (see the sketch after this list).

  • Re-bake the normal map.
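
The UVW shuffle in step two can also be done from a script; below is a hedged sketch using the bmesh API of current Blender releases (not 2.48), which simply pushes the UVs of the currently selected faces - the duplicate, 'mirrored' half - one UV unit to the right so they sit outside the baked 0-1 square. Run it in Edit Mode with those faces selected.

import bpy
import bmesh
from mathutils import Vector

obj = bpy.context.edit_object
bm = bmesh.from_edit_mesh(obj.data)
uv_layer = bm.loops.layers.uv.active

# offset the UVs of selected faces so they no longer overlap the bake area
for face in bm.faces:
    if face.select:
        for loop in face.loops:
            loop[uv_layer].uv += Vector((1.0, 0.0))

bmesh.update_edit_mesh(obj.data)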

Mirrored UVW maps section needed for correct normal map baking; discarded once done

Design note : mirroring a mesh will usually result in both flipped normals and smoothing seam issues, as discussed above.

Anti-aliasing and render baking normal maps ^

Currently (as of 2.48) Blender doesn't anti-alias the results of render bakes. This is true of normal maps, ambient occlusion maps and indeed any other 'map' type, because the process of producing a baked image is slightly different to that of a rendered image, such as one created for a movie.

This means that on closer inspection of a baked normal map something commonly referred to as 'jaggies' will be visible wherever non-axial, angled or curved edges, faces or lines are drawn (anything, in fact, that's not at absolute right angles to the image boundaries). This is particularly problematic for smaller details, as there often isn't enough pixel space dedicated to features that typically only occupy a few pixels (see image below).

Distortions in the normal maps, shown here applied as a diffuse image, caused by the lack of anti-aliasing during the normal map baking process in Blender

Ideally this problem would be resolved either by being able to bake normal maps with anti-aliasing active in Blender, or by increasing the amount of UVW map space dedicated to areas of small detail. However, altering the UVW map typically causes distortions in the resulting texture and, as anti-aliased bakes can't (currently) be done, one has to resort to a number of 'external' processes to correct the issue, all of which are carried out in 2D image/photo editing software.

HOW TO : fix anti-aliasing issues in baked normal maps

  • blur and sharpen
    This is a technique that's often used on photographs to remove slight pixelation and/or 'noise/moiré'; an image is given a low 'blur' value (less than "1%") to soften the visible edges of pixels and then a 'sharpen' value to re-establish and pick out the edges.

  • RGB Channel editing
    Individual RGB image channels can be exposed to reveal their 256 grayscale values, which can then be corrected by painting out problem areas.

  • Oversized render and re-size
    The original normal map is rendered out at least twice the necessary size, then opened in an image editing application and re-sized smaller; the software will automatically anti-alias the result.

The simplest correction process to use is the render and re-size, although it does necessitate that the low poly model be UVW mapped with a larger image than it actually needs - normal maps currently (as of 2.48) cannot be rendered to an arbitrary size because they rely on the presence of a texture-assigned material.
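
The re-size itself can be done in any image editor; as a scripted sketch (assuming Pillow is installed, with placeholder file names), baking at double size and filtering back down looks like this:

from PIL import Image

big = Image.open("normalmap_2048.png")        # baked at twice the target size
small = big.resize((big.width // 2, big.height // 2), Image.LANCZOS)
small.save("normalmap_1024.png")              # filtered, anti-aliased result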

Any normal map edited outside Blender like this needs to be 're-normalised' to prevent colour artifacts causing problems - this forces the colours back to the strict RGB values that normal maps use; most 2D normal map creation applications and photo/image editors have a way to do this, either as a plug-in or a filter.
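
For reference, 're-normalising' is just forcing every pixel's encoded vector back to unit length; the following is a minimal sketch with NumPy and Pillow (file names are placeholders), equivalent to the 'normalize' filter found in most normal map tools:

import numpy as np
from PIL import Image

img = np.asarray(Image.open("normalmap_1024.png").convert("RGB"), dtype=np.float32)
vectors = img / 255.0 * 2.0 - 1.0                          # 0..255 -> -1..1
lengths = np.linalg.norm(vectors, axis=-1, keepdims=True)
vectors /= np.maximum(lengths, 1e-6)                       # back to unit length
out = ((vectors + 1.0) * 0.5 * 255.0).astype(np.uint8)
Image.fromarray(out).save("normalmap_1024_renormalised.png")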

Fixing normal maps isn't an absolute necessity, i.e. using the images as they come out of the bake process isn't going to 'break' anything; all it means is that certain types of detail will appear misshapen or missing (thin lines and small features like rivets, screws and scratches in particular). If you do want to fix the errors, rather than fixing everything it's best to initially fix only those details that will be immediately visible to the end user. Prioritise what needs to be corrected.

Anti-aliasing and 'Margin' settings ^

In addition to the above in relation to anti-aliasing, when baking normal maps 'Margin' needs to be set. This parameter deals specifically with 'edge bleeding' rendered areas, forcing Blender to over-compensate for the 'stepping' ("jaggies") that occurs, as a direct result of there being no AA correction, when angled or curved surfaces (anything, in fact, that's not at right angles to the image borders) are rendered to image.

Shown below on the left hand side of the image is a 'default' bake with no "Margin" value (set at "0"). On the right hand side is the 'corrected' version using a "Margin" value of "3"; the box-out, top left, shows more clearly the small black triangles of the image background that, if not corrected, will interfere with and be visible on the model when the texture is used in game; as this is a normal map it will also likely cause visual artifacts - 'black' isn't a valid colour reference for displaying normals.

When laying out the UVW map for an object keep this issue in mind and allow space between sections and components to compensate for edge bleed.

Setting 'Margin' values to force over-compensation for Anti-Aliasing issues

Design note : the value used for 'Margin' will depend on how much space is available between objects and/or how severe angles and curves are - Anti-aliasing 'stepping' problems are determined by how steep or curved an edge is. Texture size also has an effect on how severe the problem is, which may mean increasing the value even more to compensate depending on how bad the 'jaggies' are due to pixelation.
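
If baking is being driven from a script, the same setting is exposed through the Python API of current Blender releases (a sketch, not the 2.48 interface):

import bpy

# pixels of edge bleed around each UVW island; raise for larger textures or
# more steeply angled/curved geometry
bpy.context.scene.render.bake.margin = 3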

Additionally, the problem may also be fixed by cleaning the normal map up in a 2D photo-editing application, in a similar way to the methods mentioned above.

Normal map image formats, avoid JPG ^

When baking out and saving normal maps it's best to use a 'loss-less' image format like BMP, TGA (raw) or TIFF. The reason for this is that JPGs and other 'lossy' compressed formats introduce artifacts into the image that tend to cause visual corruption when used; JPG images in particular suffer this problem quite badly, especially when 'Progressive Compression' has been used, often resulting in grids, dark 'blobs' or other sorts of corruption appearing.

Design note : Although Blender can't natively use, create or export DDS images at the moment, as they're a 'compressed' image format care needs to be taken to use the correct compression 'filter' - usually DXT5 - for normal maps, as there are many different versions of that particular format. Using the wrong one can create the same sort of visual corruption as JPG.

Conclusions; texture space and relative sizes ^

Working with normal maps presents all sorts of challenges that you need to be aware of, some of which aren't directly related to the maps themselves. For instance, using normal maps often means being more aware than usual of pixelation and resolution issues resulting from the amount of space, or lack thereof, available for use by texture images. The usual way to solve this is to simply increase the size of the UVW map relative to the available texture space.

However, before doing this keep the following two points in mind;

  • Keep texture resolution relative across assets

  • Shared texture sheets limit per-asset space

These may not seem like particularly important considerations, certainly not whilst working on individual assets, but they come to a head at the project level. When creating content for games in particular, assets aren't used in isolation; they have to work as an overall whole, and that means making sure small objects don't take up huge amounts of texture space. Although rendering a door knob would ideally mean being able to use perhaps double its native resolution, any more than that and an object that will only ever be twenty or thirty pixels high on screen ends up with a UVW map almost half the size of the door it's attached to. The result is a disparity in visual clarity when the two are shown together - the door will appear to be lower resolution than the door handle.

So, despite the need to use as much space as is possible to render bake clean, clear normal maps, that usage should be (needs to be) limited by practical concerns and considered in the context of all the assets being used on a model, in a room, in a scene and in the overall game world.
