KatsBits Community

Game Editing => Textures & 2D content => Topic started by: kat on May 15, 2010, 06:57:40 PM

Title: [textures] what's the difference between '2D' and '3D' normal maps
Post by: kat on May 15, 2010, 06:57:40 PM

Assuming we're talking about the same 'blue' normal maps used in most modern games, there isn't a difference, certainly not in terms of what they do; both perform the same function, creating the illusion of surface depth and structure where none exists.

Although there is a little bit of cross-over between the two 'types' ("2D" and "3D"), generally speaking where they do differ is in how they are made and used within a game environment. A '2D' normal map, for instance, is usually the result of converting images, photographs or artwork within a photo-editing suite (http://www.katsbits.com/tutorials/textures/how-not-to-make-normal-maps-from-photos-or-images.php); using an application specific to the job of converting artwork (http://www.crazybump.com); or baking a highly detailed model to a 'flat' ('2D') surface. So in saying "2D" we're generally talking about tileable textures, textures applied to generic world objects - walls, floors and other 'flat' surfaces.
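As a very rough illustration of that '2D' route, the sketch below (Python, using NumPy and Pillow - both assumptions, as are the file name and the 'strength' value) treats a single greyscale image as a heightmap and derives a tangent-space normal map from its brightness gradients. It's only a minimal example of the principle, not a substitute for a dedicated conversion tool.

Code:
import numpy as np
from PIL import Image

def normal_map_from_height(path, strength=2.0):
    # Read the image as greyscale and treat brightness as height (0..1)
    height = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

    # Approximate the surface slope from brightness differences between neighbours
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength

    # Build per-pixel normals and normalise them; note the green channel's sign
    # convention differs between engines (OpenGL vs DirectX style)
    nz = np.ones_like(height)
    length = np.sqrt(dx * dx + dy * dy + nz * nz)
    nx, ny, nz = -dx / length, -dy / length, nz / length

    # Pack the -1..1 vectors into 0..255 RGB; flat areas come out (128, 128, 255)
    rgb = np.stack([(nx + 1) * 127.5, (ny + 1) * 127.5, (nz + 1) * 127.5], axis=-1)
    return Image.fromarray(rgb.astype(np.uint8), "RGB")

# normal_map_from_height("brick_wall.png", strength=3.0).save("brick_wall_normal.png")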

A '3D' normal map on the other hand is the result of 'baking' the structure and physical characteristics of a highly detailed mesh (http://www.katsbits.com/tutorials/blender/baking-normal-maps-from-models.php) on to a reduced, low-poly version of that same mesh. So when talking about "3D" normal maps we're generally referring to 'unique' assets, texture images baked specifically for a particular object that isn't 'flat' - usually organic or more complex structural shapes.
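For the baking route, a minimal sketch of a selected-to-active normal bake driven from Blender's Python API is shown below; the object names ("pillar_high" and "pillar_low"), the cage extrusion value and the use of Cycles are assumptions for illustration, and the low-poly object needs a material with an active Image Texture node set up to receive the result. It isn't the tutorial's exact workflow, just the same high-to-low idea as a script.

Code:
import bpy

high = bpy.data.objects["pillar_high"]   # detailed source mesh (assumed name)
low = bpy.data.objects["pillar_low"]     # UV-unwrapped, low-poly target (assumed name)

# Select both meshes with the low-poly object active, as the bake target
bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

# Bake with Cycles; the detail of the high-poly surface is projected on to
# the low-poly object's UVs and written into its active image
bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.bake(
    type='NORMAL',
    normal_space='TANGENT',
    use_selected_to_active=True,
    cage_extrusion=0.05,   # small offset so rays catch raised/recessed detail
    margin=8,
)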

The reality of this is that there aren't really different 'types' of normal map per se (unless talking in the technical sense of "tangent" versus "object/local" space normals), but there are different 'purposes' and 'functions'. The specific reason for this is to do with the "XYZ" axes in three-dimensional space when the maps are being converted or baked: the model's orientation directly affects the normals themselves, because they get baked relative to their position and orientation in space on the mesh.
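To make the axis point concrete, the short sketch below (plain Python, illustrative values only) shows how a unit normal is packed into an RGB colour: a normal pointing straight out of the surface in tangent space encodes to the familiar 'flat blue' (128, 128, 255), while the very same face expressed in object/local space takes on a colour that depends entirely on which way the model happens to face.

Code:
def encode(n):
    """Map a unit normal with XYZ components in -1..1 to an 8-bit RGB colour."""
    return tuple(int(round((c + 1.0) * 127.5)) for c in n)

# Tangent space: a face pointing straight 'out' of the texture -> flat blue
print(encode((0.0, 0.0, 1.0)))   # (128, 128, 255)

# Object/local space: the same flat face on a wall facing down +X -> reddish
print(encode((1.0, 0.0, 0.0)))   # (255, 128, 128)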

For example, breaking a pillar apart, flattening it out and then baking that to a flat, UVW-mapped 'plane' will tend to yield slightly different results than baking the normal map from the same pillar using the high-to-low approach, and those inconsistencies generally mean artifacts and problems in game. So, whenever possible, make normal maps relative to their eventual use rather than what may be more expedient.