One of the more advanced features coming to IMVU Studio when it's released later this year is the ability to use NORMAL maps. The following is a basic breakdown, using Blender 3D, of what normal maps are, how they work, and how to create them.
Important: in the context of texture making and content creation generally, normal maps require a more advanced skill set to create and to get working as expected.
At their most basic, normal maps perform two functions: 1) they give the impression of surface detail or structural complexity; and 2) they augment the way objects shade and shadow in response to lights and illumination. Generally speaking they are 24-bit RGB bitmap images that exhibit a distinctive blueish-purple hue and are used as non-colour textures within existing materials - for IMVU Studio this is the "Normal" slot (shown above).
Design note: normal maps are typically not used to provide 'colour' information, i.e. what the user might otherwise recognise as the patterns that define whether something appears to be brick, denim, lace etc.
Without getting too technical, normal maps work by interpreting RGB (Red/Green/Blue) colour values as surface detail. This is possible because the colour of each pixel in the image, a 256x512 for example, represents a direction - the orientation the corresponding point on the surface is facing - which results in the impression of detail or structure that isn't actually present on the avatar.
Design note: normal maps don't create 'height' or 'depth' per se but that's often the impression given due to what is typically being represented, i.e. features perceived as having height and depth.
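The colour-to-direction relationship described above can be sketched in a few lines. This is a minimal illustration, not part of any IMVU Studio tooling; the function name `normal_to_rgb` is made up for the example. Each component of a unit direction vector, ranging -1 to +1, is remapped into an 8-bit colour channel, 0 to 255, which is also why the 'flat' normal pointing straight out of the surface produces the characteristic light blue-purple colour.

```python
# Remap a unit surface normal (x, y, z), components in [-1, 1],
# into an 8-bit RGB pixel, channels in [0, 255].
def normal_to_rgb(x, y, z):
    return tuple(round((c + 1.0) * 0.5 * 255) for c in (x, y, z))

# A point facing straight out of the surface, (0, 0, 1), maps to
# (128, 128, 255) - the distinctive flat-normal blue-purple.
print(normal_to_rgb(0.0, 0.0, 1.0))
```

Because most of a typical surface faces roughly 'outwards', most pixels sit near that (128, 128, 255) value, which is what gives normal maps their overall hue.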
Generally speaking there are two ways to make normal maps; 1) from high resolution meshes, and 2) the conversion of a greyscale image.
Design note: the first can be more challenging to make but tends to produce better results; the latter is easier but more prone to 'user error' inaccuracies.
In practice this means the first approach needs a high and a low resolution version of an object, the latter UV mapped and textured. The two are positioned in the same place and 'baked': the surface and structural detail of the high resolution mesh is transposed into pixel data that's written, 'baked', to the image mapped to the UVs of the low resolution mesh. This is then saved as a bitmap for use.
Design note: IMVU content, clothing in particular, presents a particular challenge with respect to normal maps due to the detail required of the mesh when baked or converted, and the amount of texture space available (number of pixels) when servicing underwear, lace and other finely detailed items. This may mean rethinking the way certain products are produced.
For the latter, normal maps can be generated from an image, a greyscale 'template' of sorts comprising tonal values from black through grey to white - black being 'depth', white being 'height' - each tone then being converted into a normalised RGB colour value. The converted image can then be saved, ready for use.
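The greyscale-to-normal conversion the tools below perform can be approximated as follows. This is a simplified sketch of the general technique, not the exact algorithm any particular filter uses; the function name and `strength` parameter are illustrative. The slope between neighbouring grey values is measured, turned into a direction, and encoded as a colour.

```python
import math

def height_to_normal(height, strength=1.0):
    """Convert a 2D greyscale heightmap (values 0.0-1.0, black = depth,
    white = height) into per-pixel normals encoded as 8-bit RGB."""
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Finite differences of neighbouring heights give the slope;
            # clamp the sampling at the image edges.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # The normal leans against the slope; z points out of the image.
            nx, ny, nz = -dx, -dy, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            nx, ny, nz = nx / length, ny / length, nz / length
            # Remap each component from [-1, 1] to an 8-bit channel [0, 255].
            row.append(tuple(round((c + 1.0) * 0.5 * 255) for c in (nx, ny, nz)))
        out.append(row)
    return out

# A uniformly grey (flat) heightmap yields the flat-normal colour everywhere.
flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normal(flat)[0][0])  # (128, 128, 255)
```

A higher `strength` exaggerates the slopes, which is the same effect as the 'scale' or 'bumpiness' sliders found in the filters listed below.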
Tools: nJob (stand-alone); GIMP filters - normalmap, insaneBump; Photoshop filters - nVidia (or directly).
• Download example files (*.psd & *.tga images).
• How NOT to make normal maps.
• Bake tiling normal maps.
• Bake normal maps from meshes.