While writing a first-person 3D game, I am currently revisiting an old-fashioned rendering technique: Quake-style rendering.
One of the many innovations featured in the Quake engine was the use of lightmap texturing in addition to the standard diffuse texture (i.e. the plain flat texture).
A lightmap is a texture containing the lighting data for a 3D model in its environment. Because of the poor performance of the hardware of that era, it was impossible to compute accurate lighting for the whole environment in real time, so lighting had to be pre-baked and stored in a texture.
I remember that back then, the leading graphics hardware was the Voodoo 2 card from 3DFX, which boasted amazing performance by being the very first card to feature "single-pass dual texturing", notably for Quake.
This feature was used to combine the diffuse and lightmap textures directly in a single draw call.
In the current engine I am writing, diffuse textures are procedurally generated.

I am computing the lightmap texture, currently for a single light source only.
I generate a 256x256 lightmap texture for each corresponding wall. Each pixel of the lightmap is computed individually by evaluating the amount of light received by the physical element of the wall at that position (using Lambertian reflectance).

The final rendering is obtained by multiplying the color components of the diffuse texture with those of the lightmap.

Nowadays, it is easy to compute real-time per-pixel lighting using a shader program (a piece of code executed by the rendering pipeline). It is still an expensive calculation, which is often reserved for objects close to the camera, while far-off objects are rendered with simpler algorithms; light may still be pre-baked, especially in scenes where lighting conditions do not change over time.
This technique is also used in the mobile device world, where graphics processing performance does not allow for complex real-time calculations at the fragment (i.e. pixel) level.
References:
- http://en.wikipedia.org/wiki/Lambertian_reflectance
- http://en.wikipedia.org/wiki/Quake_(video_game)
- http://en.wikipedia.org/wiki/Quake_engine

Here is the lightmap generation code for one tile.
Texture * TextureGenerator::generateLightmapTexture(unsigned int width,
                                                    unsigned int height,
                                                    T_TextureLightSource * source,
                                                    T_TextureQuad * quad) {
    glm::vec3 p1 = glm::vec3(quad->p1[0], quad->p1[1], quad->p1[2]);
    glm::vec3 p2 = glm::vec3(quad->p2[0], quad->p2[1], quad->p2[2]);
    glm::vec3 p3 = glm::vec3(quad->p3[0], quad->p3[1], quad->p3[2]);
    glm::vec3 p4 = glm::vec3(quad->p4[0], quad->p4[1], quad->p4[2]);
    // The quad is flat, so its normal is constant: compute and normalize it once.
    glm::vec3 normal = glm::normalize(glm::cross(p2 - p1, p4 - p1));
    glm::vec3 light_position = glm::vec3(source->position[0],
                                         source->position[1],
                                         source->position[2]);
    unsigned char * texture_data =
        (unsigned char *)malloc(sizeof(unsigned char) * width * height * 4);
    // World-space step between two adjacent lightmap texels, along each edge.
    glm::vec3 x_increment = (p2 - p1) / (float)width;
    glm::vec3 y_increment = (p4 - p1) / (float)height;
    for (unsigned int i = 0; i < height; i++) {
        for (unsigned int j = 0; j < width; j++) {
            // World-space position of the wall element covered by this texel.
            glm::vec3 current = p1 + (float)i * y_increment
                                   + (float)j * x_increment;
            glm::vec3 lightdir = glm::normalize(light_position - current);
            // Lambertian term: cosine of the angle between the surface
            // normal and the direction toward the light, clamped to zero.
            float lambert = glm::dot(normal, lightdir);
            if (lambert < 0.0f) lambert = 0.0f;
            for (unsigned int c = 0; c < 3; c++) {
                // Modulate by the light color, add a small constant ambient
                // term (30), and clamp to the 255 maximum.
                float value = lambert * source->color[c] * 255.0f + 30.0f;
                texture_data[(i * width + j) * 4 + c] =
                    (unsigned char)(value > 255.0f ? 255.0f : value);
            }
            texture_data[(i * width + j) * 4 + 3] = 255; // opaque alpha
        }
    }
    return new Texture(width, height, texture_data);
}
By the way, this is one of the few cases where the C++ operator overloading feature really shines, making it possible to write this:
glm::vec3 normal = glm::cross(p2-p1,p4-p1);