It seems like time to post some images from the in-progress precomputed lighting tech that Ignacio is working on. See here for a description of the basics of what we are doing for lighting. The lightmaps are now 16-bit, and these shots use a simple exponential tone mapper.
First, I’ve taken shots from three positions inside a house, each with four different lighting settings. For all four settings we compute only the first lighting bounce. The parameters are:
- 32×32 cube maps, adaptive sampling, quality = 0.5. Time to compute: 197 seconds.
- 32×32 cube maps, non-adaptive sampling. Time to compute: 648 seconds.
- 64×64 cube maps, non-adaptive sampling. Time to compute: 738 seconds.
- 128×128 cube maps, non-adaptive sampling. Time to compute: 1570 seconds.
The hardware used to compute these timings is an Intel Core i7-860 with an ATI Radeon 5870 (most of the process is GPU-bound).
Non-adaptive sampling renders one cube map for every lightmap texel, whereas adaptive sampling uses heuristics to decide where to place samples and then smooths the map between them. At higher quality settings, adaptive sampling produces more-accurate results, but there always seem to be some artifacts, so for the time being we use it as a fast preview mode. If at some point the quality becomes shippable, we would switch to adaptive sampling for everything.
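To illustrate the trade-off, here is a toy sketch of the adaptive idea, under my own assumptions rather than our actual heuristics: take expensive samples (each standing in for a full cube-map render) only at a coarse grid, fully sample any block whose corners disagree, and interpolate across blocks that look smooth. The `sample` callback and the variance threshold are illustrative inventions.

```python
def bake_adaptive(width, height, sample, threshold=0.1, step=4):
    """Toy adaptive lightmap baking (not the real heuristic): sample
    block corners; if a block's corner values vary too much, sample
    every texel in it; otherwise bilinearly interpolate the corners."""
    texels = [[0.0] * width for _ in range(height)]
    for y0 in range(0, height - 1, step):
        for x0 in range(0, width - 1, step):
            x1 = min(x0 + step, width - 1)
            y1 = min(y0 + step, height - 1)
            corners = [sample(x0, y0), sample(x1, y0),
                       sample(x0, y1), sample(x1, y1)]
            if max(corners) - min(corners) > threshold:
                # High variance: fall back to one sample per texel.
                for y in range(y0, y1 + 1):
                    for x in range(x0, x1 + 1):
                        texels[y][x] = sample(x, y)
            else:
                # Smooth region: bilinearly interpolate the corners.
                for y in range(y0, y1 + 1):
                    for x in range(x0, x1 + 1):
                        fx = (x - x0) / (x1 - x0)
                        fy = (y - y0) / (y1 - y0)
                        top = corners[0] * (1 - fx) + corners[1] * fx
                        bot = corners[2] * (1 - fx) + corners[3] * fx
                        texels[y][x] = top * (1 - fy) + bot * fy
    return texels
```

The interpolation step is exactly where preview artifacts can come from: any lighting detail smaller than a block that happens to have agreeing corners gets smoothed away.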
We also have a sub-texel sampling mode that improves quality, but I didn’t play with that during this run.
This second set of images shows progressive bounces of the lighting with 64×64 cube maps, non-adaptive sampling. (The first one is the same 64×64 image as above.) The times are: 1 bounce, 738 seconds; 2 bounces, 1438 seconds; 3 bounces, 2481 seconds. The precompute time is roughly linear in the number of bounces, as one might expect, since the algorithm simply repeats the same gathering step n times.
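That linear scaling falls out of the structure of the loop. A minimal sketch, with an invented `gather_one_bounce` callback standing in for the whole cube-map gathering pass over the lightmap:

```python
def bake_n_bounces(direct, gather_one_bounce, n_bounces):
    """Accumulate n bounces of lighting. 'direct' is the first-bounce
    lightmap (a flat list of texel values here); 'gather_one_bounce'
    is a hypothetical function that takes the current bounce's light
    and returns the light gathered after one more bounce. Each extra
    bounce runs the same pass once, so cost is linear in n_bounces."""
    total = list(direct)
    current = list(direct)
    for _ in range(n_bounces - 1):
        current = gather_one_bounce(current)
        total = [t + c for t, c in zip(total, current)]
    return total
```

With each pass costing a fixed amount, n bounces cost about n times one bounce, which matches the measured 738 / 1438 / 2481 second progression up to overhead.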