It's been a little bit since the last post. We have been busy doing a lot of good things with the game. Here's one of them: Ignacio just got back from vacation and implemented a simple lightmap encoding idea that improves the color resolution of our lightmaps, removing banding, like so:
The difference is not too pronounced in this scene, but with smoother-colored textures it becomes a lot more obvious (and I do want to use more smooth-colored textures, art-direction-wise!)
The situation is, the lightmaps are HDR (for some definition of "high"), so we needed a way to encode those values reasonably into a bitmap. We were using RGBM, where M is just an overall brightness factor; you read RGB from the texture, then multiply by M to get the output color. We used to encode this naively, but the problem is, when M is less than 1 it actually kills precision in your RGB. So the solution is: don't ever let M go below 1; just clamp it. This is not rocket science, and in fact is the kind of thing that is a little funny, as one starts wondering why we didn't think of that a long time ago, but hey, that's how it is sometimes. The results made me happy, though, so I am posting them.
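A minimal sketch of the clamped-M idea, to make the encoding concrete. The function names and the brightness range of 8 are assumptions for illustration, not the game's actual code:

```python
MAX_RANGE = 8.0  # assumed maximum brightness the encoding can represent

def encode_rgbm(color):
    """Encode a linear HDR color into (r, g, b, m), each in [0, 1]."""
    m = max(max(color), 1.0)          # the fix: never let M drop below 1
    m = min(m, MAX_RANGE)
    rgb = tuple(min(c / m, 1.0) for c in color)
    return rgb + (m / MAX_RANGE,)     # M rides along in the alpha channel

def decode_rgbm(r, g, b, a):
    """Recover the HDR color: RGB times the brightness factor M."""
    m = a * MAX_RANGE
    return (r * m, g * m, b * m)
```

With the clamp, any color whose components are all at most 1 gets M = 1, so its RGB is stored at the texture's full precision instead of being rescaled.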
The improved image quality may enable us to have higher dynamic range throughout the game (this is probably something we will evaluate later on), which would be very nice!
“is the kind of thing that is a little funny, as one starts wondering why we didn’t think of that a long time ago”
Kind of like the game itself, right? Not noticing things that were there, or “why didn’t I think of that” kind of things… just how meta is this game? At this point I’m beginning to think that even lightmaps have a gameplay purpose!
I am guessing that is one of the underground parts of the game. It looks really cool; the sunlight hitting those bricks that might be part of the puzzle, and the lightmap being used to attract attention to that, is very clever. I can’t tell the difference at all, but I guess it makes it more unique and makes you pay attention to where the map is and stuff.
I am happy that you are happy with these results. And I also found something fun that I think talks a little about the vidya condition nowadays, and I would like to share it with you…
http://i.imgur.com/r4nle.jpg
I guess this is a reason to not like trophies…
The lightmap looks much smoother, and it is still noticeable after the textures are applied. I’m sure it will have a greater effect on smoother textures. It’s such a good feeling when you can implement a simple change in your code to achieve finer results. Good work.
I see no difference at all. But maybe it’s just me.
Btw, those images have the problem you mentioned before. They look more pleasant untextured than textured.
Although I don’t think the textures themselves are bad. But you need to adjust the lightmap to have the bounced light be more obvious in the textured version. I realize it’s probably too early to tweak these things though.
You may want to dither the results as well. I don’t mean the lightmap calculation, but just before you return the colour, add a value between -1/512 and 1/512 to each component of the return value, which will cause the GPU to round differently from pixel to pixel. Otherwise you are going to plow into many problems with 24-bit color depth that are going to look very objectionable, given your game’s reliance on smooth gradients. I can see plenty of banding in the after image (certainly amplified by the JPEG encoding, but the bands should be there anyway). The choice of seed for a given pixel can be done in screen space with a static texture, because the dithering is so close to the edge of perception that you don’t get any side effects.
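The dithering idea above can be sketched in a few lines. This is a toy CPU-side illustration (real implementations would do this in the shader with a screen-space noise texture, as the comment suggests); the function names are made up for the example:

```python
import random

def quantize8(x):
    # What writing to an 8-bit-per-channel render target effectively does:
    # clamp to [0, 1] and snap to the nearest of 256 levels.
    return round(max(0.0, min(1.0, x)) * 255) / 255

def quantize8_dithered(x, rng):
    # Nudge the value by up to ~half a quantization step before rounding,
    # so nearby pixels land on different sides of the step boundary and
    # the band edge dissolves into noise.
    return quantize8(x + rng.uniform(-1.0 / 512, 1.0 / 512))
```

The payoff: plain quantization snaps a whole region of a gradient to the same level (a visible band), while the dithered version averages out to the true value across neighboring pixels.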
I have a project that is currently displaying little more than gradients everywhere, so I implemented dithering in two minutes to show the difference: post, or image.
http://sticktwiddlers.com/2011/09/09/news-zen-studios-team-meat-at-loggerheads-over-microsoft-comments/
http://sticktwiddlers.com/2011/09/09/exclusive-interview-zen-studios-speaks-up-against-team-meat-microsoft-bash/
http://www.gamertagradio.com/new/2011/09/zen-studios-no-near-death-experiences-a-good-microsoft-experience/
I can’t post links without writing something…
Capcha: stage excleba
Now it says that my comment is “waiting moderation”
Anyways… it’s good when people speak out when corporations are being dicks. It takes a lot of guts and it makes the company better… Oh, no, wait. This is Microsoft we are talking about… I want more indie games, but why are they getting so much hate? They are the most original and thought-provoking games…
thoughts/discuss
Capcha: Problems Arise
I’m a bit confused – why does M < 1 lead to precision loss? If your RGB values are small, doesn't M < 1 mean that you'll scale them up to use the whole 0-255 range, while M = 1 would mean you'd use a smaller range?
Many reasons. The output values are not in [0, 255] when in the destination frame buffer; they are HDR, so math happens on them afterward based on tone mapping, etc. But before that, the source lightmaps are probably stored in a compressed format — for example DXT5, where the RGB values are quantized one way, and the M is quantized another way (as alpha channel). It can get messy pretty fast.
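A toy numeric illustration of why a small M interacts badly with quantization. This assumes simple uniform 8-bit quantization of both RGB and the alpha-stored M over a brightness range of 8 (real DXT5 quantization is messier, as noted above, but the effect is the same direction):

```python
MAX_RANGE = 8.0  # assumed brightness range encoded into alpha

def q8(x):
    # 8-bit quantization of a [0, 1] value
    return round(max(0.0, min(1.0, x)) * 255) / 255

def roundtrip(value, m):
    # Store value/m in RGB and m/MAX_RANGE in alpha, both 8-bit quantized,
    # then decode back to the output brightness.
    rgb = q8(value / m)
    a = q8(m / MAX_RANGE)
    return rgb * (a * MAX_RANGE)

value = 0.1
err_small_m = abs(roundtrip(value, 0.1) - value)  # naive: M allowed below 1
err_clamped = abs(roundtrip(value, 1.0) - value)  # clamped: M held at 1
```

With M = 0.1, alpha stores 0.1/8 = 0.0125, and a one-step alpha error of 1/255 scales to 8/255 of output brightness after decode, swamping any precision gained in RGB. With M clamped to 1, the error is dominated by the much finer RGB step instead.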
It’s nice to see progress; even if the changes are little, the impact may be big. Especially if lighting really matters to the gameplay.
BTW, justin, are you some kind of spam bot?
Jon, since you are using light mapping so extensively, I was wondering how during development you pick a baseline hardware to test performance. PC hardware is so diverse that even if you have a lot of graphics preference settings, there is just some hardware people have that the game won’t run on, right? I’m curious how you think about this and whether it is a big consideration.
This seems like it will be even more of a factor when you have higher-poly meshes that may require more detailed lightmaps to represent all the faces.
Thanks
One of the reasons lightmapping is appealing is that it is kind of cheap (it’s an extra texture you read from) so it actually works fine on low-end hardware. Number of polygons in a mesh doesn’t matter.
But the more general answer to this (not specific to lightmapping) is that we are kind of ignoring low-end hardware right now.
Late to this party, but I solved this problem in another way which you may be interested in. Instead of encoding a multiplicative factor into alpha, I encoded a divisor into it. So a value of 255 (which maps to 1 on the GPU) divides by 1, a value of ~128 (0.5) divides by 0.5 (i.e., multiplies by 2), and so on. This helped a LOT to reduce the effects of transitions between different factors when the map is sampled.
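The divide-in-alpha variant the commenter describes (sometimes called RGBD) can be sketched like so; the names and the range constant of 8 are assumptions for the example:

```python
MAX_RANGE = 8.0  # assumed maximum brightness

def encode_rgbd(color):
    m = max(max(color), 1.0)           # brightness factor, at least 1
    a = max(1.0 / m, 1.0 / MAX_RANGE)  # alpha stores the divisor 1/M
    rgb = tuple(c * a for c in color)  # same as c / m
    return rgb + (a,)

def decode_rgbd(r, g, b, a):
    # alpha of 1.0 divides by 1, alpha of 0.5 doubles, and so on
    return (r / a, g / a, b / a)
```

Colors at or below 1 get alpha = 1, so they pass through at full RGB precision; the commenter reports this form also behaves better where texels with very different factors get filtered together.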