[Sep. 16th, 2008|02:20 pm]
Spectre Gruedorf Challenge Blog
Spotlights work. Yay. Took me about 2 hours, which is a pretty good testament to the extensibility of the deferred renderer.
It's interesting to note that with the way things are set up, even using a 16-bit floating point buffer for our depth information, you can still get banding artifacts. Yes, still. 32-bit kills them stone cold dead, but who wants to lug around that much data per pixel? Not me. So it's time to research depth precision.
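To see why 16 bits isn't enough here, here's a quick sketch of what happens when conventional z/w depth gets quantized to a half float. This is just an illustration, not the engine's code; the near/far values and function names are mine, and I'm leaning on Python's `struct` module, whose `'e'` format code round-trips through IEEE 754 half precision:

```python
import struct

def to_half(x):
    # Round-trip a value through IEEE 754 half precision,
    # simulating storage in a 16-bit float buffer.
    return struct.unpack('<e', struct.pack('<e', x))[0]

def zw_depth(view_z, near, far):
    # Conventional post-projection depth (z/w): maps near -> 0 and
    # far -> 1, but hyperbolically, so most of the range crowds near 1.0.
    return far / (far - near) - (far * near / (far - near)) / view_z

near, far = 0.1, 1000.0

# Two surfaces 100 units apart in view space...
d_a = to_half(zw_depth(500.0, near, far))
d_b = to_half(zw_depth(600.0, near, far))
print(d_a == d_b)  # True -- they collapse to the same stored depth: banding

# Stored linearly (view_z / far), the same two surfaces stay distinct.
l_a = to_half(500.0 / far)
l_b = to_half(600.0 / far)
print(l_a != l_b)  # True
```

The hyperbolic mapping spends nearly all of the 16 bits on the first few units past the near plane, which is exactly the banding I'm seeing.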
Of course, one of the problems is that when we store the depth value, we store it post-projection as pixelPos.z / pixelPos.w. Obviously this is a Bad Thing. Seeing giant banding artifacts in action really does reinforce the fact that your depth buffer gets less precise the further away you get from the near clip plane. I have to go through and puzzle out the mathematics to store linear depth instead, and to compute the view-space position of a screen pixel from its NDC position plus that linear depth, rather than an NDC-space depth. Then, at least, we'll get uniformly crappy precision on everything. :-)
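The math I have in mind looks roughly like this — a hedged sketch rather than the renderer's actual code, assuming a standard perspective projection where clip w equals view-space z (the function names and the FOV/far values are all made up for illustration):

```python
import math

def perspective_terms(fov_y_deg, aspect):
    # The [0][0] and [1][1] entries of a standard perspective projection:
    # clip.x = p00 * view.x, clip.y = p11 * view.y, clip.w = view.z.
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return f / aspect, f

def store_linear_depth(view_z, far):
    # Linear depth: evenly spaced over [0, 1], so a 16-bit buffer
    # loses the same (small) amount of precision everywhere.
    return view_z / far

def reconstruct_view_pos(ndc_x, ndc_y, linear_depth, far, p00, p11):
    # Undo the projection: ndc.x = p00 * view.x / view.z, so
    # view.x = ndc.x * view.z / p00 (and likewise for y).
    view_z = linear_depth * far
    return (ndc_x * view_z / p00, ndc_y * view_z / p11, view_z)

p00, p11 = perspective_terms(90.0, 1.0)  # 90-degree FOV, square viewport
far = 100.0

# Project a known view-space point, then reconstruct it from
# NDC x/y plus linear depth alone.
vx, vy, vz = 3.0, 4.0, 10.0
ndc_x, ndc_y = p00 * vx / vz, p11 * vy / vz
d = store_linear_depth(vz, far)
print(reconstruct_view_pos(ndc_x, ndc_y, d, far, p00, p11))  # ~ (3.0, 4.0, 10.0)
```

The nice part is that the reconstruction never touches the NDC-space depth at all, so the hyperbolic precision falloff never enters the picture.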
Other notable accomplishments: extrusions talk to the triangulator, meaning you can do things like draw an arch, extrude it, and drop it into place. It doesn't replace having decent CSG ops, which I really need to get working on, but it is a step in the right direction.
EDIT: We also now have bump mapping. Unfortunately there's currently no way to correctly associate a bump map with a given texture, so it's time to write the material editing system now.