Spectre Gruedorf Challenge Blog


[Aug. 24th, 2008|12:23 am]

spectreblog

[mad_and_crazy]
A convolution of events today (literally) led to me finally figuring out how I want to do the Spectre lighting. In fact, this will work out fabulously. The events that happened:

1. The artist for the first of our games actually being released on this technology coming to me and saying 'Dude, I need more lighting! We need more lights. The more lights the better.' Needless to say, I have a different view of the situation.
2. Me bitching at myself about 'oh, god, terrain blending and lighting and fuck.'
3. Me needing a buffer full of depth information for volumetric fog and water renderers.
4. Me reading a paper from SIGGRAPH '08 (technically, the notes from a lecture) about how Starcraft II does their lighting.

Interestingly enough, Starcraft II uses a deferred renderer.

I thought about it, having talked through all the pros and cons of deferred rendering with Sean earlier, and then said to myself "Well, crap. How bad can it be if Blizzard is doing it?" While this is completely NOT what I was planning on doing today, sometimes a man just has to go where his fancy takes him. So, uh, yeah. Spectre now has a deferred lighting pipeline.

The internal implementation basically works: we render out all our 'G-buffer' information to several render targets at once, offer the scene graph the chance to put additional information in the 'unlit/emissive' and other material buffers, and then just start dumping lights all over the place. Right now I have the simplest case working: diffuse, directional lighting. I still have to finish all the view-space pixel position calculation song and dance, and then point shapes, spot shapes, shadow mapping, scissoring, and all the rest of it, but so far I am amazed at how painless it is. It really is incredibly natural to render the scene geometry, then render all your damn lights. :-)
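
In case it helps anyone picture the shape of it, here is a minimal sketch of the two passes, assuming plain OpenGL FBOs; every name in it (createGBuffer, drawGeometry, drawLightQuad and friends) is made up for illustration rather than being Spectre's actual interface.

    #include <GL/glew.h>
    #include <vector>

    struct Light { /* position, colour, radius, ... */ };
    extern std::vector<Light> g_lights;
    void drawGeometry();               // scene-graph pass that fills the G-buffer (hypothetical)
    void drawLightQuad(const Light&);  // binds the G-buffer textures and draws one light (hypothetical)

    GLuint g_gbufferFBO, g_gbufferTex[3], g_depthTex;

    void createGBuffer(int w, int h)
    {
        glGenFramebuffers(1, &g_gbufferFBO);
        glBindFramebuffer(GL_FRAMEBUFFER, g_gbufferFBO);

        // Colour attachments: albedo, view-space normal, unlit/emissive.
        glGenTextures(3, g_gbufferTex);
        for (int i = 0; i < 3; ++i) {
            glBindTexture(GL_TEXTURE_2D, g_gbufferTex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, 0);          // fixed point, for now
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                                   GL_TEXTURE_2D, g_gbufferTex[i], 0);
        }

        // Depth attachment -- the same buffer the fog and water renderers want anyway.
        glGenTextures(1, &g_depthTex);
        glBindTexture(GL_TEXTURE_2D, g_depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
                     GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, g_depthTex, 0);

        GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, bufs);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void renderFrame()
    {
        // Pass 1: render the scene geometry once, no lighting, filling the G-buffer.
        glBindFramebuffer(GL_FRAMEBUFFER, g_gbufferFBO);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawGeometry();

        // Pass 2: additively accumulate every light against the G-buffer textures.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);
        for (size_t i = 0; i < g_lights.size(); ++i)
            drawLightQuad(g_lights[i]);
        glDisable(GL_BLEND);
    }

The appeal is exactly what it looks like: the geometry gets touched once, and every light after that is just another screenspace draw.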

The only annoyance right now is that we're already hitting precision and banding issues because I'm using fixed-point render targets. Moving the engine to floating point is something I've been putting off for a while, but I suppose now might be the right time.
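
To be concrete about what 'moving to floating point' means for the buffers themselves (again just a sketch against the hypothetical allocation above): swap the fixed-point internal format for a half-float one, e.g. GL_RGBA16F via ARB_texture_float on current drivers.

    // Was GL_RGBA8 / GL_UNSIGNED_BYTE; 8 bits per channel bands badly once
    // lights start accumulating. Half-float targets trade memory for range.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, 0);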

Token screen.

I'm really more excited about having the viewspace depth and pixel positions at my fingertips than I ought to be.

Comments:
From: nothings
2008-08-24 09:29 am (UTC)
I really have trouble believing it's worth giving up MSAA, but I guess you'll see!

That said, I can certainly imagine how it feels nice in every other aspect.

From: mad_and_crazy
2008-08-24 11:44 am (UTC)
Well, a few hours and several exciting, fun and wacky excursions into the mathematics of NDC space later, we have point lights working. The banding is fucking atrocious, and there's still some sort of buggery going on with orthographic views that I hope to God is just precision issues and not, you know, the end of my hopes and dreams (or, more likely, my needing to store a position buffer instead of computing viewspace coordinates in the fragment shader).
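
For reference, the reconstruction in question, sketched with GLM standing in for the engine's own math types (an illustration, not the actual Spectre code):

    #include <glm/glm.hpp>

    // Rebuild a view-space position from the depth buffer value and the fragment's
    // window coordinates. 'proj' is whatever projection fed the G-buffer pass; the
    // divide by w is a no-op for orthographic projections (w stays 1) and does the
    // usual perspective divide otherwise.
    glm::vec3 viewSpaceFromDepth(float depth, glm::vec2 pixel,
                                 glm::vec2 viewport, const glm::mat4& proj)
    {
        glm::vec4 ndc(2.0f * pixel.x / viewport.x - 1.0f,   // window -> NDC, [-1,1]
                      2.0f * pixel.y / viewport.y - 1.0f,
                      2.0f * depth - 1.0f,                  // assumes default glDepthRange
                      1.0f);
        glm::vec4 view = glm::inverse(proj) * ndc;
        return glm::vec3(view) / view.w;
    }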

Other than that, though, no complaints still. Again, we'll see when I actually start running it on complex scenes where I want MSAA, but if that's the only sacrifice I have to make I'm a very happy man.

From: mad_and_crazy
2008-08-24 11:52 am (UTC)
The orthographic distortion turns out to be the entire scene moving through one very, very large banding artefact.

Hah. Hah hah hah hah bloody hah.

From: nothings
2008-08-24 12:23 pm (UTC)
The Blizzard article mentions using 16-bit-per-component screen buffers because of precision issues.

The banding artifact reminds me of my experience doing GeForce dot-product lighting... I was using object-space lightmaps and moving the camera around, and everything was constantly flickering a little as it moved instead of being nice and stable... it turned out the problem was that I was downloading the half-angle vector with 8-bit components, so it wasn't getting normalized and was constantly getting a little brighter and darker from not really being unit-length.

Precision issues suck.

From: nothings
2008-08-24 09:35 am (UTC)
I assume the problem with straight (non-deferred) rendering is that you have to chunk up the geometry and do lots of state changes to change the lights?

One wacky idea I've thought about, which is particularly applicable to a primarily-2D world, is to have, say, 256 lights, put all their information in textures or constant arrays, and then have a 2D or 3D point-sampled texture whose RGBA values are treated as 4 integers indexing into the list of lights. Then per-pixel you sample the 4-lights texture to decide which lights should apply at a given pixel, thus avoiding the need for state changes to switch lights. If you can have branching in the shader you can even have the lights be of different types.
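
Something like this, as a rough sketch -- GLSL stuffed into a C++ string, with every uniform, varying and texture-layout choice invented for illustration:

    // Light-index-texture idea: a point-sampled map whose RGBA channels are four
    // indices into a 256-entry light list stored in another texture.
    const char* lightIndexFrag = R"(
    #version 120
    uniform sampler2D u_lightIndexMap; // point-sampled; RGBA = 4 light indices / 255
    uniform sampler2D u_lightData;     // 256x2: row 0 = position+radius, row 1 = colour
    uniform sampler2D u_albedo;
    varying vec2 v_uv;
    varying vec3 v_worldPos;
    varying vec3 v_normal;

    void main()
    {
        vec4 indices = texture2D(u_lightIndexMap, v_uv);  // four indices, each in [0,1]
        vec3 lit = vec3(0.0);
        for (int i = 0; i < 4; ++i) {
            float u = (indices[i] * 255.0 + 0.5) / 256.0; // back to a texel centre
            vec4 posRadius = texture2D(u_lightData, vec2(u, 0.25));
            vec4 colour    = texture2D(u_lightData, vec2(u, 0.75));
            vec3 toLight   = posRadius.xyz - v_worldPos;
            float atten    = max(0.0, 1.0 - length(toLight) / posRadius.w);
            lit += colour.rgb * atten
                 * max(0.0, dot(normalize(v_normal), normalize(toLight)));
        }
        gl_FragColor = texture2D(u_albedo, v_uv) * vec4(lit, 1.0);
    }
    )";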

From: mad_and_crazy
2008-08-24 12:34 pm (UTC)
For me, the straw that really broke the camel's back was trying to figure out a way to get any sort of sane lighting working with the terrain engine for the tactics game, which is a splatted texture renderer a la Charles Bloom PLUS the code to draw the hard 'edges' for the tactics bit PLUS a decal system that runs over top of it PLUS another layer of textures that aren't splats that my insane level designer made me add. There is actually no sane way to light that dynamically other than deferred rendering; you basically have to say 'fuckit' and toss the whole thing in an FBO, then compute your lighting elsewhere and read back from your FBO texture in screenspace when it comes time to multiply your diffuse by your albedo. At that point you may as well just bite the bullet, go the whole hog, and get all the cool things like being able to throw seventy gazillion lights everywhere.

Plus, quite frankly, the whole thing with the chunking and state changes didn't really thrill me either. The part of the engine where you have to cull every object against every light is bad enough, but then you start trying to efficiently combine lighting operations to reduce shader complexity and it just gets worse and worse, and I kept on putting it off because I didn't want to deal with trying to write code to dynamically generate shaders from graphs on the fly for various combinations of materials and lighting strategies. And, again, you have to either drop lights if you overflow the # of lights your shader can handle or have an accumulation buffer anyway. I think that the flaws with forward rendering are all fairly well-documented by now. :-)

From: mad_and_crazy
2008-08-24 12:41 pm (UTC)
I actually thought of another point: if you do HDR and have a decent tone map operator, chances are it'll be non-linear anyway. Accordingly, your MSAA is pretty much already hooped unless you have a custom resolve filter and are supporting DirectX 10 (in which case you can probably also use it to fix your deferred rendering MSAA, although I haven't tried).
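
To spell out why: the hardware resolve averages the raw HDR samples, and the tone map T only runs afterwards, so with subsamples c_i covering a pixel what reaches the screen is

    T\!\left(\frac{1}{n}\sum_{i=1}^{n} c_i\right) \quad\text{rather than}\quad \frac{1}{n}\sum_{i=1}^{n} T(c_i),

and for a non-linear T those disagree precisely at the high-contrast edges the AA was supposed to smooth.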

From: nothings
2008-08-24 01:09 pm (UTC)
I kind of doubt that. Imperfect AA is still better than none. As long as your tone map is monotonic the basic properties are achieved.

It's kind of like AA without gamma-correction--it's sub-ideal, but good enough in most cases.