Spectre Gruedorf Challenge Blog
[Mar. 6th, 2009|01:33 am]
When last we spake, it was December.

Now, it is March.

However, I am not dead. My game company still seems to be chugging away on Dredmor, and we have actually let builds out of the offices for people to playtest. Automatic logs are being uploaded to the new webserver, and Daniel and David are busy fighting, tooth and claw, to further refine the accessibility of the UI and to balance any last niggling little concerns regarding playability, fun, and so forth. As for me, I'm just happily banging away at bringing the old systems in line with Daniel's newly designed (and so far, quite fun!) combat system. At this point, the last stragglers are bringing the weapon and armour artifact describers back online (and making sure that unique artifacts spawn correctly, and in an entertaining fashion), implementing the not-yet-implemented random quests, and adding an options menu so you can actually adjust, y'know, the volume of things.

We have fabulous unique artifacts, too. Writhe beneath the pastry-filled fury of the Shield of Caketown! Turn objects into delicious Lutefisk with the Horadric Lutefisk Cube! (Eat lutefisk!) Try and figure out where you left the Invisible Shield! Marvel at the Helm of Sir Albrecht, who ignored the devil and ran away from Death himself by exploiting its poor peripheral vision! Wield Triangulon's Staff of Greater Triangulation! ... and so forth.

We are nerds. The items in the game are distributed according to the Maxwell-Boltzmann probability distribution. There will be videos as soon as I can get the damned videos exporting correctly... le sigh. Life, she is not always fair.

Now, that said, while this is going on my mind is snapping back to Spectre.

So, Spectre. Oh, how I have missed thee.



When last we left Spectre, I had three major selling points that I wanted to hack into play. The first: just-in-time level editing. The second: online asset database negotiation and object sales. (I think I had just taken a strong hit off of the IMVU bong at the time.) The third: dynamic lighting via deferred renderer.

Of these, I only implemented the third. This one I am still fine with. People are shipping games with deferred rendering (LittleBigPlanet uses a deferred renderer, interestingly enough; Killzone 2 has now finally shipped, and so forth) and I still think that yes, it's a good solution for me. srbaker is big on "accomplishing tasks without accumulating technical debt" (he's an Agile consultant, which means that he gets paid large amounts of money to stop people from writing software, because that could be dangerous!), and I think that in terms of minimizing my technical debt versus a forward-rendering lighting solution, deferred rendering is looking like a strong winner.

The gotchas with deferred rendering are materials, transparency, and anti-aliasing. I miss AA, but not that much. Transparency is an interesting problem. I am sort of considering rendering all opaque primitives into the standard deferred rendering pipeline, and then building a "deep deferred renderer" that stores, say, four pixels' worth of transparent data layered on top of one another. Lighting passes run through the DDR and light all four layers, accumulating lighting values that way; we then check whether each transparent pixel is blocked by an opaque pixel in the normal deferred renderer, and if it isn't, we merge it gently into the scene. This imposes some requirements on level designers - specifically, don't build levels where more than four transparent surfaces can line up at once, because we're just going to throw out anything beyond that! - but I don't actually think that this is a major restriction. The actual buffer construction involves a particularly painful manual unrolling in the pixel shader - but, thankfully, that doesn't show up in the lighting process. Win for deferred shading.
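To make the merge step concrete, here's a minimal CPU-side sketch of the resolve for one pixel, assuming the four transparent layers arrive already lit and sorted front-to-back. All of the names here are mine for illustration, not actual Spectre code:

    #include <array>

    // One already-lit transparent sample from the "deep deferred" buffer.
    struct DeepSample {
        float depth;      // view-space depth
        float r, g, b;    // lit colour, premultiplied by alpha
        float alpha;
    };

    struct Pixel { float r, g, b; };

    // Merge up to four lit transparent layers (sorted front-to-back) over
    // the opaque deferred result. Layers behind the opaque depth are
    // discarded, matching the "is this pixel blocked?" test above.
    Pixel merge_deep_layers(const Pixel& opaque, float opaqueDepth,
                            const std::array<DeepSample, 4>& layers) {
        Pixel out = opaque;
        // Composite back-to-front so nearer layers end up on top.
        for (int i = 3; i >= 0; --i) {
            const DeepSample& s = layers[i];
            if (s.alpha <= 0.0f || s.depth >= opaqueDepth)
                continue; // empty slot, or occluded by opaque geometry
            out.r = s.r + (1.0f - s.alpha) * out.r;
            out.g = s.g + (1.0f - s.alpha) * out.g;
            out.b = s.b + (1.0f - s.alpha) * out.b;
        }
        return out;
    }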

Materials I have no good thoughts on, other than trying to make the material system we use for the lighting as general as possible. We could write out a BRDF per pixel (well, a lookup into a table of BRDFs stored as a volumetric texture), but I shudder to think what our bandwidth requirements are going to start looking like. There are also some interesting banding issues that I need to sort out, caused by writing my depth value for the G-buffer into the wrong sort of buffer (a hardware workaround on my laptop. Nnngh.)
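For concreteness, here's one entirely hypothetical G-buffer layout along those lines: each pixel carries a brdf_id that selects a slice of the BRDF volume texture at lighting time, instead of baking a shading model into the lighting shader. None of this is Spectre's actual layout:

    #include <cstdint>

    // Hypothetical packed G-buffer pixel for a material-indexed deferred
    // shader. The lighting pass reads brdf_id and fetches the response
    // from a volumetric BRDF lookup texture.
    struct GBufferPixel {
        uint8_t albedo_r, albedo_g, albedo_b; // surface colour
        uint8_t brdf_id;                      // index into the BRDF volume
        int16_t normal_x, normal_y;           // normal; z rebuilt in shader
        float   depth;                        // linear view-space depth
    };                                        // 12 bytes per pixel

    static_assert(sizeof(GBufferPixel) == 12, "keep the G-buffer packed");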

In terms of renderer TODOs, the major thing that the lighting system still needs is shadows. I'm increasingly liking variance shadow maps, using the standard model (plus possibly a perspective skew) for spotlight primitives, and dual-paraboloid maps for point lights. Looks good, solves the problem. An interesting paper I read recently was on "Reconstructible Geometry Shadow Maps", which presents a shadow map where you store both the depth and the three vertices of the polygon associated with that depth value; you then use this to produce crisp, hard-edged shadows à la shadow volumes. It looks good, but it has no extension to soft-shadow algorithms.
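The core of a variance shadow map lookup is small enough to sketch. The map stores filtered depth and depth-squared; at lookup time, Chebyshev's inequality bounds how much of the filter region lies in front of the receiver. This is the standard test from the VSM paper, written as plain C++ rather than shader code:

    #include <algorithm>

    // Variance shadow map visibility. mu = E[d], e_d2 = E[d^2] from the
    // filtered map; t is the receiver's depth. For t > mu, Chebyshev gives
    //   p(d >= t) <= sigma^2 / (sigma^2 + (t - mu)^2).
    float vsm_visibility(float mu, float e_d2, float t, float minVariance) {
        if (t <= mu) return 1.0f;                 // receiver in front: lit
        float sigma2 = std::max(e_d2 - mu * mu, minVariance); // bias clamp
        float d = t - mu;
        return sigma2 / (sigma2 + d * d);         // Chebyshev upper bound
    }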

So I said there were three things that deferred rendering doesn't do that, in an ideal world, it would. I lied; there's a fourth thing: indirect lighting. There are two models for adding one bounce of indirect lighting to a deferred renderer: screen-space operations (similar to SSAO) and the virtual point light model, in which points on directly lit surfaces become "indirect point lights", and we use something like shadow maps to infer which surfaces can be hit from a given point. The first paper on this is called, er, "Reflective Shadow Maps", and the newer approach (and my favourite!) is the wonderfully titled "Imperfect Shadow Maps". Neither paper looks hard to implement, and in my case it's a definite win.

Even one bounce of indirect illumination lets me make my world look better, and I need all the help I can get given that, as an indie developer, most of my world geometry is actually going to consist of boxes. (This is Valve's strategy, and the reason for their radiosity normal mapping technique. It worked for Portal.) Scarily enough, we can get that now. Performance is not bad, and certainly good enough that I don't mind leaving it in as an "advanced" feature (like HDR used to be) while we wait for it to trickle down to the masses as a "basic" feature. The kick in the pants here is that any virtual point light technique requires me to distribute "surfaces of interest" in a uniform manner over my world, which sure looks like a precomputation step to me: a "uniform enough distribution" of point lights on triangles can be found by picking triangles at random, where the probability of picking triangle T is equal to its area over the total surface area of the scene, and then picking a point on the triangle itself at random, as in the sketch below. (Obviously, big triangles are not going to make you happy, but apparently this isn't a big deal. Maybe the developers in these papers are just well-tessellating their scenes and not telling us about it.)
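A sketch of that sampling step, assuming a plain triangle-soup scene. In a real precomputation pass you'd build the area CDF once and reuse it across samples:

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Tri  { Vec3 a, b, c; };

    static Vec3 sub(Vec3 p, Vec3 q) { return {p.x - q.x, p.y - q.y, p.z - q.z}; }
    static Vec3 cross(Vec3 p, Vec3 q) {
        return {p.y * q.z - p.z * q.y, p.z * q.x - p.x * q.z, p.x * q.y - p.y * q.x};
    }
    static float area(const Tri& t) {
        Vec3 n = cross(sub(t.b, t.a), sub(t.c, t.a));
        return 0.5f * std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    }

    // Pick a triangle with probability proportional to its area (via a
    // CDF), then a uniform point on it using the square-root warp, which
    // avoids clustering points toward one vertex.
    Vec3 sample_scene_point(const std::vector<Tri>& tris, std::mt19937& rng) {
        std::vector<float> cdf(tris.size());
        float total = 0.0f;
        for (size_t i = 0; i < tris.size(); ++i)
            cdf[i] = (total += area(tris[i]));

        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        size_t i = std::lower_bound(cdf.begin(), cdf.end(), uni(rng) * total)
                   - cdf.begin();
        if (i >= tris.size()) i = tris.size() - 1; // guard fp round-off

        float r1 = std::sqrt(uni(rng)), r2 = uni(rng);
        float u = 1.0f - r1, v = r1 * (1.0f - r2), w = r1 * r2; // u+v+w == 1
        const Tri& t = tris[i];
        return {u * t.a.x + v * t.b.x + w * t.c.x,
                u * t.a.y + v * t.b.y + w * t.c.y,
                u * t.a.z + v * t.b.z + w * t.c.z};
    }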

If the uniform approach fails, we go back to doing it in screen space. It's a delicious hack and certainly looks no worse than screen-space ambient occlusion.

I also find myself wondering if going ahead and implementing Sparse Virtual Texturing is an Indie Win. Yes, it's cool technology, but does it make it easier for the independent developer to make a damn video game? (Shipping costs alone are prohibitive.) Will I do it anyway? Perhaps...




JIT level editing is still something that needs to happen. I maintain that this is a good idea. However, based on my experiences with Dredmor (namely, trying to make sure that everybody actually has the right version of everything in their source tree at all times!), I want to build an asset management system into the editor as well. In that case, JIT editing and asset management really become two sides of the same coin. We track assets, and we let them move in and out of the game as we're working with it; at the same time, we track assets so that we can build the game and export it from within Spectre as a set of packages (or deltas to packages). (The other lesson: I also really want to add the log-replay system from Dredmor into Spectre from the get-go, so that every game I write using Spectre can benefit from being able to build databases of player testing experiences. I spent a long time this week knocking bugs out of that thing.)
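As a sketch of what I mean by asset tracking: hash every asset's contents, and a package delta is simply the set of assets whose hash changed since the manifest of the last build. The hash choice and structure here are illustrative, not a design commitment:

    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    // Manifest: asset path -> content hash, saved alongside each build.
    typedef std::map<std::string, uint64_t> Manifest;

    // FNV-1a, a cheap stand-in for whatever content hash we end up using.
    uint64_t fnv1a(const std::vector<char>& bytes) {
        uint64_t h = 1469598103934665603ull;
        for (size_t i = 0; i < bytes.size(); ++i) {
            h ^= static_cast<unsigned char>(bytes[i]);
            h *= 1099511628211ull;
        }
        return h;
    }

    // A package delta is every asset that is new or modified since the
    // last shipped manifest.
    std::vector<std::string> package_delta(const Manifest& last,
                                           const Manifest& now) {
        std::vector<std::string> changed;
        for (Manifest::const_iterator it = now.begin(); it != now.end(); ++it) {
            Manifest::const_iterator old = last.find(it->first);
            if (old == last.end() || old->second != it->second)
                changed.push_back(it->first); // goes into the delta package
        }
        return changed;
    }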

The other level-editing thing that needs to happen involves Ruby, and figuring out how in the world Ruby is supposed to interact with Spectre. Under the current Spectre model, there are various places where a Ruby script just, well, runs. For instance, on level load. Clockwork Fantasia operated by just putting in hooks to call other Ruby scripts when rendering a frame or parsing mouse input. That worked as an interim solution, but it does not work well where we want a nice, clean, artificial division of labour. For instance, suppose I have objects in the world, like monsters, and I want to program Ruby behaviours for them. Do I tick the monster AI code every render frame? On a fixed cycle? On a variable cycle, depending on the function? On a collision? And how does the Spectre object model interact with the Ruby object model?
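For what it's worth, the conventional answer to the when-to-tick question is a fixed simulation step, decoupled from the render frame by an accumulator, so AI behaves identically at any frame rate. A sketch of what that might look like with embedded Ruby; the monster list and its Ruby-side tick method are hypothetical names, not real Spectre API:

    #include <ruby.h>   // embedded Ruby interpreter (1.8-era C API)
    #include <vector>

    // Tick monster AI on a fixed cycle, decoupled from the render frame.
    // The accumulator guarantees every Ruby-side tick(dt) call sees the
    // same dt regardless of how fast we render.
    void tick_world(std::vector<VALUE>& monsters, double frameSeconds) {
        static double accumulator = 0.0;
        const double kStep = 0.1; // ten AI ticks per second
        accumulator += frameSeconds;
        while (accumulator >= kStep) {
            for (size_t i = 0; i < monsters.size(); ++i)
                rb_funcall(monsters[i], rb_intern("tick"),
                           1, rb_float_new(kStep));
            accumulator -= kStep;
        }
    }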

Okay, I kind of know how that one works. Spectre dynamically executes Ruby code to create Ruby objects wrapping Spectre objects; it imports information from the level editor by requiring the existence of accessors for the variables on the objects that it creates, and if they're missing, it becomes insolent and petulant. This has the advantage of making object serialization across the network really easy: we export the serialized object using Ruby, bit-delta it against the last acknowledged serialized object state, and fire it across the network.
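One plausible reading of "bit-delta" is a straight XOR against the acknowledged state: unchanged bytes become zeroes, which compress extremely well before hitting the wire, and the receiver XORs against its own copy of the acked state to reconstruct. A sketch:

    #include <cstdint>
    #include <vector>

    // XOR the newly serialized state against the last state the remote
    // side acknowledged. Runs of zeroes mark unchanged bytes; feed the
    // result to a compressor before sending.
    std::vector<uint8_t> bit_delta(const std::vector<uint8_t>& lastAcked,
                                   const std::vector<uint8_t>& current) {
        std::vector<uint8_t> delta(current.size());
        for (size_t i = 0; i < current.size(); ++i) {
            uint8_t prev = (i < lastAcked.size()) ? lastAcked[i] : 0;
            delta[i] = current[i] ^ prev; // zero wherever nothing changed
        }
        return delta; // applying bit_delta(lastAcked, delta) reconstructs
    }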

The other thing I don't know how to do with Ruby is embed support for debugging Ruby applications into Spectre. Obviously, this ties into the object model as well. Right now, you're stuck thrashing about on the console with eval functions. I have a nasty feeling that doing it right is going to require me to start fooling around with the Ruby lexer definition so that I can determine, given a source file, how to fake "stepping into" and "stepping out of" blocks. Watches and all of that are easy. Ruby has a debugger, so maybe that's a good starting point.
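That said, Ruby's built-in set_trace_func (which the stock debug.rb is built on) already reports line, call, and return events, which might be enough to fake stepping without touching the lexer at all. A sketch of installing a trace hook from the embedding side; the Spectre.debug_event callback it forwards to is a hypothetical name:

    #include <ruby.h>

    // Install a global trace proc. Ruby fires it on every line, method
    // call, and method return, which is most of what step-in/step-out
    // needs; the engine-side handler decides when to pause.
    void install_debug_hook() {
        rb_eval_string(
            "set_trace_func proc { |event, file, line, id, binding, klass|\n"
            "  Spectre.debug_event(event, file, line) if defined?(Spectre)\n"
            "}");
    }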

Daniel's wishlist: better Ruby debugging, multi-core support (ooooh boy), and web app integration.



So, yeah. Not dead. Just not Gruedorfing.