Just to show off some of the same old stuff, I opted to post new screenshots of my engine. I recently decided to look into deferred lighting, and this is the result. Overall, performance is about equal to the standard lighting method (rendering all the geometry once per light pass), but it really shines with more light sources at once. I can zoom out of the stage and still get a constant frame rate, even with 17 lights showing. The frame rate is fixed (as are the logic/input update steps); I'm using 20 FPS currently because it provides a smooth rate of motion without missing any user input. I also support a "pluggable" post-processing step (though the steps themselves are currently fixed: I simply derive a new filter from my base class and add it to the list manually). Here are some specs:
Pluggable (insert system here): I use plugins for many things, including input, audio, logic, world data, mesh data, effects, etc.
Cross-platform base code, currently only supporting Win32, but versatile enough to support other OSes.
Pluggable rendering system, currently using Direct3D 9.0, with an OpenGL plugin in the works (it doesn't crash anymore, but doesn't render anything yet).
Uses vertex/pixel programs for rendering, including a custom fragment parser/combiner:
I use the parser for the mesh/world plugins: they store a fragment that transforms the vertex positions into world space and retrieves the texture coordinates, normals, etc. The core engine then takes that base and works from there to perform the remaining transformations, per-pixel lighting, and so on. The only downside is that the transformation gets split into two steps instead of a single transform. Deferred lighting greatly reduces the waste by only requiring the geometry to be rendered once, but I'm still working on optimizing this (I know I could easily transform by the final combined matrix, but it doesn't fit well into my implementation so far).
3D Studio Max export plugins for everything, including static meshes, skinned actors (and their animations), and the world data.
World data currently supports a portal visibility system, using author-defined sectors.
Static meshes are non-animated meshes, like crates, chairs, etc.
Skinned actors use 3D Studio Max's built-in bone system.
Almost completely programmed from scratch. All my core libraries (math, file loading, error handling, UI, etc.) are my own work. Only a few third-party libraries are in use: DevIL for image loading/saving, zlib for ZIP archive support, Ogg Vorbis for OGG audio streaming, and TinyXML for loading UI elements (okay, maybe a handful, not just a few).
So far everything is working fine without dropping below my desired frame rate. The next step is the physical aspect, which is why I'm currently trying to integrate a third-party physics library into the engine; I'm looking at ODE and Tokamak at the moment. I've implemented both before, just with unsatisfactory results, so now I'm going to fine-tune one or the other to see if I can get it working like I want (or at least close to it). I'm also looking into HDR as a post-processing effect, just for that added splash.