Well, let me begin by saying that I'm actually writing this techfile for the second time. Last time it was stripped from the e-mail I sent to Kurt and myself (I'm sure I actually attached it). I think that is actually a slight windfall for the readers, because for once, I get to think this one over twice ;).
Anyway, as I have mentioned lately (to anyone who cared to listen to a drunken man's rambling debacle of speech), I've started to focus more on my engine project. Ray tracing is good for a bit, but I needed to do some engine coding again.
Now the engine's rendering is based around a fairly simple paradigm. The world is made from objects I like to call HCMs (highly connected meshes). That means each object has many shared vertices/edges, which is perfect for modern hardware rendering. It enables fairly complex worlds to be shoved through an API easily. It also works well with LoD techniques a la VIPM.
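To make that concrete, here's a minimal sketch of what an HCM boils down to on the API side (all the names here are my own, hypothetical - this isn't engine code): one shared vertex pool plus an index list, so connected triangles reuse vertices instead of duplicating them. The "sharing" figure is just how many times the average vertex gets referenced - a disconnected triangle soup scores 1, an HCM scores well above that.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical minimal mesh types - just enough to show the sharing.
struct Vertex { float x, y, z; };

struct Mesh {
    std::vector<Vertex>   vertices; // shared vertex pool
    std::vector<uint32_t> indices;  // 3 indices per triangle
};

// Average number of times each vertex is referenced by a triangle.
// Triangle soup = 1.0; a highly connected mesh is well above 1.
double vertexSharing(const Mesh& m) {
    return m.vertices.empty() ? 0.0
         : double(m.indices.size()) / double(m.vertices.size());
}
```

For example, two triangles sharing an edge to form a quad use 4 vertices and 6 indices, giving a sharing factor of 1.5; a soup version would need 6 vertices for a factor of 1.0.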
One of the problems with HCMs, however, is lighting. It can be fairly hard to light map HCMs, as they cannot easily be given a bi-parametric mapping automatically without ruining the connectivity. And although vertex lighting can be nice, splits at discontinuities are needed to properly capture shadows.
One idea I hit upon was to force most surfaces to be bi-parametric. Bezier patches, for example, are bi-parametric. The advantage of bi-parametric surfaces is that a perfect lightmap mapping exists, and you don't have to worry about problems at edges etc. However, to me this wasn't really a suitable solution, as it is too restrictive.
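To show why the lightmap mapping is "perfect" in the bi-parametric case: a bicubic Bezier patch is evaluated straight from its (u,v) parameters, and that same (u,v) pair doubles as a seam-free lightmap coordinate. A sketch (my own notation, nothing from the engine):

```cpp
struct Vec3 { float x, y, z; };

// Cubic Bernstein basis B0..B3 evaluated at t in [0,1].
static void bernstein3(float t, float b[4]) {
    float s = 1.0f - t;
    b[0] = s * s * s;
    b[1] = 3.0f * t * s * s;
    b[2] = 3.0f * t * t * s;
    b[3] = t * t * t;
}

// Evaluate a bicubic Bezier patch (4x4 control net) at (u,v).
// The same (u,v) is a valid lightmap coordinate for the patch.
Vec3 evalPatch(const Vec3 P[4][4], float u, float v) {
    float bu[4], bv[4];
    bernstein3(u, bu);
    bernstein3(v, bv);
    Vec3 r{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float w = bu[i] * bv[j];
            r.x += w * P[i][j].x;
            r.y += w * P[i][j].y;
            r.z += w * P[i][j].z;
        }
    return r;
}
```

Every point of the patch gets exactly one (u,v), so there are no edge seams or stretched texels to fight - which is precisely what an arbitrary HCM doesn't give you.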
Vertex lighting with splits sounds interesting enough - and carries across well to HCMs - but the principal problem comes with the creation of extra triangles. Also, neither of these methods is great for dynamic updating. Various projection/light-texture ideas sound good in theory, but create more state changes, lower precision, etc.
In the end, I had an idea I liked. It's sort of a compromise, but I have a feeling it may end up working rather well. Basically, subdivision principles would be applied to increase the number of triangles where needed. The subdivision, although not a splitting method, would be used to create a pseudo discontinuity mesh. The subdivision would occur on static objects only; dynamic objects wouldn't be subdivided. Calculating which surfaces to subdivide for light is an interesting problem - the concept of photon mapping comes to mind, in that it can estimate where subdivision is likely needed.
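To sketch that photon-mapping angle (this is my guess at how it would work, not anything implemented - all names are made up): fire rays from each light, bin the hits per triangle, and flag triangles whose incoming light varies a lot across their samples. Those are the ones straddling a shadow boundary, and hence worth subdividing.

```cpp
#include <vector>

// Hypothetical per-triangle record of light samples landing on it.
// Each sample is an irradiance value; 0 means the ray was occluded.
struct TriSamples {
    std::vector<float> irradiance;
};

// A triangle whose samples span a wide irradiance range is likely
// straddling a shadow edge, so it's a candidate for subdivision.
bool wantsSubdivision(const TriSamples& t, float threshold) {
    if (t.irradiance.size() < 2) return false; // too few samples to judge
    float lo = t.irradiance[0], hi = t.irradiance[0];
    for (float v : t.irradiance) {
        if (v < lo) lo = v;
        if (v > hi) hi = v;
    }
    return (hi - lo) > threshold;
}
```

A fully lit or fully shadowed triangle has near-uniform samples and is left alone; only the boundary triangles pay the subdivision cost, which is the whole point of a pseudo discontinuity mesh.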
Another method I have in mind is a concept similar to photon mapping - except more related to direct illumination than global (the complexity of the world probably rules the latter out). If I'm right about it, I'll explain in more detail.
This leaves an interesting possibility though - those extra triangles could be utilised in some way to add more detail (possibly noisy) to a mesh, although as yet I'm not sure how. It might also mean I have to add a light-discontinuity component to my LoD error metric.
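On that last point, the error metric change might look something like this (purely speculative - geomError stands in for whatever the VIPM metric already computes): the cost of collapsing an edge gains a term for the baked lighting difference across it, so collapses that would smear a shadow boundary are penalised.

```cpp
// Hypothetical combined LoD error for an edge collapse:
//   geomError   - the existing geometric deviation from the VIPM metric
//   lumA, lumB  - baked vertex luminances at the edge's two endpoints
//   lightWeight - how strongly lighting discontinuities resist collapse
float collapseError(float geomError, float lumA, float lumB,
                    float lightWeight) {
    float lumDelta = lumA > lumB ? lumA - lumB : lumB - lumA;
    return geomError + lightWeight * lumDelta;
}
```

With lightWeight at 0 this degrades to the plain geometric metric, so the lighting term can be tuned (or switched off) without touching the rest of the LoD code.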