Well, I've been looking at the lay of the land on the Claustrophobic Irony front, and decided that, as that land lies in an urban fashion, I am going to write a fractal city generator that generates some cool buildings, roads and such. I'm looking at generating the roads first and the buildings second, and possibly adding subterranean stuff as well.
I've thought of some ideas to produce some really cool buildings, mainly using techniques considered the ground rules of fractals. For example, the roads can be made by breaking lines at "random" points: take a triangle, choose two of its edges, split each at a random point, and connect the split points. Pretty simple. You then use this pattern for the base shape of your building.
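That edge-splitting step can be sketched in a few lines (a minimal Python sketch of the idea, not code from the generator itself; all the names here are mine):

```python
import random

def split_triangle(tri, rng=random.random):
    """Pick two edges of a triangle, split each at a random point,
    and connect the split points -- one subdivision step for a
    road/footprint pattern."""
    a, b, c = tri

    def lerp(p, q, t):
        # point a fraction t of the way from p to q
        return (p[0] + (q[0] - p[0]) * t, p[1] + (q[1] - p[1]) * t)

    # split edges ab and ac at random parameters in (0, 1)
    p = lerp(a, b, rng())
    q = lerp(a, c, rng())
    return p, q  # the new segment connecting the two split points
```

Applying this recursively to the sub-triangles gives the fractal pattern; the returned segment is both a road and an edge of the next building footprint.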
Of course you mix some up. I'm not sure how many of you are familiar with Koch (sounds rather |)|R+Y) patterns. Some interesting random variations, with random stopping points and random split points, can create nicely shaped buildings. If you wanted to create a nice futuristic environment, I am sure some Koch buildings would be cool. Another very cyberpunk thing would be some nice Perlin-noise-based "dirty" detail textures, to really give that gritty feel. Of course the atmospheric lighting needs to be used as well...
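A randomized Koch-style variation with random split points and random early stopping might look roughly like this (a hypothetical sketch; the probabilities and displacement range are numbers I made up, not tuned values):

```python
import random

def koch_variant(p0, p1, depth, rng):
    """One edge of a Koch-style outline: randomly pick a split point,
    displace it perpendicular to the segment, and recurse, with a
    random chance of stopping early at each level."""
    if depth == 0 or rng.random() < 0.2:   # random stopping point
        return [p0, p1]
    t = rng.uniform(0.3, 0.7)              # random split point
    mx = p0[0] + (p1[0] - p0[0]) * t
    my = p0[1] + (p1[1] - p0[1]) * t
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    h = rng.uniform(-0.25, 0.25)           # random bump height
    mid = (mx - dy * h, my + dx * h)       # push out along the normal
    left = koch_variant(p0, mid, depth - 1, rng)
    right = koch_variant(mid, p1, depth - 1, rng)
    return left[:-1] + right               # drop the duplicated midpoint

outline = koch_variant((0.0, 0.0), (1.0, 0.0), 4, random.Random(42))
```

Run it on each edge of a base polygon and you get a jagged, irregular footprint instead of the textbook snowflake.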
And I am still thinking about how to do that. I thought really hard about calculating finite diffuse lighting using fractals and rays to see what happened. I am also thinking about different things: I like the idea of using vertex shading, creating spider-webbed highlights, and LOD-based geometry lighting (some pregenerated, some on the fly). I also had a quick look at canned light sources, but I saw the word lumigraph and was scared off by the size ;-)
Spherical subdivision (splitting up a sphere into segments and such) seemed like a good idea. One thing that can be done with this is using a 4D "pulse" representation, splitting the sphere at shadow occluders and using polar, or unit, coordinates to represent the sphere segments. Being a pulse, these spherical segments also follow wave laws, for both diffuse and specular systems. It is important to note, though, that diffraction of light can pretty much be ignored: diffraction is inversely proportional to the size of the gap and proportional to wavelength (if I remember correctly). Light has a very small wavelength, in the nanometers, so you need a rather small gap to get any noticeable diffraction, and then we probably wouldn't be able to see it anyway.
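The back-of-the-envelope check (assuming my physics is right): for a single slit the first dark fringe sits at sin θ = λ/d, so green light through even a millimetre-wide gap only spreads by about half a milliradian, far below anything a renderer would show:

```python
import math

def first_minimum_angle(wavelength_m, gap_m):
    """Single-slit diffraction: the first dark fringe sits at
    sin(theta) = wavelength / gap, so the spread grows with
    wavelength and shrinks with gap size."""
    return math.asin(wavelength_m / gap_m)

# green light (~500 nm) through a 1 mm gap:
theta = first_minimum_angle(500e-9, 1e-3)  # ~0.0005 rad
```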
The representation I prefer uses splitting planes from the light source, creating beams that get larger away from the light source. We split surfaces depending on where these beams hit them, then use linear shading and get a reasonable result. Also note that because a beam can be split up, we can create soft shadows by toning the linear shading down, like antialiasing. So this can allow for realtime soft shadows (e.g. if something enters one of these "beams", the things behind it are also shadowed). Of course, we have that demonic thing called LOD, which helps make this viable by checking the difference between nearby points, so you end up with more detailed lighting closer up ;-) While this is not radiosity, it is pretty cool.
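The "tone the linear shading down" idea, reduced to a 1D cross-section, could look something like this (a hypothetical sketch; real beams are of course volumes between splitting planes, not intervals):

```python
def beam_shade(light_intensity, beam_span, occluder_span):
    """Scale the shading by the fraction of the beam's cross-section
    left uncovered by an occluder -- the antialiasing-style
    soft-shadow idea, in 1D.  Spans are (lo, hi) intervals."""
    b0, b1 = beam_span
    o0, o1 = occluder_span
    overlap = max(0.0, min(b1, o1) - max(b0, o0))
    covered = overlap / (b1 - b0)          # 0 = fully lit, 1 = fully blocked
    return light_intensity * (1.0 - covered)
```

A half-covered beam gives half intensity, so surfaces behind a partial occluder land in penumbra instead of snapping between lit and shadowed.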
And I currently like the idea of using two passes: first texture + vertex diffuse lighting, second detail mapping. Specular lighting as a third pass is viable, but it could also be applied to vertex lighting by simply ripping values out of a phong map and using a color look-up. I'll see. Using n environment map passes for specular lighting is ridiculous. Thank god for OGL's easy optimization though. Does anyone ever use saved lists? If you used OGL rotation and transformation, then you could compile each polygon statically as a display list, couldn't you? Otherwise I can use vertex arrays. Compiled vertex arrays are not supported by all drivers.