Well, maybe it is something I inherited from an ancestor somewhere, but I'm beginning to wonder about all these innovative things I'm contemplating. I know innovation is good, but I'm beginning to wonder if I'm letting my success with some innovations blind me to the failures of others. Don't get me wrong, I'm not exactly a middle-aged scholastic clinging to an abstract theory that's impossible to prove but easy to dismiss. I've very much founded my whole programming career on providing the most practical solutions I can, given a set of circumstances, which is why I innovate. But when you make it so you only *ever* do something different, you end up just like the Flat Earth Society, clinging to an idea even after it has been proved wrong. So don't feel that you always have to beat out a new path. Often there is someone who has done the work before, and done it in more detail. Some researchers might spend a year and a massive amount of research on just one project, and they learn a lot about that special area along the way. That doesn't mean you have no chance to do better, or that you can't create your own solution better suited to your needs, but it does mean you should take stock of the existing work and see if it already has something that is best for you.
Now that my little perspective on a segment of life has been purged from my unholy cerebrum, I can start talking about the interesting stuff.
I've been looking into some more stuff on light, and I've found a major problem in using a physically correct lighting system on 3D hardware. That is, the black absolute. I was going along the line of a unified lighting model that had neither a separate specular nor diffuse component, but rather a level of diffuse component. This component determined not only the level of diffuse on the surface, but also its shininess. The system cannot represent a one hundred percent mirror, but this wasn't a problem, as they don't exist. You can get as close as a floating point number allows, meaning you get sharp reflections, but not the equivalent of a zero-diffuse planar mirror. The problem didn't lie in distance attenuation either (specular interreflectance doesn't attenuate spherically from its point of reflection, so you can't use the 1/(4*Pi*r^2) formula used in diffuse attenuation calculations); that problem was also solved, using a representation based on the level of the diffuse component.
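To pin down the attenuation point, here's a minimal C sketch of the inverse-square falloff I mean for the diffuse term (the function name and intensity units are just for illustration):

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Diffuse energy from a point source spreads over the surface of a
 * sphere of radius r, so the received intensity falls off as
 * 1/(4*pi*r^2). A specular reflection doesn't re-spread spherically
 * from its point of reflection, so this formula doesn't apply to it. */
static double diffuse_attenuation(double source_intensity, double r)
{
    return source_intensity / (4.0 * M_PI * r * r);
}
```

Doubling the distance quarters the received energy, which is exactly the behaviour a mirror-like reflection doesn't share.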
In fact, the equation itself is fine. The problem lies in the fact that it is very hard to represent as a unified component. I chose lumiaries to represent the unified component, as only a 4D radiance function would work. In true lighting, a lit surface has the colour it does because it absorbs some wavelengths (energies) of photons and reflects others. So you must use some kind of multiply blend, but one that isn't capped at 1 and allows floating point numbers. Using a 2x modulate blend, this isn't much of a problem. The problem I didn't see, but should have, was that black is an absolute on your video card: three zeros for your RGB. In reality, this is not the case. Black is merely a surface that absorbs most wavelengths of light (it has a low threshold frequency). This is why black surfaces get hot: they absorb more energy. But it is a value greater than zero, and it can be modulated to produce brighter results. On your video card, however, no matter what you modulate black with, you get black. This is bad when a very low amount of diffuse is represented on a surface with a black base colour. You end up with your lovely view-dependent highlight going across the surface in a nice slidy fashion, except where your texture is black.
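A quick sketch of why the black absolute bites with a 2x modulate blend (per-channel, values in [0,1] as the hardware sees them; the names are mine):

```c
/* 2x modulate: result = min(base * light * 2, 1). With a base channel
 * of exactly 0 (hardware black), no light value can ever brighten it,
 * even though a real black surface still reflects *some* energy. */
static float modulate_2x(float base, float light)
{
    float r = base * light * 2.0f;
    return (r > 1.0f) ? 1.0f : r;
}
```

So a texel of (0,0,0) swallows the view-dependent highlight entirely, while even a tiny non-zero base value could still be pushed up by a bright enough light term.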
Although, even worse was the fact that it wasn't me who figured it out :) I was telling my friend Jack about how light could be precalculated and represented in a view-dependent fashion using a lumiary. He got the idea quickly enough. He asked how I would blend it with the base. I told him. He asked about black. That was when I realised I had been thinking in physics terms instead of graphics terms. That meant back to the drawing board.
Another of the things I've been looking into today (apart from the light stuff above) is optimised rendering and caching of bezier surfaces (which I'm getting pretty finely honed at). There is one fact that's bugging me though. The two fastest ways I can think of to render such a surface in OpenGL are compiled vertex arrays with strips, or vertex arrays with strips plus world-space backface culling. OpenGL's backface culling occurs in screen space. Needless to say, this means that vertices that are never visible still get transformed if you don't cull in world space. The problem, as I see it, is that you have to generate the normal (un-normalised, luckily) and then do two dot products, which is pretty expensive. Not to mention that by doing this to avoid transforming unneeded vertices, you lose the advantages of compiled vertex arrays, so each shared point has to be transformed twice (thank god for strips).
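The world-space test I mean looks something like this sketch: build the un-normalised face normal with a cross product, then dot it against the direction to the eye (helper names are mine; the per-triangle cost can be counted in different ways):

```c
typedef struct { float x, y, z; } vec3;

static vec3 v_sub(vec3 a, vec3 b)
{
    vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

static vec3 v_cross(vec3 a, vec3 b)
{
    vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

static float v_dot(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* A CCW-wound triangle is back facing when its (un-normalised) normal
 * points away from the eye -- no normalisation and no screen-space
 * transform needed, so rejected vertices are never transformed at all. */
static int is_backfacing(vec3 v0, vec3 v1, vec3 v2, vec3 eye)
{
    vec3 n = v_cross(v_sub(v1, v0), v_sub(v2, v0));
    return v_dot(n, v_sub(eye, v0)) <= 0.0f;
}
```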
But with compiled vertex arrays you have the disadvantage that even if all the triangles in the bezier are back facing, they are all still transformed. One way I can think of to get around part of this is to tessellate a bezier at precompilation time, and then sample whether it is back facing into an octree or kd-tree. This is a pretty simple and memory-saving way to go. You can also add it to your visibility computation stuff. After all, beziers are expensive surfaces, and culling them (e.g. against a bounding box, or a square through a portal) can be well worth it. I'm not sure, but I think Q3:A suffers from the slowdown of transforming back-facing beziers. The vertex culling extension is one way around it, but it is no good for dynamic tessellation. However, I think Q3:A actually tessellates into an array using power-of-two sizes, meaning you can cut out rows and columns adaptively, but you cannot LOD to any arbitrary level. It does mean that points exist in constant places, though, and normals can be precalculated. Q3:A doesn't use the extension, but I imagine it would offer a speed-up.
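The power-of-two grid trick can be sketched like so: if the full tessellation has (2^n)+1 vertices per side, stepping the row and column indices by a power of two lands only on vertices that already exist at full detail, so positions and normals can be precalculated once (function and parameter names are mine, just for illustration):

```c
/* Vertices per side when a ((2^max_level)+1)-square grid is sampled at
 * a coarser level by skipping rows/columns in power-of-two steps.
 * Every coarse vertex coincides with a full-detail vertex, so nothing
 * moves between LODs -- but you can only drop whole rows and columns,
 * not refine to an arbitrary level. */
static int lod_side(int max_level, int level)
{
    int full = (1 << max_level) + 1;      /* e.g. max_level 3 -> 9 per side */
    int step = 1 << (max_level - level);  /* rows/columns skipped per move  */
    return (full - 1) / step + 1;
}
```

At max_level 3 the full grid is 9x9; level 2 samples it as 5x5, down to a 2x2 quad at level 0, all reusing the same precalculated vertices.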
And finally, I've been looking into virtual machines, languages and various other things. Most of the ideas I have are based on advanced object-oriented models working on a thread-based system, where each object has conditional priority. The good thing about a virtual machine is that you can write it so objects are part of the nature of the system. One idea came from finding a quantum computer language a guy designed. I think this is a cool idea, and it is possible that this could actually be faster for things like AI than a standard VM. I've also been thinking about complex instructions for thread-to-object messaging and so on, which would allow each object to post a message to another object that could be picked up when that object enters the scheduler cycle. I remember a conversation I had a while ago with my father about multi-processor machines he had seen (he has been around computers for a long time), and one of the most interesting was a machine with threading and multiprocessor control in hardware. It had a fast stack, and each processor would work through whatever thread was on top of the stack until it hit a locked memory position, or had used up too much scheduler time, at which point the thread would be pushed back onto the stack. The system was designed very well from what I could gather. Something for me to consider.
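The object-to-object messaging idea might look something like this sketch: each object owns a small mailbox, any other object can post to it, and the owner drains it when the scheduler gives it a cycle (the fixed-size ring buffer and plain int messages are assumptions to keep it short):

```c
#define MAILBOX_SIZE 8

/* One mailbox per object: a fixed-size ring buffer of pending messages. */
typedef struct {
    int msgs[MAILBOX_SIZE];
    int head, tail;   /* head: next message to read; tail: next free slot */
} mailbox;

/* Another object posts a message; returns 0 if the mailbox is full. */
static int mailbox_post(mailbox *m, int msg)
{
    int next = (m->tail + 1) % MAILBOX_SIZE;
    if (next == m->head) return 0;
    m->msgs[m->tail] = msg;
    m->tail = next;
    return 1;
}

/* Called when the owning object enters its scheduler cycle;
 * returns 0 when nothing is pending, 1 with the message otherwise. */
static int mailbox_receive(mailbox *m, int *msg)
{
    if (m->head == m->tail) return 0;
    *msg = m->msgs[m->head];
    m->head = (m->head + 1) % MAILBOX_SIZE;
    return 1;
}
```

Messages come out in the order they were posted, and the sender never blocks on the receiver, which fits the "picked up on the next scheduler cycle" model.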