I'm back. Whoohoo!
How've you been?
How are you now?
Having spent almost a year unable to do any serious coding at all, I'm now back with a vengeance :)
Rendering and content generation engine idea
NOTE: I ramble on a bit here - I'll tidy all these ideas up for my next tech file :)
I've had some very interesting ideas about dynamic visibility checking and dynamic content generation [in fact it was only a couple of days ago :)]. I've started work on designing [although not programming yet - someone will probably need to pay me to see this one through] a content creation and delivery engine [I haven't thought of a better name yet]. The basic gist of the system [note that I'm describing this in terms of a Quake-type environment, but it's equally applicable to other things] is that approximation and assumption on the part of the computer will not only cut level building times and requirements [you'll only have to hire one artist instead of 30 :)], but will make it possible for anyone to build game content, while at the same time expanding what you can actually do with game-type environments.
I'm not doing a good job of explaining this :) I'll split things up into the different strands of the problem I'm looking at:
)Level generation, rendering and urm... stuff
When we human beings stand in a room, anywhere in the real world, we know we're standing in a room. We know where the floor is [down :)], where the ceiling is and where the walls are [no matter their angle]. We can distinguish between decoration and basic structure. We can tell the difference between a picture on the wall and the wall itself.
In 3D game engines, even the best of them, the highest-level structure you usually have is a brush, or cube - sometimes only partitioned polygons. As far as information goes, this is such a waste, and I plan to correct that. Imagine - a human level designer thinks of a level in terms of rooms and corridors, he/she then has to do the rather harsh translation to polygons [including having to handle the limitations of whatever engine they're designing for] - then the machine has to process this big list of polygons with very little information about them, then some poor human games player has to translate it all back again :). Although you can't currently cut out the last step [roll on direct brain interfaces], there is no reason why a level designer shouldn't be able to build their levels like a human, rather than like a machine.
This extra information, though, opens up some interesting possibilities for the engine that has it, for example:
High-level connectivity information means no more BSPs/octrees etc. - you'll probably send large lumps of generated mesh to the accelerator, and having the information you do, it'll be easy to work out when to synthesize more.
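To make that concrete, here's a minimal sketch of what I mean by high-level connectivity information - a level stored as rooms and the links between them, rather than a polygon soup. All the names here are invented for illustration, not a real engine API:

```cpp
#include <string>
#include <vector>

// A doorway/corridor mouth leading into another room.
struct Connection { int toRoom; };

struct Room {
    std::string name;                  // "hall", "corridor" etc.
    std::vector<Connection> links;     // which rooms you can see/walk into
};

struct Level {
    std::vector<Room> rooms;

    int addRoom(const std::string& name) {
        rooms.push_back({name, {}});
        return (int)rooms.size() - 1;
    }
    void connect(int a, int b) {       // a two-way doorway
        rooms[a].links.push_back({b});
        rooms[b].links.push_back({a});
    }
};
```

With this much structure, "which rooms can I possibly see from here?" is a graph walk rather than a polygon-level query - which is exactly why BSPs etc. become unnecessary.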
The most important change it'll bring is the ability of the computer to synthesize an entire level and its content from just a few commands. If the computer knows about rooms, corridors, walls, chairs, floors, paintings, ashtrays etc. then someone like me, who spends most of their time programming and very little of it doing content generation, will be able to generate levels too :).
Of course, you need someone to tell the computer about walls, floors etc. but you only have to do that *once* :)
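The "tell it once" idea could look something like this - a table of knowledge about room types, described once, which the engine then uses to fill in content whenever anyone asks for that kind of room. The room types and contents below are made up purely for illustration:

```cpp
#include <map>
#include <string>
#include <vector>

// Described ONCE, by whoever teaches the computer about the world.
// After that, anyone can say "give me an office" and get plausible content.
std::vector<std::string> synthesizeContents(const std::string& roomType) {
    static const std::map<std::string, std::vector<std::string>> knowledge = {
        {"office",   {"desk", "chair", "ashtray", "painting"}},
        {"corridor", {"light fitting"}},
    };
    auto it = knowledge.find(roomType);
    return it == knowledge.end() ? std::vector<std::string>{} : it->second;
}
```

A real version would obviously synthesize geometry and textures with some randomness thrown in, not just pick from a list, but the principle - knowledge described once, content generated many times - is the same.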
)The Problems of getting Physics and Visibility to mesh in with the above
There are two areas that currently take up most of the thinking time of 3D engine programmers: visibility and physics. The structure of the engine I am [trying to :)] propose makes visibility handling simple. Basically you end up treating levels as blocks of static [generated] data, and connectivity information between blocks is implicit [I suppose it's a sort of weakly bound portal engine - a little maths will be required in the same vein as a portal engine, but you can spread the calculations over several frames easily]. I strongly believe that coarse visibility determination is the *correct* solution to the visibility problem. Doing work per poly now is stupid, and in a few years when accelerators are 10x faster, it'll be even more inefficient.
Anyway, once you're treating levels as blocks, and you know where your player is and how fast they can move etc., caching data into RAM and synthesizing geometry and textures becomes easy [esp. for Quake-like environments]. Note that the exact requirements depend on how you're using the system - for a client/server-type Quake game you need to have arbitrary bits of the entire level 'active' at the same time so you can calculate collisions for non-visible players.
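A rough sketch of what coarse, per-block visibility could boil down to: flood outwards from the block the player is standing in, through the implicit connectivity, and mark whole blocks as visible - never testing individual polygons. The depth cap stands in for the proper portal-style maths mentioned above:

```cpp
#include <queue>
#include <set>
#include <utility>
#include <vector>

// links[b] lists the blocks connected to block b.
// Returns every block within maxDepth hops of the player's block -
// these are the blocks worth sending to the accelerator.
std::set<int> visibleBlocks(const std::vector<std::vector<int>>& links,
                            int playerBlock, int maxDepth) {
    std::set<int> seen = {playerBlock};
    std::queue<std::pair<int, int>> open;   // (block, depth)
    open.push({playerBlock, 0});
    while (!open.empty()) {
        auto [b, d] = open.front();
        open.pop();
        if (d == maxDepth) continue;        // stop flooding past the horizon
        for (int n : links[b])
            if (seen.insert(n).second) open.push({n, d + 1});
    }
    return seen;
}
```

Because the result changes slowly as the player moves, this is exactly the sort of calculation you can spread over several frames.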
I think that keeping geometry for rendering and geometry for physics [collision handling etc.] separate is a good idea. You can avoid having to generate the geometry for rendering on the server in the example above. I also like the idea of dumping the precise, very mathematical physics simulation engines that are fashionable at the moment - they may be physically correct but they look like shit - in my opinion, home computers just aren't powerful enough to run proper physics simulations right now. When an object falls to the floor, I'd like it to bounce once and stop, not jitter about all over the place - I want to put some impressionism into physics simulation :)
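Here's a tiny sketch of what I mean by impressionistic physics - the numbers and names are made up, the point is the last line: as soon as a bounce gets small, just force the object to rest instead of letting it jitter forever:

```cpp
struct Ball { float y, vy; };   // height and vertical velocity

const float GRAVITY    = -9.8f;
const float DAMPING    = 0.3f;  // fraction of speed kept per bounce
const float REST_SPEED = 1.0f;  // below this, stop pretending to bounce

void step(Ball& b, float dt) {
    b.vy += GRAVITY * dt;
    b.y  += b.vy * dt;
    if (b.y <= 0.0f) {                     // hit the floor
        b.y  = 0.0f;
        b.vy = -b.vy * DAMPING;            // lose most energy on the bounce
        if (b.vy < REST_SPEED) b.vy = 0.0f; // too slow? just stop dead
    }
}
```

A "correct" simulation would keep integrating ever smaller bounces [and look twitchy doing it]; this one bounces once or twice and stops, which is all the player actually wants to see.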
I currently favour a more approximate type of physics calculation engine - I'm thinking about using voxels to allow dynamic deformation, but without using voxels for rendering. Remember, the engine will know what is a ceiling, what is a wall, what is a supporting structural beam :) You won't have to do horrible analytical physics to be able to collapse a building by blowing its walls out. Using a sparse voxel grid containing information about density and forces for points in the world will allow approximate [and the approximate bit is important - your average gamer doesn't give a fuck if the rocket launcher is 0.0001 too weak to demolish an armoured wall] guesses at how strong various bits of your level are. Updating the grid shouldn't be too hard or take too much time either - your CPU is hardly doing any work these days anyway :)
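A minimal sketch of that sparse grid idea - only solid cells are stored at all, a blast just knocks density off nearby cells with a rough distance falloff, and a cell counts as destroyed once its density hits zero. Everything here [the key packing, the falloff, the API] is invented for illustration:

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>

struct SparseVoxels {
    // Only cells that actually contain something are stored; empty space
    // costs nothing. Value is "how solid", 0..1.
    std::unordered_map<uint64_t, float> density;

    static uint64_t key(int x, int y, int z) {  // pack coords into one key
        return ((uint64_t)(uint16_t)x << 32) |
               ((uint64_t)(uint16_t)y << 16) | (uint16_t)z;
    }
    void set(int x, int y, int z, float d) { density[key(x, y, z)] = d; }
    bool solid(int x, int y, int z) const {
        auto it = density.find(key(x, y, z));
        return it != density.end() && it->second > 0.0f;
    }
    // Approximate blast: knock density off every stored cell in range,
    // scaled by a crude Manhattan-distance falloff. No analytical physics.
    void blast(int bx, int by, int bz, float force, int radius) {
        for (int x = bx - radius; x <= bx + radius; ++x)
        for (int y = by - radius; y <= by + radius; ++y)
        for (int z = bz - radius; z <= bz + radius; ++z) {
            auto it = density.find(key(x, y, z));
            if (it == density.end()) continue;
            float dist = (float)(std::abs(x - bx) + std::abs(y - by) +
                                 std::abs(z - bz));
            it->second -= force / (1.0f + dist);
        }
    }
};
```

The deliberate crudeness is the point - nobody will ever check whether the falloff curve is physically right, but collapsing a wall by zeroing its cells looks convincing.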
Again I have to stress that heavy approximation and assumption on the part of your computer are at the core of all these ideas.
Anyway, all this stuff is in its earliest stages of development [as I said before, only a couple of days' worth of thinking :)] - I'll write another post once I've thought about it all some more.
Visual C/C++ Project structure/Workspace structure
Since the last time I updated this tech file I've started (and finished :)) six different programming projects. Over that time I've refined the directory structure I use to hold my smaller projects. Most of you probably use this or a better structure already, but for those of you that don't, this may help.
For people asking about my audio work
Currently, I've suspended work on the audio-driven game I was working on, mainly because I got worried I wouldn't be able to build the 3D content it required myself, and partially because I've come up with something more interesting to work on :). I will be going back to it as soon as I can find some talented [and free :)] artists to help me out. I've received a large number of enquiries about both the low-level Win32 code required to get sound data and the DSP required to do anything useful with it. I've not uploaded the code for the audio project yet, and I probably won't until I start working on it again, but until then here are some very useful links to information about this subject.
This is *excellent* - lots of information about DSP maths, how and [more importantly] why things are done.
This is another excellent site - it has useful info on recording and playing sounds via the Win32 API, plus information on DSP [not so much on the theory though].
Also check out this book, it starts out fairly simple and explains everything you need to know about this topic:
First Principles of Discrete Systems and Digital Signal Processing - Robert D. Strum, Donald E. Kirk [Addison Wesley] ISBN 0-201-09518-1
What Am I Doing Now?
I'm working on a new website to replace the rather old and fairly inaccurate one. I'm going to upload all my older source code and executables to it [various 3D engines, the audio stuff etc.]. I've found that now I'm programming again, making time to build a website is pretty difficult, so don't expect it too soon :).
The art of the 64k intro
Fr-08. I'm sure you've all seen it - for those of you that haven't, it's a 64k intro/demo by 'farbrausch consumer consulting', and it's rather good :) I'd been thinking about building a 64k intro for ages; after watching this I finally pulled my finger out and started. Three or four years ago I was really into writing assembly language programs, but since then it's been pretty much reserved for writing the inner loops of texture mappers, which has always seemed a bit of a waste.
Most of the intro code I've written so far is x86 assembly language, although I've not done too much yet [the sound synthesiser and mixer are done]. I settled on a hybrid project structure that should enable me to write the intro much more quickly than in assembly language or C/C++ alone. The higher-level sections of the demo are all written in C++ [Direct3D / DirectSound interface, memory management]; the rest of the intro is written in assembly language - both from within MSVC.
I've designed things so that the intro can be compiled/assembled efficiently to either an executable or a DLL, and so that debug builds have error-checking code and full error messages [and also use the C runtime's memory management along with its lovely leak checking :)], whereas release builds are stripped down [using my own simple memory manager].
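The debug/release split boils down to something like this sketch - in debug builds a check prints a full error message with file and line, in release builds it compiles away to nothing. [The MSVC CRT leak checking mentioned above is Windows-specific, so it isn't shown here; the function and message are made up.]

```cpp
#include <cstdio>

#ifndef NDEBUG
// Debug build: full error message with file and line number.
#define CHECK(cond, msg) \
    do { \
        if (!(cond)) \
            std::printf("ERROR %s:%d: %s\n", __FILE__, __LINE__, msg); \
    } while (0)
#else
// Release build: the check vanishes entirely - no strings, no branch.
#define CHECK(cond, msg) do { (void)sizeof(cond); } while (0)
#endif

int initSound(int bufferSize) {
    CHECK(bufferSize > 0, "bad buffer size");  // stripped from release builds
    return bufferSize > 0 ? 0 : -1;
}
```

The `(void)sizeof(cond)` trick in the release branch keeps the condition syntactically checked without ever evaluating it, so debug-only checks can't silently rot.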
The aim of enabling the code to compile to a DLL was to make it accessible from Visual Basic - why would I want to do that? So I could write the sound/texture/3D content/music editors in it without having to duplicate code. It also makes debugging very easy - I can debug all my intro code from the intro executable. Debugging DLLs that are called from VB, especially when you're using DirectX, can be excessively time consuming [even more so when developing on Win98] - I've cut out that debugging step [or most of it anyway :)].
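For anyone wanting to do the same, the VB-callable exports look roughly like this - plain C linkage and the stdcall convention that VB's Declare statement expects. The EXPORT macro collapses to a normal function off Windows, so the same source still builds into the executable. The function itself is invented for illustration:

```cpp
#if defined(_WIN32)
// VB needs stdcall, undecorated (extern "C") exports.
#define EXPORT extern "C" __declspec(dllexport) int __stdcall
#else
// Non-DLL / non-Windows build: just an ordinary function.
#define EXPORT extern "C" int
#endif

EXPORT GenerateTexture(int width, int height) {
    // ...would synthesize a texture here; return 0 on success.
    return (width > 0 && height > 0) ? 0 : -1;
}
```

One gotcha I'd watch for: even with __stdcall, MSVC decorates exported names [e.g. _GenerateTexture@8], so you may need a .def file to get the clean names VB wants.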
Anyway, I'm up to a 5k executable at the moment - that's with sound synthesis, DirectSound and Direct3D setup and shutdown, sound mixing, timer and font handling code. Designing the code so that it's efficient both in the demo and when used by the editors via the DLL is challenging; music rendering and texture generation are next.
One small bit of advice for those of you using DirectX: in the DX8 SDK there is an executable in the utilities section called 'killhelp.exe'. If your program closes badly and doesn't shut down DirectX properly, running killhelp can save you from crashing your machine.
One other small bit of advice - read the bloody SDK documentation; 99% of the questions I've seen on programming message boards about DirectX could be solved by doing that :).
P.S. Feel free to email me.
P.P.S. I paid my £80 and bought myself Perfect Dark for my N64 - it's the best piece of game software engineering I've ever seen; I recommend you take a look if you set yourself high standards :)
P.P.P.S. Black and White looks set to be even more impressive than that :)