flipCode - LithTech 2 Tech Preview (Part III)
Update On The LithTech 2.0 Engine - 11/05/1999


I fired off an e-mail to Mike Dussault to see what's been going on with the LithTech 2 engine since our tech preview a while back. Thanks again to Mike for taking the time to respond. Check it out:

How is LithTech 2 coming along? Any major changes or added features since the interview we did a while back?

Currently, we're wrapping up a big R&D phase. For the past several months, all I've been doing is adding tons of new features, and some instability crept in here and there. So for the next few weeks, I'm only fixing bugs and updating features that are already in there. I'm glad to be doing this because everything is starting to look very solid.

Here are some of the new features that have been added since the LT2 interview in June.

S3 texture compression has been added. This gives us either 4x or 8x texture compression (depending on whether a texture has an alpha mask). It allows us to build all our textures in NOLF at twice the resolution we would otherwise have used, so either S3TC-enabled cards or cards with a lot of memory can get really sharp-looking textures (and we're doing a few tricks with detail textures that make them look even sharper!)

Even though the textures are larger, DEdit still treats the textures as their regular resolution, so the level designers don't have to scale down the larger textures on every surface they apply them to.

The biggest structural change to the engine is that now it uses a quadtree as its search structure. I'm so glad this got in! Previously, the whole world was represented as a BSP, but that really didn't work well for terrain and had some other difficult issues we were always dealing with. The quadtree has worked out beautifully for things like raycasting, collision detection, and queries (like "give me all the objects touching this box").

For indoor areas, we still use the same PVS visibility scheme as we did before, but the indoor areas simply sit on nodes in the quadtree and control visibility for those nodes. Anything not on those nodes uses the quadtree for visibility.

Terrain objects are chunked into sections corresponding to cubes at a certain level of the quadtree, then at runtime they are simply stuck on their appropriate nodes.
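The quadtree-as-search-structure idea described above can be sketched in a few dozen lines. This is a hedged illustration, not the actual LithTech code: the class and member names are invented, and real-world details (3D bounds, node pooling, the PVS hookup) are omitted. Each object is stored on the deepest node that fully contains its bounds, and a box query only visits nodes whose bounds overlap the query box.

```cpp
// Minimal quadtree sketch: insertion plus the "give me all the objects
// touching this box" query mentioned in the interview. Hypothetical names.
#include <cassert>
#include <memory>
#include <vector>

struct Box {
    float x0, y0, x1, y1;
    bool overlaps(const Box& b) const {
        return x0 <= b.x1 && b.x0 <= x1 && y0 <= b.y1 && b.y0 <= y1;
    }
    bool contains(const Box& b) const {
        return x0 <= b.x0 && b.x1 <= x1 && y0 <= b.y0 && b.y1 <= y1;
    }
};

struct Object { int id; Box bounds; };

class QuadNode {
public:
    QuadNode(Box bounds, int depth) : bounds_(bounds), depth_(depth) {}

    void insert(const Object& obj) {
        if (depth_ > 0) {
            for (int i = 0; i < 4; ++i) {
                Box child = childBounds(i);
                if (child.contains(obj.bounds)) {
                    if (!children_[i])
                        children_[i] = std::make_unique<QuadNode>(child, depth_ - 1);
                    children_[i]->insert(obj);
                    return;
                }
            }
        }
        // Object straddles the children (or we hit max depth): keep it here.
        objects_.push_back(obj);
    }

    // Collect every object whose bounds touch the query box.
    void query(const Box& box, std::vector<int>& out) const {
        if (!bounds_.overlaps(box)) return;  // prune whole subtree
        for (const Object& o : objects_)
            if (o.bounds.overlaps(box)) out.push_back(o.id);
        for (const auto& c : children_)
            if (c) c->query(box, out);
    }

private:
    Box childBounds(int i) const {
        float mx = (bounds_.x0 + bounds_.x1) * 0.5f;
        float my = (bounds_.y0 + bounds_.y1) * 0.5f;
        switch (i) {
            case 0:  return {bounds_.x0, bounds_.y0, mx, my};
            case 1:  return {mx, bounds_.y0, bounds_.x1, my};
            case 2:  return {bounds_.x0, my, mx, bounds_.y1};
            default: return {mx, my, bounds_.x1, bounds_.y1};
        }
    }

    Box bounds_;
    int depth_;
    std::vector<Object> objects_;
    std::unique_ptr<QuadNode> children_[4];
};
```

Raycasting and collision detection use the same pruning idea: any subtree whose bounds the ray or mover misses is skipped entirely, which is what makes the structure work so well for sparse outdoor terrain.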

Our model objects are now always loaded into a single contiguous chunk of memory. They used to do hundreds of allocations as they were being loaded, but with the way they're loaded and integrated with the tools, it's very difficult to optimize the structures to be more allocation count-friendly. Now when a model is saved from a tool, it figures out how much total memory it will need and stores that with the file. At runtime, we make one big block for each model and store it all there.
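The one-big-block load described above is essentially a bump (arena) allocator fed by a size the tool precomputed at save time. The sketch below is an assumption about the shape of that scheme, with invented names; the real file format and structures are certainly more involved.

```cpp
// Sketch: the tool writes totalBytes into the model header; at load time the
// engine makes ONE allocation and carves sub-chunks out of it with a bump
// allocator, instead of doing hundreds of small allocations.
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

class BumpAllocator {
public:
    explicit BumpAllocator(size_t totalBytes) : block_(totalBytes), used_(0) {}

    // Hand out the next aligned sub-chunk of the single big block.
    void* alloc(size_t bytes, size_t align = 8) {
        size_t p = (used_ + align - 1) & ~(align - 1);
        if (p + bytes > block_.size()) return nullptr;  // tool's size was wrong
        used_ = p + bytes;
        return block_.data() + p;
    }

    size_t used() const { return used_; }

private:
    std::vector<uint8_t> block_;  // the one contiguous chunk
    size_t used_;
};

struct ModelHeader {      // written by the tool at save time (hypothetical)
    size_t totalBytes;    // precomputed size of all the model's data
    size_t numVerts, numTris;
};

struct Vertex { float x, y, z; };
struct Tri { uint16_t i[3]; };

struct Model {
    Vertex* verts;
    Tri* tris;
};

Model loadModel(const ModelHeader& hdr, BumpAllocator& arena) {
    Model m;
    m.verts = static_cast<Vertex*>(arena.alloc(hdr.numVerts * sizeof(Vertex)));
    m.tris  = static_cast<Tri*>(arena.alloc(hdr.numTris * sizeof(Tri)));
    // ...file data would be read directly into these sub-chunks...
    return m;
}
```

Besides cutting allocator overhead, keeping all of a model's data contiguous also means freeing the model is a single deallocation and the data tends to be cache-friendly.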

Model LOD was reimplemented. It uses a very simple precalculated-LOD system which works great for NOLF. A precalculated LOD costs exactly as much to render as its triangle count (so if you have a 1-million-poly model and it's using an LOD with 100 tris, it will be very cheap to draw). NOLF uses LOD offsets for medium and low detail, so the artists have a lot of control over what a model looks like at those detail levels. The engine interpolates between LODs, so you can barely tell it's happening.

One problem in our Shogo LOD system (which was similar to Intel MRM) was that the texture coordinates got messed up as you applied LODs to the model. In the new system, the texture coordinates are barely affected, so an LOD with fewer triangles looks better.
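Picking a precalculated LOD by distance, with the detail-level offsets mentioned above, can be as simple as the sketch below. The thresholds, names, and offset convention are all made up for illustration; only the idea (distance selects a baked LOD, and medium/low detail shift the choice toward coarser LODs) comes from the interview.

```cpp
// Hypothetical LOD selection: lods[] is sorted near-to-far, and detailOffset
// biases the pick toward coarser LODs on medium/low detail settings.
#include <cassert>
#include <vector>

struct LOD {
    int numTris;        // render cost is proportional to this
    float maxDistance;  // use this LOD while distance <= maxDistance
};

int pickLOD(const std::vector<LOD>& lods, float distance, int detailOffset) {
    int i = 0;
    while (i + 1 < (int)lods.size() && distance > lods[i].maxDistance)
        ++i;                                // walk to the right distance band
    i += detailOffset;                      // e.g. +1 on medium, +2 on low
    if (i >= (int)lods.size()) i = (int)lods.size() - 1;
    return i;
}
```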

Separate from the LOD, I found a big optimization that allows us to render our skeletally-animated models about 40% faster.

In order to make our weapons look wider, the weapon models can now be drawn with a custom field of view. This sort of undoes the perspective distortion on close-up objects and makes the weapons feel a lot more solid (it's hard to describe; you have to see it to know what I mean).
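The custom weapon FOV above amounts to building a second projection for the viewmodel pass: draw the world with the normal FOV, then redraw the weapon with a narrower one so nearby geometry is less stretched. A tiny sketch of the math (not LithTech's actual renderer code):

```cpp
// Perspective focal scale: a larger FOV yields a smaller on-screen scale,
// so drawing the weapon with a NARROWER custom FOV magnifies it slightly
// and reduces the wide-angle distortion on close-up geometry.
#include <cassert>
#include <cmath>

float focalScale(float fovRadians) {
    return 1.0f / std::tan(fovRadians * 0.5f);
}

// Hypothetical frame order:
//   1. render world with worldFov
//   2. clear the depth buffer
//   3. render weapon model with its own weaponFov (focalScale(weaponFov))
```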

Models can now draw with pseudo-specular-highlights which helps them look shiny and metallic.

Chromakeying has been added for models and WorldModels. This allows us to have transparent objects without all the sorting problems that translucent objects always have.
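Chromakeying sidesteps the sorting problem because each texel is either fully opaque or fully cut away, so the depth buffer handles ordering and no back-to-front sort is needed. A minimal sketch of the texture-side half (invented names; the renderer would then draw these with alpha test rather than blending):

```cpp
// Turn a designated key color into alpha 0 and everything else into alpha
// 255, producing an alpha-tested "cutout" texture.
#include <cassert>
#include <cstdint>
#include <vector>

struct RGBA { uint8_t r, g, b, a; };

void applyColorKey(std::vector<RGBA>& texels, RGBA key) {
    for (RGBA& t : texels) {
        bool match = (t.r == key.r && t.g == key.g && t.b == key.b);
        t.a = match ? 0 : 255;  // fully off or fully on: no blend sorting
    }
}
```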

We now have a really cool lightmap animation system in place. Any game object can create a bunch of lightmap animations during preprocessing, and animate those at runtime (all these lights cast shadows). This allows us to do all sorts of things like:
  • keyframed lights with shadows
  • flickering lights with shadows
  • the whole time of day system can use shadows (so the world shadows move as the sun passes overhead)
  • light floods out as doors open
  • fans show rotating shadows as they rotate

The way the animations are stored is really cool, too: for each light and each poly that the light sees, we store a run-length-encoded map of which points lie in shadow. At runtime, we decompress this and do some dynamic lighting. This allows us to change the radius and color of the light at runtime, and the compression ratio is ridiculously high (between 20:1 and 100:1). (Billy Zelsnack gave me this idea.)
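The run-length encoding of the per-poly shadow mask can be sketched as below. The exact on-disk format is my guess, not the actual LithTech one; the point is that long runs of lit or shadowed lightmap points collapse to a few run lengths, which is where ratios like 20:1 to 100:1 come from.

```cpp
// RLE for a boolean shadow mask: store alternating run lengths of
// lit/shadowed points, with the first run counting lit (false) points.
#include <cassert>
#include <cstdint>
#include <vector>

std::vector<uint16_t> rleEncode(const std::vector<bool>& shadowed) {
    std::vector<uint16_t> runs;
    bool cur = false;   // first run is lit points (may be length 0)
    uint16_t len = 0;
    for (bool s : shadowed) {
        if (s == cur) {
            ++len;
        } else {
            runs.push_back(len);
            cur = s;
            len = 1;
        }
    }
    runs.push_back(len);
    return runs;
}

std::vector<bool> rleDecode(const std::vector<uint16_t>& runs) {
    std::vector<bool> out;
    bool cur = false;
    for (uint16_t len : runs) {
        out.insert(out.end(), len, cur);
        cur = !cur;
    }
    return out;
}
```

Because only the occlusion pattern is baked, the decompressed mask can be combined with any light radius and color at runtime, which is exactly why those properties stay animatable.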

You can view level lighting in DEdit now. This has really helped in tweaking the lighting in a level.

You can also manually edit the lightmap data in DEdit. The interface looks really cool and it's fun to do, but I have doubts as to whether this will actually be useful, since your changes go away if you reprocess the level.

The engine interfaces are starting to look a lot like COM, and I never thought I'd say it, but that's actually great! Using abstract interfaces solves a lot of the annoying quirks of C++ (like being able to see a class's private data).

I doubt we'll use anything like the AddRef/Release system COM has, but I think we may add an IUnknown::QueryInterface kind of thing. This would be particularly useful for the tool callbacks, so the tools can get an object and ask if it supports the lightmap animation interface, custom property interface, etc.
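A QueryInterface-style lookup of the kind described can be sketched as below. The interface names (ILightmapAnim, the string-keyed lookup) are invented for illustration; COM proper uses GUIDs rather than strings, and the real engine interfaces would differ.

```cpp
// A tool holds only an IBase* and asks the object whether it supports an
// optional interface, without ever seeing the concrete class.
#include <cassert>
#include <cstring>

struct IBase {
    virtual void* QueryInterface(const char* name) = 0;
    virtual ~IBase() = default;
};

struct ILightmapAnim {                  // hypothetical optional interface
    virtual int NumAnims() const = 0;
    virtual ~ILightmapAnim() = default;
};

class Door : public IBase, public ILightmapAnim {
public:
    void* QueryInterface(const char* name) override {
        if (std::strcmp(name, "ILightmapAnim") == 0)
            return static_cast<ILightmapAnim*>(this);
        return nullptr;                 // interface not supported
    }
    int NumAnims() const override { return 2; }
};
```

The payoff is exactly what the interview describes: a tool callback can probe for the lightmap-animation or custom-property interface and degrade gracefully when an object doesn't implement it.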

DirectX 7 got integrated at the end of this R&D phase. I'm generally happy with it. The API is a lot cleaner, and we're taking advantage of its improved texture management. I still have some work to do to take full advantage of it.

We have procedural textures now, and I really like the way they're implemented. The game has a cool system for adding new procedural textures to its library. Procedural textures always use 32-bit data and are converted to the card's format as needed.

Interface surfaces can be translucent, so we can have UI elements fading in and out.

The model animation system has gotten another upgrade, and we're now running 4-5 animations on each model at once. Multiple animations can be controlled from the server, and it all gets to the client.

Brad added some awesome new sound stuff:

Sounds can use ADPCM (4:1) or MPEG3 (11:1) sound compression. Sounds can be decompressed while they play, when they load, or the first time they're played.

Loop points can be embedded in the wave files, which reduces the overhead from the number of simultaneous sounds during rapid-fire weapons.

All sounds can be pitch-shifted.

We have a new guy working on DEdit now named Kevin Francis. He's been doing an incredible job and has totally exceeded our expectations. He's added WAY too many things to list, but they're all really useful things like rotation trackers, tooltips, new texture alignment functions, etc.





Original Interview: A Sneak Peak At LithTech II

Preview by Kurt Miller (Psykic)




