flipCode - LithTech 2 Tech Preview (Part I)
. a     s n e a k     p e e k     a t     m o n o l i t h 's     l i t h t e c h    II     e n g i n e .

Ever since the release of Shogo: Mobile Armor Division, Monolith's LithTech engine has been viewed with much respect as a top contender in the ongoing "war of the engines". After seeing the power of the original LithTech engine, it's only natural to expect great things from LithTech 2.0. In addition to all of the excellent features of the original engine, version 2.0 boasts some of the most exciting bleeding-edge technology to be found in a modern game engine. To find out just what's cooking behind the new engine, I asked Mike Dussault, programmer at Monolith and tech lead on LithTech. I would like to take this opportunity to thank him for taking the time to answer my questions!

Before we get to talking about the LithTech II engine, would you be so kind as to tell the world a bit about yourself? How old you are, how many years you've been programming, favorite compiler, what you do on the LithTech project, etc...

I'm 22, I've been programming since I was 13. The first programs I wrote were character generators and small text adventure games. Before I came to Monolith, I worked at Media Vision on a game called Quantum Gate. Then I worked at Zombie on a game called Locus. While at Zombie, I met the founders of Monolith, and when Monolith got started, I came over. I've been working on Lithtech for about 3 years.

My favorite compilers are the MSVC and the Intel compiler. I like the code the Intel compiler generates better, but MSVC compiles much faster.

I'm the tech lead on Lithtech. I've been working on it since day 1, and I basically oversee development of new technology in Lithtech. Each week, the team goes through old and new feature requests, and decides who's going to do what. I usually focus on graphics, physics, networking, and the code that binds everything together.

Brad Pendleton implemented our sound system and wrote some of the physics code. Brad and I used to do all the DEdit work, but now Scott Pultz is doing most of that. Brad is now on a game team, and Bryan Bouwman will be filling his shoes.

. v i s i b i l i t y .

What sort of visibility scheme does the LithTech II engine work with that allows the production of such beautiful scenes both indoor and out, and how does that affect the lives of the level designers?

It uses a combination of BSP trees, a PVS, and portals to do its visible surface determination. The level designers choose what geometry is included in the PVS generation (since a lot of it doesn't ever block the view). They can also create portals to help the renderer chop out areas.

What sort of polygon counts can we be expecting the engine to pump out smoothly?

This changes all the time and is very processor/video card dependent... I can't really say :) I can say Lithtech2 is definitely faster than Lithtech1. Almost all of our graphics features are scalable, so the poly and texture counts can be smoothly decreased for slower systems.

. c h a r a c t e r     a n i m a t i o n .

It's been said that LithTech II sports a very sophisticated character animation system which allows various body parts to move simultaneously rather than switching an entire mesh or segment. Can you explain how that works (and how incredibly cool it is)?

It's very cool :) We're still exploring the possibilities. Basically the Lithtech2 models use skeletal animation. We only store the animation data in the model's skeleton, so the animation data is very compact.

We also have a more sophisticated system of attaching things to models - a model can have an infinite number of attachment points (ok only 2^32 attachment points...) There is no cost incurred by a new attachment point.

A model can play any number of animations simultaneously. A good example of this being used is that our run animation is separated from what the upper half of the body is doing so we don't have to have extra animations for "running and firing", "running and facing left", "running and looking up", etc. We just have a run animation, and animations for firing, facing left, and looking up. This saves our animators an ungodly amount of time, and saves a lot of memory for the extra animations.

The way the animations are blended is pretty cool too: the blending weight can be set on a per-node basis, so it can gradually blend between two animations instead of just cutting off at the waist.

Nodes on the model can be controlled by game code. This means we can move eyes in sockets, move heads, move limbs, or move the torso around dynamically at runtime. We can also do things like have different recoils each time you shoot a guy.

Have you placed more attention on detailed textures, detailed models, or has the issue not really come up?

Definitely. We're trying to stay a level above what our content would normally be. So if we were going to use a 256x256 texture, we'll use a 512x512 texture (and use a mipmap for today's systems).
