
Submitted by MoJo, posted on November 19, 2000

Image Description, by MoJo

These are some screenshots of a Quake II level viewer. It uses BSP/PVS for rendering, and includes loading of textures and lightmaps. The textures are hashed when rendered, and the lightmaps are scaled to 16x16 and banked onto a 256x256 texture; the lightmap banks are hashed when rendered to reduce state changes, just like the textures.
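For illustration, the 16x16-into-256x256 banking described above could be sketched like this (a guess at the layout, not the author's actual code; all names are hypothetical):

```cpp
// Sketch: place the i-th 16x16 lightmap tile in a 256x256 bank texture.
// A 256x256 bank holds a 16x16 grid of tiles = 256 lightmaps per bank.
struct AtlasSlot {
    int bank;            // which 256x256 bank texture
    int px, py;          // pixel offset of the tile within the bank
    float u0, v0, u1, v1; // texture coordinates of the tile
};

AtlasSlot lightmapSlot(int index) {
    AtlasSlot s;
    s.bank = index / 256;
    int local = index % 256;
    s.px = (local % 16) * 16;
    s.py = (local / 16) * 16;
    s.u0 = s.px / 256.0f;
    s.v0 = s.py / 256.0f;
    s.u1 = (s.px + 16) / 256.0f;
    s.v1 = (s.py + 16) / 256.0f;
    return s;
}
```

Because all tiles in a bank share one texture, faces can be sorted by bank and drawn together, which is where the state-change savings come from.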

Also, I have animated textures and the warp effect implemented and functional. At the moment I am working on optimizing the code and cleaning up what I have. I have decided to use a class structure for the entire engine, which has its pros and cons, I have noticed.

I am also working on a game logic module, including entity parsing and spawning. I only have a few entities implemented: BSP models, rotating functions, player starting locations, and MD2 loading/placement. Unfortunately, I do not have MD2 rendering implemented yet. Because I support both OpenGL and Direct3D for rendering, I am using vertex arrays rather than per-vertex rendering, and I am trying to still take advantage of Quake's OpenGL optimizations while keeping it compatible with my engine. I am getting decent FPS at the moment, but I only have a P333 and a Diamond Monster Fusion (which uses the Voodoo Banshee chipset). I am also working on multitexture support; I have it written, but I can't test it because my computer doesn't support it, so it isn't in the released engine.

Last but not least, the basic features of the engine are: Q2 BSP/PVS rendering with textures/lightmaps, animated textures, a rear view mirror, Direct3D/OpenGL support, DirectInput, and a few others. At the moment I am working on, but have not yet implemented, collision detection and, like I mentioned, optimizations. Also, one problem I am having is that, first, Quake uses the Z axis as the horizontal axis (which isn't difficult to deal with), but mostly that it also seems to have everything 100 times larger, and I have to scale everything by 1/100 to make it look right. I hate doing this because I am losing a lot of precision, and it just takes that much extra work to load and display. If anyone can help me, please do! :-)
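The conversion being described (z-up Quake coordinates into a y-up engine, scaled by 1/100) might be sketched like this; the helper name is hypothetical and the axis mapping is one common choice, not the author's actual code:

```cpp
// Sketch: convert a point from Quake's convention (z up, large units)
// to a y-up engine, scaling by the 1/100 factor mentioned in the post.
struct Vec3 { float x, y, z; };

const float QUAKE_SCALE = 1.0f / 100.0f;

Vec3 fromQuake(Vec3 q) {
    Vec3 v;
    v.x = q.x * QUAKE_SCALE;
    v.y = q.z * QUAKE_SCALE;  // Quake's z (up) becomes engine y
    v.z = q.y * QUAKE_SCALE;  // Quake's y becomes engine depth
    return v;
}
```

An alternative that avoids rewriting the vertex data is to fold the axis swap and the 1/100 scale into the view/model transform instead, so the loaded data keeps its original precision.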

You can download the source code at



Message Center / Reader Comments:
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
Nick Maxwell

November 19, 2000, 03:31 PM

That's quite a nice sounding bit of code you have there. I will go to the website and check it out right away! That is strange that q2 uses the z axis as the horizontal, I don't think I have ever heard of that being done. Anyway, off to the website!

Nick work,

Nick Maxwell

Nick Maxwell

November 19, 2000, 03:32 PM

I mean 'Nice work'
gosh my ego is huge today...


November 19, 2000, 03:33 PM

Very nice, I'm downloading it now. =) How bout making a QuakeIII level viewer? ;)


November 19, 2000, 03:55 PM

It looks just like the real stuff :)

Where did you learn how to render the Quake II maps (BSP/PVS)? Is there a resource where it is all explained in detail?


November 19, 2000, 03:59 PM

Sweet looking shots.... About the z-axis thing: actually, quite a few programs (3dsmax for one) use the darn z-axis as horizontal. Makes life that much more unpleasant for guys like me ;-) For the scaling, are you sure it's not the position of your near/far view planes? That might be producing the magnifying effect you're talking about. Can't wait to see some collision detection.... =)



November 19, 2000, 04:18 PM

Don't you mean the Z axis is vertical? In almost every API and modeller the Z axis is horizontal, including OpenGL and Direct3D. 3DSMAX uses Z as the vertical axis, as does Quake 2.
If I'm wrong, flame me and let me know; maybe all these years I've just been goofy-footed. =)

Nice screenshots by the way.


November 19, 2000, 04:49 PM

I'm fairly sure that bit64 is right... following the reasoning that in 2D you use X and Y, then when you move to 3D you have an additional Z axis which is perpendicular to the screen, and therefore is horizontal.


November 19, 2000, 05:13 PM

Ugh... brain not work too good now.... Yes, 3dsmax uses the z-axis as the VERTICAL axis, not horizontal.... What I meant was that it was weird: normally the z-axis should be horizontal, yet some programs do it inverted for some reason.... Going to catch up on some sleep now....;-)



November 19, 2000, 05:46 PM

COOL!!! I'm in the process of making a Q2 map viewer also, and I like to see other people make them and release the code, so that way I can figure out how to do things. :) Your engine looks exactly like the real Q2 engine. It looks very nice. Good work.



November 19, 2000, 05:47 PM

I think I got two reasons why to have the z-axis pointing up:

- If you take a mathematical function of two variables x and y and plot it, you put the x-y plane flat and the z-axis up, because you can then clearly see how the function value changes in the x- and y-directions without having to bend your neck. Both Maple and MathCAD plot it this way. If you find a mathematical program or book that does it differently, please tell me.

- If you were an architect you would draw the floorplan first, and call that the x-y plane. When you draw an extra floor, you go up the z-direction. Both AutoCAD and 3D Studio MAX do it this way.

The only reason to put the y-axis up in 3D engines is that it's more logical if you look at your screen as an x-y plane and look into the depth along z.

Therefore, in my engine, I flip coordinates while projecting to the screen. I don't have to do this, but it's easier for the above reason. It seems logical that if you go from a 3D x-y-z world to a 2D projection of it, you divide by z so that you discard the last coordinate:

float _Z = 1.0F / v.y;      // y is the depth axis here, so divide by it
v.x = w + xd * v.x * _Z;
v.y = h - yd * v.z * _Z;    // world z (up) maps to screen y, flipped
v.z = 65535.0F * _Z;
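A self-contained version of that projection, under my reading of the snippet (halfW/halfH as screen-center offsets and focalX/focalY as focal scales are guesses at what w, h, xd, yd stand for):

```cpp
// Sketch of the projection above, assuming a z-up world where y is the
// depth axis. Parameter names are interpretations, not the original code.
struct Vec3 { float x, y, z; };

Vec3 projectZUp(Vec3 v, float halfW, float halfH, float focalX, float focalY) {
    float invDepth = 1.0f / v.y;              // divide by depth (y, not z)
    Vec3 out;
    out.x = halfW + focalX * v.x * invDepth;  // x stays horizontal
    out.y = halfH - focalY * v.z * invDepth;  // world z (up) -> screen y, flipped
    out.z = 65535.0f * invDepth;              // depth value for a 16-bit z-buffer
    return out;
}
```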


November 19, 2000, 05:50 PM

If you look at the z-axis as being perpendicular to the floor instead of your screen, it all makes a lot more sense don't you think?

Mark Friedenbach

November 19, 2000, 07:07 PM

Very nice. (maybe it's just the gamma setting on my monitor, but this shot looks better than actual shots of Q2 I've seen...)

About the z-axis thing, bit64 is right, and here is why:

Imagine a piece of paper lying flat (horizontally) on a desk. Any point on this piece of paper may be expressed using two values (x, y) of the Cartesian coordinate system. Now extend the coordinate system to three dimensions and you get a z-axis coming vertically straight out of the paper. This is what the Greek math-gods did several thousand years ago, and thus what one would see in a math textbook today.

In early computer graphics, however, someone determined that the vertical computer monitor was analogous to our imaginary horizontal piece of paper, and labeled screen pixels with (x, y) coordinates. Early pioneers in 3D computer graphics thus extended the z-axis the only way possible: horizontally "through" the computer screen. One could think of this as picking up our imaginary piece of paper and holding it against the monitor. Although incorrect, it worked fine for most applications.

Architects, and other real-world engineers, on the other hand, had been trained with pencil-and-paper techniques out of math textbooks that taught the y-axis as being the depth axis, and not the z-axis (which would be "up") as most of us here believe. Thus many CAD and modelling application programmers have decided to use the mathematically correct coordinate system to be friendly to their users. Clearly Carmack simply wanted to follow suit.

And there really isn't any reason why the rest of us shouldn't do so as well; I've been using the mathematically correct coordinate system since about two engines ago.


November 19, 2000, 07:51 PM

Personally, I would never even consider using the z axis as horizontal. It seems to me that using it as anything but vertical is a very twisted way of thinking. Maybe that is just because I have actually taken some math/physics, but that is simply the way I have always thought.


November 19, 2000, 09:05 PM

I've been having that annoying scale problem as well. When making something simple with MilkShape3D and exporting it to my engine (OpenGL), I would get a huge object which I would have to scale down a million times (x/100, y/100, z/100) to get it to the right size, losing some precision in the process. Any workaround for this that you people might know?
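One possible workaround, sketched under the assumption that the scale is uniform: leave the vertex data untouched and fold the scale into the transform instead (what OpenGL's glScalef does), so the original values keep their precision. The helper below is hypothetical and operates on a column-major 4x4 matrix:

```cpp
// Fold a uniform scale s into a column-major 4x4 matrix, scaling the
// three basis columns (the same effect as glScalef(s, s, s)).
void applyUniformScale(float m[16], float s) {
    for (int col = 0; col < 3; ++col)
        for (int row = 0; row < 4; ++row)
            m[col * 4 + row] *= s;
}
```

With this approach the model keeps its native units on disk and in memory, and only the rendering transform knows about the 1/100 factor.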

Nate Miller

November 19, 2000, 09:55 PM

You can check the scale of your model in the editor. Go to Tools->Print Model Statistics and it will give you the information. You can also write your own export plugin and scale the model down that way.

Nate Miller

Frank Krueger

November 19, 2000, 10:54 PM

But if we all switch to Y being the depth component, the Z-Buffer would cease to exist! That would be horrible!

I guess "depth buffers" were already taking over the world though...

Very nice shots by the way, keep up the hard work!


November 20, 2000, 06:09 AM

I believe this is an old throwback to when Carmack programmed Doom. Most of the content in Doom was two dimensional; the renderer was 3D. Most of the data was manipulated in 2D before passing it off to the 3D renderer.

The main example would be the maps. 2D vertices existed, and heights were given to a set of vertices in order to calculate the third dimension, which would have to be the same for a closed polygon set of vertices. (There is a little more to it than just a closed polygon, but the concept remains.)

It was just easier to append a third coordinate than to change the entire coordinate viewing system. Quake carried over this tradition even though it was 'true' 3D.


November 20, 2000, 08:31 AM

Nice. Just a quick question... why do people program a Q2 map renderer nowadays?

From my point of view, coding a whole engine, including BSP/PVS preprocessing, lightmap generation, etc., would bring you more than a simple Q2 renderer, which only takes the data and pushes it through the pipeline.



November 20, 2000, 09:20 AM

I learned a lot when I coded my Quake 2 BSP viewer, like planar texture mapping, reducing state changes, multi-texturing as well as some combination ray-tracing/BSP stuff.



November 20, 2000, 11:46 AM

I believe the reason is simple enough: lack of content. Often you can have a really good engine hampered by having no art available, or no tools available to make art. But the Q2 engine has tools, and the formats are documented. A good way for coders with no art tools/art skills to prove their mettle.

A serious tool is needed that ISN'T written for the Quake/Unreal engines and works more how us coders would like. But who wants the job?

Mark Friedenbach

November 20, 2000, 11:47 AM

It gives one a good working knowledge of how the pipeline works, what goes into a renderer, and design inspirations for when they do choose to create an engine.

Now, why Q2? Well, Q3 is quite complex (take the shader language, for example), and would most likely require an entire engine behind it just to render a little level.


November 20, 2000, 08:32 PM

Yup, a few things. Sorry about not replying to anything; I had computer problems. Also, yes, I messed up the Z axis thing: Q2 uses it as the vertical axis instead of the depth axis (not horizontal like I said, sorry, narcolepsy kicking in!). And I think I did just mess up the projection matrix when I used it; it had small numbers and rendered the world from the perspective of an ant. Heh, it might be interesting to play Q2 as an ant!

Anyway, some mentioned Quake III maps, and I am planning on making a viewer for Q3 levels, because that means I have to learn curved surfaces and volumetric fog, etc., which I have been looking into lately. And yes, I know, just another Q2 level viewer, but I have started to implement actual game logic rather than just a renderer, and I'm only using Q2 levels because I don't know how Aliens vs Predator stores its maps! (I would love to make an AvP level viewer, including the different visions!)

And someone mentioned the gamma: I am actually using different blending than most people seem to use. Most people use Src=SrcColor, Dest=One blending and just make the original palette brighter, but I leave the original palette alone and use One-One blending, so the lightmaps are added to the textures. It looks much nicer, and you get to skip the palette adjustment step!
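The difference between the two lightmapping approaches being compared, multiplicative versus additive, can be sketched on the CPU per color channel (values in 0..255; function names are illustrative, not from the engine):

```cpp
#include <algorithm>

// Modulate-style lightmapping: texture * lightmap. This darkens the
// scene overall, which is why the palette gets brightened to compensate.
int blendModulate(int tex, int light) {
    return tex * light / 255;
}

// Additive One-One blending: texture + lightmap, clamped at white.
// The original palette can be left alone.
int blendAdd(int tex, int light) {
    return std::min(tex + light, 255);
}
```

In OpenGL terms the additive pass corresponds to drawing the lightmap with glBlendFunc(GL_ONE, GL_ONE) over the already-drawn texture.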


November 22, 2000, 07:05 PM

Hmm.... strangely enough, in almost all mathematical/physics applications, x and y remain horizontal/vertical respectively, and z is an added third axis defined as into/out of your 3D space... Z going up is actually a differently rotated 3D space from the standard mathematical model. Just my two cents.


November 25, 2000, 08:30 PM

Sorry everyone. Geocities seems to be continually changing its URL scheme. The URL is just (no more ~ anymore). Sorry if anyone tried to get it but couldn't find it :-)


December 23, 2000, 06:43 PM

It does initially, but when you want to do 2D->3D transformations (e.g. 2D coordinates -> a 3D ray), it's more difficult to get your head around it.

This thread contains 25 messages.