
Home / 3D Theory & Graphics / BSP scenes: looking for good file format & map editor
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.

May 17, 2005, 09:42 AM

Hi flipcoders.

I am starting a 3D engine targeted at the FPS genre. Basically it would be an "Unreal 2" like 3D engine with improved lighting.
So what I need is BSP-partitioned 3D scenes with cells, portals, brushes...

In order to avoid the time-consuming process of developing a dedicated map editor, I am wondering what free/open BSP-based map editors and BSP-based map file formats are available. I want to avoid the Quake 3 file format and its editors, which are getting old nowadays. The best, in my case, would be to have some specifications of the UT2 file format, but I have found none, and I don't believe these specifications to be available to the public anyway.

Thanks in advance for your help.


May 17, 2005, 09:54 AM

Hi there!

Isn't there some way to export to an ASCII format from within UnrealEd? At least there used to be something like that in the first Unreal's editor.

Anyways, if you are going to go for BSP, there probably aren't many more sophisticated programs available than the Quake / Unreal editors. You might also try WorldCraft and the various qRadiants. I'm more fond of the Unreal editor though; it seems to be more comfortable to use (especially camera movement).
Also have a look at Deledit (

However, note that BSPs are a dying technology for world modelling these days - simply because the Z-buffer already does a good job, and BSPs tend to get really huge for high-poly scenes. Recent titles - such as Doom 3, I think - don't use a BSP for rendering anymore, but instead rely on good portal/antiportal use and accept the occasional unnecessary poly.

If you are going to be up-to-date, then simply go for static mesh instantiation with some way to add occluders or separate collision geometry (which may be a simplified, BSPed version of the original geometry). A simple instantiation tool can be written in just a few weeks, and it isn't hard to add things like custom entity properties, or realtime preview of lighting.

I hope this helps,
- Wernaeh


May 17, 2005, 12:13 PM

Thanks for your answer.

I will be looking into the editors you pointed to.

Concerning the usage of BSPs nowadays, it is clear that BSPs cannot be used to model detailed geometry. In that regard, the Unreal engine (v2) is not that anachronistic, thanks to its hybrid architecture (BSP/instancing).



May 17, 2005, 02:31 PM

Could you please explain what instancing is? I might know it, just not by that name. Currently, I am using BSPs in my engine; it's going to be a pain to convert to anything (my engine is HIGHLY BSP dependent), but if it is worth the accessibility and speedup I wouldn't mind at all.

Why can't one use BSPs alongside portals, for better face culling?



May 17, 2005, 03:09 PM

Hi there =)

For your first question, instancing is the process of just inserting high-polygon meshes into your level (similar to normal entities, such as player models).

Those high poly objects may be used for anything - from detailed walls to decorational objects such as cables hanging from a ceiling. For example, all TES Morrowind levels are created by instancing: You have a set of room pieces, and may join these arbitrarily together to create a larger level.
Think of instancing in the terms of 3D tiles - just like the 2D tiling used in very old games, but with polygons instead.

This is a very fast way to create actual game content, since you need not actually model each level by hand, but just stick together the pieces. You can also use very few pieces to create very large maps. By varying the number of different models, versus the level size, you can also adjust how diverse all things in your world are.

Nobody keeps you from using a mesh just once. For example, build each room as a single model, texture it, and insert it. Also, if you do want each piece of your level to look different, you might just divide it up in some regular way to create your static meshes automatically from the existing geometry - though you might want to add some more details.
Using this method, you can build almost any level you could build with a BSP-based approach. Just see that everything lines up alright in your editor.

When it comes to rendering, this approach has the advantage that it doesn't cost any extra CPU time. You just do some rough visibility testing (see portals, antiportals, etc) and then render each static mesh as a whole. You may also use a lightmap for each static mesh (=> Unreal 2), or do realtime lighting (=> Morrowind).

This also explains why you need not use BSPs these days for face culling: when BSPs were first invented, we still had software rasterizers, and rendering even a single extra face could have a serious impact on your game's performance. These days, a single extra face probably won't be noticeable anyway (unless you are going for very expensive pixel shaders, that is).
A graphics accelerator should be able to cope with some 100,000 faces per frame.

However, since CPUs didn't advance in the same ways graphics hardware did, it is better to offload some more work on the GPU, and keep the CPU free for more "interesting" stuff, such as physics.

If you're interested in the instancing part in detail, I suggest you have a look at how others successfully did it. Have a look at Unreal 2 (or any game on the same engine), or at Morrowind. Especially in Morrowind, instancing is often very obvious, since they just used "few" tiles in relation to their world's size.

Also there used to be a paper about "good" engine design on NVidia's developer page. Have a look at it. They describe in detail the partitioning approach I suggested above - which transforms existing scene geometry into a geometry much more suited for rendering. They also handle lighting and collision detection in the paper, and its a handy reference.

Sorry this got kinda long, I got carried away ;o)

- Wernaeh


May 17, 2005, 03:49 PM

Yea, BSPs were originally used for sorting faces.

But you can use them to calculate potentially visible sets and for speedy collision detection right?

I don't really see why the are a dying breed.

Are occluders really that much better?


May 17, 2005, 04:30 PM

I'll try to keep this one short =)

Speedy collision detection: As I said before, BSPs are very good for that, BUT you shouldn't use your standard scene geometry, but a simplified version. This simply is because your GPU can handle more tris for rendering than your CPU can for doing logic. Also, consider cases where there is a small grate or similar on the floor. Players might easily get stuck if using too scene-accurate collision detection.

With PVS, it's almost the same problem as with BSPs. Why bother to do per-face visibility testing if your graphics board will not mind a few extra tris? Also, consider the amount of memory used. Afaik, PVS memory is best case n, worst case n^2 with polys (each convex cell needs a list of all visible polys). This is simply unfeasible for high-polygon scenes.

Further, think about the size of your convex cells: you will have similar problems to stencil shadows (not using Carmack's reverse) in determining which cell your camera is in. And you have to have an index list or similar for each convex cell.

Finally, note the long compilation time for a PVS. I remember former times, when a single Half-Life level took several hours to build on my home machine. Now consider that your level designer has to wait to test his results, and consider his productivity if he doesn't get instant in-game feedback.

I should also mention that today's hardware is totally capable of rendering your usual Quake 3 levels without any visibility determination at playable framerates.

- Wernaeh


May 17, 2005, 05:15 PM

Each convex cell does not need a list of visible polys, only a list of (potentially) visible cells. And if each cell has a vertex buffer associated with it, see if it's in your frustum, then just render that buffer. I'm not sure that polygon count affects it as badly as you think it does, though I can see how some high-polygon scenes might create some incredibly small cells.


May 17, 2005, 05:25 PM

The problem with rendering each cell is that convex cells have just a few polys (i.e. 5 to 10, I guess). So you would need a single API call for each cell, which is going to add up and create a CPU performance bottleneck =)

- Wernaeh


May 17, 2005, 06:50 PM

Not necessarily, if your background vertices are stored in a vertex buffer in the managed memory pool (I'm assuming DirectX here) and you dynamically create an index buffer during your rendering. While you run through your scene, you fill the index buffer and call the API only once the frame is over or the buffer is full. As the vertices probably still reside in the video memory cache, you'll only transmit the indices to the hardware, which is a significant saving.
You can further improve that technique for several material IDs by maintaining multiple buffers at the same time. You have to allocate a buffer per material ID. You can implement an LRU policy to decide which buffer to render and free if you have too many material IDs simultaneously. Doing that will automatically batch your API calls. To make that fast you have to be careful with the implementation, as you don't want to perform a search per face to know which buffer to append your triangle to. Maintaining this kind of data structure keeps it fast:

  #define MAX_MATERIAL_ID 40
  #define NUM_BUFFERS 5

  typedef struct
  {
    int MaterialMappedToBuffer; // -1 if no material

    int numTriangles;
    //... some mundane index buffer stuff
  } Buffer;

  int BufferMappedToMaterial[MAX_MATERIAL_ID]; // -1 if not allocated
  Buffer buffers[NUM_BUFFERS];
  BufferLinkedList MostRecentlyUsedBuffer; // used to sort buffers for LRU
  // for each 'material change' the newly mapped buffer is moved to the head
  // if the material we want to switch to is not mapped to a buffer, we simply
  // choose the one on the tail (we render it first if it is not empty).

With that, you can insert your triangle into your rendering buffer in O(1).



May 17, 2005, 07:03 PM

Okay, then using occluders would still require some kind of organization like an octree, BSP tree or kd-tree or whatever to still be useful, right?
What other data structures would be useful to make best use of occluders and portals?


May 17, 2005, 07:47 PM

Yes, yet in my case, each face rendered is VERY expensive, since I have stencil shadows (well, the fillrate for those is horribly expensive), also specular highlights and dot3 bump mapping, so the fewer faces, the merrier for me. I'm using PVS in the BSP tree. I thought about using portals, but I have no idea how they work (a little idea, but not enough for implementation), so I thought that what you described above might be my way of saving some polys from rendering, but it is the opposite: you seem to render extra without much of a problem.

I guess BSP trees are the most suitable application of space partitioning in my case. Thank you very much for explaining instancing... it seems to be a VERY easy technique, but it doesn't exactly fit my case.

Thanks again


May 17, 2005, 11:22 PM

Do you mean that you only cast shadows from your visible faces with stencil shadows?
How do you handle stencil shadows with your culling/occlusion? Do you use the light view and frustum to cull your shadow casters?
Your message seems to indicate that you extrude a shadow volume per triangle. Is that true?



May 18, 2005, 06:36 AM

Well, my shadowing code is VERY inefficient. First, I check all the polygons within my light's bounding sphere to see whether they face the light; if so, I extrude them.

When I extrude the shadows for the level geometry, I do it per triangle, but for models I use a silhouette and extrude that.

I just realized that my stencil shadows don't have much to do with the way I partition space =) lol, I just listed the things that affected performance, but it is the calculation of my specular that makes a large performance hit, and bump mapping (extra passes and all).



May 18, 2005, 07:33 AM

Hi again =)

The point actually is that if you do some sort of "simple" front-to-back presorting, most of your fragments will be removed by the z-buffer and frustum culling anyway before the expensive pixel shaders get applied. (Unless you do evil things with your z values inside the shader, that is =) )
There also should be enough pixel fillrate to allow for the occasional double z-test or so.

Using portals and sectors is a very simple system. You define sectors as some convex space. Each entity is listed in any sector it is within. Also, there are portals, manually placed, which link sectors together. Now when rendering, start in the sector the camera is within, and with the camera frustum. Tag the current sector as "to render". Then

  foreach (portal in sector)
  {
     if (portal in frustum)
     {
         tag connected sector as "to render"

         adapt frustum to new portal (i.e. cut off everything that's outside the
         portal polygon)

         recurse on the connected sector with new frustum
     }
  }

  foreach (entity)
  {
      for (each sector entity is partially within)
      {
          if (sector is tagged)
          {
               tag entity
               break
          }
      }

      if (entity is tagged)
      {
          render
      }
  }

Rendering other in-sector geometry works similarly.

See Doom3 for an example. As far as I know, they only use manually placed portals and still have a lot of shaders for their realtime lighting ;)

How do you create your PVS sets without using portals? I thought the usual practice would be to start with a poly on each plane of each convex PVS cell, and then cut it down until it's only in non-solid space?

- Wernaeh


May 18, 2005, 07:49 AM

Hi again =)

I am currently going for a simple convex-volume based system. The user may place convex volumes (aka sectors) to designate special areas (such as rooms). These volumes may also get physical properties applied to them (gravity, etc.) to simulate water, or other "voluminous" effects. That is the main reason for them to be convex, and not just simple boxes.

Each convex sector may also link to an arbitrary amount of portals, which in turn link to other convex volumes. For rendering, each convex volume is only represented as its bounding box and the associated portals. An entity is considered to be inside the sector if its bounding box is inside the bounding box of the convex volume. Then, I recursively walk all sectors, culling my frustum to the portals, and tagging all entities in visible sectors for rendering. After that, I presort and render all tagged ents. Note that one should _not_ cull against portals that are very near to the camera. Just assume that the associated sector is visible with the current frustum, and accept a few extra polys rather than deal with ugly numerical instability.

All portals may be autogenerated, or user-specified. There also may be special portal types to introduce mirrors or "warp gates".

I am still working on implementing occluders in a reasonable way, since the current setup does not lend itself to outdoor scenarios too easily. (It does indeed work for simpler "box with windows" scenes very well, but not for more complicated shapes)

- Wernaeh


May 18, 2005, 07:53 AM

Ah ok, that definitely is a very good improvement =)

I'd still stick with static buffers for static geometry though, to maximize throughput ;) (i.e. do no extra work on the CPU on a per-face level)

- Wernaeh


May 18, 2005, 08:58 AM

Well, if you use instancing or a similar chunky division technique, you're right. You'd really better use static buffers or display lists.
The technique I describe is only valid to improve rendering a BSP with the old-school full tree run and per-cell rendering, because in that case (if you render each cell independently) the pipeline would be broken at each cell and the API would be called far too often. Implementing this resulted in a 30% speedup in one of my former BSP engines. Today I would not use BSPs anymore for rendering.



May 18, 2005, 09:04 AM

Well, the compiler just discards them after the PVS has been created. I have no use for them afterwards anyway. Wouldn't manually placed portals be hell to do? Sounds like a pain (I'm talking from the artist's point of view).

Maybe I'm stupid, could you please give me some papers or references? Thanks.



May 18, 2005, 09:34 AM

Well, you just need to manually place the sectors, and in most cases, simple boxes around rooms or corridors will do well enough (unless you are more into some organic dungeon look with many open spaces, pillars, and the likes - but PVS won't do you any good with that either). Anyways, using these boxes you can automatically create the required portal polygons - or you simply designate all touching bounding box sides as connected portals.

Afaik there were really, really good tutorials about PVS creation and portals written by a Dan Royer. They aren't online anymore though. Dan seems to be visiting these forums from time to time, so you might try dropping him a mail.

Apart from that, the only thing I can suggest is to think of portals as "openings", i.e. polys floating in all of your doors, and all T-sections in tunnels or hallways, for example. Now, you only render everything behind a portal if the portal is at least partially visible. Since your portals provide connectivity info on your geometry, you may recursively process all your portals this way. Also, for further improvement, you cut off anything of your frustum that's outside the portal you just passed.

Consider this example

  +-----------------------+
  |     __-- #    #       |
  |  *< __   +----+.......+---------+
  |       -- |    |       #             => much more geometry that way
  +----------+    +-------+---------+
     RA        HA    RB        HB    <= These are sectors

  #  indicates a portal
  *< is the camera with its viewing frustum

In the example, you start with the sector RA. You tag everything inside as renderable, since your camera is inside that sector. You now notice there is a portal between RA and HA, and it is completely inside your frustum. Then you cull your frustum with that portal. The resulting frustum has the same depth as your original frustum, but since the portal poly is completely within the original frustum, the resulting bounding planes run through the edges of the portal polygon. After that, you recurse on HA, using the new frustum. You notice that there is another portal between HA and RB, and cull your new frustum with that one, creating yet another frustum. The right edge of that frustum is represented by the dotted line. Now, you consider all portals in RB. However, the portal connecting RB and HB is not inside your current frustum anymore (though it was in the initial frustum). Consequently, you know that HB is not visible, and neither is any geometry behind it (unless you reach HB through another portal, that is).

Mathematically speaking, you could consider each sector as a node in a graph, and all portals as edges in the graph. For rendering, you simply need to walk the graph where possible and tag everything along the way.

A further improvement would be to pack all static geometry within each sector into static (hardware) memory, so you can render each sector with very few API calls.

I hope that helps,
- Wernaeh


May 18, 2005, 09:37 AM

Ok =)
I am sorry, I didn't completely get your point on your earlier post ;D

I hope I did not sound too offensive or anything, it was definitely not my intention :D

- Wernaeh


May 18, 2005, 09:45 AM

What I do, ( and mind you, this is slow ) is:
1. Generate bounding planes for each node
2. Generate evenly distributed points throughout the convex node.
3. Keep those points that are inside the bounding planes.
4. Cast rays from each point in node A to each point in node B.
5. If any ray gets through unblocked, it's potentially visible; otherwise it's not.

The density of points determines the precision.

Portals sound faster though auto generating portals seems tricky.


May 18, 2005, 12:24 PM

Yeah, that sounds awfully slow :)

It is actually pretty easy to generate those portals if you have a leafy BSP tree.

What I did back in the days when I played with BSP/PVS/etc was:

1. for each node in the tree I created a large polygon
2. this is "pushed through" the entire tree recursively, until a polygon reaches a leaf, at which point it is clipped by the leaf's polygons -- and voila, you've got your portal.
3. figuring out where the portal leads should be self-explanatory.

To compute a PVS for each leaf:

1. start at the leaf, and follow the portals around it recursively.
2. in this way you'll eventually reach each connected leaf in the tree.
3. while walking around, you should have your recursive function "remember" the path it used to reach the given leaf. To determine whether the original leaf potentially can see this leaf:
a. Compute the edge-planes of the convex hull formed between the first portal and the last portal
b. in cases where they don't form a valid convex hull, you can conclude there's no visibility.
c. now take each of the in-between portals and match them against the convex hull. If ALL of them touch it, there's visibility.
d. voila, you've got your PVS.

You'll probably notice that many of your leaves have PVS' that look pretty much alike. Quake (grandfather of everything PVS) used this fact to group these similar-PVS'ed leaves into "hulls" -- i.e. each leaf didn't have a list of all potentially visible leaves, it simply had a list of all potentially visible hulls (of course not a list, just a binary field).


May 18, 2005, 03:10 PM


Thanks! That's a much more descriptive post. I guess it's possible to get highly precise polygon culling that way. But something tells me it will be slower than rendering the triangles lol.

Thanks, I might actually look into this. It sounds very interesting.



May 18, 2005, 03:24 PM

That sounds like a great idea. However I don't know what you mean by "push a large polygon" or how it would be guaranteed to generate the right portals.

Would seeing how the bounding boxes for each node overlap accomplish the same thing?


May 18, 2005, 03:36 PM

That is very interesting indeed. Thanks for that great description!

I'm assuming a bsp or some other hierarchical structure is still needed to find which sector the camera is in, along with any other models that aren't static.


May 18, 2005, 05:46 PM

Okay, we are sure that all portals between convex spaces of the map will be located on a node splitting plane, right?

The large polygon created for each node plane should simply be huge, to make sure that it covers an entire cross-section of the map. You can create such a polygon easily by taking the cross product of the plane normal and some random non-parallel vector (an axis). Let's call the resulting cross product A. Then take the cross product of A and the plane normal, and you've got vector B. The points P+A*VERY_HIGH_NUMBER, P+B*VERY_HIGH_NUMBER, P-A*VERY_HIGH_NUMBER, P-B*VERY_HIGH_NUMBER define your "large polygon", where P is a point on the node splitting plane, and VERY_HIGH_NUMBER is a very high number :)
The order in which the points go depends on your coordinate system.

Now for the "pushing it through the BSP tree".

** Long pause **

Hmm, okay, much more difficult to explain :)

I guess I'll cheat and just redirect you to some resource you've probably already read yourself -- I have to sleep :(

That was just a quick google, so I don't know how good it is, but it seems to cover it.

Regarding the bounding box thing... Assuming you're talking about axis-aligned boxes, finding the space overlapped by two such boxes will simply result in yet another axis-aligned box. I can't see how such a thing could be useful for portal generation.


May 19, 2005, 05:50 PM

I was talking about a bounding box for the node that isn't aligned to an axis. Just the minimal box that contains all the verts for that node.

That website helped me to understand automatic portal generation a lot thank you.


May 20, 2005, 04:08 AM

For your first question, instancing is the process of just inserting high-polygon meshes into your level (similar to normal entities, such as player models).

That's exactly how the DeleD 3D editor works. Simply create your prefabricated objects (prefabs) once and insert them into your scene where needed. They're available through a treeview interface for easy access. So if you're looking for a map editor, DeleD ( might suit your needs. :)


May 20, 2005, 01:32 PM

You spamming b*st*rd you :P

This thread contains 33 messages.