Submitted by ClusterGL Team, posted on October 07, 2001




Image Description, by ClusterGL Team



ClusterGL is a real-time ray tracing library with an OpenGL-like API, developed at the University of Salerno by Rosario De Chiara and Ugo Erra.

The CPU power needed to obtain real-time performance with a computationally heavy algorithm such as ray tracing is provided by a cluster made up of six PCs (PIII 650MHz) on a 1.28Gbit LAN (Myrinet) under Linux.

ClusterGL uses a SEADS (Spatially Enumerated Auxiliary Data Structure) to speed up ray-triangle intersection and to obtain load balancing among the nodes. ClusterGL can manage shaded, mirrored, and transparent triangles, and colored lights.

ClusterGL implements a subset of the OpenGL commands, which permits fast translation of simple OpenGL sources. ClusterGL runs on CPU power alone; no 3D accelerator is needed.
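As a rough illustration of what such a translation layer implies (the cgl* names below are hypothetical, not ClusterGL's actual exports), a glBegin/glVertex-style subset can simply collect triangles into the per-frame scene list that the tracer consumes:

// Hypothetical sketch (not ClusterGL's code) of the OpenGL-like subset
// idea: immediate-mode calls collect triangles into a scene list that is
// rebuilt every frame and handed to the ray tracer.
#include <array>
#include <vector>

struct Vertex { float x, y, z; };
typedef std::array<Vertex, 3> Triangle;

static std::vector<Triangle> g_scene;    // per-frame scene for the tracer
static std::vector<Vertex>   g_pending;  // vertices seen since cglBegin()

void cglBegin()                             { g_pending.clear(); }
void cglVertex3f(float x, float y, float z) { g_pending.push_back({x, y, z}); }
void cglEnd()
{
    // each completed vertex triple becomes one triangle for the tracer
    for (std::size_t i = 0; i + 2 < g_pending.size(); i += 3)
        g_scene.push_back({g_pending[i], g_pending[i + 1], g_pending[i + 2]});
}

An existing immediate-mode OpenGL source then ports by little more than renaming calls, which seems to be the "fast translation" the team describes.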

More to come:
  • NURBS
  • Texture mapping
  • etc...

Contact address: clustergl@hotmail.com
Rosario De Chiara
Ugo Erra
ClusterGL Team


    Message Center / Reader Comments:
    Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
     
    Nate Miller

    October 07, 2001, 10:56 PM

    What is it with all of the negativity here? Don't you people have anything better to do with your time than to talk down to those who are contributing? You are forgetting that without people sending in these images this site would lose something that is really neat.

    So what if you don't like the image or if you think your programs are better. Just keep your comments to yourself. I am all for constructive criticism and comments or questions that could help the person who sent in the image, but no one wants to read replies that have no other purpose than to inflate someone's ego.

    I guess that is all I have to say. I figured I would say something since this environment of negativity has prevented me from contributing things on a number of occasions.

    Nate

     
    Mike Taylor

    October 07, 2001, 11:55 PM

    I agree with Nate, this seems pretty cool to me. For those who are knocking the small scene, this is a simple demo shot. I would wager these guys can handle a much greater amount of complexity. Second, for those who say this sucks compared to the demoscene stuff, I can assure you that general realtime ray tracing, and not just supersampled low res, is far, far better looking than blocky demos. Not to say that Heaven Seven isn't amazing, it's just an amazing hack. Lastly, I like the idea of this being a library-oriented project. I would love to have a stable, well-written foundation when I have to do a stochastic ray tracer next semester. They are getting all this performance with OGL!!! That cuts out a LOT of cool shortcuts, but really makes for a stable, portable code base. Congrats ClusterGL.

    -Mike Taylor

     
    Jukka Liimatta

    October 08, 2001, 12:02 AM


    I don't quite follow you, sorry mate; OpenGL and DirectX work in screenspace. Raytracing does not: it has to consider the whole scene to generate an image.

    If DirectX or GL don't need any changes (for the API part), the applications using them definitely do! All primitives in the scene would have to be described again, and again, and again for each frame. Then the intersection acceleration database (octree, bsptree, ...) would have to be rebuilt for each frame, even for stuff which is static.

    I don't see how this is in any way efficient; a native API designed to run a raytracing core engine would be better.

     
    Altair

    October 08, 2001, 12:23 AM

    I guess it would make more efficient implementations possible if tasks were spatially more coherent, so that you could cull away geometry not in a specific region; thus Maya's approach sounds more efficient.

    Also, it's better not to divide tasks beforehand to be processed by a certain processor, but just to have some sort of task queue where each processor picks up the next task when it finishes one. When tasks are divided into small enough pieces, the workload caused by a single task (or processor performance) doesn't really matter, and all processors are kept working full-time.
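    A minimal single-host sketch of that pull model (illustrative C++ with std::thread, not Altair's multi-node setting; renderTile is a hypothetical stub): tiles are handed out by a shared atomic counter, and each worker grabs the next one the moment it finishes.

    // Task-queue sketch: workers pull tile indices from a shared counter,
    // so fast processors simply take more tiles and nobody sits idle.
    #include <atomic>
    #include <thread>
    #include <vector>

    void renderTile(int tile) { /* trace all rays in this tile (stub) */ }

    void renderFrame(int numTiles, int numWorkers)
    {
        std::atomic<int> next(0);               // shared task counter
        std::vector<std::thread> workers;
        for (int w = 0; w < numWorkers; ++w)
            workers.emplace_back([&] {
                for (int t; (t = next.fetch_add(1)) < numTiles; )
                    renderTile(t);              // pull tasks until none remain
            });
        for (auto& th : workers)
            th.join();                          // frame done when all workers finish
    }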

    Cheers, Altair

     
    Altair

    October 08, 2001, 01:13 AM

    Jukka: I don't quite follow you, sorry mate, OpenGL and DirectX work in screenspace

    If you are using untransformed vertex formats (like you should nowadays), they do not. Sure, today's gfx hw transforms vertices to screen space and does the inverse transform from screen to texture space and all that, but that's only an implementation issue.

    Jukka: If DirectX or GL don't need any changes (for the API part), the applications using them definitely do! All primitives in the scene would have to be described again, and again, and again for each frame

    Yes, you should send all the scene data to the API every frame, though in DX7 MS brought in vertex buffers and in DX8 index buffers. I wouldn't be surprised if soon the API allows you to send all the scene data through it. At that point you could relatively easily implement the DX API using raytracing, because spatial partitioning would then be implemented behind the API.

    Of course the DX API is a bit too broad to allow a robust raytracer implementation, but at least in theory it would be possible. There also exist high-order surfaces in DX now, which raytracers can easily render but which don't suit screenspace-based rasterizers that well. There is a lack of some powerful raytracing features in the current DX APIs though, like CSG, because of their focus on today's gaming gfx hw.

    So, I don't think it's really that weird an idea to have a raytracer implementation behind the DX/OGL APIs, but I don't know if anyone would want to do that (:

    Cheers, Altair

     
    DaRkWoLf

    October 08, 2001, 01:42 AM

    Welllllllll.
    It looks like good work I guess, but a cluster of six PCs for that looks a little bit weird, as many people said.
    I recommend this demo: ftp://ftp.se.scene.org/pub/demos/scene.org/parties/2000/mekkasymposium00/in64/h7-final.zip

    It was released at Mekka & Symposium 2000 and was ranked first.
    It's a demo using realtime raytracing; it's pretty amazing, and it runs well on almost every PC!

    Check it out guys :)

     
    Thomas Young

    October 08, 2001, 04:17 AM


    This is what is great about this forum.
    It is very interesting for me to know the facts about whether a given IOTD is really pushing the hardware or not.
    If the guys saying "that's easy, I can do it on a 486" are wrong, then someone will come along and point out why they are fools.
    But without the negative comments there would be no discussion and therefore less useful information coming out of the forum.

    So I say: keep up the negative comments (and rebuttals)!
    Also: whatever happened to the baxton, or did he just metamorphose into Mr. Jun? :)

     
    Jari Komppa

    October 08, 2001, 07:18 AM

    Heh.

    I find it sort of embarrassing that my simple textmode hacking gets a lot more positive response than realtime raytracing using a cluster of 6 PCs.

    Getting the code for the cluster to work is, in itself, a nontrivial thing.

     
    Jukka Liimatta

    October 08, 2001, 07:37 AM

    >Jukka: I don't quite follow you, sorry mate, OpenGL and DirectX work in screenspace
    >
    >If you are using untransformed vertex formats (like you should nowadays), they do not. Sure, today's gfx hw transforms vertices to screen space and does the inverse transform from screen to texture space and all that, but that's only an implementation issue.

    That's NOT what I meant. Sure, the vertex stream is transformed.. no big deal there. I mean, and you know, that the rasterization is done in screenspace.

    Raytracing touches screenspace only to know where the pixel is, to get a ray, to shoot it into the scene and start the serious tracing. This means the real work does NOT happen in screenspace for raytracing like it does for DirectX, GL, et cetera.

    DirectX is good for keeping render states, accepting vertex streams, and doing scanconversion. It requires a fundamental change for this to be *efficient* (which was what I posted about, not that DX cannot be used to describe a scene). You have an eye for commenting on the most irrelevant of details, congratulations. ;-)


    >Yes, you should send all the scene data to the API every frame, though in DX7 MS brought in vertex buffers and in DX8 index buffers. I wouldn't be surprised if soon the API allows you to send all the scene data through it. At that point you could relatively easily implement the DX API using raytracing, because spatial partitioning would then be implemented behind the API.

    Yes, I thought about that. But sir! That's just one method of streaming data.. the API is not designed to have scene capture capability! That is an absolute must for raytracing. The whole scene, all materials, etc. must be visible to the raytracer (or be retrievable on request from external storage).

    Neither DX nor GL caters for this basic requirement in any way. Any way. At best, at the end of the frame we can evaluate what information we have and do our best.

    But what if the polygons *behind* that corner were never sent to the DirectX API? How can the raytracer hit something with rays if it was never sent to the API? It can't. DX just can't do it; it's not DESIGNED for this.

    To me this was obvious; just mentioning it was enough, or so I thought.


    >So, I don't think it's really that weird an idea to have a raytracer implementation behind the DX/OGL APIs, but I don't know if anyone would want to do that (:

    Someone just did. Well, good job I guess, but it's in vain. ;-)


    J

     
    Jukka Liimatta

    October 08, 2001, 07:56 AM


    So how would *I* implement a raytracing API?


    1. The basic primitive would be an "object" or "node"; the terminology doesn't matter, just the principle that we have objects which provide services the tracer can use. The most basic service would be to intersect a ray with the object, and handle the intersection.

    I would have controls to change the transformations of the objects in different frames of reference: scale, rotate, translate, the usual stuff you can do with a 4x4 matrix.

    This arrangement would give headroom to implement the "object" any way imaginable, and to place acceleration at different levels.

    2. Grouping of primitive types. A primitive type, an object, could be a "trimesh", "nurbs surface" or "triangle list" (a trimesh would actually just be a list of "triangle list"s with associated properties).

    This begins to sound more and more like OpenInventor, doesn't it?
    This is the level where the ray-tracing engine would best be implemented, not at the "vertex-triangle-material" level, because that's too little information for a raytracer to be efficient.
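    As a rough illustration of points 1 and 2, here is a minimal, hypothetical C++ sketch (none of these names come from any real API): objects expose an intersect service, and grouping types like a trimesh are just objects built from other objects.

    // Hypothetical object/grouping interface; acceleration structures can
    // hide behind any implementation of intersect().
    #include <memory>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 org, dir; };
    struct Hit  { float t; Vec3 normal; int material; };

    class Object {
    public:
        virtual ~Object() {}
        // The most basic service: intersect a ray, report the nearest hit.
        virtual bool intersect(const Ray& ray, Hit& hit) const = 0;
        // Transform controls: the usual 4x4-matrix stuff.
        virtual void setTransform(const float m[16]) = 0;
    };

    // Grouping: a trimesh is just a list of "triangle list" objects.
    class TriMesh : public Object {
        std::vector<std::unique_ptr<Object>> parts;
    public:
        bool intersect(const Ray& ray, Hit& hit) const override {
            bool found = false;
            Hit h;
            for (const auto& p : parts)          // keep the nearest hit
                if (p->intersect(ray, h) && (!found || h.t < hit.t)) {
                    hit = h;
                    found = true;
                }
            return found;
        }
        void setTransform(const float m[16]) override {
            for (const auto& p : parts)
                p->setTransform(m);
        }
    };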

    ..

    Now to the practice. The raytracers I wrote before have pretty much been "triangle soup" renderers.. I take a pre-transformed, in-worldspace triangle soup, where each triangle has a pointer to a material, and suchlike quite trivial basic arrangements. Because they never needed to be any more "advanced". Even the latest tracer, which I didn't write, but which we use where I work, follows the same basic scheme.. it's just used to calculate preprocessed lighting (lightmaps, you could say) for datasets we use in realtime productions.

    The stuff is really simple, I'm the first to admit that, but it gets the work done before the universe collapses, so that's good. ;-)

    Now, when I think that I have the whole scene.. I go and describe it primitive-by-primitive to DirectX 8 when rendering (only the parts which contribute to the image), and it works in realtime. That's something we can live with. When raytracing, *every* primitive can contribute to the image, or at least, we cannot be sure before the part is required. With a DirectX/GL-like API we just must bend and send everything in, because we can't request it when it's needed. This is overhead I can't see being acceptable with very high-density scenes.

    I believe firmly in the lazy evaluation principle, but I guess I'm the stupid one.

    I'd be much more inclined to think that it would be nice if the tracin' API could capture the scene. OK, the description could be procedural, through API calls, fine. You could modify the scene through API calls, again, fine.

    But scene description and rendering should be kept as separate interfaces. I think that's what should be the ticket.
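    As a rough sketch of that separation (hypothetical names again): the scene is captured and kept by one interface, and rendering is a separate call that can pull primitives lazily instead of being fed everything each frame.

    // Scene description and rendering as separate interfaces: the scene
    // persists across frames, so the renderer can evaluate it lazily.
    #include <vector>

    struct Triangle { float v[3][3]; int material; };
    struct Camera   { float pos[3], dir[3], fov; };

    class Scene {
        std::vector<Triangle> tris;   // captured once, kept across frames
    public:
        int addTriangle(const Triangle& t) {
            tris.push_back(t);
            return (int)tris.size() - 1;
        }
        void moveTriangle(int id, const Triangle& t) { tris[id] = t; }  // modify via API calls
        const std::vector<Triangle>& primitives() const { return tris; }
    };

    void render(const Scene& scene, const Camera& cam)
    {
        // shoot rays, requesting primitives from the scene as needed (stub)
        (void)scene; (void)cam;
    }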


    ... my opinion only, as usual, wrong. ;-)

     
    NeoKenobi

    October 08, 2001, 09:39 AM

    Some of you guys have interpreted my post wrong...

    What I meant to say is this:

    The games industry is only interested in ray tracing if the process is fast enough. Fast enough to produce stunning effects in real time.

    One of the options that enables this is to take the ray-tracing load off the CPU and make it hardware accelerated. For example, you can put the task of ray tracing onto the GPU or a special ray-tracing GPU.

     
    ClusterGL

    October 08, 2001, 09:43 AM

    We've read some comments (thank you very much!) and we'll try to answer almost everybody in a one-shot manner.
    Ray tracing is fully CPU bound, so there's no way to use 3D hardware acceleration (if someone has an idea that would permit this, please email us).
    ClusterGL >doesn't use< a precalculation phase (that is for demo writers eheheh!), so everything you calculate is used for the frame and then ignored for the next one. ClusterGL is used >exactly like< OpenGL, so the polygons in your scene have to be defined for each frame; ClusterGL is as general purpose as OpenGL.
    The target we focused on is >scalability<: you can reach whatever fps figure you like just by using more nodes for the calculations.
    ClusterGL is realtime in the same sense that OpenGL is realtime... OpenGL will hang with a billion polygons; ClusterGL will hang with thousands of polygons.
    The number of polygons is not a good measure for ray tracing; in fact the lights example (3 lights) uses 2 triangles, while OpenGL has to use hundreds of polygons to obtain a good approximation.
    ClusterGL uses SEADS to speed up the ray-scene intersections: every frame in the monolith demo requires 1.5M intersections without SEADS and just 800K intersections using SEADS.
    SEADS is used for load balancing, and because it's simpler to use than a BSP or octree.
    ClusterGL divides the rendering job among the nodes, splitting the scene in a balanced way by calculating pixel weights using SEADS (again :) ).
    ClusterGL uses MPICH, the version specialized for the Myrinet LAN card.
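    A SEADS is essentially a uniform grid: each cell lists the triangles overlapping it, and a ray is walked cell-by-cell with a 3D-DDA, so only triangles in visited cells get intersection-tested. That is where the 1.5M-to-800K intersection drop the team cites comes from. A minimal, illustrative C++ sketch (not ClusterGL's actual code; it assumes the ray origin lies inside the grid, and testTri intersects one triangle by id):

    // Uniform-grid (SEADS-style) ray traversal via 3D-DDA.
    #include <cmath>
    #include <functional>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 org, dir; };

    struct Grid {
        int nx, ny, nz;             // cells per axis
        Vec3 minCorner, cellSize;   // world-space placement
        std::vector<std::vector<int>> cells;   // triangle ids per cell
        const std::vector<int>& cell(int x, int y, int z) const {
            return cells[(z * ny + y) * nx + x];
        }
    };

    bool traverse(const Grid& g, const Ray& r,
                  const std::function<bool(int)>& testTri)
    {
        // cell containing the ray origin
        int ix = (int)((r.org.x - g.minCorner.x) / g.cellSize.x);
        int iy = (int)((r.org.y - g.minCorner.y) / g.cellSize.y);
        int iz = (int)((r.org.z - g.minCorner.z) / g.cellSize.z);

        // per-axis stepping state for the DDA
        auto setup = [](float o, float d, float mn, float cs, int i,
                        int& step, float& tMax, float& tDelta) {
            if (d > 0)      { step =  1; tMax = (mn + (i + 1) * cs - o) / d; tDelta =  cs / d; }
            else if (d < 0) { step = -1; tMax = (mn + i * cs - o) / d;       tDelta = -cs / d; }
            else            { step =  0; tMax = tDelta = INFINITY; }
        };
        int sx, sy, sz; float tmx, tmy, tmz, tdx, tdy, tdz;
        setup(r.org.x, r.dir.x, g.minCorner.x, g.cellSize.x, ix, sx, tmx, tdx);
        setup(r.org.y, r.dir.y, g.minCorner.y, g.cellSize.y, iy, sy, tmy, tdy);
        setup(r.org.z, r.dir.z, g.minCorner.z, g.cellSize.z, iz, sz, tmz, tdz);

        // visit cells front-to-back; only their triangles are tested
        while (ix >= 0 && ix < g.nx && iy >= 0 && iy < g.ny && iz >= 0 && iz < g.nz) {
            for (int id : g.cell(ix, iy, iz))
                if (testTri(id))
                    return true;
            if (tmx <= tmy && tmx <= tmz) { ix += sx; tmx += tdx; }
            else if (tmy <= tmz)          { iy += sy; tmy += tdy; }
            else                          { iz += sz; tmz += tdz; }
        }
        return false;   // left the grid without a hit
    }

    Because cell occupancy is known up front, the same structure can also estimate per-pixel cost, which plausibly is how the pixel-weight load balancing the team mentions works.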

    More to come soon.

    That's all folks.

    Rosario De Chiara - Ugo Erra
    ClusterGL Team

     
    Altair

    October 08, 2001, 09:54 AM

    Jukka,

    Umh, the DX/OGL APIs don't specify how triangles are rasterized. They just specify transformations, materials and things like that. The way they end up on the screen is just an implementation issue.

    Jukka: It requires a fundamental change for this to be *efficient* (which was what I posted about, not that DX cannot be used to describe a scene). You have an eye for commenting on the most irrelevant of details, congratulations. ;-)

    Well, I must congratulate you for the same reason, because I said this could be done easily if all the scene data were sent through the API, in which case the spatial organisation could be done in the implementation of the API. Sending vertex & index buffers and the like is only part of the solution; but if the interfaces were extended so that sending all the scene data through the API were possible, then raytracer implementations would be more obvious to implement. Changes to clients would be necessary of course, because it would be a step similar to the one into the HW T&L age.

    Jukka: Yes, I thought about that. But sir! That's just one method of streaming data.. the API is not designed to have scene capture capability! That is an absolute must for raytracing. The whole scene, all materials, etc. must be visible to the raytracer (or be retrievable on request from external storage).

    Yes, but neither was DX designed for HW T&L or pixel/vertex shaders. It's not that big a deal to add a scene data interface for describing the whole scene to some DX version. Anyway, I didn't say that DX is the ideal interface for a raytracer, but that it's possible to implement, particularly after some extension to the DX interface which could be used to describe the whole scene. Did you skip every second word of my reply or what? (:

    Cheers, Altair

     
    Jukka Liimatta

    October 08, 2001, 10:11 AM


    I didn't skip a word. Let's rewind!

    They wrote their own OpenGL-like API which raytraces the scenes. They can make design choices which impact the feasibility of an efficient implementation.

    I wasn't commenting on whether it is possible to write an efficient DirectX runtime driver which raytraces instead of simply scanconverting. This can already be done very easily with the DirectX 8 DDK, as writing stand-alone software rasterizers has been made very much easier.

    I said this is not efficient through the APIs as they stand today.

    Your suggestions rely on the fact that the APIs can be altered and modified to suit raytracing requirements. I don't see this happening, and alas, these guys wrote their own OpenGL *like* API, which is *NOT* an OpenGL runtime driver or implementation.

    I haven't seen the API, so I don't know if they have glBegin(), glVertex3fv(), etc. calls under precisely those same names or not, which is irrelevant anyway, because the fact is that they chose to implement a raytracing engine using this model of interface, which IMHO is not ideal for a raytracing API.

    Which was what I wrote about. I never said it couldn't be done, provided that applications then use the API in the way that is required; I criticized the overall efficiency. Their solution is scalable, but what isn't, when it consists of only a few core calls.

    Raytracing shouldn't only be scalable, but also FEASIBLE. I take this as research into the future, i.e. pure scientific R&D which doesn't give anything directly applicable or productive, but may lead to such results later on.

    If the system is distributed like theirs is, what matters more is how the workload balancing is implemented (and that the design is feasible), not how the scene is described; their goal was not the best possible performance TODAY, but to achieve the tracing through a GL-like API. So the result wasn't as important as the act of doing it in the first place.

    Like they said: no precalculation is done, and most raytracer implementations rely on precalculation (scene spatial subdivision at least) to achieve extra performance. There has also been effort put into parallelizing the calculation of multiple rays even on single-host systems through SIMD optimizations; I posted the white paper on my website a month ago, if anyone still cares to remember. The meat of it was that vectorizing single-ray calculations wasn't as efficient as vectorizing the calculation of multiple rays at the same time; in the case of Intel SSE, four rays gave the best balance.
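    For flavor, a minimal sketch of that four-rays-at-once idea with SSE intrinsics (illustrative only: it tests whether each of four rays' lines hits a unit sphere at the origin; a full version would also check t > 0 and compute the hit distance):

    // Four-ray packet vs. unit sphere, one SSE lane per ray.
    #include <xmmintrin.h>

    struct RayPacket4 {            // struct-of-arrays: one __m128 per component
        __m128 ox, oy, oz;         // 4 ray origins
        __m128 dx, dy, dz;         // 4 normalized ray directions
    };

    // Returns a per-lane mask: all-ones lanes have discriminant >= 0 (a hit).
    static inline __m128 hitUnitSphere(const RayPacket4& r)
    {
        // b = dot(dir, org), c = dot(org, org) - 1, disc = b*b - c
        __m128 b = _mm_add_ps(_mm_add_ps(_mm_mul_ps(r.dx, r.ox),
                                         _mm_mul_ps(r.dy, r.oy)),
                              _mm_mul_ps(r.dz, r.oz));
        __m128 c = _mm_sub_ps(_mm_add_ps(_mm_add_ps(_mm_mul_ps(r.ox, r.ox),
                                                    _mm_mul_ps(r.oy, r.oy)),
                                         _mm_mul_ps(r.oz, r.oz)),
                              _mm_set1_ps(1.0f));
        __m128 disc = _mm_sub_ps(_mm_mul_ps(b, b), c);
        return _mm_cmpge_ps(disc, _mm_setzero_ps());
    }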

    Anyway, whatever. ;-)


     
    nufan^skp^pb

    October 08, 2001, 10:30 AM

    Yeah, it is nice and has some pretty cool features (like the cluster stuff and the GL-like API design). I didn't say anything against it, but some people here don't seem to have seen realtime raytracing yet, and there are things out there that look much better. That's it, nothing more, nothing less.

     
    Andreas Magnusson

    October 09, 2001, 08:04 AM

    matches?

     
    Prunesquallor

    October 12, 2001, 04:16 PM

    In fact, excellent ray tracing hardware exists. Consider for example the products of Advanced Rendering Technology:
    http://www.art.co.uk/
    They came out some time ago with RenderDrive, and now there's a ray tracing PCI card. Of course, both are very expensive, but for ray tracing it's better value than a cluster of computers. A rack of RenderDrives could ray trace a pretty complex scene in realtime.

     
    This thread contains 47 messages.