
Submitted by Andy Maddison, posted on April 27, 2001

Image Description, by Andy Maddison

Here are four screenshots grabbed from a video (AVI) I made for my final-year project at university. The project's subject area was Augmented Reality: the composition of computer-generated graphics with real images at interactive frame rates. The project involved moving a computer-generated object (a cube) around a real scene (a Lego arch on a piece of board) in real time. The cube was moved around the scene with a mouse, and collision detection was implemented so the virtual object would 'collide' with the real object. The screenshots show inter-object shadowing between the scene and the cube, and occlusion of the cube by the scene.

The original idea was to have a virtual ball rolling around the scene, but the hardware I had to work with (combined with my inefficient code) was not up to the task.




Message Center / Reader Comments:
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.

April 27, 2001, 05:04 PM

I'm first!

That's cool. I remember Lego; I also remember an MS movie creator just like this.

Nice :D

James Fingleton

April 27, 2001, 05:14 PM

I'm second!

That's really cool. I especially like the soft shadows.

Jan Niestadt

April 27, 2001, 05:45 PM

Wow, this looks and sounds extremely impressive. Do I understand correctly that the only information the computer gets about the real-world objects is an image from a camera? And that it somehow figures out the 3-dimensional structure from this? Isn't that almost human-quality vision? Well, even if it's simpler than that, it looks great, quite natural.

Mark Friedenbach

April 27, 2001, 05:51 PM

I especially like the soft shadows.

Dude, those are real shadows. Notice that the cube (which is what he is rendering) casts hard shadows. As do the Legos when they 'cast' shadows onto the cube.

Doesn't make it any less cool though.


April 27, 2001, 05:52 PM

Hey, we are the third!
That's most impressive ;)
Just sitting here, BLooD and me, two totally impressed and happy flipcoders.
We like your work very much, and we would be happy if we could download it...
It must give the user a strange feeling, I think...

Just sitting here, BigBen and me; today we had our first real-life meeting, because we got to know each other over flipcode =)

grEEtz from us, BigBen/BLooD, to all & happy coding :)


April 27, 2001, 06:01 PM

Not to be especially rude, but he explicitly says they use a MOUSE to move things around. Read the comments!


April 27, 2001, 06:10 PM

Very impressive. I've done some research on this subject once, but I'm amazed at what you achieved. How do you find the occluders? (That is, how do you construct your 3D world from the 2D picture?) Do you have multiple cameras for one picture, or just one? And have you tried your simulation using a picture with more objects in the scene (with different shapes)? I'd be interested to know how well your algo performs.

Nice work, btw.

Alexander Stockinger

April 27, 2001, 07:14 PM

What's wrong with you, dude? He didn't even ask how the cube is moved! Read the posts. And I guess I do want to be a bit rude here...

Andrew Cox

April 27, 2001, 07:50 PM

How about some technical details, then?

For instance, did you use big Duplo bricks or just the regular little ones?

Oh, and I can make much better Lego models than that anyway. I used to have a whole Lego town in my room with a spaceport and everything, and a moonbase on my shelf, and I did all my own designs. And I've been to Legoland in Denmark and driven a Lego car, so there.



April 27, 2001, 07:56 PM

Do you have a 3D model of the real scene as well? You must have some way of getting depth information + shadow volumes for the real scene.


Jukka Liimatta

April 27, 2001, 08:10 PM


We always built big spacecraft, and their fate was to crash-land on the carpet, I mean, a hostile planet. Every single time.

A good tip for today's juniors, pay attention, you daddy's little darlings: the "flashlights" and "videocameras" make excellent weapons.

Enjoy! ;-)

Disclaimer: the tip might be slightly out of date, but it applies well to Legos made in the late '70s and early '80s. I noticed that nowadays there are all kinds of robot kits, Star Wars Legos and all kinds of crazy stuff. Sigh.


April 27, 2001, 08:19 PM

Interesting thing...

Why, in the fourth picture, does part of the LEGO blocks also cast hard shadows?

Vander Nunes

April 27, 2001, 08:43 PM

Well, that's interesting, but my guess is that the idea is simpler than many expect.

I guess the poster used a virtual model of the real scene to interact (mask and cast shadows) with the cube. In other words, the "vision" part is done by the author, not the program.

The virtual model of the scene is not rendered visibly; the real image is drawn instead. After that, the rendered/masked/shadowed cube is drawn over the real scene.

I will be very, very, very surprised if it includes any type of real, automatic vision.

I think it's nice anyway!


April 27, 2001, 09:00 PM

Yea, pull your ignorant head out of your smelly ass.

Mark Friedenbach

April 27, 2001, 09:41 PM

Why, in the fourth picture, does part of the LEGO blocks also cast hard shadows?

Because that's the only kind of shadow the Lego blocks cast in the program. The "soft shadows" in this picture are real shadows that were captured by the camera when he took the picture.

Mr Floopy

April 27, 2001, 09:42 PM

When I was quite young, we used to make stop-motion movies with our Lego using an old Super 8 movie camera. I have about 15 minutes of space Lego battles. Then we'd get out the old overhead-projector pens and draw in the lasers one frame at a time.

Legoland was very cool too. Those cars were awesome.

Poor guy. His life's work (to date) on show for everyone, and everyone's talking about the Lego. Such is life :)

PS: It is very cool though.


April 27, 2001, 09:56 PM

nice pics,

the lego blocks remind me of the book 'microslaves' from David Copland. (With the guys and girls creating a lego building game ...)

keep up the good work,


Tim Wojtaszek

April 27, 2001, 10:17 PM

Very interesting, but how did you get the shadow information when the cube is behind the Legos? Could it deal with changes in the light source? How do you determine collision exactly?... especially with the back edge of the Lego that you can't see?? If this was for a project, do you have any documentation on it? For some reason this really interests me. Did you ever do anything with stereo images, which might allow for better calculation of objects within the scene? Hmm, I dunno, but more info please.


April 27, 2001, 10:28 PM

You mean Microserfs by Douglas Coupland?

Vander Nunes

April 27, 2001, 10:46 PM

No, he said "the CUBE is moved using the mouse", not "THINGS are moved using the mouse". This is VERY different.

Not trying to diminish his work, but I really don't think he implemented a true Computer Vision system. Not even close. He did some simple tricks to give this *impression* to the spectator.

True Computer Vision is very complex, if not impossible (today), with just monoscopic, untouched, pure images. Humans are able to infer depth from monoscopic images based on ENORMOUS world knowledge and experience (yes, even a baby has HUGE world knowledge). Even with this, humans can't comfortably get precise depth from monoscopic images.

Try to walk an entire day using only one of your eyes and you'll understand what I'm saying. :)


April 28, 2001, 12:34 AM

As people have said, you have a virtual model and a virtual light; the virtual model makes the shadows and handles the collision. That's why you don't make anything too complex. Still, to get it working as nicely as this is a reasonable feat.

Joachim Hofer

April 28, 2001, 04:51 AM

Or even try to play table tennis. You won't manage to hit the ball (I tried it).

Collision detection would also not be possible, as the computer cannot know what's behind those stones.

Nonetheless it is _very_ impressive, especially the shadows from the stones cast on the cube.

Andy Maddison

April 28, 2001, 06:21 AM

Unfortunately the video is 65+ MB and I have no way of posting it anywhere (and I only have a modem connection). The project itself belongs to Coventry University (UK), so I can't post that either.

The system workings are relatively simple:

The real scene (the arch) was modelled accurately with Max.

Each frame, the arch model is rendered into the Z-buffer, which takes care of the occlusion problem.
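A toy software sketch of that Z-buffer trick (sometimes called a "phantom object"): the model of the real scene writes depth only, never colour, so the camera image stays visible but still occludes the virtual cube wherever the real geometry is closer. The buffer layout and function names here are illustrative, not from the original project:

```python
import math

def make_buffers(w, h):
    # Depth starts at +inf; a colour of None means "show the camera image".
    depth = [[math.inf] * w for _ in range(h)]
    color = [[None] * w for _ in range(h)]
    return depth, color

def draw_phantom(depth, frags):
    # Real-scene ("phantom") geometry: update depth only, never colour,
    # so the live video stays visible where the real arch is.
    for x, y, z in frags:
        if z < depth[y][x]:
            depth[y][x] = z

def draw_virtual(depth, color, frags, rgb):
    # Virtual object: ordinary depth test, so it is hidden wherever
    # the phantom (i.e. the real arch) is closer to the camera.
    for x, y, z in frags:
        if z < depth[y][x]:
            depth[y][x] = z
            color[y][x] = rgb
```

With hardware rendering the same effect would come from disabling colour writes (e.g. glColorMask in OpenGL) while drawing the arch model, then drawing the cube with the depth test enabled as normal.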

The light in the scene comes from a spotlamp whose position I measured, so I could create correct-looking shadows (even if they are hard-edged).

The shadows are created by projecting caster polygons from the point light onto receiver polygon planes, and then clipping at the polygon edges.
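Projecting casters from a point light onto a receiver plane is commonly done with a 4x4 matrix built from the light position and the plane equation, M = (P·L)I - L·Pᵀ; a minimal sketch of that construction (the clipping at polygon edges mentioned above is a separate step and omitted here):

```python
def shadow_matrix(plane, light):
    # plane = (a, b, c, d) with a*x + b*y + c*z + d = 0 (the receiver)
    # light = (lx, ly, lz, lw); lw = 1 for a point light
    dot = sum(p * l for p, l in zip(plane, light))
    # M = dot*I - outer(light, plane): points on the plane map to
    # themselves, and every other point lands on the plane along the
    # ray from the light through that point.
    return [[dot * (i == j) - light[i] * plane[j] for j in range(4)]
            for i in range(4)]

def project(m, v):
    # Multiply homogeneous vertex v by m, then do the perspective divide.
    out = [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]
    w = out[3]
    return tuple(c / w for c in out[:3])
```

Rendering each caster polygon's vertices through this matrix, flattened onto the receiver plane and drawn in the shadow colour, gives the hard-edged shadows described in the post.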

The collision detection is very basic, i.e. box to box.
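Box-to-box collision of that kind reduces to an axis-aligned bounding-box (AABB) overlap test: two boxes intersect exactly when their extents overlap on every axis. A minimal sketch (the coordinates in the usage below are made up):

```python
def aabb_overlap(a, b):
    # Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)).
    # Boxes are disjoint if they are separated along any one axis.
    (a_min, a_max), (b_min, b_max) = a, b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))
```

In the project this would be evaluated each frame between the cube's box and a box around the arch model, blocking mouse moves that would cause interpenetration.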

One of the biggest problems was getting correct registration between the virtual and real scenes, and setting up the virtual camera so that the whole scene looked OK.

I was also lucky enough to present a paper (Computer-Augmented Reality Visualisation for Architecture, Engineering and Construction) at a conference in March 2000. The paper was written by two lecturers and included my work. I'll find out if I'm able to post it.



April 28, 2001, 06:47 AM

A friend did a funky project for his 3rd year in our cybernetics department. The idea was:
- have a robot moving around, using ultrasonics to spot obstacles.
- let the user control the robot with a joystick. The user wore a ski mask, and could only see LEDs representing the ultrasonic pulses that the robot was receiving from any obstacles around.
- a camera viewed the room, and picked out the robot's position and direction using infrared LEDs on the top. This was used to compare the human's and the robot's movement patterns.
- the data from the camera & the ultrasonic transducers (received via radio) was used to recreate the world from the robot's point of view on the computer, with a 3D world & obstacles appearing as the robot spotted them.

It's kinda augmented reality... more or less. Maybe it's inverted augmented reality. Ahh well.



April 28, 2001, 09:19 AM

Hey everyone,
I have a theory, which so far has proven to be true:

Every coder played a lot with Lego in their childhood!

So, are there any coders out there who did NOT build castles and spaceships using the little pieces of plastic?



Björn Aspernäs

April 28, 2001, 10:01 AM

Nice work, it's very impressive. Are you or anyone else developing it further, or using it somewhere?

At first I thought you were working on the same thing as me, but I see that it's a bit different. My master's project at university deals with computer vision, with volumetric dense reconstruction (not in real time, though). I've been thinking about making a contribution to the IOTD; I'll post it when my report's done.



April 28, 2001, 12:28 PM

Me. I was never a big fan of legos when I was a child. Though I was always the person my brothers came to when they couldn't get their new neato-keen lego set put together. :)



April 28, 2001, 01:29 PM

Dude, I think you are missing the point. Augmented Reality is not the same thing as Computer Vision. Augmented Reality could *use* computer vision to construct the geometry it needs for the virtual-real object interaction, but it doesn't have to.

The geometry can be supplied by the developer, and then all you need is the user's exact position and head orientation (which isn't too hard if they're wearing, say, a helmet).

I think this is a very cool IOTD, because AR is an extremely interesting prospect for the future. The possibilities for it are endless.


user name

April 28, 2001, 02:25 PM

Hey, wouldn't it be cool to do something like this using a VR helmet?

Imagine walking around your own house playing a version of "House of the Dead"!

Porn would be cool too, though :)


April 28, 2001, 02:32 PM

Okay, there are lots of comments this time around! Let me ask my own question; I don't know if it's been brought up already, because it'd take a while to read all the posts.

How do you get the 3D information into the program to determine the shadowing/interaction? Is it inferred from the image by some algorithms or is it known beforehand, either modeled separately or from the real-world lego thing?


This thread contains 56 messages.