3D Theory & Graphics / Full speed on multiple monitors
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
Dr. Necessiter

March 04, 2005, 03:09 PM

Is there a way to get full-speed DX acceleration on a secondary monitor? I'm still using DX8. DX9 doesn't magically solve this, does it?


March 23, 2005, 10:55 AM

Not from what I've seen. It is still slow, at least in our engine with no code modifications. You *might* be able to do something fancy with swap chains or something, but I don't know how it would be done.

Dr. Necessiter

March 23, 2005, 01:53 PM

Hey, thanks for responding. I was surprised nobody else has messed with this.

What makes me scratch my head is that in windowed mode the "flip" is just done with a glorified blit anyway. Why would DX care where that memory gets copied to? I'm sure there is some hardware "gotcha", but damn... it would be cool to have full speed on both monitors.

Jari Komppa

March 23, 2005, 01:56 PM

I doubt it's possible.

Take any 3d-accelerated, windowed application with FPS counter. Watch it for a while in primary monitor. Then drag the window to the secondary monitor, and watch the FPS counter again.

I'm not sure about all systems out there, but the ones I've played with have worse FPS in the secondary monitor.


March 23, 2005, 02:41 PM

This is possible to do. You need to reinitialize the whole device on the new monitor, so you need to detect when your window moves to the other device. That is why you see the large drop in frame rate in most apps. There are examples of this in DirectX9; I think all of the DirectX9 samples work like this by default.
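In real code that detection would go through MonitorFromWindow() and IDirect3D9::GetAdapterMonitor() to see which adapter owns the window, then the device would be released and recreated when the answer changes. Here is a minimal sketch of just the "which monitor owns the window" part, with plain rectangles standing in for the Win32/D3D calls (the types and function names are hypothetical, not any real API):

```cpp
#include <algorithm>

// Hypothetical stand-in for a window or monitor rectangle in desktop
// coordinates (real code would use RECT from the Win32 headers).
struct Rect { int left, top, right, bottom; };

// Area of the intersection of two rectangles; zero if they don't overlap.
int overlapArea(const Rect& a, const Rect& b) {
    int w = std::min(a.right,  b.right)  - std::max(a.left, b.left);
    int h = std::min(a.bottom, b.bottom) - std::max(a.top,  b.top);
    return (w > 0 && h > 0) ? w * h : 0;
}

// Pick the adapter whose monitor contains most of the window. When this
// index changes between frames, that is the point where you would tear
// down and recreate the D3D device on the new adapter.
int pickAdapter(const Rect& window, const Rect* monitors, int count) {
    int best = 0, bestArea = -1;
    for (int i = 0; i < count; ++i) {
        int area = overlapArea(window, monitors[i]);
        if (area > bestArea) { bestArea = area; best = i; }
    }
    return best;
}
```

Checking this once per frame (or on WM_MOVE) is cheap; the expensive part is the device recreation itself, which is why you only want to do it when the winning adapter actually changes.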

In windowed mode it might very well be just a blit, but you have to keep in mind that the gfx card needs to have all the textures in its memory. So if you move to another monitor, the driver will be forced to read the frame from the first gfx card and then blit it into the second card's memory.

Dr. Necessiter

March 23, 2005, 04:12 PM

I'm sorry, I should have mentioned that the situation I have is two monitors on the SAME graphics card. As noted above, once a window touches the second monitor, performance drops by 80%. Just seems strange... it's all the same memory, after all (I think!)


March 25, 2005, 03:18 AM

If you want to run something at full speed on whatever monitor, like Joaeri said, just initialize the proper device associated with that monitor.

Having a window partially on one monitor and partially on another will always be slow.
The only way to solve that problem would be to create 2 windows, one for each monitor, each with their own device etc., and do the rendering separately for each monitor.
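The structure of that approach can be sketched with placeholder types (in real code each head would hold an HWND and an IDirect3DDevice9*, and every texture and vertex buffer would have to be created once per device — the names below are hypothetical):

```cpp
#include <vector>

// Hypothetical stand-in for one monitor's window + device pair.
struct MonitorHead {
    int adapter;              // adapter ordinal this device was created on
    int framesPresented = 0;

    // Real code: BeginScene / draw / EndScene / Present on this device,
    // using this device's own copy of every resource.
    void renderFrame() { ++framesPresented; }
};

// One window and one device per monitor, rendered back to back each frame.
void renderAll(std::vector<MonitorHead>& heads) {
    for (auto& h : heads)
        h.renderFrame();
}
```

The obvious cost is the duplicated resources and the duplicated draw calls; the benefit is that each device presents only to its own monitor, so neither one ever takes the cross-adapter copy path.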


March 25, 2005, 01:03 PM

If you want to go full-screen exclusive on multiple monitors connected to the same video card, you can specify D3DCREATE_ADAPTERGROUP_DEVICE when you create your D3D device.

This doesn't solve the windowed case. In windowed mode, you are stuck creating multiple devices, and replicating all your resources. I believe this is because internally Windows treats each head of a multi-head adapter as a separate device.


Dr. Necessiter

March 25, 2005, 02:28 PM

Wow... bummer. I can only imagine this is a "problem" with DirectX's architecture and not the hardware. It sure seems like in the windowed case it should be easy just to blit the finished frame from the backbuffer into whatever window you want.

John Dexter

March 31, 2005, 06:23 AM

You can do it full-speed as long as the window is always on one monitor or the other. If it spans both monitors then it'll suck. Maybe that helps you?
I thought some cards let you set your resolution to something like 2048x768 to get a true single desktop with a start bar spanning the monitors. Maybe you can do it on that hardware?

Dr. Necessiter

March 31, 2005, 08:32 AM

Actually, that isn't the behavior I'm seeing. I only get full-speed if the window is 100% on the "primary" monitor. The performance is killed if any or all of the window is on the "secondary" monitor.

I'll look into that "wide" resolution idea, although I've never seen that option in the setup on my NVIDIA or ATI based boards. But that's exactly how I would think it should behave. As I said... it's all the same memory, so it's odd that this would be such an issue.


April 02, 2005, 02:44 AM

I'm rendering videos at 200+ fps on the second monitor using multiple swap chains... but it has to be fullscreen.

Stephan Schaem

April 04, 2005, 08:39 PM

This Microsoft architecture design problem will be fixed in Longhorn...
be patient...


Steven Hansen

April 05, 2005, 04:55 PM

I don't have any solutions, but here is an explanation missing from the conversation.

Each display adapter (monitor) has its own exclusive display memory - even though a single video card drives both adapters. This memory is not shared between display adapters. I believe this shortcoming may be OS based. As you drag a window from one monitor to the next, the rendered frame must be copied to the other adapter's memory. Same video card - different adapter, different memory.

To bump up the speed once you move the window over to the secondary adapter, just recreate the device for the secondary adapter and do your rendering there. For most good video cards, rendering on the secondary adapter is as fast as for the primary adapter.

The directx framework usually recreates devices when windows are moved - you can look at their code for any insights.

Windows that span monitors will always be slow. The entire window must be rendered on one adapter, and part of the result is copied to the other adapter - ugh. Maybe a new OS will address this.

Dr. Necessiter

April 05, 2005, 08:17 PM

Awesome news about Longhorn. Are you a MS employee/MVP or otherwise have accurate knowledge of this? This is a BIG plus for one of my clients if it really works.


>Each display adapter (monitor) has its own exclusive display memory - even though a single video card drives both adapters.<

I assume you mean "logically" exclusive display memory? It would seem odd to require all these dual-headed adapters to have so much extra framebuffer memory onboard (5 meg or more) that is otherwise wasted in single-monitor mode. From what I know of graphics card vendors, they wouldn't waste memory like that unless they had no other choice. (Plus, it would contradict Stephan's comment.)

Even still, the odd thing is that the speed reduction is >far< more than the cost of the extra blit involved. I go from 160fps to 20. Surely the blit is not the problem. There must be some memory contention issue or fundamental architecture problem here.


April 06, 2005, 01:51 AM

isn't this problem with the d3d only?


Steven Hansen

April 06, 2005, 06:10 PM

I've been under the impression that D3D relegates the copy process to the operating system. In other words, the memory isn't copied from one area of video memory directly to another, but rather the entire frame is read back into system memory, processing is done to clip the needed portions, and then the information is copied back to video memory. This makes sense when you consider that spanning windows works not only for multi-head adapters, but for multiple video cards driving multiple heads as well.

Obviously, such a copy process is *incredibly* expensive - and would easily explain your dramatic frame-rate drop.
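To put rough numbers on that: a back-of-the-envelope sketch, using assumed (not measured) 2005-era AGP transfer rates, of how a full-frame round trip through system memory could pull 160 fps down into the 20-30 fps range reported earlier in the thread:

```cpp
// Estimate the frame rate after adding a readback + re-upload of the
// whole frame every frame. All bandwidth figures passed in are
// assumptions for illustration, not measurements.
double estimatedFps(double renderFps, double frameBytes,
                    double readbackBytesPerSec, double uploadBytesPerSec) {
    double renderSec = 1.0 / renderFps;                 // time to draw the frame
    double copySec   = frameBytes / readbackBytesPerSec // GPU -> system memory
                     + frameBytes / uploadBytesPerSec;  // system memory -> GPU
    return 1.0 / (renderSec + copySec);
}
```

With a 1024x768 32-bit frame (~3 MB) and assumed rates of ~150 MB/s for the slow GPU-to-system readback and ~500 MB/s for the upload back, the copy alone adds roughly 27 ms per frame, capping the result near 30 fps even though the rendering itself runs at 160 fps — the same ballpark as the drop described above.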

I think the fundamental architecture problem is that the OS is not really taking advantage of the fact that a single video adapter drives both monitors. Instead, the driver exposes two adapters, and the OS treats each adapter as though it were indeed a different hardware device. Thus, to move data from one adapter to another, it must read back the frame buffer, clip it, then send it to the other adapter. Sad.

This thread contains 17 messages.