3D Theory & Graphics / DX Render Targets
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
jag

March 10, 2005, 05:43 PM

I'm trying to figure out a way to take an 8-bit image, upscale it to 16 bits, do some manipulations, tonemap it back down to 8 bits, and then display it. I was playing around with render target formats and found the following:

If I create a 16-bit render target using the D3DFMT_A16B16G16R16F format in DX9, and then render an 8-bit image to that target before splatting it on the screen via the 8-bit default render target, everything turns out fine. How is this working without tonemapping? What would I need to do to be able to access the 8-bit input image in a shader as if it were a 16-bit image?

 
Axel

March 10, 2005, 05:59 PM

I don't quite understand what you mean, but shaders use 32-bit (NVIDIA) / 24-bit (ATI) precision internally, and every format is converted accordingly on input and output.

 
jag

March 10, 2005, 06:26 PM

Hey Axel,


Do you know whether the 16-bit texture that the 8-bit image is converted into stores 0 in the high bytes, or whether it does a linear scaling of the bits? I would rather apply a square or something to preserve the darks and whites, since I have a display which has the ability to display 16-bit images.

 
Reedbeta

March 10, 2005, 07:01 PM

The format you're using is a floating point format. It is not like just using 16-bit integers instead of 8-bit ones for the pixel intensities. When you render the 8-bit image into the 16-bit buffer, the 8-bit image is mapped onto the [0,1] range and stored as floating point. If you then render it to a screen, it gets clamped to [0,1] and converted back to 8-bit. Obviously, nothing is going to change.
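Reedbeta's round trip can be sketched in Python with NumPy (my sketch, not from the thread; the values are hypothetical). Float16 has enough mantissa precision to represent every 8-bit level exactly enough to requantize, so the image comes back unchanged:

```python
import numpy as np

# Simulate the round trip: 8-bit image -> half-float render target
# (values mapped to [0,1]) -> clamp -> back to the 8-bit back buffer.
pixels_8bit = np.array([0, 1, 64, 128, 255], dtype=np.uint8)

# Rendering into D3DFMT_A16B16G16R16F: values land in [0,1] as float16.
as_half = (pixels_8bit / 255.0).astype(np.float16)

# Writing to the 8-bit back buffer: clamp to [0,1] and requantize.
back_to_8bit = np.round(
    np.clip(as_half.astype(np.float32), 0.0, 1.0) * 255.0
).astype(np.uint8)

# Nothing changed -- which is exactly why no tone mapping was needed.
assert np.array_equal(pixels_8bit, back_to_8bit)
```

The same holds for all 256 input levels, so the 16-bit target is a lossless pass-through here.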

Tone mapping is a way to map a much larger range of intensities, such as [0, 2^15] (or whatever the maximum magnitude of the 16-bit float is) down to [0, 1]. But if you never put intensities greater than 1 into your floating-point buffer, tone mapping will be useless.
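To illustrate the point: tone mapping only does something once the buffer holds intensities above 1. A minimal sketch using Reinhard's operator x/(1+x) (my example choice, not something from the thread), which compresses [0, ∞) into [0, 1):

```python
import numpy as np

# Hypothetical HDR intensities; 65504 is the actual float16 maximum.
hdr = np.array([0.0, 0.5, 1.0, 4.0, 100.0, 65504.0])

# Reinhard tone mapping: values above 1 are compressed instead of lost.
tone_mapped = hdr / (1.0 + hdr)
assert np.all(tone_mapped >= 0.0) and np.all(tone_mapped < 1.0)

# Without tone mapping, a plain clamp discards everything above 1.
clamped = np.clip(hdr, 0.0, 1.0)
```

If the buffer never contains values above 1, `tone_mapped` and `clamped` differ only by a mild darkening, so tone mapping buys nothing, as Reedbeta says.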

 
jag

March 10, 2005, 07:23 PM

I see,

Yeah, for now I will probably just do a linear map from 8 bits to 16 bits, where each 8-bit value is copied and the index into the 8-bit array is incremented once for every 256 values of the 16-bit buffer.
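The linear map described above can be sketched as follows (my sketch; the `* 257` variant is my addition, a common alternative that uses the full 16-bit range):

```python
import numpy as np

# Each 8-bit value owns a block of 256 consecutive 16-bit codes,
# i.e. the expansion is a shift left by 8 (v * 256).
v8 = np.arange(256, dtype=np.uint16)
v16_shift = v8 << 8      # 255 -> 65280: full 16-bit white is never reached

# Alternative: multiply by 257, mapping 0 -> 0 and 255 -> 65535,
# so the whole 16-bit range is used.
v16_full = v8 * 257

assert v16_full[-1] == 65535
assert np.all((v16_shift >> 8) == v8)  # the shift map inverts cleanly
```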

 
This thread contains 5 messages.
 
 