
Thread: 16 bits unsigned texture

  1. #1
    Intern Contributor
    Join Date
    Aug 2000
    Location
    Haifa, ISRAEL
    Posts
    82

    16 bits unsigned texture

    Hello all,
    I can't find a way to define a 16 bits unsigned texture without losing precision. In Any type of definition I try (GL_ALPHA16, GL_LUMINANCE16 etc), the hardware downscale the texels into 8 bits registers that when read by the fragment shader gives me no more than 256 values.
    I thought that maybe defining it as a depth texture I can get something (more bits to the depth values), but I am not sure this is the right direction.
    Any ideas will be highly appreciated.
    Thanks,
    Yossi
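
    The precision loss described above can be reproduced off-line: if the driver stores a GL_LUMINANCE16 texture in 8-bit registers, every 16-bit texel is effectively quantized to one of 256 normalized values. A minimal sketch of that quantization in plain C (no GL context; the round-trip mirrors the usual unsigned fixed-point normalization rules, and is an illustration, not the driver's actual code path):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Normalize a 16-bit texel to [0,1], as the GL does for unsigned
       fixed-point formats, then quantize to 8 bits of storage. */
    static uint8_t quantize_to_8(uint16_t texel)
    {
        double normalized = texel / 65535.0;
        return (uint8_t)(normalized * 255.0 + 0.5);
    }

    int main(void)
    {
        /* Feed the full 16-bit ramp through 8-bit storage and count
           how many distinct values survive. */
        int seen[256] = {0};
        int distinct = 0;
        for (uint32_t t = 0; t <= 65535; ++t) {
            uint8_t q = quantize_to_8((uint16_t)t);
            if (!seen[q]) { seen[q] = 1; ++distinct; }
        }
        printf("distinct shader-visible values: %d\n", distinct); /* 256 */
        return 0;
    }
    ```

    One way to confirm what the driver actually allocated is to query the texture after creation with glGetTexLevelParameteriv and GL_TEXTURE_LUMINANCE_SIZE; a result of 8 rather than 16 would confirm the downscale.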

  2. #2
    Member Regular Contributor
    Join Date
    Apr 2002
    Location
    Austria
    Posts
    328

    Re: 16 bits unsigned texture

    I noticed a similar problem using GeForce FX hardware. Which hardware do you use?

  3. #3
    Intern Contributor
    Join Date
    Aug 2000
    Location
    Haifa, ISRAEL
    Posts
    82

    Re: 16 bits unsigned texture

    I am using the 3Dlabs Wildcat VP990 Pro card, based on the P10 processor.
    Yossi

  4. #4
    Junior Member Newbie
    Join Date
    Apr 2004
    Posts
    10

    Re: 16 bits unsigned texture

    The P10 only supports 8 bits per component.

  5. #5
    Intern Contributor
    Join Date
    Aug 2000
    Location
    Haifa, ISRAEL
    Posts
    82

    Re: 16 bits unsigned texture

    Thanks,
    Will the P20 support 16 bits per component?
    From what I've read it will support floating-point components, so I guess there will be no accuracy problem related to integer math.
    By the way, does that mean computations through all the pipeline stages will be carried out in floating point, and converted to 32-bit pixels (8 bits per component) only just before writing to the framebuffer?

    Thanks again,
    Yossi
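
    On the last question: whether the P20 works exactly this way is speculation, but the final conversion step itself is just the standard clamp-and-scale from a floating-point fragment result to an 8-bit-per-component framebuffer value. A sketch in plain C (the helper name is hypothetical, for illustration only):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the float-to-fixed conversion applied when a
       floating-point fragment result is written to an 8-bit-per-
       component framebuffer: clamp to [0,1], scale, round. */
    static uint8_t float_to_unorm8(float component)
    {
        if (component < 0.0f) component = 0.0f;
        if (component > 1.0f) component = 1.0f;
        return (uint8_t)(component * 255.0f + 0.5f);
    }

    int main(void)
    {
        /* Intermediate math keeps full float precision; only the
           final write is quantized to 8 bits per component. */
        float shaded = 0.75f * 0.5f + 0.125f;  /* = 0.5, exact in float */
        printf("stored byte: %u\n", float_to_unorm8(shaded)); /* prints 128 */
        return 0;
    }
    ```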
