[QUOTE=theeta357;1252880]I’m trying to run an OpenGL sample on a window with depth 32.
I tried running the sample on a window with depth 24 and it runs fine.
The attributes used for choosing the visual were:
GLint att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
[/QUOTE]
GLX_DEPTH_SIZE is the number of bits in the depth buffer, not the “depth” (bits per pixel) of the colour buffer.
glXChooseVisual() doesn’t allow you to filter directly on the number of bits per pixel. For colour-index visuals, you can specify GLX_BUFFER_SIZE; for RGB/RGBA visuals, you specify GLX_RED_SIZE etc individually.
[QUOTE=GClements;1252884]GLX_DEPTH_SIZE is the number of bits in the depth buffer, not the “depth” (bits per pixel) of the colour buffer.
glXChooseVisual() doesn’t allow you to filter directly on the number of bits per pixel. For colour-index visuals, you can specify GLX_BUFFER_SIZE; for RGB/RGBA visuals, you specify GLX_RED_SIZE etc individually.[/QUOTE]
Thank you for making it clear.
But even after specifying GLX_BUFFER_SIZE as 32, the visual which gets returned doesn’t have a depth of 32 (vi->depth is still 24).
Below are the new attributes:
GLint att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_BUFFER_SIZE, 32, GLX_DOUBLEBUFFER, None };
GLX_BUFFER_SIZE is only applicable to colour-index visuals. It is ignored if you specify GLX_RGBA.
Also, I would expect the visual’s depth to be 24 for a 32-bpp RGBA visual, as X itself doesn’t use an alpha channel. From X’s perspective, the “depth” is the number of bits actually used for the RGB data, and doesn’t include any padding bits. On a modern system, the screen typically has a depth of 24 and a bits_per_pixel of 32.
The “depth” value you see when using xdpyinfo is not related to the OpenGL depth buffer.
Use glxinfo instead. Personally, I have never seen a 32-bit depth buffer.
Even on an NVIDIA Quadro 5000, the depth buffer has only 24 bits.
[QUOTE=RigidBody;1252970]The “depth” value you see when using xdpyinfo is not related to the OpenGL depth buffer.
Use glxinfo instead. Personally, I have never seen a 32-bit depth buffer.
Even on an NVIDIA Quadro 5000, the depth buffer has only 24 bits.[/QUOTE]
Well, I understood that the depth value in the visual is not actually the size of GL’s depth buffer.
I solved my problem by changing the window from a depth-32 visual to a depth-24 visual (technically, I changed the visual of the X window before it gets realized),
and now the GL program runs fine on it.
But I still haven’t figured out a way to query a visual with depth 32 through glXChooseVisual(…).
I will use glxinfo as you said to figure this out…
I was trying to say that your graphics card will most probably not support a visual with a 32-bit depth buffer,
because 24 bits are enough for a z-buffer, and building a graphics card with a 32-bit z-buffer when 24 bits are sufficient would be some kind of waste.