Pixel Data Range problem

Has anyone got the pixel data range extension to work?

I can’t get asynchronous behaviour from glReadPixels.

The spec says I have to use BGRA which means I lose 25% performance since I don’t need the alpha channel.
The spec is here http://www.nvidia.com/dev_content/nvopenglspecs/GL_NV_pixel_data_range.txt

I am using the Detonator 41.09 drivers on a GeForce4 Ti 4600.

I think I am getting synchronous behaviour: if I use the data from glReadPixels immediately after the call, it is all there, which implies the driver waited for the readback to finish before returning. It also runs at exactly the same speed as when I don’t use the extension.
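
This is the pattern I'm trying to get to, in case I've misunderstood what "asynchronous" should look like here. I'm assuming NV_fence is the intended way to find out when the readback has actually landed (the spec doesn't guarantee the data is valid until you synchronise somehow); DoOtherCpuWork() and UseData() are just stand-ins for my own code:

GLuint fence;
glGenFencesNV(1, &fence);   // NV_fence entry points, loaded with wglGetProcAddress like the PDR ones

glReadPixels(AccuracyDiv2, AccuracyDiv2,
             AccuracyMult2, AccuracyMult2,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, big_array);
glSetFenceNV(fence, GL_ALL_COMPLETED_NV);   // should return immediately if the readback is async

DoOtherCpuWork();                           // overlap CPU work with the transfer

glFinishFenceNV(fence);                     // only block here, when the data is actually needed
UseData(big_array);

glDeleteFencesNV(1, &fence);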

I’ve tried changing the read/write frequency and priority hints passed to wglAllocateMemoryNV, but it makes no difference.
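
For reference, these are the knobs I mean. The signature is the one in wglext.h, but the comments about where the memory ends up are only my understanding of the hints and could easily be wrong; mem_a and mem_b are throwaway names just to show the calls:

// void* wglAllocateMemoryNV(GLsizei size,
//                           GLfloat readFrequency,    // hint: how often the CPU will read it
//                           GLfloat writeFrequency,   // hint: how often the CPU will write it
//                           GLfloat priority);        // hint: roughly video (high) / AGP / system (low)
GLsizei size = (Accuracy2 * Accuracy2 * 4) + 32;
void* mem_a = wglAllocateMemoryNV(size, 1.0f, 0.0f, 1.0f);   // what the code below uses
void* mem_b = wglAllocateMemoryNV(size, 1.0f, 0.0f, 0.5f);   // same hints, different priority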

I suspect glPixelDataRangeNV is failing, but I don’t know why, or how to debug it.
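
The only checks I can think of are the extension string and glGetError straight after setting the range. Nothing PDR-specific, but here it is in case I'm doing even that wrong:

// Verify the extension is exported, then look for a GL error right after the call.
const char* ext = (const char*)glGetString(GL_EXTENSIONS);
if (ext == NULL || strstr(ext, "GL_NV_pixel_data_range") == NULL)
    printf("GL_NV_pixel_data_range is not in the extension string\n");

glPixelDataRangeNV(GL_READ_PIXEL_DATA_RANGE_NV, Accuracy2*Accuracy2*4, big_array);

GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("glPixelDataRangeNV generated error 0x%x\n", err);   // e.g. GL_INVALID_VALUE for a bad range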

If the NVIDIA guys are reading, a demo would be good, as would accelerated readback of plain RGB.

Here’s the code:

unsigned char* big_array_base = 0;   // unaligned pointer returned by wglAllocateMemoryNV
unsigned char* big_array      = 0;   // 32-byte aligned pointer used everywhere else

wglAllocateMemoryNV = (PFNWGLALLOCATEMEMORYNVPROC)wglGetProcAddress("wglAllocateMemoryNV");
if (NULL == wglAllocateMemoryNV) {PDROK = 0;}
wglFreeMemoryNV = (PFNWGLFREEMEMORYNVPROC)wglGetProcAddress("wglFreeMemoryNV");
if (NULL == wglFreeMemoryNV) {PDROK = 0;}

glFlushPixelDataRangeNV = (PFNGLFLUSHPIXELDATARANGENVPROC)wglGetProcAddress("glFlushPixelDataRangeNV");
if (NULL == glFlushPixelDataRangeNV) {PDROK = 0;}
glPixelDataRangeNV = (PFNGLPIXELDATARANGENVPROC)wglGetProcAddress("glPixelDataRangeNV");
if (NULL == glPixelDataRangeNV) {PDROK = 0;}

// Over-allocate by 32 bytes so the buffer can be aligned to a 32-byte boundary.
big_array_base = (unsigned char*)wglAllocateMemoryNV((Accuracy2*Accuracy2*4) + 32, 1.0f, 0.0f, 1.0f);
if (NULL == big_array_base) {PDROK = 0;}

// Round up to the next 32-byte boundary, keeping the original pointer for wglFreeMemoryNV later.
big_array = big_array_base + (32 - (((size_t)big_array_base) % 32)) % 32;

// Declare the usable buffer size (not the over-allocated size) as the read range,
// so the range never extends past the end of the allocation.
glPixelDataRangeNV(GL_READ_PIXEL_DATA_RANGE_NV, Accuracy2*Accuracy2*4, big_array);
glEnableClientState(GL_READ_PIXEL_DATA_RANGE_NV);

glReadPixels(AccuracyDiv2,
             AccuracyDiv2,
             AccuracyMult2,
             AccuracyMult2,
             GL_BGRA_EXT,
             GL_UNSIGNED_BYTE,
             big_array);

btw Accuracy2=512
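
For completeness, the matching teardown would be something like this. The order is just my reading of the spec; the main point is that wglFreeMemoryNV has to get back the original unaligned pointer, which is why big_array_base is kept around:

// Stop using the range before freeing the memory behind it.
glDisableClientState(GL_READ_PIXEL_DATA_RANGE_NV);
wglFreeMemoryNV(big_array_base);   // must be the pointer wglAllocateMemoryNV returned
big_array = big_array_base = NULL;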


>>The spec says I have to use BGRA which means I lose 25% performance since I don’t need the alpha channel.<<

i suck at math (though this is really just logic): is it 25% or 33%?
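
(if my arithmetic is right: a BGRA pixel is 4 bytes and an RGB pixel is 3, so the unused alpha is 1/4 = 25% of what actually gets read back, but that readback is 4/3, i.e. about 33%, bigger than a pure RGB one would be. so i guess both numbers are right depending on the baseline)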

anyway, i believe on most hardware (including GeForces) there will be no speed difference between the two, because RGB will actually get padded out to RGBA internally, so don't lose sleep over it.