Using basic OpenGL rendering results for software rendering

hi,

I’m writing a software renderer which does not use polygons for the final render; however, the first steps of this renderer do use polygon rendering, so they could be hardware accelerated with OpenGL.
I was wondering what the best way is to use OpenGL’s hardware acceleration capabilities without using its output as the final result, but instead feeding the OpenGL rendering into further software processing.
As I see it, it would be great if OpenGL could do the polygon transformation and coloring; I could then deduce each pixel’s UV texture coordinates just by looking at the color OpenGL drew on it (if I draw gradients of red and green from vertex to vertex, reading the red and green values back would give me the u and v coordinates).
However, I don’t know if this is the best way, and one problem is that glReadPixels is very slow; if I have to call OpenGL several times to render some polygons and then save the result with glReadPixels each time, it could be painfully slow…
Is there something better than glReadPixels (ARB powered?), and what are your general thoughts about this? Do you have suggestions, etc.?
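
To make the idea concrete, here is roughly what I have in mind, just as a minimal sketch (the triangle data, viewport size and decoding are made-up examples, not my actual code):

```cpp
// Sketch: draw one triangle with vertex colors encoding its UVs,
// then read the framebuffer back so each pixel's R/G gives u/v.
// Lighting and texturing must be off so the colors stay untouched.
#include <GL/gl.h>
#include <vector>

void renderUVPass(int width, int height)
{
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glShadeModel(GL_SMOOTH);          // interpolate the colors across the face

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);
        // color = (u, v, 0) at each vertex; example values only
        glColor3f(0.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -5.0f);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -5.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.0f,  1.0f, -5.0f);
    glEnd();

    // Read the whole buffer back for software processing.
    std::vector<unsigned char> pixels(width * height * 3);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    // Decode one pixel: u = red / 255, v = green / 255
    // (only 8 bits of precision each with a standard color buffer).
    float u = pixels[0] / 255.0f;
    float v = pixels[1] / 255.0f;
    (void)u; (void)v;
}
```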

Thanks !

Reading texture coordinates back via colors seems pointless to me: normally you have to specify them, so you already know them, and it makes no sense to read them again, unless you are using some type of automatic texture coordinate generation.

But what do you want to do anyway? Does it really make sense NOT to use OpenGL as the final renderer? Maybe you should explain what exactly you are planning to do…

Jan

“unless you are using some type of automatic texture coordinate generation.”

Yes, that’s why I talk about gradients of red and green: since they are automatically interpolated by OpenGL, they let me retrieve every pixel’s UV coordinates without computing them on the CPU.

“But what do you want to do anyway? Does it really make sense NOT to use OpenGL as the final renderer?”

Actually, the problem is that the final render is not polygon based; it would be a kind of volumetric renderer, with displacement maps that would not generate new triangles but would be rendered directly with raytracing. However, the first step of this rendering is polygon based: I define volumes using classical faces, and I need to know the UV coordinates at every pixel of these transformed volumes.
Then I do further processing based on this primary information.

Hmm, if you get texture coordinates this way, their precision may not be very high (8-bit color buffers give you only 256 distinct values per axis, i.e. texel accuracy on at best a 256×256 texture).
Together with a full-screen (I assume) pixel read, maybe you would really be better off computing the texcoords yourself (would the colors even give you perspective-correct texture coordinates, or are they interpolated using simple Gouraud shading?)

I use a precalculated texture colour map to get around the 8-bit resolution issue: 11 bits for U and 11 bits for V (a 2k×2k texture). There is a problem with using ‘ID rendering’ via texture maps though: NVIDIA’s Intellisample set to ‘high performance’ breaks it, and you can’t switch it off.
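
In case it helps, here is roughly how such a lookup texture can be built (just a sketch; how the 22 bits are split across the R/G/B channels is up to you, this packing is only one example):

```cpp
// Sketch: build a 2048x2048 lookup texture whose texel at (u, v)
// encodes the 11-bit indices u and v across its R/G/B channels.
// Texture the geometry with this map (nearest filtering, no mipmaps,
// no lighting), read the color back, and unpack to get 11+11-bit UVs.
#include <GL/gl.h>
#include <vector>

GLuint createUVLookupTexture()
{
    const int SIZE = 2048;                       // 2^11 texels per axis
    std::vector<unsigned char> texels(SIZE * SIZE * 3);

    for (int v = 0; v < SIZE; ++v)
    {
        for (int u = 0; u < SIZE; ++u)
        {
            unsigned int packed = (u << 11) | v;   // 22 bits total
            unsigned char* p = &texels[(v * SIZE + u) * 3];
            p[0] = (packed >> 16) & 0xFF;          // R: top 6 bits
            p[1] = (packed >>  8) & 0xFF;          // G: middle 8 bits
            p[2] =  packed        & 0xFF;          // B: low 8 bits
        }
    }

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Nearest filtering so the encoded IDs are never blended together.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, SIZE, SIZE, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, texels.data());
    return tex;
}

// Decoding a read-back pixel:
//   packed = (r << 16) | (g << 8) | b;
//   u = packed >> 11;  v = packed & 0x7FF;
```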

ReadPixels on NVIDIA cards can reach about 200 MB/s, ATI about 70 MB/s. Take a look at NVIDIA’s PDR extension for maximum ReadPixels performance. Also search this board; there are plenty of posts about improving ReadPixels speed.
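
On the “ARB powered?” part of the original question: one alternative to the vendor-specific PDR path is the ARB pixel buffer object extension, which lets glReadPixels pack into a buffer object instead of stalling while copying to client memory. A minimal sketch, assuming the extension is available and its entry points have been loaded by your extension loader:

```cpp
// Sketch: readback through a pixel pack buffer (GL_ARB_pixel_buffer_object).
// glReadPixels packs into the buffer object; the data is touched only when
// the buffer is mapped, so other work can be scheduled in between.
#include <GL/gl.h>
#include <GL/glext.h>

void readbackWithPBO(int width, int height)
{
    GLuint pbo = 0;
    glGenBuffersARB(1, &pbo);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
    glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, width * height * 3,
                    0, GL_STREAM_READ_ARB);

    // With a pack buffer bound, the last argument is an offset, not a pointer.
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0);

    // ... ideally do other CPU/GPU work here before mapping ...

    const unsigned char* pixels = (const unsigned char*)
        glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
    if (pixels)
    {
        // process the pixels in software here
        glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
    }

    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
    glDeleteBuffersARB(1, &pbo);
}
```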