Fragment programs and gl{Draw,Copy}Pixels

Hi,

I am wondering whether I should expect the glDrawPixels and glCopyPixels operations to generate fragments and send them through a fragment program I’ve loaded.

My reading of the man pages and the fragment program spec suggests they should, but actual testing suggests otherwise. I can set up a fragment program and issue glDrawPixels, glCopyPixels, and glRect calls one immediately after the other, and only the glRect call seems to generate fragments that go through the fragment program.

On the other hand, the man page at http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/gl/copypixels.html says

The GL then converts the resulting indices or RGBA colors to fragments by attaching the current raster position z coordinate and texture coordinates to each pixel, then assigning window coordinates (x + i, y + j), where (x, y) is the current raster position, and the pixel was the ith pixel in the jth row. These pixel fragments are then treated just like the fragments generated by rasterizing points, lines, or polygons. Texture mapping, fog, and all the fragment operations are applied before the fragments are written to the frame buffer.

The extension document for GL_ARB_fragment_program also seemed to suggest that pixels from these calls would be fed through an enabled fragment program.

Any help with this, or general advice for feeding pixels I’ve already rendered back through a fragment program (maybe with texture hacks), would be appreciated. (I am working on a machine that generally has OpenGL 1.4 support.)

Kevin

It should work, but it might not be hardware accelerated on all implementations, or might simply not be implemented in the one you tested.
The fast way to do this is to download the data into a texture (glDrawPixels => glTexImage2D, glCopyPixels => glCopyTexImage2D) and draw a textured quad at the destination.
The *Pixels functions use the current raster position and also generate z over the written area. Make sure you disable the depth test if you don’t need it.

Ah, thanks, Relic. It’s a shame the OpenGL implementation I’m working with is broken this way. For reference, it’s a laptop with Intel’s 915GM chipset:

GL_VENDOR: Intel
GL_RENDERER: Intel 915GM
GL_VERSION: 1.4.0 - Build 4.14.10.3984

I suppose this leads to a follow-up question: generally, how does one know which parts of the OpenGL spec are likely to be implemented incorrectly or sketchily on common hardware? It is irritating that my glCopyPixels hack is supposed to work but fails without any error indication on my hardware. (I did test the glCopyTexImage2D-plus-textured-rectangle strategy, and it works, but it obviously incurs an extra copy.)

I’ve never used an Intel OpenGL implementation, which should suffice as a comment. :wink:
If it doesn’t work, update the drivers; if the error persists, file a bug report.
The texture image detour is the workaround of choice: it is definitely on the fast path, because it is what applications using fragment programs do for standard rendering anyway.
The texture download and texture copy operations should be similarly well optimized, unless you use non-default pixel transfer maps and the like.

Kevin,

glDrawPixels goes through the rasterizer (the pixels are rasterized). Anything that comes out of the rasterizer can be accessed inside the fragment shader; you don’t need to enable a vertex shader for this.

As far as the Intel graphics chipset goes, there is not much you can do about it. Does it actually support vertex/fragment shaders?

Yeah, the chipset does support fragment programs under its OpenGL 1.4 implementation (but not the shading language). I can put three calls in a row:

glRect(…)
glDrawPixels(…)
glCopyPixels(…)

and all three cause drawing to happen in the framebuffer, but only the glRect fragments actually go through my fragment program. The fragments from glDrawPixels and glCopyPixels are drawn, but they do not pass through the fragment program (and apparently aren’t textured, lit, fogged, etc. by the regular fragment path either). To put it another way, glDrawPixels and glCopyPixels appear to behave like plain blits rather than going through any part of the pipeline.

I haven’t tried or used vertex programs yet. Most of what I’m thinking about doing is per-fragment stuff, for a 2D vector graphics renderer.