fake bump mapping?

Is it possible to do “fake” bump mapping in OpenGL (without extensions)?

I remember doing it many years ago in VGA mode. I had a bump map, and when I drew the texture I added the bump value to the texture coordinates.

In one tutorial I read that I should draw the object with the texture, then shift the coordinates and draw it again, subtracting from the previous pass. But that gives me a one-color emboss image. What should I do to combine the bump map with the texture?

Sorry if this is muddled, but here's my explanation. When you did it in software, it sounds like you added the bump map texture's COLOR values to the color texture's TEXTURE COORDINATES as an offset. Using the result of one texture lookup as the coordinates of another like this is dependent texturing, and doing it in OpenGL requires very recent hardware and the NV_texture_shader or ATI_fragment_shader extensions. The technique you're describing appears in DirectX 6/7 as BUMPENV/EMBM, in DX8 as a pixel shader instruction (I forget which), and on NV hardware in OpenGL as OFFSET_TEXTURE_2D in NV_texture_shader. With ATI fragment shaders, I think you calculate a color result in a first pass through the combiners and then use it as a texture coordinate for the second pass. The technique does a not-very-robust approximation of reflective bump mapping for flat surfaces.
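
For reference, here's roughly what that software version looks like - a minimal sketch in C, with made-up names and encodings (bump[] holding signed per-texel offsets is an assumption about how your VGA-mode version worked):

    /* bump[] holds signed (ds,dt) offset pairs, tex[] is the color
       texture; both are w x h, and s,t are assumed in range */
    typedef struct { signed char ds, dt; } BumpTexel;

    unsigned int sampleOffsetBump(const BumpTexel *bump,
                                  const unsigned int *tex,
                                  int w, int h, int s, int t)
    {
        const BumpTexel *b = &bump[t * w + s];
        int s2 = (s + b->ds) % w;   /* the first lookup's result    */
        int t2 = (t + b->dt) % h;   /* perturbs the second's coords */
        if (s2 < 0) s2 += w;
        if (t2 < 0) t2 += h;
        return tex[t2 * w + s2];    /* dependent read */
    }

That per-pixel feedback from one lookup into the next is exactly what the fixed-function GL pipeline can't express without those extensions.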

Without those extensions, the only example I've seen of dependent texturing in GL is a technique for environment-mapped bump mapping. You draw the object into the framebuffer using an RGB-encoded normal, fetched from a normal map, as the color of each pixel (I think you also write destination alpha to tag which pixels belong to the object). Then you read the whole framebuffer (or the object's bounding box) back into an array with glReadPixels(), reset your projection and modelview transformations for window-space rendering, and draw alpha-tested/blended pixel-sized quads, using the RGB value from the array at each pixel as texcoords into a cube map. So you're actually redrawing the object one pixel at a time, with the color of each pixel in the first pass becoming the texture coordinate for that pixel in the second pass. It's the only way I know of to get per-pixel dependent texturing without extensions.
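
A rough sketch of the readback-and-redraw part, assuming the normal-encoded pass has already been drawn with destination alpha tagging the object's pixels. winw/winh are placeholders, and so is the texcoord decode - a real cube-map lookup would want glTexCoord3f with the decoded normal, and cube maps themselves need an extension anyway, which is why I suggest a sphere map below:

    #include <stdlib.h>
    #include <GL/gl.h>

    void redrawFromReadback(int winw, int winh)
    {
        GLubyte *pix = (GLubyte *)malloc(winw * winh * 4);
        int x, y;

        glReadPixels(0, 0, winw, winh, GL_RGBA, GL_UNSIGNED_BYTE, pix);

        /* reset transforms for window-space rendering */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, winw, 0, winh, -1, 1);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glBegin(GL_QUADS);
        for (y = 0; y < winh; y++) {
            for (x = 0; x < winw; x++) {
                GLubyte *p = &pix[(y * winw + x) * 4];
                if (p[3] == 0)
                    continue;       /* alpha tags object pixels */
                /* first-pass color becomes the second-pass texcoord
                   (placeholder decode - see the sphere-map version
                   further down) */
                glTexCoord2f(p[0] / 255.0f, p[1] / 255.0f);
                glVertex2i(x,     y);
                glVertex2i(x + 1, y);
                glVertex2i(x + 1, y + 1);
                glVertex2i(x,     y + 1);
            }
        }
        glEnd();

        free(pix);
    }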

I'm not sure how you could adapt this for offset bump mapping, because you need the original texture coords at each pixel as well as the offset from the bump map at each pixel. You also need to run the offset values through a 2x2 matrix transform to orient them so they match the orientation of the object and the light. You could draw the object normally to the screen, but writing the RGBA color as:

r = ds (s offset from the bump map texture)
g = dt (t offset from the bump map texture)
b = s (original s texcoord for the color texture at this pixel)
a = t (original t texcoord for the color texture at this pixel)

This would take two passes: one with a red/green-only write mask, using the bump map texture (with the ds and dt offsets encoded in the red and green channels respectively) to write ds and dt per pixel; and one with a blue/alpha-only write mask, using a texture that simply maps each fragment's s and t coordinates in the 0-1 range to blue and alpha colors in the 0-1 range. Now you've got ds, dt, s and t for every pixel in the framebuffer, and you read it all back with glReadPixels(). You'd also have to write to the stencil buffer in one of the first two passes to tag which pixels should be updated in the last pass (or add a third pass, after the readback, that draws the object into destination alpha only). Then on the CPU you transform each pixel's ds and dt (its red and green values) by your 2x2 matrix, add the transformed ds to s (i.e. s = (red,green)·(top row of the 2x2 matrix) + blue) and the transformed dt to t (i.e. t = (red,green)·(bottom row of the 2x2 matrix) + alpha), and use the results as the s and t texcoords for (stencil-tested or alpha-blended) pixel-sized quads, drawn in window coordinates and textured with your color texture. A sketch of that CPU step is below.
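
Per pixel, it would look something like this - p points at one read-back RGBA pixel, m is your 2x2 matrix (row-major here), and the 128 bias for the signed offsets is an assumption about how you'd encode ds/dt:

    #include <GL/gl.h>

    /* p = one read-back RGBA pixel, m = row-major 2x2 matrix */
    static void emitPerturbedTexcoord(const GLubyte *p, const float m[4])
    {
        float ds = (p[0] - 128) / 128.0f;  /* unbias the offsets    */
        float dt = (p[1] - 128) / 128.0f;
        float s  =  p[2] / 255.0f;         /* original color coords */
        float t  =  p[3] / 255.0f;

        /* orient the offset, then perturb the original coords */
        glTexCoord2f(s + m[0] * ds + m[1] * dt,
                     t + m[2] * ds + m[3] * dt);
    }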

Anyway, it would be better to do the RGB-normal dependent-texturing trick in two passes with a GL_SPHERE_MAP-style texture, which is in unextended OpenGL, instead of a cube map - rather than the three or four passes I described for the offset technique. It would probably look better than the offset version anyway.
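
If you go the sphere-map route, the per-pixel coordinate math is just the sphere-map formula applied on the CPU. A sketch, assuming a view vector of (0,0,1) and normals encoded as RGB = n*0.5+0.5:

    #include <math.h>
    #include <GL/gl.h>

    /* turn one read-back RGB-encoded normal into sphere-map coords */
    static void emitSphereMapTexcoord(const GLubyte *p)
    {
        float nx = p[0] / 127.5f - 1.0f;   /* decode RGB -> normal */
        float ny = p[1] / 127.5f - 1.0f;
        float nz = p[2] / 127.5f - 1.0f;

        /* reflect the view vector about the normal: r = 2(n.v)n - v */
        float rx = 2.0f * nz * nx;
        float ry = 2.0f * nz * ny;
        float rz = 2.0f * nz * nz - 1.0f;

        /* same mapping the GL sphere-map texgen uses */
        float len = 2.0f * sqrtf(rx * rx + ry * ry
                                 + (rz + 1.0f) * (rz + 1.0f));
        glTexCoord2f(rx / len + 0.5f, ry / len + 0.5f);
    }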

Needless to say, reading back the framebuffer like this and rebuilding it with pixel-sized quads will be really slow. I saw a demo of this kind of thing on a GF2 doing a single environment-mapped bump-mapped quad, and it ran at about 20fps in a small window. I'll see if it's still up on NVIDIA's site - then I can post a URL to some code that may make more sense than this reply.

All this assumes you actually want reflective bump mapping - AFAIK that's what offset bump mapping is usually used for. For straight emboss bump mapping you don't need to perturb the color texture at all - you just use the emboss result as a kind of cheap per-pixel lighting term. If you just want to give a non-reflective surface the appearance of bumps, simply applying the color texture with GL_MODULATE on top of the greyscale emboss result (from the offset-and-subtract texture blend you described in your post) could look alright. It will still look bump mapped, just without the perturbed-reflection effect you get from offset dependent texture reads.
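
A minimal sketch of that emboss-plus-modulate idea in unextended GL 1.1. Since plain GL has no subtractive blend, this uses the usual variant of adding an inverted copy instead: prebuild two luminance textures, half_bump = height/2 and inv_half_bump = (1 - height)/2, so the add works out to 0.5 + (H - Hshifted)/2. drawObject(shiftS, shiftT) is a hypothetical helper that draws the geometry with its texcoords shifted by the given amounts:

    /* pass 1: lay down the half-scaled height map */
    glBindTexture(GL_TEXTURE_2D, half_bump);
    glDisable(GL_BLEND);
    drawObject(0.0f, 0.0f);

    /* pass 2: add the inverted copy, coords shifted toward the
       light -> framebuffer holds 0.5 + (H - Hshifted)/2 */
    glBindTexture(GL_TEXTURE_2D, inv_half_bump);
    glDepthFunc(GL_LEQUAL);          /* repeat passes must pass depth */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    drawObject(lightS, lightT);

    /* pass 3: modulate the greyscale emboss with the color texture */
    glBindTexture(GL_TEXTURE_2D, color_tex);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);
    drawObject(0.0f, 0.0f);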

Also, NVIDIA has an extension called NV_texgen_emboss that does something very like this. They don't expose it on the GF3/4 because those cards can do better things, but they do support it on GF1/2 and maybe TNT. I don't think it's in their extension document anymore, but it should be in the extension registry. If you read the spec you might get some ideas - it describes the embossing technique, without the dependent texturing you said you implemented before, so it could be just what you need.
