huge quad

here's a tricky one:
i'm currently writing a submarine game, and everything was working fine until i decided to include a sea surface. this is simply one huge quad with an alpha value of 0.7 or so, positioned a bit above my landscape so that it overlaps the extents of my world a bit - the quad is something like 8000x8000 units (maybe more) and has a texture map applied to it. while this works fine on some OpenGL implementations (Voodoo2), it delivers incorrect results on ATI and GeForce boards: the surface is then only visible from a certain angle, almost perpendicular to the sea surface; as soon as i alter the angle, the surface becomes darker and fades away. moreover, the quad is NOT affected by glFog, which i have not had any trouble with up to now.
my only suspicion is that OpenGL might have trouble clipping large polygons, but that seems pretty ridiculous. i've played around with switching texturing, lighting, fog, materials, etc. on and off, but nothing at all worked.
any suggestions?

I doubt it is the size of the quad; size has no meaning in OpenGL. Last week I used a scale of 1 unit to 1 meter, the other day I had 1000 units to 1 meter, and today I made a quick game where the entire level - about the same size as a Quake 3 level - is less than 0.1 OpenGL units across. The only place size matters is with depth buffer ranges.
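(If anything, what matters for large scenes is the near/far ratio of the projection, since depth precision is dominated by the near plane. A minimal sketch with hypothetical values; aspect is assumed to be defined elsewhere:)

/* depth precision depends on the near/far ratio, not on absolute world scale */
gluPerspective(60.0,      /* vertical field of view, in degrees */
               aspect,    /* window aspect ratio (assumed defined elsewhere) */
               10.0,      /* pushing the near plane out helps precision far more... */
               20000.0);  /* ...than pulling the far plane in */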

Try making that huge quad several smaller quads instead.
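Something along these lines, for instance - a sketch only, where the grid size, tile size and texture tiling are all assumptions, and the water texture is assumed to be bound already:

GLfloat wh = 1000.0f;    /* water height (hypothetical) */
GLfloat tile = 320.0f;   /* a 16x16 grid of 320-unit tiles covers ~5120 units */
int i, j, n = 16;

glBegin(GL_QUADS);
glNormal3f(0.0f, 1.0f, 0.0f);
for (i = 0; i < n; i++)
	for (j = 0; j < n; j++) {
		GLfloat x = i * tile, z = j * tile;
		/* repeat the texture every 40 world units (hypothetical) */
		glTexCoord2f(x / 40.0f, z / 40.0f);                   glVertex3f(x, wh, z);
		glTexCoord2f((x + tile) / 40.0f, z / 40.0f);          glVertex3f(x + tile, wh, z);
		glTexCoord2f((x + tile) / 40.0f, (z + tile) / 40.0f); glVertex3f(x + tile, wh, z + tile);
		glTexCoord2f(x / 40.0f, (z + tile) / 40.0f);          glVertex3f(x, wh, z + tile);
	}
glEnd();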

stick this before you draw the quad:
glDisable(GL_LIGHTING);   /* rule out per-vertex lighting darkening the quad */
glColor4f(1, 1, 1, 1);    /* full white, full alpha */

Originally posted by Bob:
Try making that huge quad several smaller quads instead.

of course, that's obvious… but that's exactly what i wanted to avoid, in order to reduce the number of vertices in my scene. i see nothing wrong with my idea and wonder why some OpenGL implementations render it so differently.

btw, i've tried everything with lighting, color, and culling, and nothing helps :(

the Voodoo2 can't handle textures bigger than 256x256, and it also tends to choke on large texture coordinates (accuracy problems). perhaps the other cards are displaying it right and the Voodoo2 is wrong.

i doubt it.
the texture's not the problem - you can remove texturing and it still won't work. (the texture size is 128x128.)

this is the code; looks blindingly innocent, doesn't it?

GLfloat wh = 1000.0;                        /* height of the water plane */
GLuint xquads = 256, zquads = 256;
GLfloat xwidth = xquads * 20.0, zwidth = zquads * 20.0;   /* 5120 x 5120 units */

glColor4f(0.7, 0.7, 0.7, 0.7);              /* grey, 70% alpha */

glBindTexture(GL_TEXTURE_2D, waterid);
glBegin(GL_QUADS);
	glNormal3f(0.0, 1.0, 0.0);
	glTexCoord2i(0, 0);
	glVertex3f(0.0, wh, 0.0);
	glTexCoord2i(xquads >> 1, 0);           /* 128 repeats of the texture across the quad */
	glVertex3f(xwidth, wh, 0.0);
	glTexCoord2i(xquads >> 1, zquads >> 1);
	glVertex3f(xwidth, wh, zwidth);
	glTexCoord2i(0, zquads >> 1);
	glVertex3f(0.0, wh, zwidth);
glEnd();
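(Note that the 0.7 alpha only has an effect if blending is enabled; the state this snippet appears to assume, though it isn't shown in the post, would be something like:)

glEnable(GL_BLEND);                                 /* assumed set elsewhere in the program */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* standard alpha transparency */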

If you are only getting “correct” behavior on the Voodoo2 and “wrong” behavior on everything else, then by Occam’s Razor it is probably the Voodoo2 that is faulty. It is simply coincidental that its faulty behavior is the behavior you seek.

Try using the floating point version of glTexCoord instead. I don’t quite understand why you are doing this part…

GLuint xquads = 256, zquads = 256;

glTexCoord2i(xquads>>1, zquads>>1);

Why not just use

GLuint xquads = 128, zquads = 128;
glTexCoord2i(xquads, zquads);

Ok, that would force you to change your xwidth and zwidth formulas as well… It just seems kind of odd to me.
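In other words, the suggestion is to replace the integer calls with their float equivalents, e.g.:

glTexCoord2f((GLfloat)(xquads >> 1), (GLfloat)(zquads >> 1));  /* float version of the same coordinate */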

One other thing I'm not sure about (I've always used floats for texture coordinates, and I can't seem to find any info on this at the moment) is whether the integer version, glTexCoord2i, treats the range 0-255 as the floats 0.0-1.0… I know that glColor3ub does that, but an integer has a much larger range. My guess is that it does, but it might be something to look into.

At any rate, you would only be using a portion of the texture for your quad (unless the range is actually 0-128). Is that what you want? I might have to modify one of my apps to check that out…

Ok… I just did some playing around and got some interesting results. It appears that glTexCoord2f(1.0, 1.0) is identical to glTexCoord2i(1, 1). I'm using Win2k, a GeForce256, and the 6.50 drivers.

Trying to use 255,255 did NOT get me the results I expected. It seems odd to me that it would be this way. How would you get fractional texture coordinates by using glTexCoord*i then? You cannot get anything between 0 and 1 with an integer.
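(That matches the spec: integer texture coordinates are converted directly to floats, with no normalization, unlike the 0-255 byte range of glColor3ub. A two-line check:)

/* these two calls select the same texture coordinate */
glTexCoord2i(1, 1);
glTexCoord2f(1.0f, 1.0f);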


texture confusion

ok, why am i doing this: xquads=256; zquads=256; glTexCoord2i(xquads>>1, zquads>>1)?
xquads and zquads are usually not 256 but some other value, depending on how large my world is. never mind.
the reason i use values larger than 1 is texture wrapping / looping: since my quad is huge and the texture is only 128x128, mapping from 0.0 to 1.0 would look pretty stupid, so i set the texture wrap mode to GL_REPEAT, map from 0.0 to width/2, and everything works fine.
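(Presumably a setup along these lines, with the water texture bound beforehand; GL_REPEAT is the actual token for wrapping:)

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);  /* loop the 128x128 texture in s... */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);  /* ...and in t */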

but once again, the problem is NOT TEXTURING!
if viewed from any angle other than perpendicular to the quad, it just DISAPPEARS; it fades away with every adjustment to the viewing position, no matter whether texturing, lighting or materials are switched on or off.
that's the haunting bit about it: i've got a fully working scene, but this stupid quad is driving me nuts.

Perhaps you should show a screenshot or two of the problem.

As for the fog problem, most fog is computed per-vertex and interpolated across the polygon. So, if your quad is that large, the fog will certainly not work correctly.
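For reference, a typical fixed-function fog setup looks like the following (the distances are hypothetical). The glHint line asks for per-fragment fog, but implementations are free to ignore it, so tessellating the quad remains the reliable fix:

glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogf(GL_FOG_START, 500.0f);    /* hypothetical distances */
glFogf(GL_FOG_END, 4000.0f);
glHint(GL_FOG_HINT, GL_NICEST);  /* request per-fragment fog; may be ignored */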