Texture binding problem for glslang

I’m having some trouble with binding textures to glslang shaders. Because the code doing this is spread across several modules, I can’t post all of it, but I do have a glIntercept trace of a frame loop. The program of interest is program id 12:


wglSwapBuffers(0x87010b46)=true 
glClearColor(1.000000,1.000000,1.000000,1.000000)
glClearDepth(1.000000)
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT)
glDisable(GL_DEPTH_TEST)
glDepthMask(false)
glEnable(GL_SCISSOR_TEST)
glViewport(0,0,640,480)
glMatrixMode(GL_PROJECTION)
glLoadIdentity()
glOrtho(0.000000,640.000000,480.000000,0.000000,-1.000000,1.000000)
glMatrixMode(GL_MODELVIEW)
glPushMatrix()
glLoadIdentity()
glLoadMatrixf([1.000000,0.000000,0.000000,0.000000,0.000000,1.000000,0.000000,0.000000,0.000000,0.000000,1.000000,0.000000,0.000000,0.000000,0.000000,1.000000])
glUseProgram(15)
glGetAttribLocation(15,"position2d")=0 
glEnableVertexAttribArray(0)
glBindBuffer(GL_ARRAY_BUFFER,6)
glVertexAttribPointer(0,2,GL_FLOAT,false,32,0x0000)
glBindBuffer(GL_ARRAY_BUFFER,0)
glGetAttribLocation(15,"color")=1 
glEnableVertexAttribArray(1)
glBindBuffer(GL_ARRAY_BUFFER,6)
glVertexAttribPointer(1,4,GL_FLOAT,false,32,0x0010)
glBindBuffer(GL_ARRAY_BUFFER,0)
glDrawArrays(GL_POLYGON,0,4) GLSL=15  Textures[ (0,1) ] 
glGetAttribLocation(15,"position2d")=0 
glDisableVertexAttribArray(0)
glGetAttribLocation(15,"color")=1 
glDisableVertexAttribArray(1)
glUseProgram(0)
glScissor(0,0,640,480)
glBindBuffer(GL_ARRAY_BUFFER,2)
glMapBuffer(GL_ARRAY_BUFFER,GL_WRITE_ONLY)=0x3040090 
glUnmapBuffer(GL_ARRAY_BUFFER)=true 
glBindBuffer(GL_ARRAY_BUFFER,0)
glGetUniformLocation(12,"tex1")=0 
glLineWidth(30.000000)
glUseProgram(12)
glEnable(GL_TEXTURE_1D)
glActiveTexture(GL_TEXTURE0)
glBindTexture(GL_TEXTURE_1D,1)
glUniform1iARB(0,0)
glGetAttribLocation(12,"position2d")=0 
glEnableVertexAttribArray(0)
glBindBuffer(GL_ARRAY_BUFFER,2)
glVertexAttribPointer(0,2,GL_INT,false,16,0x0000)
glBindBuffer(GL_ARRAY_BUFFER,0)
glGetAttribLocation(12,"texCoord1d")=2 
glEnableVertexAttribArray(2)
glBindBuffer(GL_ARRAY_BUFFER,2)
glVertexAttribPointer(2,1,GL_FLOAT,false,16,0x0008)
glBindBuffer(GL_ARRAY_BUFFER,0)
glGetAttribLocation(12,"color")=1 
glEnableVertexAttribArray(1)
glBindBuffer(GL_ARRAY_BUFFER,2)
glVertexAttribPointer(1,4,GL_UNSIGNED_BYTE,true,16,0x000c)
glBindBuffer(GL_ARRAY_BUFFER,0)
glDrawArrays(GL_LINES,0,2) GLSL=12  Textures[ (0,1) ] 
glGetAttribLocation(12,"position2d")=0 
glDisableVertexAttribArray(0)
glGetAttribLocation(12,"texCoord1d")=2 
glDisableVertexAttribArray(2)
glGetAttribLocation(12,"color")=1 
glDisableVertexAttribArray(1)
glUseProgram(0)
glLineWidth(1.000000)
glEnable(GL_DEPTH_TEST)
glDepthMask(true)
glMatrixMode(GL_MODELVIEW)
glPopMatrix()
wglSwapBuffers(0x87010b46)=true

The fragment shader is:


uniform sampler1D tex1;

varying vec4 fragColor;
varying float fragTexCoord1;

void main()
{
	gl_FragColor = texture1D(tex1, fragTexCoord1);
}

Now, I have verified that the texture coordinate is correct (by outputting it directly with gl_FragColor = vec4(fragTexCoord1)). I have also verified that the texture’s data is correct (using glIntercept’s reflection mode). I don’t know what it is that I’m not doing.

Any ideas? Am I forgetting some texture parameter or something?

The initial value of the minification filter (GL_TEXTURE_MIN_FILTER) is GL_NEAREST_MIPMAP_LINEAR. If you leave it at that value without a complete mipmap chain, the texture is incomplete and no texturing is applied. Maybe you just forgot to call glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) or glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)?
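
In other words, something like this at texture-creation time (a sketch; the width and data arguments are placeholders for whatever your real texture uses):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_1D, tex);
/* Upload only the base level... */
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, width, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
/* ...which leaves the texture incomplete under the default
   GL_NEAREST_MIPMAP_LINEAR min filter. A non-mipmap filter fixes that. */
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);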

Wow, it’s like OpenGL doesn’t actually want you to use textures sometimes. I mean, wouldn’t it make sense that if you’ve bound a texture to a texture unit for a newly minted program object, the parameters would automatically be set up to provide some form of texturing?

In any case, that fixed it. Thanks!

Huh? This is a weird thread.
Is Korval’s account hacked and cracked?

Surprised me too, but then again who hasn’t been the victim of silly forgetfulness :)

No, OpenGL won’t paper over user errors, because then users wouldn’t be aware that they’re not getting what they asked for. I consider this a very good feature of an API. Step one is to make sure the user gets what he asks for (even if he’s asking for an error). OpenGL is pretty good here. Step two is to make sure that the user can easily determine why he’s getting what he’s getting; OpenGL is not as good here, unfortunately.

I’ve always thought the default state should assume no mipmaps, so it works from the off. It has always puzzled me why they chose the opposite assumption.

I imagine that some people would do this

glGenTextures(…)
glBindTexture(…)
gluBuild2DMipmaps(…)

If the defaults were GL_LINEAR and GL_LINEAR, it wouldn’t utilize the mipmaps. Many people would think that since they created mipmaps, they should be used automatically.
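
In that scenario the fix would be the mirror image of the one above: explicitly opt into a mipmap filter. A sketch, assuming a 2D texture built as in the snippet above:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
/* The mag filter has no mipmap modes; GL_LINEAR is as good as it gets. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);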

They should probably raise an error like GL_INCOMPLETE_TEXTURE for this “mipmap chain not complete” case.

Maybe add glCheckTextureStatus()
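
No such query exists today, but you can approximate the common failure with the gets that do exist. A rough sketch (checkTexture1DComplete is a hypothetical helper, and it only covers filter/level-population completeness, not format consistency across levels):

#include <GL/gl.h>

/* Hypothetical helper: returns 1 if the 1D texture bound to the current
   unit looks complete under its current min filter, 0 otherwise. */
int checkTexture1DComplete(void)
{
    GLint minFilter, baseWidth;
    glGetTexParameteriv(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, &minFilter);
    glGetTexLevelParameteriv(GL_TEXTURE_1D, 0, GL_TEXTURE_WIDTH, &baseWidth);
    if (baseWidth == 0)
        return 0;                 /* no base level uploaded at all */
    if (minFilter == GL_NEAREST || minFilter == GL_LINEAR)
        return 1;                 /* non-mipmap filter: base level suffices */
    /* Mipmap filter: every level down to 1 texel must be present. */
    GLint expected = baseWidth;
    for (GLint level = 1; expected > 1; ++level) {
        expected /= 2;            /* each level halves the width (min 1) */
        GLint w;
        glGetTexLevelParameteriv(GL_TEXTURE_1D, level, GL_TEXTURE_WIDTH, &w);
        if (w != expected)
            return 0;             /* missing or mis-sized level */
    }
    return 1;
}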

So they made a hardware abstraction API default to the behaviour expected of an external utility library function?
The worst that can happen if you generate (optional) mipmaps and the default filter is non-mipmap is that you get scintillating pixels (immediately obvious to the eye) and a drop in performance. What happens if you don’t generate (optional) mipmaps and the default filter is a mipmap one is that you get no texturing at all, for an obfuscated reason.
I know which I’d prefer to be the default.

I was just giving the alternate scenario to make the point that people will make mistakes either way.

A better solution would just be for the driver to change the filter mode automatically, so that it silently works.