View Full Version : GLSL fragment shader not sampling texture



egable
03-18-2014, 06:26 PM
I am just starting out with OpenGL and shaders. My last look at OpenGL was around 2001, so it's been a while! I will try to be succinct. My actual program is quite lengthy, so I have only copied the section of code I think is relevant to the issue at hand, and I have simplified it considerably during testing to eliminate as many variables as possible. I am now left with just the code below, which does not work.



GLint texture_sampler_id = 0;
guchar data[786432] = {255}; /* Fake texture -- should be all white and 512 x 512 pixels */
GLuint texid;

glEnableVertexAttribArray(0);
if(bm_glerror("Unable to enable vertex attribute array 0")) {
goto fail;
}

glBindBuffer(GL_ARRAY_BUFFER, widget->vb_id);
if(bm_glerror("Unable to bind vertex buffer to attribute array 0")) {
goto fail;
}

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
if(bm_glerror("Unable to setup attribute pointer for attribute array 0")) {
goto fail;
}

/* Use the widget's shader program when rendering */
if (glIsProgram(rd->shader_program_id) != GL_TRUE) {
bm_alert(BM_FATAL, "Invalid shader program for widget [%s].", bm_str_nil(widget->utf8_name));
bm_shutdown();
goto fail;
}

texture_sampler_id = glGetUniformLocation(rd->shader_program_id, "texture_sampler");
if(bm_glerror("Unable to locate texture sampler in widget shader program")) {
goto fail;
}

glUseProgram(rd->shader_program_id);
if(bm_glerror("Unable to use shader program")) {
goto fail;
}
glUniform1i(texture_sampler_id, 0);
if(bm_glerror("Invalid uniform variable specified")) {
goto fail;
}

glActiveTexture(GL_TEXTURE0 + 0);
if(bm_glerror("Unable to activate GL_TEXTURE0")) {
goto fail;
}

glGenTextures(1, &texid);
glBindTexture(GL_TEXTURE_2D, texid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

glEnableVertexAttribArray(1);
if(bm_glerror("Unable to enable vertex attribute array 1")) {
goto fail;
}

glBindBuffer(GL_ARRAY_BUFFER, widget->uv_id);
if(bm_glerror("Unable to bind UV buffer to vertex attribute array 1")) {
goto fail;
}

glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL);
if(bm_glerror("Unable to specify vertex attribute pointer parameters")) {
goto fail;
}

/* Draw the triangles */
glDrawArrays(GL_TRIANGLES, 0, 6); /* Starting from vertex 0; 6 vertices total -> 2 triangles */
if(bm_glerror("Unable to draw widget")) {
goto fail;
}

glDisableVertexAttribArray(0);
if(bm_glerror("Unable to disable vertex attribute array 0")) {
goto fail;
}

glDisableVertexAttribArray(1);
if(bm_glerror("Unable to disable vertex attribute array 1")) {
goto fail;
}


All this C code is supposed to do is make a fake white texture in memory and apply it to my two triangles which make up the surface of my widget. Obviously, I want to eventually use a DDS file for the texture, but since that was not working, I eliminated that code and moved to an in-memory all-white texture to simplify things.

Vertex Shader:



#version 330 core

// Input vertex data, different for all executions of this shader
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;

// Output data; will be interpolated for each fragment.
out vec2 UV;

// Values that stay constant for the whole mesh.
uniform mat4 MVP;

void main()
{
// Output position of the vertex, in clip space : MVP * position
//gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
gl_Position.xyz = vertexPosition_modelspace;
gl_Position.w = 1.0;

// UV of the vertex. No special space for this one.
UV = vertexUV;
}


Fragment Shader:



#version 330 core

// Interpolated values from the vertex shaders
in vec2 UV;

// Values that stay constant for the whole mesh.
uniform sampler2D texture_sampler;

void main()
{
// Output color = color of the texture at the specified UV
gl_FragColor = vec4(texture(texture_sampler, UV).rgb, 1.0);
/*if (gl_FragColor.r == 0.0f) {
gl_FragColor.r = UV.x;
}
if (gl_FragColor.g == 0.0f) {
gl_FragColor.g = UV.y;
}
if (gl_FragColor.b == 0.0f) {
gl_FragColor.b = 0.5f;
}*/
}


So, if I try to render the fake white texture to the triangles, no triangles show up -- the screen is simply blank. If I uncomment the set of "if" statements in the fragment shader which set the output color based on the UV coordinates for each texel, then I get both triangles showing up in a very nice rainbow of color (as expected). If I remove the "texture(texture_sampler, UV).rgb" and replace it with a vec3() color, it shows up in that color.

So, I have figured out that my sampler is not sampling the fake white texture. I have no idea why. I am sure I am missing something stupid and simple, but I have been poring over this for probably 12 hours over the past 4 days and I have not been able to figure out what I'm missing.

Debugging output from my program:

Vendor: ATI Technologies Inc.
Renderer: AMD Radeon HD 5700 Series
Version: 3.3.12618 Core Profile Context 13.251.0.0
Shader version: 4.30
Unknown GLerror in glewInit() at bm_display.c:bm_display_thread:72: Invalid ENUM

This is the only GLerror that shows up in my program, and I have no idea what causes it, but I doubt it has any impact on the texture sampler.

Thanks in advance for any assistance on this.

Agent D
03-18-2014, 06:35 PM
...
guchar data[786432] = {255}; /* Fake texture -- should be all white and 512 x 512 pixels */
...


AFAIK, this initializes the first element to 255 and the rest to 0.
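
A quick way to see this in plain C, with no GL involved (small 16-byte buffers standing in for the 786432-byte texture):



#include <assert.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* With a partial initializer, C zero-fills everything after the
       elements you list -- so only partial[0] becomes 255. */
    unsigned char partial[16] = {255};
    assert(partial[0] == 255);
    assert(partial[1] == 0);

    /* memset is the usual way to get a buffer that is actually all 255. */
    unsigned char filled[16];
    memset(filled, 255, sizeof(filled));
    assert(filled[15] == 255);

    printf("%u %u\n", partial[1], filled[15]);
    return 0;
}



So the "white" texture is really one white texel followed by three quarters of a megabyte of black; a memset(data, 255, sizeof(data)) before the glTexImage2D upload would fix that part.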

egable
03-18-2014, 09:05 PM
AFAIK, this initializes the first element to 255 and the rest to 0.

See, that's exactly what I meant by missing something stupid and simple.

That gets me back to my original issue...

Removing this code:



glGenTextures(1, &texid);
glBindTexture(GL_TEXTURE_2D, texid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);


and replacing it with this:



glBindTexture(GL_TEXTURE_2D, rd->texture->tex_id);
if(bm_glerror("Unable to bind texture")) {
goto fail;
}


Where rd->texture->tex_id is created by my load_dds function:



struct bm_image *bm_load_dds(const gchar* file)
{
FILE *fp = NULL;
struct bm_image *image = NULL;
gsize read_bytes = 0, bufsize = 0;
gchar errmsg[256] = {0}, *texture_file = NULL;
guchar *buffer = NULL;
guint level = 0, offset = 0, width = 0, height = 0;

texture_file = g_strdup_printf("%s/%s", globals->cwd, file);

fp = fopen(texture_file, "rb");
if (!fp) {
strerror_r(errno, &errmsg[0], 256);
bm_alert(BM_FATAL, "Error opening image %s in %s: %s\n", bm_str_nil(texture_file), __FUNCTION__, errmsg);
goto fopen_fail;
}

image = g_slice_new0(struct bm_image);
if (!image) {
bm_alert(BM_FATAL, "Memory allocation error reserving %lu bytes for loading image %s in %s.\n", sizeof(*image), bm_str_nil(texture_file), __FUNCTION__);
goto image_malloc_fail;
}

read_bytes = fread(image->filecode, 1, 4, fp);
if (read_bytes < 4 || errno) {
strerror_r(errno, &errmsg[0], 256);
bm_alert(BM_FATAL, "Error reading filecode for image %s in %s: %s\n", bm_str_nil(texture_file), __FUNCTION__, errmsg);
goto filecode_fail;
}
if (g_ascii_strncasecmp(image->filecode, "DDS ", 4) != 0) {
goto filecode_fail;
}

read_bytes = fread(&image->header, 124, 1, fp);
if (errno) {
strerror_r(errno, &errmsg[0], 256);
bm_alert(BM_FATAL, "Error reading header for image %s in %s: %s\n", bm_str_nil(texture_file), __FUNCTION__, errmsg);
goto header_fail;
}

image->height = height = *(guint*)&image->header[8];
image->width = width = *(guint*)&image->header[12];
image->linear_size = *(guint*)&image->header[16];
image->mip_map_count = *(guint*)&image->header[24];
image->four_cc = *(guint*)&image->header[80];
bufsize = (image->mip_map_count > 1 ? image->linear_size * 2 : image->linear_size) * sizeof(guchar);

buffer = (guchar*)g_slice_alloc0(bufsize);
if (!buffer) {
bm_alert(BM_FATAL, "Memory allocation error reserving %lu bytes for loading image %s in %s.\n", sizeof(*image), bm_str_nil(texture_file), __FUNCTION__);
goto buffer_malloc_fail;
}

read_bytes = fread(buffer, 1, bufsize, fp);
if (errno) {
strerror_r(errno, &errmsg[0], 256);
bm_alert(BM_FATAL, "Error reading image data for image %s in %s: %s\n", bm_str_nil(texture_file), __FUNCTION__, errmsg);
goto buffer_read_fail;
}

image->components = (image->four_cc == FOURCC_DXT1) ? 3 : 4;
switch(image->four_cc) {
case FOURCC_DXT1:
image->format = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
break;
case FOURCC_DXT3:
image->format = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
break;
case FOURCC_DXT5:
image->format = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
break;
default:
bm_alert(BM_FATAL, "Unsupported DDS image data format [%c%c%c%c] for image %s in %s.\n", image->header[80], image->header[81],
image->header[82], image->header[83], bm_str_nil(texture_file), __FUNCTION__);
goto image_format_fail;
break;
}

glGenTextures(1, &image->tex_id);
glBindTexture(GL_TEXTURE_2D, image->tex_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

image->block_size = (image->format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) ? 8 : 16;

for (level = 0; level < image->mip_map_count && (width || height); ++level) {
guint size = ((width + 3) / 4) * ((height + 3) / 4) * image->block_size;
glCompressedTexImage2D(GL_TEXTURE_2D, level, image->format, width, height, 0, size, buffer + offset);
offset += size;
width /= 2;
height /= 2;
}

bm_slice_safe_free1(bufsize, buffer);


goto done;
image_format_fail:
buffer_read_fail:
bm_slice_safe_free1(bufsize, buffer);
buffer_malloc_fail:
header_fail:
filecode_fail:
bm_slice_safe_free(struct bm_image, image);
image_malloc_fail:
done:
fclose(fp);
fopen_fail:
bm_safe_free(texture_file);
return image;
}


I assume I must have done something wrong when creating this texture, because it shows black if I try to use it in place of the fake white texture (which now works, after using memset to initialize the data). The original version of this function was written in C++, and I translated it to C, so I probably screwed something up somewhere. This is the first time I have worked with a DDS file, so I am not that familiar with how they are structured.
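
For what it's worth, the per-level size formula in that mip loop -- ((width + 3) / 4) * ((height + 3) / 4) * block_size -- can be sanity-checked in isolation. dxt_level_size below is a hypothetical helper for illustration, not part of the loader:



#include <assert.h>
#include <stdio.h>

/* Bytes occupied by one mip level of an S3TC/DXT texture: the image is
   stored as 4x4 texel blocks, rounded up, at 8 bytes per block for DXT1
   or 16 bytes per block for DXT3/DXT5. */
static unsigned int dxt_level_size(unsigned int w, unsigned int h,
                                   unsigned int block_size)
{
    return ((w + 3) / 4) * ((h + 3) / 4) * block_size;
}

int main(void)
{
    /* 512x512 DXT1: 128x128 blocks at 8 bytes each. */
    assert(dxt_level_size(512, 512, 8) == 131072);

    /* Even a 1x1 tail level still occupies one whole block. */
    assert(dxt_level_size(1, 1, 8) == 8);
    assert(dxt_level_size(1, 1, 16) == 16);

    printf("%u %u\n", dxt_level_size(512, 512, 8), dxt_level_size(1, 1, 16));
    return 0;
}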

arekkusu
03-18-2014, 09:53 PM
You forgot to set MIN_FILTER. It's the same very common mistake (http://www.opengl.org/discussion_boards/showthread.php/183754-Array-Texture-confusion?p=1258173&viewfull=1#post1258173) everyone makes.

egable
03-19-2014, 07:32 AM
You forgot to set MIN_FILTER. It's the same very common mistake everyone makes.

Thanks! That did it.
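
For anyone finding this later: the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so a texture without a complete mip chain is "incomplete" and samples as black. Either set MIN_FILTER to GL_LINEAR/GL_NEAREST, or make the chain complete. A complete chain for a w x h texture needs floor(log2(max(w, h))) + 1 levels, which is easy to check in plain C (full_mip_count is a hypothetical helper name):



#include <assert.h>
#include <stdio.h>

/* Levels in a complete mip chain: halve the larger dimension down to 1,
   counting each step plus the base level. */
static unsigned int full_mip_count(unsigned int w, unsigned int h)
{
    unsigned int dim = (w > h) ? w : h;
    unsigned int levels = 1;
    while (dim > 1) {
        dim /= 2;
        levels++;
    }
    return levels;
}

int main(void)
{
    assert(full_mip_count(512, 512) == 10); /* 512 halves to 1 in 9 steps */
    assert(full_mip_count(1, 1) == 1);
    assert(full_mip_count(512, 64) == 10);  /* the larger dimension governs */
    printf("%u\n", full_mip_count(512, 512));
    return 0;
}



If a DDS file carries fewer levels than that, clamping with glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, mip_map_count - 1) should also make the texture complete.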