
View Full Version : How to Blend Points to Make a 2D Heat Map



sausage
04-11-2017, 03:12 AM
Hi,

I am trying to make a monochrome heat map which would look something like this
https://4.bp.blogspot.com/-PpX8f54ou5s/U3DrtBGMddI/AAAAAAAAAm8/FKkQCDWak_w/s1600/heatmap6.png

I am struggling to tackle this problem, though. In particular, is it (realistically) possible in OpenGL to blend
points to get the kind of style shown in the picture? If not, how would you approach this challenge?

Thanks!

Dark Photon
04-11-2017, 05:33 AM
Sure. You can encode your heat map function in a shader and compute the values explicitly, or if you don't have an explicit function but just a set of points that sample the heat map curve, you can encode these into a 1D or 2D texture and let the GPU's texture filtering interpolate neighboring values for you.

sausage
04-11-2017, 06:05 AM
Hi Dark Photon,

Thank you very much for your reply. Happy to know that it is feasible.


You can encode your heat map function in a shader and compute the values explicitly, ...

If possible, could you elaborate a little? I am still fairly new to OpenGL. Would that involve drawing many very small triangles instead of points, so that the shader automatically interpolates the colour, similar to the standard "Hello Triangle" tutorials?


or if you don't have an explicit function but just a set of points that sample the heat map curve, you can encode these into a 1D or 2D texture and let the GPU's texture filtering interpolate neighboring values for you.
Thanks for the idea; I found this example on open.gl. I think you mean to set up the texture something like this?



// Black/white checkerboard
float pixels[] = {
    0.0f, 0.0f, 0.0f,   1.0f, 1.0f, 1.0f,
    1.0f, 1.0f, 1.0f,   0.0f, 0.0f, 0.0f
};
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2, 2, 0, GL_RGB, GL_FLOAT, pixels);

I will give it a try now.

Dark Photon
04-12-2017, 05:07 AM
You can encode your heat map function in a shader and compute the values explicitly, ...
If possible could you elaborate a little, I am still fairly new to openGL. Would that involve drawing many very small triangles instead of points so that the shader automatically interpolates the colour similar to the standard "Hello Triangle" tutorials?

What I meant by this first part is: if you have an analytic expression for your heat map (e.g. intensity = (value - 1000) / 3000, or something considerably more complex), you could just compute this expression in a fragment shader and use the result it produces directly.

The main alternative to this is pre-sampling this function (perhaps it's a piecewise linear curve for instance), storing that in the texture, and then doing a texture lookup in your fragment shader to compute the value of this function as opposed to computing it analytically. For this latter case, you'll probably need a little bit of prep work on the domain value to get it to a 0..1 texture coordinate so that you can sample from the texture.

sausage
04-13-2017, 03:56 AM
Sure. You can encode your heat map function in a shader and compute the values explicitly, or if you don't have an explicit function but just a set of points that sample the heat map curve, you can encode these into a 1D or 2D texture and let the GPU's texture filtering interpolate neighboring values for you.

Thank you very much. Using your hint about textures, I found a helpful tutorial on open.gl. However, I can't get textures to work when I am defining them in an array. The code is not too complex; are you able to spot what is wrong? I have tried to cut the code down to the essentials. glGetError() reports no error, and the shader compiles and links without errors. If I change the fragment shader to return a solid colour instead of a texture, then it works.




//Vertex data with rectangle co-ordinates and texture co-ordinates
GLfloat screen2DVertices[] =
{   //   x,    y,  z,   u, v
    -0.8, -0.8, 0,   0, 0, // bottom left corner
    -0.8,  0.8, 0,   0, 1, // top left corner
     0.8,  0.8, 0,   1, 1, // top right corner
     0.8, -0.8, 0,   1, 0, // bottom right corner
};

GLushort screen2DIndices[] =
{
    0, 1, 2, // first triangle (bottom left - top left - top right)
    0, 2, 3  // second triangle (bottom left - top right - bottom right)
};

//Texture size in pixels
const int TEXTURE_WIDTH  = 20;
const int TEXTURE_HEIGHT = 20;

//3 RGB values per pixel
float texturePixels[TEXTURE_WIDTH * TEXTURE_HEIGHT * 3];

//Test with all white: r = g = b = 1.0f
for (int i = 0; i < TEXTURE_WIDTH * TEXTURE_HEIGHT * 3; i++)
{
    texturePixels[i] = 1.0f;
}

//Vertex array object
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

//Vertex buffer object
GLuint vbo;
glGenBuffers(1, &vbo);

// Create an element array
GLuint ebo;
glGenBuffers(1, &ebo);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(screen2DVertices), screen2DVertices, GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(screen2DIndices), screen2DIndices, GL_STATIC_DRAW);

// Load texture
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL_RGB, GL_FLOAT, texturePixels);

// Specify the layout of the vertex data
GLint posAttrib = glGetAttribLocation(screenShader.Program, "position");
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)0);

GLint texAttrib = glGetAttribLocation(screenShader.Program, "texcoord");
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));

while (!glfwWindowShouldClose(window))
{
    glfwPollEvents();

    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    screenShader.Use();
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);

    glfwSwapBuffers(window);
}


//---------------------------Vertex Shader------------------------------------------
#version 330 core

in vec3 position;
in vec2 texcoord;

out vec2 Texcoord;

void main()
{
    Texcoord = texcoord;
    gl_Position = vec4(position, 1.0);
}

//---------------------------Fragment Shader------------------------------------------
#version 330 core

in vec2 Texcoord;
out vec4 outColour;
uniform sampler2D tex;

void main()
{
    outColour = texture(tex, Texcoord) * vec4(1.0f);
}

sausage
04-13-2017, 07:23 AM
Thank you for your reply. Thanks to your hint I was able to get an idea of how to proceed; however, I am struggling to get textures working. I know the shader is compiling and linking properly: if I switch the output colour in the fragment shader to just return a solid vec4 colour, it works, so I know it must be the texture causing the problem. Any ideas on what I am doing wrong?


//Texture size in pixels
const int TEXTURE_WIDTH  = 20;
const int TEXTURE_HEIGHT = 20;

//3 RGB values per pixel
float texturePixels[TEXTURE_WIDTH * TEXTURE_HEIGHT * 3];

//Test with all white: r = g = b = 1.0f
for (int i = 0; i < TEXTURE_WIDTH * TEXTURE_HEIGHT * 3; i++)
{
    texturePixels[i] = 1.0f;
}

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// Load texture
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL_RGB, GL_FLOAT, texturePixels);

GLuint vbo;
glGenBuffers(1, &vbo);

// Create an element array
GLuint ebo;
glGenBuffers(1, &ebo);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(screen2DVertices), screen2DVertices, GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(screen2DIndices), screen2DIndices, GL_STATIC_DRAW);

// Specify the layout of the vertex data
GLint posAttrib = glGetAttribLocation(screenShader.Program, "position");
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)0);

GLint texAttrib = glGetAttribLocation(screenShader.Program, "texcoord");
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));

while (!glfwWindowShouldClose(window))
{
    glfwPollEvents();

    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    screenShader.Use();
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);

    glfwSwapBuffers(window);
}


Vertex Shader

#version 330 core

in vec3 position;
in vec2 texcoord;

out vec2 Texcoord;

void main()
{
    Texcoord = texcoord;
    gl_Position = vec4(position, 1.0);
}


Fragment Shader:

#version 330 core

in vec2 Texcoord;
out vec4 outColour;
uniform sampler2D tex;

void main()
{
    outColour = texture(tex, Texcoord) * vec4(1.0f);
}

Dark Photon
04-14-2017, 07:00 PM
I am struggling to get textures working.

I know the shader is compiling and linking properly: if I switch the output colour in the fragment shader to just return a solid vec4 colour, it works, so I know it must be the texture causing the problem.

Any ideas on what I am doing wrong?

Here are a few ideas:

First, I don't see a glUniform1i call where you populate a value of 0 for the texture unit number on the "tex" sampler2D uniform.

I also don't see a glBindFragDataLocation call where you set 0 as the color number for the "outColour" fragment output. That said, it sounds like you're already getting color output from your fragment shader, so this isn't "the" tip you're looking for.
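For reference, assuming the texture stays bound to texture unit 0 and using the names from the code above, those two calls might look something like this (note that glBindFragDataLocation only takes effect if issued before the program is linked):

```cpp
// Before linking: map the fragment output "outColour" to color number 0.
glBindFragDataLocation(screenShader.Program, 0, "outColour");

// After linking, with the program in use: point the "tex" sampler at unit 0.
glUseProgram(screenShader.Program);
glUniform1i(glGetUniformLocation(screenShader.Program, "tex"), 0);
```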

Also:



...
//3 RGB values per pixel
float texturePixels[TEXTURE_WIDTH * TEXTURE_HEIGHT * 3];
...
// Load texture
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL_RGB, GL_FLOAT, texturePixels);


Here, you're providing RGB FLOAT values (0..1) as input to glTexImage2D. However, you're not requesting a specific internal format. You've only specified GL_RGB, which isn't a specific internal format, so the implementation is free to substitute whatever format it wants. It'd be better to be specific.

My recommendation would be to use one of these combinations:

1) internal format = GL_RGB8, format = GL_RGB, type = GL_UNSIGNED_BYTE, OR
2) internal format = GL_RGB16F or GL_RGB32F, format = GL_RGB, type = GL_FLOAT.

If you choose #1, then change your input data from an array of floats (0..1) to an array of unsigned chars (0..255).
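For option #1, that conversion is just a scale-and-round per channel; a small sketch of one way to do it (helper name is made up):

```cpp
#include <algorithm>

// Convert a normalized float channel value (0..1) to a byte (0..255),
// suitable for internal format GL_RGB8 with type GL_UNSIGNED_BYTE.
unsigned char floatToByte(float v)
{
    v = std::clamp(v, 0.0f, 1.0f);                        // guard the input range
    return static_cast<unsigned char>(v * 255.0f + 0.5f); // round to nearest
}
```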

Also, since your texture doesn't have MIPmaps, change the GL_TEXTURE_MIN_FILTER to GL_LINEAR (as the default is GL_NEAREST_MIPMAP_LINEAR). The default GL_TEXTURE_MAG_FILTER is already GL_LINEAR.

You may also want to set the GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_CLAMP_TO_EDGE, as the default is GL_REPEAT.
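Putting these suggestions together, the texture setup from the posts above might become something like this (using option #2 with GL_RGB32F; an untested sketch, to be placed where the original glTexImage2D call was):

```cpp
glBindTexture(GL_TEXTURE_2D, tex);

// No mipmaps are created, so don't use the (default) mipmapping minification filter.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Clamp instead of repeating at the edges of the heat map.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Request a specific internal format rather than plain GL_RGB.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, TEXTURE_WIDTH, TEXTURE_HEIGHT,
             0, GL_RGB, GL_FLOAT, texturePixels);
```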