OpenGL texture, explain?

So I’ve read a lot about textures, but it still blocks me… So please, can somebody explain to me how it works?

Probably, but I’m not sure I understand the question.

The first thing to understand is that all 3D graphics ever is drawing triangles. That’s why the first tutorial will often be a triangle. I always say, “Show me how to make a triangle with the system and I can do 3D graphics with it.” So, it’s really all about vertex buffers and using them to draw triangles.

And so you need to understand vertices. Vertices are pieces of data that represent the corners of said triangle. Obviously if you can specify the positions of three corners, you can draw lines between them. But you can also shade in the area inside those three lines. You can do it with a solid color. That’s ambient or silhouette shading.
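To make “vertices are data” concrete, here’s a minimal GLSL vertex shader sketch; the only per-vertex data here is a position, one per corner (the attribute name is just a placeholder):

#version 330 core

// one invocation per vertex: each corner of the triangle passes through here
layout (location = 0) in vec3 position;

void main()
{
    gl_Position = vec4(position, 1.0);
}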

If you define a direction vector as a light, you can have an imaginary mathematical light that shines in one direction: the same direction throughout the entire scene regardless of location. Think of how the sun shines in the same direction no matter what street you are standing on (in the same city). This best simulates sunlight rather than indoor lighting. You can calculate the direction your triangle faces and then apply the color of the light depending on whether the triangle faces into the light or away from it. Face away from the light and the triangle gets 0% of the light and remains black, or whatever silhouette color you gave it with ambient light. If the normal indicates that the triangle faces straight into the “light” direction (which is nothing more than a direction and a color), the triangle is shaded with 100% of the light’s color. Between 0 degrees (directly into the light) and 90 degrees away, you give it a percentage of the light’s color based on the angle; that percentage is the cosine of the angle between the two directions, which a dot product gives you directly. At 90 degrees or more away from the light, the triangle receives none of the light’s color. It only receives a percentage based on how much it faces the light.
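Here is roughly what that looks like as a GLSL fragment shader sketch. The names (Normal, LightDirection, and so on) are assumptions, and LightDirection is assumed to be normalized and pointing the way the light travels:

#version 330 core

in vec3 Normal;                 // direction this part of the surface faces
uniform vec3 LightDirection;    // assumed normalized; the direction the light travels
uniform vec3 LightColor;
uniform vec3 AmbientColor;      // the silhouette color
out vec4 FragColor;

void main()
{
    // dot() gives the cosine of the angle between the two directions:
    // 1.0 facing straight into the light, 0.0 at 90 degrees, negative beyond.
    // max() clamps the faces-away case to zero light.
    float Facing = max(dot(normalize(Normal), -LightDirection), 0.0);
    FragColor = vec4(AmbientColor + LightColor * Facing, 1.0);
}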

It’s what makes 3D graphics possible. In the early days this is all they had, and you got graphics somewhat like the original Tron movie: all very faceted and mostly just one color per object. Although, with a little work, you can assign different colors to different triangles in the same object. But this lighting effect, along with some other techniques that help even more, is what makes models look 3D. It is still the most important technique in 3D graphics.

So then along comes Henri Gouraud. Somewhere in there, he or people of that time came up with the idea of interpolating across the face of the triangle. This is just a weighted average. You can assign each corner (vertex) a value such as a color. Then, for each pixel in the triangle you are shading in, you can interpolate between those 3 colors. The closer you are to a given corner, the more of that corner’s color the pixel receives. It’s a weighted average, pure and simple. That’s called interpolation, or LERP (linear interpolation). By doing this, you can move the normals that describe the direction the triangle faces onto the vertices. So, instead of one face normal that describes the direction the face of the triangle points, you have 3 normals, one for each vertex. And you can average the face normals of neighboring triangles so that each vertex normal is the average of all the faces connected to that vertex.
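A minimal GLSL sketch of that interpolation (attribute names assumed): whatever the vertex shader writes to an out variable arrives in the fragment shader already LERPed, a weighted average of the three corners’ values based on how close the pixel is to each corner.

// vertex shader: hand each corner's color to the rasterizer
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
out vec3 VertexColor;

void main()
{
    VertexColor = color;    // one value per corner
    gl_Position = vec4(position, 1.0);
}

// fragment shader: VertexColor arrives already interpolated per pixel
#version 330 core
in vec3 VertexColor;
out vec4 FragColor;

void main()
{
    FragColor = vec4(VertexColor, 1.0);
}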

Now you can interpolate the direction each pixel faces using this same weighted average. The closer a pixel is to any given corner, the more it gets the direction of that vertex. So, it’s a weighted average of the 3 directions of the 3 vertices depending on how close the pixel is to every corner. This allows for smooth shading on triangles that are in reality flat. You can see this is an illusion if you can see the edges of the triangle because they will still be straight. But the face of the triangle will look curved.

I think Bui Tuong Phong came up with the mathematical equation for adding a specular highlight to this smooth shading, to control whether a surface looks matte or glossy. It’s basically controlling the glossiness. Jim Blinn later came up with another equation that was basically the same thing with slightly different math.
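A sketch of the Blinn variant in GLSL, with assumed names; LightDirection is assumed normalized, and ViewDirection points from the surface toward the eye:

#version 330 core

in vec3 Normal;
in vec3 ViewDirection;          // from the surface toward the eye
uniform vec3 LightDirection;    // assumed normalized; the direction the light travels
uniform float Shininess;        // higher = tighter, glossier highlight
out vec4 FragColor;

void main()
{
    vec3 N = normalize(Normal);             // re-normalize after interpolation
    vec3 L = -LightDirection;
    float Diffuse = max(dot(N, L), 0.0);

    // Blinn's trick: compare the normal against the "half vector" between
    // the light and view directions instead of computing a reflection.
    vec3 H = normalize(L + normalize(ViewDirection));
    float Specular = pow(max(dot(N, H), 0.0), Shininess);

    FragColor = vec4(vec3(Diffuse + Specular), 1.0);
}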

So, now this gets back to texturing. If you map a vertex (corner of the triangle) to a position in a photograph, you can use the weighted averages of those positions to map every pixel on the surface of the triangle to a spot (texel) in the photograph. The effect when you draw it is to make it appear that you are stretching the photograph across the face of the triangle. Instead of other lighting methods determining the color of the pixel, you just map it to a spot in the photograph and “sample” the color at that spot to set the color of the triangle’s pixel when shading in the area between the vertices. The position is a weighted average of positions. This mapping is called UV mapping. When you build your vertex buffer, or when you model in the modeling program, you assign every vertex a UV coordinate that maps that vertex to a specific XY position in the photograph. Interpolation maps the areas in between the vertices.

The values you get in a GLSL fragment shader are already interpolated for you, unless you turn interpolation off (the flat qualifier). So, you are basically getting per-pixel values in the fragment shader. For example, the position will be a pixel’s position. The UV coordinate will be a pixel’s UV coordinate. You call a sampler to pull that coordinate’s color from a photograph when shading in the triangle.

I mix all the aforementioned techniques together and combine the colors from my ambient, Gouraud, and texture shading.
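As a sketch (all names assumed), a fragment shader that mixes those together might look like this:

#version 330 core

in vec3 Normal;
in vec2 UV;
uniform sampler2D DiffuseMap;   // the photograph
uniform vec3 LightDirection;    // assumed normalized
uniform vec3 LightColor;
uniform vec3 AmbientColor;
out vec4 FragColor;

void main()
{
    float Facing = max(dot(normalize(Normal), -LightDirection), 0.0);
    vec3 Lighting = AmbientColor + LightColor * Facing;
    vec4 Texel = texture(DiffuseMap, UV);   // sample the photograph at this pixel's UV
    FragColor = vec4(Texel.rgb * Lighting, Texel.a);
}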

The next trick is to start using these photographs for data rather than just color. Really, the texel color is already just data; we’re just mapping pixel positions to positions in the photograph. What if, instead of a color, we store at that texel position a normal that describes the direction the pixel faces? Now you can control the facing of each pixel individually instead of using a weighted average. Instead of a smooth transition between vertices for the facing used in the lighting calculations, you can do bump mapping: make the flat triangle surface appear to have any sort of bumps across it by controlling the normal, the direction each pixel faces. Then, when you compare that to the light direction to decide how much of the light’s color to give the pixel, the direction is controlled per pixel.
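A sketch of the sampling side, assuming for simplicity that the map stores normals in the same space as the light direction (a real tangent-space normal map needs an extra matrix to rotate the sampled normal into place; names assumed):

#version 330 core

in vec2 UV;
uniform sampler2D NormalMap;
uniform vec3 LightDirection;    // assumed normalized and in the same space as the map
out vec4 FragColor;

void main()
{
    // the texel stores a direction, not a color: each channel holds one
    // component of the normal, remapped from [-1,1] into [0,1] for storage
    vec3 N = normalize(texture(NormalMap, UV).rgb * 2.0 - 1.0);
    float Facing = max(dot(N, -LightDirection), 0.0);
    FragColor = vec4(vec3(Facing), 1.0);
}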

Use a modeling program to bake a normal map and you can take very high poly models with lots of detail and project the surface direction down to a low poly model and store this data as a normal map. Each pixel/texel in the normal map UV maps to a pixel on the surface of the model through vertex UV coordinates and interpolation. Now your low poly model can be lit up and appear to have all the surface detail of the high poly model at a fraction of the GPU cost in processing.

Such maps can store other data such as ambient occlusion, specular amount, and a lot more. Really whatever data you want to store in the picture file. They all work off the same UV map that the color texture does in basically the same way.
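For example (assumed names, with single-channel occlusion and specular maps), all sampled with the same UV:

#version 330 core

in vec2 UV;
uniform sampler2D DiffuseMap;
uniform sampler2D AmbientOcclusionMap;  // one channel: how shadowed each texel is
uniform sampler2D SpecularMap;          // one channel: how strong the highlight is
out vec4 FragColor;

void main()
{
    // same UV map, three different interpretations of the sampled data
    vec3 Albedo = texture(DiffuseMap, UV).rgb;
    float AO = texture(AmbientOcclusionMap, UV).r;
    float SpecularAmount = texture(SpecularMap, UV).r;
    // SpecularAmount would scale the highlight term in your lighting of choice
    FragColor = vec4(Albedo * AO, 1.0);
}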

without a piece of code that you want to make “work as intended”, it’s hard to figure out what you don’t understand
a texture itself is nothing more than data

now it depends on what you want to do with the data. let’s say you have an image you want to use to cover a primitive, like a rectangle
then you have to send some “texture coordinates” along with the vertices; these describe where in the texture each corner point samples its color value

struct Vertex {
vec3 position;
vec2 texcoord;
};

your fragment shader is responsible for coloring the primitive, so it needs access to the texture:

layout (binding = 3) uniform sampler2D myimagedata;

consider this variable a “pointer” to the texture
so the shader can find the texture, you have to “bind” it to a “texture unit”, here unit 3

glBindTextureUnit(3, texture);

now the fragment shader has access to the 2D texture that is bound at “texture unit 3”

gl_FragColor = texture(myimagedata, texcoord);

the “texcoord” variable has to be passed from the vertex shader to the fragment shader (because you want it interpolated across the rectangle)

textures can also be used to access pre-calculated values of complicated functions (to avoid doing the same calculations again and again and again …)
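for example, a sketch with a hypothetical 256x1 “ToneCurve” texture filled with precomputed values of some function f(x) for x in [0,1]; one filtered fetch replaces evaluating f per pixel (all names here are made up):

#version 330 core
in vec2 UV;
uniform sampler2D Scene;       // the image to adjust
uniform sampler2D ToneCurve;   // hypothetical 256x1 lookup table holding f(x)
out vec4 FragColor;

void main()
{
    vec3 Color = texture(Scene, UV).rgb;
    // one texture fetch per channel replaces evaluating f() directly;
    // linear filtering even interpolates between the precomputed samples
    FragColor = vec4(texture(ToneCurve, vec2(Color.r, 0.5)).r,
                     texture(ToneCurve, vec2(Color.g, 0.5)).r,
                     texture(ToneCurve, vec2(Color.b, 0.5)).r, 1.0);
}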

A texture is essentially a function defined by interpolation and extrapolation of gridded data.

Given a set of texture coordinates, you get a value corresponding to those coordinates. Typically, the value will represent intensity or colour, but (particularly with modern OpenGL) it can be almost anything which can be represented as between 1 and 4 scalar values (integers or reals).

To say anything else useful, we’d first need to know whether you’re interested in the legacy fixed-function pipeline (where e.g. glEnable(GL_TEXTURE_2D) automatically results in primitives being affected by the current texture(s)) or shaders (where textures are simply objects which can be used from within a shader, and the actual results depend upon the shader).

I can’t understand how I can transform a PNG into the image data for glTexImage2D’s last parameter.

A PNG file contains compressed binary image data, and is optimized for storage on-disk.

You cannot send PNG data directly to OpenGL. You need to use an image-loading library to convert it to RGBA data first.

I’m no expert on glTexImage2D (or maybe anything else for that matter), but here are my thoughts. The PNG holds its data in one format that may include compression and so forth, while glTexImage2D is building an OGL texture, which is basically just an array of color texel (texture pixel) data, as I understand it. How that color data array gets interpreted is up to you: all those parameters work together to determine the interpretation.

I wouldn’t suggest understanding every possible format right off the bat; maybe just learn one or two that get the job done and expand your understanding from there. Color theory and such can get complicated.

I’m using FreeImage to read photograph/texture files such as PNG. In theory, you could write your own code to do it, if you understand the PNG format inside and out and how to turn it into the appropriate color data. Of course, you would also need to learn the format of any other image file you intend to support, like JPEG or TIFF or anything else. Personally, I think using an image file library is the way to go, because there’s plenty of game programming stuff to learn without getting into image file processing. You could probably dedicate a year of your life just to learning how to read and write all the popular image formats. I’d rather spend that year learning game programming. And I tend to like learning very low-level stuff like this; sometimes it gets even a little too low level for my tastes. Or at the least, I have to prioritize my time, because I only have so many lifetimes in which to learn this stuff.

Anyway, here’s my code, as one example of what one might do. It uses FreeImage.

Texture2DClass.h


#pragma once
#include <string>
#include <glew.h>

#include <FreeImage.h>
/* This software uses the FreeImage open source image library.
   See http://freeimage.sourceforge.net for details.
   FreeImage is used under the GNU GPL or FIPL, version (license version). */

namespace OGLGameNameSpace
{

	class Texture2DClass
	{
	public:
		Texture2DClass();
		~Texture2DClass();
		bool Load(std::string File, bool GenerateMipMaps = true);
		void Bind(GLuint TexUnit = 0);
		GLuint GetHandle();
		GLuint GetTextureID();

	private:
		GLuint TextureHandle;
		GLuint TextureID;
	};
}

Texture2DClass.cpp


#include "Texture2DClass.h"

using namespace OGLGameNameSpace;


Texture2DClass::Texture2DClass() : TextureHandle(0), TextureID(0)
{
}


Texture2DClass::~Texture2DClass()
{

}


bool Texture2DClass::Load(std::string File, bool GenerateMipMaps)
{
	FREE_IMAGE_FORMAT ImageFormat = FreeImage_GetFileType(File.c_str(), 0);
	if (ImageFormat == FIF_UNKNOWN) ImageFormat = FreeImage_GetFIFFromFilename(File.c_str());
	if (ImageFormat == FIF_UNKNOWN) return false;

	FIBITMAP* ImageBitMap = FreeImage_Load(ImageFormat, File.c_str());
	if (ImageBitMap == nullptr) return false;

	//Whatever bit depth the file had, normalize it to 32-bit pixels.
	FIBITMAP* TempPointer = ImageBitMap;
	ImageBitMap = FreeImage_ConvertTo32Bits(TempPointer);
	FreeImage_Unload(TempPointer);
	if (ImageBitMap == nullptr) return false;

	int Width = FreeImage_GetWidth(ImageBitMap);
	int Height = FreeImage_GetHeight(ImageBitMap);

	//FreeImage hands back BGRA (on little-endian machines); swizzle to the RGBA order we upload.
	GLubyte* Texture = new GLubyte[4 * Width * Height];
	GLubyte* Pixels = (GLubyte*)FreeImage_GetBits(ImageBitMap);

	for (int x = 0; x < (Width * Height); x++)
	{
		Texture[x * 4 + 0] = Pixels[x * 4 + 2];	//R
		Texture[x * 4 + 1] = Pixels[x * 4 + 1];	//G
		Texture[x * 4 + 2] = Pixels[x * 4 + 0];	//B
		Texture[x * 4 + 3] = Pixels[x * 4 + 3];	//A
	}

	TextureID = 0;
	glGenTextures(1, &TextureID);
	glBindTexture(GL_TEXTURE_2D, TextureID);	//Bind first: the parameter and upload calls below affect the bound texture.

	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
	//Use a mipmapped filter when mipmaps are generated, otherwise they go unused.
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GenerateMipMaps ? GL_LINEAR_MIPMAP_LINEAR : GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	//glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, Width, Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid*)Texture);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, Width, Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid*)Texture);
	if (GenerateMipMaps) glGenerateMipmap(GL_TEXTURE_2D);

	delete[] Texture;	//Array delete to match the array new.
	FreeImage_Unload(ImageBitMap);

	return true;
}


void Texture2DClass::Bind(GLuint TexUnit)
{
	TextureHandle = TexUnit;	//Remember which texture unit this texture last went to.
	glActiveTexture(GL_TEXTURE0 + TexUnit);
	glBindTexture(GL_TEXTURE_2D, TextureID);	//Bind the texture object itself, not the unit number.
}


GLuint Texture2DClass::GetHandle()
{
	return TextureHandle;
}


GLuint Texture2DClass::GetTextureID()
{
	return TextureID;
}

Partial Calling Code (with parts missing to aid clarity)


bool Game::LoadContent()
{
	bool NoCatastrophicFailuresOccured = false;	//Close the program if even one mesh fails to initialize correctly.
	float ScaleFactor = 200.0f;
	float TerrainSize = 500.0f;


	//GrassTexture.Load("Textures/Grass.dds", true);
	GrassTexture.Load("Textures/FloorTiles.jpg", true);
	//GrassTexture.Load("Textures/MudWall.jpg", true);
	
	GrassTexture.Bind(0);

	NoCatastrophicFailuresOccured = Shader.LoadShader("Shaders/BlinnPhong.vrt", "Shaders/BlinnPhong.frg");

	glClearColor(0.392156862745098f, 0.5843137254901961f, 0.9294117647058824f, 1.0f);	//XNA's "Cornflower Blue"


	
	GLfloat GroundVertexBuffer[] = {
		-TerrainSize, 0.0f,  TerrainSize,	       0.0f, ScaleFactor,	0.0f, 1.0f, 0.0f,	1.0f, 1.0f, 1.0f, 1.0f,
		 TerrainSize, 0.0f,  TerrainSize,	ScaleFactor, ScaleFactor,	0.0f, 1.0f, 0.0f,	1.0f, 1.0f, 1.0f, 1.0f,
		 TerrainSize, 0.0f, -TerrainSize,	ScaleFactor, 0.0f,			0.0f, 1.0f, 0.0f,	1.0f, 1.0f, 1.0f, 1.0f,
		-TerrainSize, 0.0f, -TerrainSize,		 0.0f,	 0.0f,			0.0f, 1.0f, 0.0f,	1.0f, 1.0f, 1.0f, 1.0f
	};

	GLuint GroundIndices[] = {
		0,1,2,
		0,2,3
	};
	Ground.DefineMesh(4, GroundVertexBuffer, 6, GroundIndices, &GrassTexture);

	
	return NoCatastrophicFailuresOccured;
}