Could you please help me create/fix my texture loading/storing/using class?

So I am creating my own sort-of GL engine from scratch. My goal is to be able to have different classes that define different shapes and how to draw them, and I want each of these shape classes to hold a class containing a texture and a class containing a shader program.

As of now I have a pretty boss shader loader!

However, my texture loader… is really not good. First of all, even though I have downloaded many examples, I don’t really understand how textures are passed to the GPU, which makes things problematic. Furthermore, I am having issues with the function that loads textures.

This is because I think I am supposed to call “glActiveTexture(GLenum)”. The problem is I don’t know where to get that GLenum without hard-coding the values defined in glew.h. Could you please help me do this or work around it? BTW the only libraries I am using are glew.h, freeglut.h, and SOIL.h (however, if there is an open-source library that is better than SOIL, I would be fine changing which library is in charge of file reading).

If you want I could also send you my VisualStudio Project.

Here is my code; thank you for spending the time to look it over. By the way, in Texture.cpp I have put a comment block at the spot where I am confused:


//Texture.h
#pragma once
#include <map>
#include <fstream>
#include <iostream>
#include <string>
#include <sstream>
#include <vector>
#include "GL\glew.h"
#include "GL\freeglut.h"
#include "SOIL.h"

class Texture
{
public:
	Texture(std::string it);
	~Texture();
	void Load();
	GLuint texID;
private:
	int width;
	int height;
	int channels;
	std::string filename;
};
std::vector<Texture> Textures;
int Amm = 0;

//Texture.cpp
#include "Texture.h"

Texture::Texture(std::string it)
{
	width = 0;
	height = 0;
	channels = 0;
	filename = it;
}

Texture::~Texture()
{

}

void Texture::Load()
{
	GLubyte* piData = SOIL_load_image(filename.c_str(), &width, &height, &channels, SOIL_LOAD_AUTO);
	if (piData == NULL) 
	{
		std::cerr << "Cannot load image: " << filename.c_str() << std::endl;
		exit(EXIT_FAILURE);
	}
	//vertically flip the image on Y axis since it is inverted
	int i, j;
	for (j = 0; j * 2 < height; ++j)
	{
		int index1 = j * width * channels;
		int index2 = (height - 1 - j) * width * channels;
		for (i = width * channels; i > 0; --i)
		{
			GLubyte temp = piData[index1];
			piData[index1] = piData[index2];
			piData[index2] = temp;
			++index1;
			++index2;
		}
	}
	Amm += 1;
	texID = (Amm);
	glGenTextures(1, &texID);
	GLenum tex  GL_TEXTURE'AMM' 
	glActiveTexture(tex);
	/*^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	OBVIOUSLY THIS LINE WON'T WORK, BUT IT IS SUPPOSED TO BE GL_TEXTURE0,
	GL_TEXTURE1, etc. as defined in glew.h:
	#define GL_TEXTURE0 0x84C0
	#define GL_TEXTURE1 0x84C1*/
	
	//glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, texID);

	//Texture parameters
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

	//allocate texture
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, piData);

	//Free soil data
	SOIL_free_image_data(piData);
}

Thanks so much!

Also, a couple of extra questions:

  1. How are textures passed to the GPU?
  2. How can textures be stored in a theoretically small-sized GLuint?
  3. Any other functions that would be useful to have in my Texture class?
  4. Do you think it will work?
  5. Thanks much!
  6. What functions would I need in order to give each Texture class the option to have a normal map and a specular map? And would I load those in the same way as any other texture?

	Amm += 1;
	texID = (Amm);
	glGenTextures(1, &texID);
	GLenum tex  GL_TEXTURE'AMM' 
	glActiveTexture(tex);

It’s not clear to me what this Amm thing is intended to do, or why you’re using glActiveTexture to begin with.

glActiveTexture selects which texture image unit is currently active. But that only really matters when you’re binding a texture in order to render with it. Your Load function is just binding the texture to upload pixel data to it. And once you’re finished uploading, you should unbind it.

So you should use whatever texture unit was active when your Load function was called, and therefore you shouldn’t need to call glActiveTexture at all.

However, if you do need to select the texture unit by an integer rather than an enum, the correct way to do this is like this:

glActiveTexture(GL_TEXTURE0 + i);

where i is the integer in question.
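Put concretely, here is a minimal sketch of how that split might look (BindForRendering is just a made-up helper name for illustration):

//Load() only uploads, so it leaves the active texture unit alone
void Texture::Load()
{
	glGenTextures(1, &texID);
	glBindTexture(GL_TEXTURE_2D, texID);
	//... glTexParameteri and glTexImage2D calls go here ...
	glBindTexture(GL_TEXTURE_2D, 0); //unbind once uploading is finished
}

//selecting a unit only matters when binding for rendering
void BindForRendering(GLuint texID, int unit)
{
	glActiveTexture(GL_TEXTURE0 + unit); //unit is 0, 1, 2, ...
	glBindTexture(GL_TEXTURE_2D, texID);
}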

	How are textures passed to the GPU?

What do you mean by that? Are you asking about how textures are used by shaders, or how to upload texture data into a texture object?
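To illustrate the difference, here is a rough sketch of both operations (program, texID, width, height, and pixels are placeholders, and uDiffuse is an assumed sampler2D uniform name):

//uploading: copies pixel data from CPU memory into the texture object
glBindTexture(GL_TEXTURE_2D, texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);

//using in a shader: bind the texture to a unit and point a sampler uniform at it
glUseProgram(program);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, texID);
glUniform1i(glGetUniformLocation(program, "uDiffuse"), 0); //pass the unit index, not texID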

	How can textures be stored in a theoretically small-sized GLuint?

The same way a 32/64-bit pointer can “store” a large block of memory. It doesn’t. In both cases (pointers and texture object names), the value is merely a reference to that memory.
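In code terms:

GLuint texID = 0;
glGenTextures(1, &texID); //texID is now just a small name like 1, 2, 3, ...
//the pixel storage lives in driver/GPU memory; texID merely refers to it,
//the same way a pointer refers to a heap allocation without containing it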

	Any other functions that would be useful to have in my Texture class?

Well, I wouldn’t have a texture class like this to begin with. The most I’d do is have a simple RAII-wrapper (with C++11 move support) that will create/delete the texture object. The major operations on textures would be done by external functions, since those operations either operate on global state not wholly owned by the texture or can fail.

That being said, there are two basic operations that you do with textures: bind them to the context for use, and upload pixel data into them. If you’re going to have a fully-qualified texture object, then those are the main operations you’ll be using.
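A minimal sketch of such a wrapper might look like this (GLTexture is a made-up name, and it assumes a GL context is current whenever it is constructed or destroyed):

#include "GL\glew.h"

class GLTexture
{
public:
	GLTexture() { glGenTextures(1, &id); }
	~GLTexture() { if (id) glDeleteTextures(1, &id); }

	//non-copyable: two wrappers must never delete the same texture object
	GLTexture(const GLTexture&) = delete;
	GLTexture& operator=(const GLTexture&) = delete;

	//movable: ownership of the texture object is transferred
	GLTexture(GLTexture&& other) : id(other.id) { other.id = 0; }
	GLTexture& operator=(GLTexture&& other)
	{
		if (this != &other)
		{
			if (id) glDeleteTextures(1, &id);
			id = other.id;
			other.id = 0;
		}
		return *this;
	}

	GLuint get() const { return id; }

private:
	GLuint id = 0;
};

Binding and uploading would then be free functions that take the wrapper (or its GLuint) as a parameter.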

	Do you think it will work?

Yes, it’s possible to make a class that contains an OpenGL texture.

	Thanks much!

That’s not a question.

	What functions would I need in order to give each Texture class the option to have a normal map and a specular map? And would I load those in the same way as any other texture?

Textures are textures. Whether they are a “normal map” or “specular map” or whatever else depends on what data you put into them and how you use them in your shader. But the texture itself is just a texture. You upload data into them the same way you would for any other texture that uses that particular format of texture data.

Even the format you use doesn’t depend specifically on being a “normal map” or whatever. There are many viable choices for texture data. Some normal maps only need to store the X and Y component of the normals, with the Z being computed by the shader. So GL_RG8_SNORM could be sufficient. Others might need a Z component, so GL_RGBA8_SNORM is needed. Alternatively, you could use GL_RGB10_A2 (your shader will need to compensate for not being signed normalized). You could even use GL_RGB16F if you really felt like wasting memory…
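For instance, a rough sketch of the two-component route, assuming a GL 3.1+ context (normalTexID, uNormalMap, and pixels are placeholders):

//upload signed, two-component normal data (X and Y only)
glBindTexture(GL_TEXTURE_2D, normalTexID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8_SNORM, width, height, 0, GL_RG, GL_BYTE, pixels);

//in the fragment shader, Z is then reconstructed, e.g.:
//	vec2 xy = texture(uNormalMap, uv).xy;
//	vec3 n = vec3(xy, sqrt(max(0.0, 1.0 - dot(xy, xy))));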