Skybox (Cubemap) - Only first loaded image is stored in texture buffer

Hey,

I’ve been trying to implement a simple skybox in my program, which seemed fairly straightforward, but I’ve run into some trouble.
The shaders seem to be correct, but for whatever reason it’s only drawing whichever image I’ve loaded first.
So basically, if I load the top image (GL_TEXTURE_CUBE_MAP_POSITIVE_Y) first, it will only draw the top of the skybox; if I load the right image (GL_TEXTURE_CUBE_MAP_POSITIVE_X) first instead, it only draws the right side.

I’ve been following the tutorial over here; the only noteworthy differences are that I’m using glCompressedTexImage2D instead of glTexImage2D to upload the images, and that I’m loading mipmaps as well.

I’ve noticed that after loading all of the images, the size of the texture in the buffer is 512x512, which is the size of each individual skybox image. That doesn’t seem right, does it?

This is the first time I’m working with cubemaps so I’m somewhat clueless as to what’s going on here. Any help would be much appreciated.
If needed I can post specific parts of my code as well.

Post all relevant code. Preferably a minimal, complete program.

Saying “I’m doing it like this tutorial, except I’m not actually doing it like that” doesn’t tell us anything useful.

[QUOTE=GClements;1255959]Post all relevant code. Preferably a minimal, complete program.

Saying “I’m doing it like this tutorial, except I’m not actually doing it like that” doesn’t tell us anything useful.[/QUOTE]
I figured someone may have had a similar issue, but alright.
Loading the images:

const GLenum cubemapTargets[6] = {
	GL_TEXTURE_CUBE_MAP_POSITIVE_Y, // Top
	GL_TEXTURE_CUBE_MAP_NEGATIVE_X, // Left
	GL_TEXTURE_CUBE_MAP_POSITIVE_Z, // Front
	GL_TEXTURE_CUBE_MAP_POSITIVE_X, // Right
	GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, // Back
	GL_TEXTURE_CUBE_MAP_NEGATIVE_Y // Bottom
};

const char *postfix[6] = {"up","lf","ft","rt","bk","dn"};
GLuint DDSLoader::LoadCubemap(const char *imgFile,DDSTextureInfo **texture)
{
	std::string pathCache = FileManager::GetNormalizedPath(imgFile);
	std::map<std::string,GLuint>::iterator i = m_texIds.find(pathCache);
	if(i != m_texIds.end())
	{
		if(texture != NULL)
			*texture = &m_textures[i->second];
		return i->second;
	}

	GLuint textureID;
	glGenTextures(1,&textureID);
	glBindTexture(GL_TEXTURE_CUBE_MAP,textureID);
	glPixelStorei(GL_UNPACK_ALIGNMENT,1);

	std::string path = "materials/";
	path += imgFile;
	if(path.substr(path.length() -4) == ".dds")
		path = path.substr(0,path.length() -4);
	StringToLower(path);
	for(unsigned int j=0;j<6;j++)
	{
		std::string subPath = path +postfix[j] +".dds";
		const char *cPath = subPath.c_str();
		nv_dds::CDDSImage img;
		if(!img.load(cPath))
		{
			glDeleteTextures(1,&textureID);
			DDSLOADER_ERR;
		}
		// Upload the base level (mip 0) for this face
		glCompressedTexImage2D(cubemapTargets[j],0,img.get_format(),img.get_width(),img.get_height(),0,img.get_size(),img);
		// Upload the remaining mipmap levels; nv_dds stores them separately from the base image
		for(int i=0;i<img.get_num_mipmaps();i++)
		{
			nv_dds::CSurface mipmap = img.get_mipmap(i);
			glCompressedTexImage2D(cubemapTargets[j],i +1,img.get_format(),mipmap.get_width(),mipmap.get_height(),0,mipmap.get_size(),mipmap);
		}
	}
	GLint width,height;
	glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP,0,GL_TEXTURE_WIDTH,&width);
	glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP,0,GL_TEXTURE_HEIGHT,&height);
	DDSTextureInfo text;
	text.ID = textureID;
	text.width = width;
	text.height = height;

	m_texIds.insert(std::map<std::string,GLuint>::value_type(pathCache,textureID));
	m_textures.insert(std::map<GLuint,DDSTextureInfo>::value_type(textureID,text));
	if(texture != NULL)
		*texture = &m_textures[textureID];
	return textureID;
}

The nv_dds class comes from the NVIDIA SDK.

The rendering code is spread across several places, so it would be difficult to extract and post in a compact form, but I believe the problem originates from the code above.
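Conceptually, though, the draw path boils down to something like this (heavily simplified sketch; skyboxProgram, cubemapID and cubeVAO are just stand-ins for my actual objects):

// Disable depth writes, bind the cubemap and draw a cube around the camera;
// the fragment shader samples a samplerCube using the interpolated direction.
glDepthMask(GL_FALSE);
glUseProgram(skyboxProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP,cubemapID);
glBindVertexArray(cubeVAO);
glDrawArrays(GL_TRIANGLES,0,36);
glBindVertexArray(0);
glDepthMask(GL_TRUE);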

Note that GL_TEXTURE_CUBE_MAP isn’t a valid target for glGetTexLevelParameteriv(). You have to specify an individual face (cube map arrays are valid, as are proxies). This shouldn’t cause the problem you describe, though.

Have you tried calling glGetError()?
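For example, something along these lines will tell you whether each face actually received data and whether any of the queries generated an error (just a sketch; it assumes the cube map texture is currently bound):

// Query each face individually; GL_TEXTURE_CUBE_MAP itself isn't accepted here.
// The six face enums are consecutive, starting at GL_TEXTURE_CUBE_MAP_POSITIVE_X.
for(int face=0;face<6;face++)
{
	GLint w = 0,h = 0,compressed = GL_FALSE;
	GLenum target = GL_TEXTURE_CUBE_MAP_POSITIVE_X +face;
	glGetTexLevelParameteriv(target,0,GL_TEXTURE_WIDTH,&w);
	glGetTexLevelParameteriv(target,0,GL_TEXTURE_HEIGHT,&h);
	glGetTexLevelParameteriv(target,0,GL_TEXTURE_COMPRESSED,&compressed);
	printf("face %d: %dx%d, compressed=%d, glGetError=0x%x\n",face,w,h,compressed,glGetError());
}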

[QUOTE=GClements;1255964]Note that GL_TEXTURE_CUBE_MAP isn’t a valid target for glGetTexLevelParameteriv(). You have to specify an individual face (cube map arrays are valid, as are proxies). This shouldn’t cause the problem you describe, though.

Have you tried calling glGetError()?[/QUOTE]
The lines you’ve mentioned are throwing a GL_INVALID_ENUM error, but as you’ve said, that’s to be expected.
No other errors before that.
Is there any way to check if the buffer actually has an image assigned to each side?

AMD hardware, by any chance? I’ve found in the past that unless you load cubemap faces in the exact order of the GLenum defines (+x/-x/+y/-y/+z/-z) the AMD GL driver will barf. Not certain how relevant that is nowadays - they may have fixed it - but worth checking.
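In other words, try uploading the faces like this (just a sketch - format, width, height, dataSize and data stand in for whatever your loader gives you; it relies on the six face enums being consecutive, which they are):

// Upload the faces in the order +X, -X, +Y, -Y, +Z, -Z by offsetting from POSITIVE_X.
for(int face=0;face<6;face++)
{
	glCompressedTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X +face,0,
		format,width,height,0,dataSize[face],data[face]);
}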

I do have AMD hardware, yeah, but changing the order didn’t help. Strangely enough, with the order you’ve given me it still only rendered the top image (+y).
I will try it again tomorrow as an independent program and see how that goes. Maybe it has something to do with the rendering code after all.
I’ll report back once I’ve made some progress.

Thanks for your input so far.