Pixel blending using glTexSubImage2D()

Hey all,

I have a question about blending a texture for transparency. We are trying to optimize our rendering and are experimenting with placing bullet holes and blood splatters onto our background texture using glTexSubImage2D(). The issue is that the source pixels are copied onto the background texture rather than blended, so the destination pixels are completely overwritten. I believe glTexSubImage2D() only replaces pixels outright, so I may not be using the right tool for this.

Is there something I am missing, or another method I should use? I have a feeling the solution is much more complex than I expected.


void OGL_Renderer::drawImageToImage(Image& source, const Rectangle_2d& srcRect, Image& destination, const Point_2d& dstPoint)
{
	Image subImage(&source, srcRect.x, srcRect.y, srcRect.w, srcRect.h);
	
	glEnable(mTextureTarget);
	
	// Bind our destination texture.
	glBindTexture(mTextureTarget, getTextureId(destination));
	
	// Detect which order the pixel data is in to properly feed OGL.
	GLint nColors = subImage.getPixels()->format->BytesPerPixel;
	
	GLenum textureFormat;
	if(nColors == 4)
	{
		if(subImage.getPixels()->format->Rmask == 0x000000ff)
			textureFormat = GL_RGBA;
		else
			textureFormat = GL_BGRA;
	}
	else if(nColors == 3)     // no alpha channel
	{
		if(subImage.getPixels()->format->Rmask == 0x000000ff)
			textureFormat = GL_RGB;
		else
			textureFormat = GL_BGR;
	}
	else
	{
		std::cout << "Image is not truecolor." << std::endl;
		glDisable(mTextureTarget);
		return;	// Don't call glTexSubImage2D with an unset texture format.
	}
	
	glTexParameteri(mTextureTarget, GL_TEXTURE_MIN_FILTER, TEXTURE_FILTER);
	glTexParameteri(mTextureTarget, GL_TEXTURE_MAG_FILTER, TEXTURE_FILTER);
	
	// Check for the need to clip the source texture.
	Rectangle_2d clipRect;
	
	if ((dstPoint.x + srcRect.w) > destination.getWidth())
	{
		clipRect.w = srcRect.w - ((dstPoint.x + srcRect.w) - destination.getWidth());
	}
	else
		clipRect.w = srcRect.w;
	
	if ((dstPoint.y + srcRect.h) > destination.getHeight())
	{
		clipRect.h = srcRect.h - ((dstPoint.y + srcRect.h) - destination.getHeight());
	}
	else
		clipRect.h = srcRect.h;
	
	glColor3f(1.0, 1.0, 1.0);
	
	// Copy source onto destination.
	glTexSubImage2D(mTextureTarget, 0, dstPoint.x, dstPoint.y, clipRect.w, clipRect.h, textureFormat, GL_UNSIGNED_BYTE, subImage.getPixels()->pixels);

	glDisable(mTextureTarget);
}

We already have GL_BLEND enabled and use

 glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

Blending is what happens when you do actual rendering. glTexSubImage just copies pixel data into textures. If you want blending, you will have to render to the texture in question.

OK, so the pixel data I'm sending to the texture via glTexSubImage() can't be blended?

The easiest way is to use two textures: one for your model and a second one for the bullets. The second texture should have its alpha values set to 0 everywhere outside the bullet drawing. Then you mix the two textures (a GL_DECAL environment for the second texture should be enough).
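
Something like this is what I have in mind (just a rough sketch, assuming the fixed-function pipeline; backgroundTex, splatTex and the quad position are hypothetical names, and on pre-1.3 contexts you'd use the ARB multitexture entry points instead):

// Rough sketch: fixed-function multitexturing with GL_DECAL on unit 1.
// backgroundTex/splatTex are hypothetical texture IDs; x, y, w, h place the quad.
void drawDecaledQuad(GLuint backgroundTex, GLuint splatTex, float x, float y, float w, float h)
{
	// Unit 0: the background texture.
	glActiveTexture(GL_TEXTURE0);
	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, backgroundTex);
	glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

	// Unit 1: the splat, decaled on top using its alpha (0 outside the splat).
	glActiveTexture(GL_TEXTURE1);
	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, splatTex);
	glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

	// Each vertex needs a texture coordinate for both units.
	glBegin(GL_QUADS);
		glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f); glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f); glVertex2f(x,     y);
		glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f); glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 0.0f); glVertex2f(x + w, y);
		glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f); glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 1.0f); glVertex2f(x + w, y + h);
		glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f); glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 1.0f); glVertex2f(x,     y + h);
	glEnd();

	// Turn unit 1 back off so normal single-textured drawing is unaffected.
	glActiveTexture(GL_TEXTURE1);
	glDisable(GL_TEXTURE_2D);
	glActiveTexture(GL_TEXTURE0);
}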

Is that using multi-texturing? Right now we are blending textures on separate geometry (all 2D, by the way), which works perfectly, but we didn't want a new texture for each bullet hole. We wanted to be able to place the blended bullet hole textures onto the background texture to save video memory and rendering time.

Yeah, glTexSubImage2D is a pixel transfer operation and replaces the destination entirely; not suitable for what you want. On balance I think you’re better off with a texture atlas of bullet holes and blending them in a second pass. Unless you’re really really really tight for video memory you won’t even be aware of the extra overhead of this, and I’m suspecting that it will be faster than updating textures at runtime.
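
For instance, something along these lines (only a sketch of the idea; Decal, atlasTexture and the coordinate fields are made-up names, and it assumes blending is already enabled with your GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA setup):

#include <vector>

struct Decal
{
	float u0, v0, u1, v1;   // sub-rectangle inside the atlas texture
	float x, y, w, h;       // where the decal sits over the background
};

void drawDecals(GLuint atlasTexture, const std::vector<Decal>& decals)
{
	glBindTexture(GL_TEXTURE_2D, atlasTexture);

	glBegin(GL_QUADS);
	for (size_t i = 0; i < decals.size(); ++i)
	{
		const Decal& d = decals[i];
		glTexCoord2f(d.u0, d.v0); glVertex2f(d.x,       d.y);
		glTexCoord2f(d.u1, d.v0); glVertex2f(d.x + d.w, d.y);
		glTexCoord2f(d.u1, d.v1); glVertex2f(d.x + d.w, d.y + d.h);
		glTexCoord2f(d.u0, d.v1); glVertex2f(d.x,       d.y + d.h);
	}
	glEnd();
}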

If you did want to use glTexSubImage2D for this, you would need to keep a copy of the original texels in system memory, update that copy with your bullet hole image data (doing the blend in software), and then upload it with glTexSubImage2D. That will certainly save on video memory, but at the expense of extra system memory usage, and it likely won't perform as well as the second pass.
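
Roughly like this (again only a sketch; it assumes an RGBA8 system-memory copy of the destination and that the splat rectangle fits entirely inside it, and all the names are made up):

// dstPixels: system-memory copy of the whole destination texture (RGBA8).
// srcPixels: the splat image (RGBA8). Assumes the rect fits in the destination.
void blendAndUpload(GLuint textureId,
                    unsigned char* dstPixels, int dstW,
                    const unsigned char* srcPixels, int srcW, int srcH,
                    int dstX, int dstY)
{
	for (int y = 0; y < srcH; ++y)
	{
		for (int x = 0; x < srcW; ++x)
		{
			const unsigned char* s = srcPixels + (y * srcW + x) * 4;
			unsigned char* d = dstPixels + ((dstY + y) * dstW + (dstX + x)) * 4;
			unsigned int a = s[3];

			// The same src-alpha / one-minus-src-alpha blend, done on the CPU.
			for (int c = 0; c < 3; ++c)
				d[c] = (unsigned char)((s[c] * a + d[c] * (255 - a)) / 255);
		}
	}

	// Upload only the affected rectangle out of the full-width system copy.
	glBindTexture(GL_TEXTURE_2D, textureId);
	glPixelStorei(GL_UNPACK_ROW_LENGTH, dstW);
	glPixelStorei(GL_UNPACK_SKIP_ROWS, dstY);
	glPixelStorei(GL_UNPACK_SKIP_PIXELS, dstX);
	glTexSubImage2D(GL_TEXTURE_2D, 0, dstX, dstY, srcW, srcH,
	                GL_RGBA, GL_UNSIGNED_BYTE, dstPixels);
	glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
	glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
	glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
}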

Often it's a matter of memory savings versus performance - pick one. As I said, unless you're coding for very specific hardware with low memory I'd pick performance any day; memory is cheap and plentiful, and is a resource to be used, not scrimped and saved. Think about it this way: if you have 1GB of video memory and you only use 32MB, that's 992MB you're effectively wasting, because it's sitting there doing nothing at all for you when you could be doing something useful and/or interesting with it instead.

I’d like to chime in here and explain a bit more about what we’re doing, exactly.

The fact that there are bullet holes and blood splats is actually irrelevant. The simple point is to be able to render a source texture to a destination texture.

The function's intent is not to conserve memory but instead to be able to modify a given image. For instance, the function could be used to compose a GUI widget from its various skin parts, save the result into a single texture, and then render one quad with one texture instead of a dozen or more quads with several textures.

Or, in the case of a side-scroller, to draw blood and soot from explosions onto the background.

There are any number of other possibilities, but the core functionality we're looking for is essentially to take a source texture and render it onto a destination texture, permanently modifying the destination.

To do this, render to an FBO (or, as a low-tech alternative, render to the back framebuffer before clearing and doing the actual render).
Draw a quad with the background texture, draw a blended quad with the splat texture, then glCopyTexSubImage the affected rectangle into the texture you want to update.

Here is the original function reworked to use the FBO render technique you described.


void OGL_Renderer::drawImageToImage(Image& source, const Rectangle_2d& srcRect, Image& destination, const Point_2d& dstPoint)
{
	// Ignore the call if the destination point is outside the bounds of the destination image.
	if(dstPoint.x > destination.getWidth() || dstPoint.y > destination.getHeight())
		return;

	Image subImage(&source, srcRect.x, srcRect.y, srcRect.w, srcRect.h);
	
	glEnable(mTextureTarget);
	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
	
	// Bind our destination texture.
	glBindTexture(mTextureTarget, getTextureId(destination));
	
	// Check for the need to clip the source texture.
	Rectangle_2d clipRect;
	
	clipRect.w = ((dstPoint.x + srcRect.w) > destination.getWidth()) ? srcRect.w - ((dstPoint.x + srcRect.w) - destination.getWidth()) : srcRect.w;
	clipRect.h = ((dstPoint.y + srcRect.h) > destination.getHeight()) ? srcRect.h - ((dstPoint.y + srcRect.h) - destination.getHeight()) : srcRect.h;

	// Ignore this call if the clipping rect is smaller than 1 pixel in any dimension.
	if(clipRect.w < 1 || clipRect.h < 1)
		return;
	
	// Create a framebuffer object
	GLuint myFBO;
	glGenFramebuffersEXT(1, &myFBO);
	
	// Bind the framebuffer object 
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, myFBO);
	glPushAttrib(GL_VIEWPORT_BIT);
	glViewport(0, 0, destination.getWidth(), destination.getHeight());
	
	// Attach a texture to the FBO 
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, mTextureTarget, getTextureId(destination), 0);
	
	// Position to draw our quad in the FBO
	glColor4ub(255, 255, 255, 255);
	GLfloat vertices[8] = {
		dstPoint.x, dstPoint.y,
		dstPoint.x + clipRect.w, dstPoint.y,
		dstPoint.x + clipRect.w, dstPoint.y + clipRect.h,
		dstPoint.x, dstPoint.y + clipRect.h
	};
	
	GLfloat texture[8] = {
		0.0f, 0.0f,
		1.0f, 0.0f,
		1.0f, 1.0f,
		0.0f, 1.0f
	};
	
	// Render our quad
	drawVertexArray(subImage, vertices, texture);
	
	// Bind our destination texture again
	glBindTexture(mTextureTarget, getTextureId(destination));
	
	// Copy FBO contents to destination
	glCopyTexSubImage2D(mTextureTarget,
						0,
						0,
						0,
						dstPoint.x,
						dstPoint.y,
						clipRect.w,
						clipRect.h);
	
	// Restore the viewport we pushed earlier and unbind the FBO.
	glPopAttrib();
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
	glDeleteFramebuffersEXT(1, &myFBO);
}

It works well! I have two small issues, however. The first is that the source image is drawn mirrored from the mouse position on the y-axis: if I click at the bottom of the window, it gets drawn at the top. The x-axis works just fine.

Second, in the upper-left corner of the window the pixels seem to jitter and change, as if I am getting interference in the texture. Could I be copying too many pixels?

“Bottom-left is the origin” strikes again.

You need to set up an orthographic projection after your glViewport with the bottom and top params swapped around.
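
Something along these lines for the FBO pass only (a minimal sketch, assuming the fixed-function matrix stack and the same destination object your function already uses; note the bottom/top order in glOrtho is the opposite of a typical top-left 2D setup):

glViewport(0, 0, destination.getWidth(), destination.getHeight());

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
// Bottom/top swapped relative to glOrtho(0, w, h, 0, ...): the origin ends up
// at the bottom-left, matching where OpenGL reads framebuffer pixels from.
glOrtho(0.0, destination.getWidth(), 0.0, destination.getHeight(), -1.0, 1.0);

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

// ... draw the splat quad and do the glCopyTexSubImage2D here ...

// Put the matrices back so normal screen rendering is unaffected.
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();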

The images themselves are drawn correctly… it is literally the y position of where the image is rendered that is reversed… the projection is correct.

The orientation of the images is correct. Are my projections incorrect for the FBO, or for the buffer copy after rendering to the FBO?

EDIT2: I wanted to add that if I swap the ortho as suggested, the text is then drawn in the bottom left upside down and the mouse moves opposite to the actual input, but the images draw in the correct screen locations.