Writing out VBO/EBO data from a compute shader

Hi,

I need to fill in a VBO and an EBO from within a compute shader. Mine is a dynamic tessellation scenario where the usual hardware tessellation shaders are not sufficient for my needs. I believe this kind of setup is commonly done with CUDA or OpenCL, for instance.

Preparation step:

  1. Create the VBO and EBO. Give them a maximum size, but do not fill them with data. For debugging purposes, I actually fill them with sample data that is later rendered as-is by glDrawElements, since my compute shader is empty and does nothing right now.
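
Concretely, the preparation step looks roughly like this (MAX_VBO_BYTES, MAX_EBO_BYTES and the sample triangle are placeholders; FloatBuffer/IntBuffer come from java.nio and org.lwjgl.BufferUtils):

long MAX_VBO_BYTES = 1024 * 1024; // maximum vertex storage, placeholder value
long MAX_EBO_BYTES = 256 * 1024;  // maximum index storage, placeholder value

// Allocate the VBO at its maximum size, then upload debug vertex data
FloatBuffer sampleVertexData = BufferUtils.createFloatBuffer(9);
sampleVertexData.put(new float[] { 0f, 0f, 0f,  1f, 0f, 0f,  0f, 1f, 0f }).flip();
vboBufferID = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboBufferID);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, MAX_VBO_BYTES, GL15.GL_DYNAMIC_DRAW);
GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, sampleVertexData); // debug data only
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);

// Same idea for the EBO: maximum size first, then debug indices
IntBuffer sampleIndexData = BufferUtils.createIntBuffer(3);
sampleIndexData.put(new int[] { 0, 1, 2 }).flip();
eboBufferID = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, eboBufferID);
GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, MAX_EBO_BYTES, GL15.GL_DYNAMIC_DRAW);
GL15.glBufferSubData(GL15.GL_ELEMENT_ARRAY_BUFFER, 0, sampleIndexData); // debug data only
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);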

Render loop:

1a. Bind the VBO/EBO as TEXTURE_BUFFER shader images, i.e. in such a way that my compute shader will be able to use them in a read/write fashion. This step requires the creation of two TEXTURE_BUFFER texture objects.

1b. Launch compute shader (glDispatchCompute)
1c. Ensure the compute shader's writes are visible before drawing (glMemoryBarrier with the shader image access bit set)
1d. Unbind the Images.

2a. Bind the VBO/EBO as ARRAY_BUFFER/ELEMENT_ARRAY_BUFFER
2b. glDrawElements the VBO/EBO
2c. Unbind the VBO and EBO

Code of my render loop (I’m using LWJGL):

// At this point, the VBO and EBO have been created
// They are not bound

if (textureID == -1)
{
  // Create TBO on demand, if needed
  textureID = GL11.glGenTextures();
  GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, textureID);
  GL31.glTexBuffer(GL31.GL_TEXTURE_BUFFER, internalFormat, vboBufferID);
  GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, 0);
}

// Now bind the TBO as an Image
GL42.glBindImageTexture(0, textureID, 0, false, 0, GL15.GL_READ_WRITE, internalFormat);
                    
GL43.glDispatchCompute(1, 1, 1); // 1, 1, 1 for testing purposes only right now
GL42.glMemoryBarrier(GL42.GL_SHADER_IMAGE_ACCESS_BARRIER_BIT); // wait for completion

// Unbind the TBO, to make sure subsequent bindings as VBO/EBO will be ok
GL42.glBindImageTexture(0, 0, 0, false, 0, GL15.GL_READ_WRITE, internalFormat);

// Now bind VBO and EBO, and issue glDrawElements
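
For reference, the draw step that follows looks roughly like this (attribute 0 as vec3 positions and indexCount are placeholders for my real layout):

GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboBufferID);
GL20.glEnableVertexAttribArray(0);
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0); // assumed layout: tightly packed vec3 positions
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, eboBufferID);

GL11.glDrawElements(GL11.GL_TRIANGLES, indexCount, GL11.GL_UNSIGNED_INT, 0);

// Unbind the VBO and EBO
GL20.glDisableVertexAttribArray(0);
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);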

I am using this code as part of a more complex application (unfortunately). Right now, I’ve got an OpenGL error which I have yet to properly dig into and analyze. Executing the compute shader and calling glMemoryBarrier works fine. My issue revolves around the buffers.

Does the above code look correct to you?

Thanks,

Fred

NB: my compute shader code is completely empty right now, and I pre-fill the VBO and EBO with sample data:

#version 430 compatibility
layout (local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
void main()
{
}

[QUOTE=fred_em;1245884]Right now, I’ve got an OpenGL error which I have yet to properly dig into and analyze. Executing the compute shader and calling glMemoryBarrier works fine. My issue revolves around the buffers.
[/QUOTE]

I located the error. It is here:


if (textureID == -1) 
{
  // Create TBO on demand, if needed
  textureID = GL11.glGenTextures();
  GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, textureID);
  // The following line throws a GL_INVALID_OPERATION
  GL31.glTexBuffer(GL31.GL_TEXTURE_BUFFER, internalFormat, vboBufferID);
  GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, 0);
}

vboBufferID is 3 here (i.e. it is non-zero).

When I create the TEXTURE_BUFFER off a newly created buffer instead of off the VBO, things run with no error, but it obviously doesn’t do what I want.

// This code (obviously) works fine
if (textureID == -1) 
{
  int vboBufferID = GL15.glGenBuffers();
  GL15.glBindBuffer(GL31.GL_TEXTURE_BUFFER, vboBufferID);
  GL15.glBufferData(GL31.GL_TEXTURE_BUFFER, 256, GL15.GL_STREAM_DRAW);
  GL15.glBindBuffer(GL31.GL_TEXTURE_BUFFER, 0);

  // Create TBO on demand, if needed
  textureID = GL11.glGenTextures();
  GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, textureID);
  GL31.glTexBuffer(GL31.GL_TEXTURE_BUFFER, internalFormat, vboBufferID);
  GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, 0);
}

The following code works, but I am unable to say why:


if (textureID == -1) 
{
  // Create TBO on demand, if needed
  textureID = GL11.glGenTextures();
  GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, textureID);
  GL15.glBindBuffer(GL31.GL_TEXTURE_BUFFER, vboBufferID);
  GL31.glTexBuffer(GL31.GL_TEXTURE_BUFFER, internalFormat, vboBufferID);
  GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, 0);
}

I am confused as to why I need to bind both the buffer and the texture. From my understanding, glBindTexture followed by glTexBuffer should be sufficient, shouldn’t it?

Things have settled down. The code was pretty much going berserk before, and calling glBindBuffer was, I believe, only making it work by accident.
glBindBuffer(GL_TEXTURE_BUFFER) is not needed after all, and the compute shader setup now seems to be working as expected.

That said, I don’t fully understand the reasoning behind glBindBuffer(GL_TEXTURE_BUFFER). I don’t see the need for GL_TEXTURE_BUFFER to exist as a buffer binding point (it obviously makes sense as a texture binding target).

glTexBuffer accepts buffer IDs regardless of where they are currently bound. To load a buffer that is destined to be handed to glTexBuffer, one can pretty much use any buffer binding point, e.g. I can do:

glBindBuffer(GL_ARRAY_BUFFER, bufferid); // or GL_ELEMENT_ARRAY_BUFFER, etc.
glBufferData(GL_ARRAY_BUFFER, <some data>);
glTexBuffer(......., bufferid);
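
In LWJGL terms, I believe that boils down to something like this (the 256-byte size and RGBA32F internal format are placeholders):

// Fill the buffer through GL_ARRAY_BUFFER, then attach it to the buffer texture
// by name only - no GL_TEXTURE_BUFFER buffer binding involved
int bufferID = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, bufferID);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, 256, GL15.GL_STREAM_DRAW); // placeholder size
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);

int texID = GL11.glGenTextures();
GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, texID);
GL31.glTexBuffer(GL31.GL_TEXTURE_BUFFER, GL30.GL_RGBA32F, bufferID); // takes the buffer name directly
GL11.glBindTexture(GL31.GL_TEXTURE_BUFFER, 0);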

GL_TEXTURE_BUFFER is just like a temporary buffer binding point.

Right?