Drawing with Compute Shader

Hi everyone,

I have read that a compute shader can render, but I'm asking myself how this works.
Do I have to create the image data for the framebuffer somewhere else, e.g. in a shader storage buffer, and later copy the data into a framebuffer texture, or can I somehow access a (framebuffer) texture directly in the compute shader?

Thanks in advance!

To render from a compute shader, you’d probably use an image for the output.

But how can I access an image for the output?

Input is clear:
uniform sampler2D mytex;
then reading a texel:
vec4 texel = texture(mytex, sometexcoords);

But can I write into “mytex”?
If yes: what's the GLSL command for that?
If no: how would you access a texture for writing?

The only way I've figured out to get data out of compute shaders is (shader storage) buffers.

You can read this tutorial, for example.

OK thanks, silence!! I never heard of / read anything about “Image Load Store” …

Me neither.

[QUOTE=john_connor;1284053]But how can I access an image for the output?

Input is clear:
uniform sampler2D mytex;
[/QUOTE]

(I assume that you’ve figured this much out from the other replies, but I’ll add it here for anyone who stumbles across the thread).

Not “texture”, but “image”.

In the shader, use the image2D (etc.) types, accessed via imageLoad(), imageStore(), etc. If you might be writing the same pixel from different shader invocations, there are also the imageAtomic* functions to perform atomic read-modify-write operations.
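
For example, a minimal compute shader that reads and writes a texture bound as an image might look like this (the binding point, format and names here are just illustrative assumptions):

#version 430

layout(local_size_x = 16, local_size_y = 16) in;

// the texture bound as an image via glBindImageTexture() in the client code;
// the format qualifier (rgba8 here) is required for imageLoad()
layout(binding = 0, rgba8) uniform image2D destImg;

void main()
{
    // integer pixel coordinates, one invocation per pixel
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);

    vec4 c = imageLoad(destImg, p);   // read
    c.rgb = vec3(1.0) - c.rgb;        // modify (invert, just as an example)
    imageStore(destImg, p, c);        // write
}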

In the client code, glBindImageTexture() binds a texture (or rather, a single mipmap level of one) as an image.
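
A minimal client-side sketch to go with the shader above (assuming “tex” is an existing GL_RGBA8 texture and “prog” is the linked compute program; the names and sizes are made up):

glUseProgram(prog);

// bind mipmap level 0 of the texture to image unit 0,
// matching layout(binding = 0, rgba8) in the shader
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);

// one invocation per pixel, with 16x16 work groups
glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1);

// make the image writes visible to subsequent texture fetches
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);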

Images are basically textures used as “raw” arrays. Unlike textures, they can be written as well as read. Access uses integer array indices, with no interpolation, filtering, wrapping, etc. Also, only formats with power-of-two pixel sizes are supported (the only supported three-component format is GL_R11F_G11F_B10F, but normally you’d just use a 4-component format even if you don’t need an alpha channel).
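
To illustrate, the shader-side format qualifier has to match the texture's internal format; a few common combinations (bindings and variable names chosen arbitrarily):

layout(binding = 0, rgba8)          uniform image2D  colorImg;   // GL_RGBA8
layout(binding = 1, r32f)           uniform image2D  scalarImg;  // GL_R32F
layout(binding = 2, r32ui)          uniform uimage2D counterImg; // GL_R32UI (most imageAtomic* functions require r32i/r32ui)
layout(binding = 3, r11f_g11f_b10f) uniform image2D  hdrImg;     // the only three-component format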

This looks weird to me. There might be interesting underlying reasons, for sure. But it looks like going backward to the days when textures (I know, we're talking about images here) had to be power-of-two.

It’s possibly due to the fact that a non-power-of-two-sized pixel may span word boundaries, which would be problematic for atomic operations.

Although, it’s probably a moot point, as I believe that most hardware only actually supports power-of-two pixel sizes (anything else just gets padded out).
