OpenGL ES 2.0: How can I pass a large (approx. 1-9 MB) array to a fragment shader?



ChizhDroid
11-18-2015, 02:27 AM
Hi all,

I want to pass a large array to a fragment shader. In fact, it is an image, but I do not want the image to take part in interpolation, so I cannot simply use it as a texture. How can I implement this?

Thanks in advance

GClements
11-18-2015, 02:37 AM
I want to pass a large array to a fragment shader. In fact, it is an image, but I do not want the image to take part in interpolation, so I cannot simply use it as a texture. How can I implement this?

If you don't want the texture to be filtered, set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST, and/or sample at the centre of each texel.

But even then, the standard only requires support for 64x64 textures; anything larger is up to the individual implementation.
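
For example, a minimal sketch of creating an unfiltered texture (textureId, width, height and pixelData are placeholder names here, not identifiers from your code):


glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
// GL_NEAREST on both filters disables interpolation between texels
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);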

ChizhDroid
11-18-2015, 04:31 AM
If you don't want the texture to be filtered, set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST, and/or sample at the centre of each texel.

But even then, the standard only requires support for 64x64 textures; anything larger is up to the individual implementation.

Thanks for your reply

Let me explain the problem in more detail (see attached).

Large image: this image has the same size as the screen (e.g. 1024x768) and must not take part in interpolation.

Mask texture: a small image (e.g. 64x64) that will take part in interpolation.



In the fragment shader I would like to have something like this:



highp vec4 maskTexel, imagePixel, resultColor;

maskTexel = texture2D(mainTexture, gl_PointCoord);
imagePixel = magicFunction(……); // get the pixel color from the large image at screen coordinates

if (maskTexel.a < 0.1) {
    discard;
}
resultColor = mix(maskTexel, imagePixel, maskTexel.a);
gl_FragColor = resultColor;



How do I implement this?

Alfonse Reinheart
11-18-2015, 06:01 AM
There is no "magicFunction", nor is there a need for such a thing.

You have a function that can be used to fetch from a texture at a particular screen location (aka: texture2D). So use that. There's nothing that says you can't use two textures.

gl_FragCoord.xy contains the window-space X and Y position of the current fragment. Just use them as your texture coordinates. Turn off filtering as GClements suggested.
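
For instance, a minimal fragment-shader sketch of the idea (the sampler names and the u_resolution uniform are assumptions on my part, not names from your program; texture2D expects coordinates in [0,1], so the window-space position has to be divided by the render-target size):


uniform sampler2D maskSampler;        // the small mask, sampled with gl_PointCoord
uniform sampler2D imageSampler;       // the screen-sized image, sampled at the fragment position
uniform highp vec2 u_resolution;      // render-target size in pixels

void main()
{
    // gl_FragCoord.xy is in window pixels; normalize it into [0,1] for texture2D
    highp vec2 screenUV = gl_FragCoord.xy / u_resolution;

    highp vec4 maskTexel = texture2D(maskSampler, gl_PointCoord);
    highp vec4 imagePixel = texture2D(imageSampler, screenUV);

    if (maskTexel.a < 0.1) {
        discard;
    }
    gl_FragColor = mix(maskTexel, imagePixel, maskTexel.a);
}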

ChizhDroid
11-18-2015, 12:03 PM
gl_FragCoord.xy contains the window-space X and Y position of the current fragment. Just use them as your texture coordinates. Turn off filtering as GClements suggested.

Thanks for the answer. I tried it, but unfortunately it is not working. If it is not too much trouble, please take a look at my code (there is a lot of it).

some init:




glGenFramebuffers(1, &viewFramebuffer);
glGenRenderbuffers(1, &viewRenderbuffer);

glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
// This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer)
// allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(id<EAGLDrawable>)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewRenderbuffer);

glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
glViewport(0, 0, backingWidth, backingHeight);

glGenBuffers(1, &vboId);

glEnable(GL_BLEND);

glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(program[PROGRAM_POINT].id);

glUniform1i(program[PROGRAM_POINT].uniform[UNIFORM_MAIN_TEXTURE], 0);

// viewing matrices
GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, backingWidth, 0, backingHeight, -1, 1);
GLKMatrix4 modelViewMatrix = GLKMatrix4Identity; // this sample uses a constant identity modelView matrix
GLKMatrix4 MVPMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
glUniformMatrix4fv(program[PROGRAM_POINT].uniform[UNIFORM_MVP], 1, GL_FALSE, MVPMatrix.m);
// point size
glUniform1f(program[PROGRAM_POINT].uniform[UNIFORM_POINT_SIZE], pointWidth);




init mask texture:


glGenTextures(1, &texId);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)width, (int)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);


init big image texture:




glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &bigImage);

glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, bigImage);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, texturerData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glUseProgram(program[PROGRAM_POINT].id);





render code:


static GLfloat* vertexBuffer = NULL;
static NSUInteger vertexMax = 64;

[EAGLContext setCurrentContext:context];
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);

// Allocate vertex array buffer
if (vertexBuffer == NULL)
    vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

vertexBuffer[0] = (GLfloat)336;
vertexBuffer[1] = (GLfloat)700;

vertexBuffer[1] = 0.1; // note: this overwrites the y coordinate set just above

[self drawBrushTexture:vertexBuffer count:1 size:100];

vertexBuffer[0] = (GLfloat)500;
vertexBuffer[1] = (GLfloat)700;

[self drawBrushTexture:vertexBuffer count:1 size:200];

// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer); // bind the renderbuffer (not the framebuffer) before presenting

[context presentRenderbuffer:GL_RENDERBUFFER];
}

-(void)drawBrushTexture:(GLfloat*)vertexBuffer count:(NSUInteger)vertexCount size:(int)size{

glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, vertexCount*2*sizeof(GLfloat), vertexBuffer, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);

glUniform1f(program[PROGRAM_POINT].uniform[UNIFORM_POINT_SIZE], size);

glUniform1i(program[PROGRAM_POINT].uniform[UNIFORM_BRUSH_TEXTURE], 1);

glUniform1i(program[PROGRAM_POINT].uniform[UNIFORM_MAIN_TEXTURE], 2);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texId);

glDrawArrays(GL_POINTS, 0, (int)vertexCount);

}



vertex shader:



attribute vec4 inVertex;

uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;


void main()
{
gl_Position = MVP * inVertex;
gl_PointSize = pointSize;
}


fragment shader:



uniform sampler2D mainTexture;
uniform sampler2D brushTexture;

void main()
{

highp vec4 maskTexel, imagePixel, resultColor;

maskTexel = texture2D(mainTexture, gl_PointCoord);
imagePixel = texture2D(brushTexture, gl_PointCoord);

if (maskTexel.a < 0.1) {
    discard;
}
resultColor = mix(maskTexel, imagePixel, maskTexel.a);
gl_FragColor = resultColor;

}



Result: <left image>

If in the fragment shader I use


imagePixel = texture2D(brushTexture, gl_FragCoord.xy);


instead of



imagePixel = texture2D(brushTexture, gl_PointCoord);


then I see this result: <right image>
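
I suspect the window-space position still has to be divided by the render-target size before texture2D can use it, i.e. something like passing the backing size as a uniform (just a sketch; "u_resolution" and the location lookup below are assumed names, they do not exist in my code above):


glUseProgram(program[PROGRAM_POINT].id);
// pass the render-target size in pixels so the shader can normalize gl_FragCoord.xy
GLint resolutionLoc = glGetUniformLocation(program[PROGRAM_POINT].id, "u_resolution");
glUniform2f(resolutionLoc, (GLfloat)backingWidth, (GLfloat)backingHeight);


and then sampling in the fragment shader with texture2D(brushTexture, gl_FragCoord.xy / u_resolution) instead of the raw gl_FragCoord.xy.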