Hi all,
I found an interesting thing: if you dump the compiled binary of your shader with something like:
Code :
// Make sure the driver exposes at least one binary format.
GLint formats = 0;
glGetIntegerv(GL_NUM_PROGRAM_BINARY_FORMATS, &formats);

#ifndef GL_PROGRAM_BINARY_LENGTH
#define GL_PROGRAM_BINARY_LENGTH 0x8741
#endif

GLint len = 0;
glGetProgramiv(yourShader, GL_PROGRAM_BINARY_LENGTH, &len);

char* binary = new char[len];
GLenum binaryFormat = 0; // written by the driver, not chosen by us
glGetProgramBinary(yourShader, len, NULL, &binaryFormat, binary);
glUseProgram(0);

FILE* fp = fopen(name.c_str(), "wb");
fwrite(binary, len, 1, fp);
fclose(fp);
delete[] binary;
you can spot memory limitations in the output.
In this case, the shader had to spill into lmem (local memory), which lives in global GPU memory ===> SLOW!
Code :
...
TEMP lmem[12];
...
So use it to optimize your shader's memory usage!