View Full Version : glGet (again)

06-21-2006, 04:30 PM
I'm looking to write a class that uses OpenGL to do matrix operations, because I assume its routines will be much faster than my own. This class won't be doing anything graphical, just calling MultMatrix, Translate, and so on. When I asked before why the glGet functions weren't modifying the values I passed them, the response was that I needed to give GL a rendering context beforehand. I have/had no idea what that means. I looked it up and found some wgl and glx stuff, but I don't want this to be platform-specific, so what's the minimum I need to do to get glGet to glGive? Here's the non-functional test I wrote:


#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;GL/gl.h&gt;

int main()
{
    /* note: no rendering context is created anywhere here,
       which is why the glGet calls below return nothing */
    float* mat = (float*)malloc(16 * sizeof(float));
    int i, j;
    GLint mode = 0;

    glGetFloatv(GL_MODELVIEW_MATRIX, mat);

    for (i = 0; i < 4; i++)
    {
        printf("[ ");
        for (j = 0; j < 4; j++)
            printf("%f ", mat[i * 4 + j]);
        printf("]\n");
    }

    glGetIntegerv(GL_MATRIX_MODE, &mode);
    printf("%d\n", mode);

    free(mat);
    return 0;

06-21-2006, 04:37 PM
Please don't duplicate posts.

As I said, use GLUT for this. GLUT provides a cross-platform solution.

Have a nice day.

06-21-2006, 04:38 PM
You need to create a rendering context and make it current; there's no other way to issue OpenGL commands. So at least the initialisation code has to be platform-dependent.

Good luck, moebius

06-21-2006, 04:50 PM
To: <Nad> unregistered

I apologize for the duplication. I had this page loaded, and foolishly went to post a new topic before refreshing/checking for your latest response. Thanks for your patience.

06-21-2006, 07:10 PM
I highly doubt that what you are doing will increase speed in any way. In fact, I am quite sure that you will see horrible speed, even compared to "normally" calculating matrices.

What you are trying to do means that you first send the command to the driver, which caches it and only processes it when you actually issue a draw call, and then you retrieve the data back from the driver (which may mean a read-back from the GPU).

Let me tell you, that sounds bad! The GPU might be fast at computing that stuff, but the overhead of sending and retrieving the data is so damned high that you will be baffled by the low speed.

And you are doing all this for 4x4 matrices??

Do it the normal way (calculate everything on the CPU); that will save you a lot of time and will be much faster. If you really want to improve speed, take a look at SSE.

Maybe you should first learn how to use the GPU (through OpenGL) for what it is actually designed for (rendering) before you try to (mis-)use it for something different.