Getting an analytical solution from OpenGL

After the object is defined for OpenGL to render, the dataset is out of our control. Can anybody tell me how I can get the set of points rendered as the object at any point in the course of object manipulation, so that I can use them for other calculations I need to do on the object?

Hi,

you can’t; and, anyway, even if you could, they probably wouldn’t be accurate enough for what you need (whatever that is). Given the resolution of the screen, the resolution of the floating-point engine doesn’t need to be spectacularly high. I doubt OpenGL plays around with 80-bit precision floats inside its engine, just to truncate them to integers when rasterizing.

The only way is to do the maths yourself (or use the GLU API) to multiply the projection/transformation stuff… if this is what you’re after. If you’re doing something more complex, though, like tracking a feature in a texture stretched over a deformable mesh, then you will really have to abandon the GLU stuff altogether, bite the bullet and do it yourself.
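For illustration, the “do the maths yourself” route might look like the sketch below — a minimal emulation of what gluProject does. The function names are made up, and the matrices are hard-coded stand-ins; in a real program you would read them back with glGetDoublev(GL_MODELVIEW_MATRIX, …), glGetDoublev(GL_PROJECTION_MATRIX, …) and glGetIntegerv(GL_VIEWPORT, …):

```c
/* OpenGL stores matrices column-major: element (row r, col c) is m[c*4 + r]. */
static void mat_vec(const double m[16], const double v[4], double out[4])
{
    for (int r = 0; r < 4; r++)
        out[r] = m[0*4+r]*v[0] + m[1*4+r]*v[1] + m[2*4+r]*v[2] + m[3*4+r]*v[3];
}

/* Project an object-space point to window coordinates, gluProject-style.
 * Returns 0 on success, -1 if the point's clip-space w is zero. */
int project(const double modelview[16], const double projection[16],
            const int viewport[4],           /* x, y, width, height */
            double ox, double oy, double oz, /* object-space point  */
            double *wx, double *wy, double *wz)
{
    double in[4] = { ox, oy, oz, 1.0 }, eye[4], clip[4];

    mat_vec(modelview, in, eye);     /* object -> eye space  */
    mat_vec(projection, eye, clip);  /* eye    -> clip space */
    if (clip[3] == 0.0)
        return -1;

    /* perspective divide -> normalized device coords in [-1, 1] */
    double nx = clip[0] / clip[3];
    double ny = clip[1] / clip[3];
    double nz = clip[2] / clip[3];

    /* viewport transform -> window coordinates */
    *wx = viewport[0] + (nx + 1.0) * 0.5 * viewport[2];
    *wy = viewport[1] + (ny + 1.0) * 0.5 * viewport[3];
    *wz = (nz + 1.0) * 0.5;          /* assumes default glDepthRange(0, 1) */
    return 0;
}
```

With identity matrices and a 640×480 viewport, projecting the origin lands at the window centre (320, 240) with depth 0.5, as you’d expect.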

cheers,
John

Can I at least get the transformed instances of the points that I originally gave to define the object in OpenGL?

OpenGL feedback does allow that, but it’s a bad feature, no one accelerates it, and you very simply should not use it.

  • Matt

Hi,

as whatshisname said, feedback would work, but if you’re only interested in the transformed points, then you can EASILY do it yourself… it’s just matrix multiplication…

(How? Well, you could either build the transform matrices yourself and multiply them together… but that would mean looking up the form of the translation matrix, for example. Alternatively, set up your OpenGL modelview matrix with the glTranslatef yadda yadda calls, and then READ the matrix back when you’re finished and multiply your vertices by it… simple as 3.14159265.)
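To make the read-the-matrix-back step concrete: a minimal sketch, with an invented function name. In a live program the matrix would come from glGetFloatv(GL_MODELVIEW_MATRIX, m) right after your glTranslatef/glRotatef calls; here it is hard-coded so no GL context is needed:

```c
/* Multiply one vertex by a column-major OpenGL modelview matrix,
 * treating the vertex as (x, y, z, 1).  Element (row r, col c) = m[c*4+r]. */
void transform_vertex(const float m[16], const float v[3], float out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = m[0*4+r]*v[0] + m[1*4+r]*v[1] + m[2*4+r]*v[2] + m[3*4+r];
}
```

For example, the modelview matrix produced by glTranslatef(1, 2, 3) on an identity stack is the identity with (1, 2, 3, 1) in its last column, so transforming the vertex (5, 0, 0) gives (6, 2, 3).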

cheers,
John

Hmm… So let’s say that I’m “attaching” points and unit vectors to my model that I plan to use in other parts of my engine (I’m trying to track them, but they’re in the affine space of my model, not in any texture space or anything). If I want these points to remain in the same orientation as my model, then there’s no way to get OpenGL to do it? I just have to transform them manually whenever I call glRotate() before drawing my model?

-Oops. sorry, that wasn’t my most eloquent piece of writing…

-Thanks for any info on this topic!

Yer, that’s right. OpenGL won’t tag semantics onto your model (with the exception of pass-through tokens in feedback mode), so you will need to track semantic stuff like this yourself.
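Tracking those attached points and unit vectors yourself might look like the sketch below (function names are illustrative; the matrix is assumed to be the same column-major modelview you set up in GL, read back via glGetFloatv or rebuilt by hand). The one subtlety: points transform with w = 1, so they pick up the translation, while direction vectors transform with w = 0, so they only rotate:

```c
/* Apply a column-major modelview matrix to a POINT: (x, y, z, 1). */
void transform_point(const float m[16], const float p[3], float out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = m[r]*p[0] + m[4+r]*p[1] + m[8+r]*p[2] + m[12+r];
}

/* Apply the same matrix to a DIRECTION vector: (x, y, z, 0).
 * The translation column m[12..14] is deliberately ignored. */
void transform_direction(const float m[16], const float d[3], float out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = m[r]*d[0] + m[4+r]*d[1] + m[8+r]*d[2];
}
```

Caveat: with pure rotations and translations the w = 0 multiply is enough for unit vectors; if your modelview contains non-uniform scaling, normals strictly need the inverse-transpose of the upper 3×3 plus a re-normalize.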

cheers,
John

I have found that we can track some of these things through the glFeedbackBuffer stuff; however, can someone please tell me how to parse the feedback list it returns? I tried following the grammar definition of the list, but somehow it doesn’t work.
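One way the parse can go wrong is forgetting that everything in the buffer — including tokens and the polygon vertex count — is stored as a float. Below is a hedged sketch of walking a feedback buffer; the token values are repeated from &lt;GL/gl.h&gt; so the parser can be shown without a GL context, and `count` stands for the value returned by glRenderMode(GL_RENDER) after drawing in feedback mode. The vertex size depends on the type you passed to glFeedbackBuffer: 2 floats for GL_2D, 3 for GL_3D, more for the colour/texture variants:

```c
#define POINT_TOKEN        0x0701  /* GL_POINT_TOKEN        */
#define LINE_TOKEN         0x0702  /* GL_LINE_TOKEN         */
#define POLYGON_TOKEN      0x0703  /* GL_POLYGON_TOKEN      */
#define LINE_RESET_TOKEN   0x0707  /* GL_LINE_RESET_TOKEN   */
#define PASS_THROUGH_TOKEN 0x0700  /* GL_PASS_THROUGH_TOKEN */

/* Walk a feedback buffer of `count` floats, where each vertex occupies
 * `vertex_size` floats.  Returns the total number of vertices seen,
 * or -1 on an unrecognized token. */
int parse_feedback(const float *buf, int count, int vertex_size)
{
    int i = 0, vertices = 0;

    while (i < count) {
        int token = (int)buf[i++];            /* tokens are stored as floats */
        switch (token) {
        case POINT_TOKEN:
            i += vertex_size;                 /* one vertex follows          */
            vertices += 1;
            break;
        case LINE_TOKEN:
        case LINE_RESET_TOKEN:
            i += 2 * vertex_size;             /* two vertices follow         */
            vertices += 2;
            break;
        case POLYGON_TOKEN: {
            int n = (int)buf[i++];            /* vertex count, also a float  */
            i += n * vertex_size;             /* n vertices follow           */
            vertices += n;
            break;
        }
        case PASS_THROUGH_TOKEN:
            i += 1;                           /* one glPassThrough value     */
            break;
        default:
            return -1;
        }
    }
    return vertices;
}
```

For example, a GL_3D buffer holding one triangle followed by one point is 15 floats long (token, count 3, nine coordinates, token, three coordinates) and yields four vertices in total.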