Feedback during rasterization ...

Hello …
I'm looking for an OpenGL function that lets me grab the depth buffer values during scene rasterization. At the moment, the solution I have found is this: between two batches of polygons (each a part of an object's surface), I do a 'glReadBuffer' read-back on the depth buffer. But this solution forces me to submit the polygons of my objects one by one (no use of 'glut' or display lists to create objects), which is not attractive because of the time I lose!
So, during OpenGL rendering, every depth value is evaluated and compared to the one already in the buffer. How can I get hold of that change as it happens? (That would be a solution …)
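For reference, this is roughly what the read-back looks like — a sketch only (my own helper function, not from any library), assuming a current GL context and a window of width × height pixels; the actual read call for depth values is glReadPixels with GL_DEPTH_COMPONENT:

```c
#include <GL/gl.h>
#include <stdlib.h>

/* Sketch: read the whole depth buffer back after rendering.
 * Assumes a current GL context; depths come back in [0, 1]. */
void read_depth(int width, int height)
{
    GLfloat *depths = malloc(width * height * sizeof(GLfloat));
    glReadPixels(0, 0, width, height,
                 GL_DEPTH_COMPONENT, GL_FLOAT, depths);
    /* depths[y * width + x] is the window-space depth at pixel (x, y) */
    free(depths);
}
```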
Another solution I have tried is feedback mode, but to get a fine enough description I need to create clipping planes near my focus point, and generating the sub-polygon vertices takes a lot of time …
Does anybody have an idea?
Thanks!

Huh? I don't understand at all what you're trying to do, sorry…
I'd like to help, but I don't see what you're trying to do.

Thank you for the offer…

I'll try to express myself differently … and sorry for my English…

In fact, what I want to do is read back the Z-buffer value every time it changes. Suppose I want to build a depth map: for that, I set glDepthFunc to GL_ALWAYS, and I would like to read back the depth value every time it changes, that is, every time a new surface is encountered.

Is that clearer?
And do you have any ideas?

If you have glDepthFunc(GL_ALWAYS), every triangle you render will update the color and depth buffers. You could just as well transform the triangles yourself in software, this will give you depth values for the vertices. Then you could interpolate these to get depth values for the whole triangle.
This would work without reading the depth buffer all the time, though it still won’t be fast.

edit
Though consistent with one another, these values are possibly miles away from the values generated and used by the hardware. And to add insult to injury, you'd also have to perform clipping in software.
/edit

However, I’m still a bit puzzled why you would want to do such a thing. There might be a simpler and faster solution using the stencil buffer, but to be able to answer, you must explain what exactly you want to do with the depth values, or better, what you want to accomplish on screen.

[This message has been edited by zeckensack (edited 03-29-2002).]

Hm… without knowing what you want to get out of this, or why you want to do it, there is nothing else we can suggest except rendering every triangle and copying its depth values out…

I now understand what you want to do, but knowing what it's for would be helpful; that way we can possibly find a more optimal approach.

First of all, thank you for your cooperation…

In fact, I want to take advantage of the graphics card's projection capabilities to compute the distances at which objects are crossed. For example, take a line (representing a laser-pistol shot) that crosses a sphere: if you place the viewpoint on the axis of that line, you can read back the depths at which the two diametrically opposite surfaces of the sphere were encountered …

In my problem, I am thus trying to read back the data used by the automatic depth test! (That is why I set glDepthFunc(GL_ALWAYS): so the depth is evaluated for every surface encountered!)

If you have no answer to my problem, do you know how I can get OpenGL's source code, to see how the depth test is implemented?

Thanks again for your assistance … and I hope I have been clearer this time!

flav1

Sounds like you want to do collision detection. You should do this yourself and not rely upon the graphics card to do it for you because using the graphics card would be both slower and less reliable. For example, what happens when a laser hits something that is not on the screen? You will never know about this.

What you want to do is look up information on collision detection: line-sphere (or ray-sphere) intersection, line-triangle (or ray-triangle) intersection, sphere-sphere intersection, etc. Try doing a search for these on www.google.com, or take a look around www.gamedev.net or www.flipcode.com; they should have some tutorials on these things for you.

This idea could work, but it will be slow: you will be forced to analyze millions of pixels. The best way to do collision testing is a line-triangle intersection algorithm (for your laser gun, anyway).

V-man

This idea could work, but it will be slow: you will be forced to analyze millions of pixels. The best way to do collision testing is a line-triangle intersection algorithm (for your laser gun, anyway).

Besides being slow, it isn't even guaranteed to work. It is only as good as the resolution of the image.