Storing 3D coordinates in a texture or PBO

Hi,

I am working from a paper which says “…render the 3D points and normals into texture memory on the GPU…”. I am trying to build a program that can handle reflections on curved objects. It should be a hybrid solution using GLSL and CUDA.

With a shader I try to write the 3D coordinates of the reflector into a PBO, map the PBO into CUDA, and perform my calculations there.
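
For reference, the host-side loop I have in mind looks roughly like this. It is only a sketch modelled on the CUDA SDK interop samples; the function and buffer names are mine, and it assumes the cudaGLRegisterBufferObject/cudaGLMapBufferObject API from cuda_gl_interop.h:

#include <GL/glew.h>
#include <cuda_gl_interop.h>

GLuint pbo;	// pixel buffer object shared between OpenGL and CUDA

// One-time setup: a PBO big enough for one float4 per pixel.
void createPositionPBO(int width, int height)
{
	glGenBuffers(1, &pbo);
	glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
	glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4 * sizeof(float),
	             0, GL_DYNAMIC_READ);
	glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
	cudaGLRegisterBufferObject(pbo);	// make the PBO visible to CUDA
}

// Per frame: render the positions, read them back, hand them to CUDA.
void processFrame(int width, int height)
{
	// ... render the reflector with the shaders below ...

	// Pack the (float RGBA) framebuffer contents into the PBO.
	glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
	glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, 0);
	glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

	// Map the PBO into CUDA address space and run the kernel on it.
	float4* d_pos = 0;
	cudaGLMapBufferObject((void**)&d_pos, pbo);
	// myReflectionKernel<<<grid, block>>>(d_pos, width, height);
	cudaGLUnmapBufferObject(pbo);
}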

This is my vertex shader:


varying vec3 pos;	// untransformed object-space position for the fragment shader

void main()
{
	pos = gl_Vertex.xyz;		// raw vertex position (object space)
	gl_Position = ftransform();	// fixed-function transform for rasterization
}

and here is my fragment shader:


varying vec3 pos;	// interpolated position from the vertex shader

void main()
{
	// with a fixed-point render target this output gets clamped to [0,1]
	gl_FragColor = vec4(pos, 1.0);
}

This is really quite simple, but the coordinates come out normalized, interpolated, or clamped to one. How can I prevent the step from vertex to fragment shader from interpolating my data? Or how can I reconstruct world coordinates from my fragment output? I really need the output as world coordinates to perform the calculations in CUDA…

I hope somebody can help me or give me some new ideas or tips…

Thank you very much, and have a nice weekend!!! :cool:

Greetings thopil

Have a look at this extension: ARB_color_buffer_float.
Most ATI cards don’t support it, I’m afraid.

To avoid clamping, use a floating-point render target (texture). Interpolation cannot be disabled. Why don’t you do the vertex transformation with CUDA (or on the CPU) in the first place? All you need is to transform the position by the modelview matrix if you want world-space coords.
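
In case it helps, setting up such a float render target with EXT_framebuffer_object could look like the sketch below; GL_RGBA32F_ARB comes from ARB_texture_float, and the names are only examples:

#include <GL/glew.h>

// Sketch: a 32-bit float color attachment stores fragment outputs unclamped.
GLuint createFloatTarget(int width, int height)
{
	GLuint fbo, posTex;

	glGenTextures(1, &posTex);
	glBindTexture(GL_TEXTURE_2D, posTex);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height,
	             0, GL_RGBA, GL_FLOAT, 0);	// float format, no clamping to [0,1]

	glGenFramebuffersEXT(1, &fbo);
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
	                          GL_TEXTURE_2D, posTex, 0);
	return fbo;	// render into this FBO, then read back into the PBO
}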

Thanks Zengar and namespace!! :smiley:

The ARB_color_buffer_float extension doesn’t work for me…

The problem is that I can’t use such a render target (a texture) with CUDA. The examples in the CUDA SDK only use pixel buffer objects as render targets and map those to CUDA.

Why don’t you do the vertex transformation with CUDA (or on the CPU) in the first place?

I have almost tried that, but I need the coordinates per pixel: I am searching for the correct reflection position, which is why I use the fragment shader to output the coordinates for every fragment. Or do you mean that, once I have the coordinates of all the reflector vertices, I could multiply them by the modelview matrix and interpolate in CUDA myself? Would that have the same effect?
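
To make it concrete, the kernel I have in mind would read one float4 per pixel from the mapped PBO, roughly like this. The kernel name and the background test are placeholders (I would clear the framebuffer to zero, so w == 0 marks pixels where nothing was rendered):

#include <cuda_runtime.h>

// Placeholder kernel: one thread per pixel, reading the position that
// the fragment shader wrote into the PBO as float RGBA.
__global__ void reflectionKernel(const float4* pos, float4* out,
                                 int width, int height)
{
	int x = blockIdx.x * blockDim.x + threadIdx.x;
	int y = blockIdx.y * blockDim.y + threadIdx.y;
	if (x >= width || y >= height)
		return;

	float4 p = pos[y * width + x];	// p.x/y/z = interpolated surface position
	if (p.w == 0.0f)
		return;			// assumes a zero clear color, so w == 0 means background

	// ... search for the reflection position using p here ...
	out[y * width + x] = p;		// placeholder: pass the position through
}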

Thanks!!

thopil

If you don’t want OpenGL to clamp your colors, try this:

glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);
glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);
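
Since the extension is missing on many cards, it may be worth guarding those calls at runtime; a minimal sketch, assuming GLEW is available for the extension check:

if (glewIsSupported("GL_ARB_color_buffer_float"))
{
	glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);	// don’t clamp fragment outputs
	glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);	// don’t clamp glReadPixels results
}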
