Arbitrary fragment pattern

Hello,

I am wondering whether one can change the screen positions of the fragments rendered by OpenGL.

By default, the fragments are sampled on the screen in a uniform way.

In my application, I would like to have them sampled in an irregular way (as the final image would be read out by the CPU to be further processed, rather than being displayed on the screen).

Is there any possibility to do that?

Thanks!

I am not entirely sure I understand correctly, but from within the fragment shader you cannot change the fragment's coordinates. However, you can change the target position of a fragment from the vertex shader. If you were, for example, to render GL_POINTS and ensure that every vertex is rasterized as exactly one fragment (point size 1, no MSAA or such), then you could move gl_Position to the desired coordinates from within the vertex shader. Thereby you could “address” framebuffer positions.
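A minimal vertex-shader sketch of that idea (the targetPixel/viewportSize names are made up, and it assumes one GL_POINTS vertex per output fragment, point size 1, no multisampling):

#version 330 core

in vec2 targetPixel;        // which framebuffer pixel this point should land on
in vec4 payload;            // whatever value you want to write there

uniform vec2 viewportSize;  // framebuffer size in pixels

out vec4 vPayload;

void main()
{
    // map the pixel centre (targetPixel + 0.5) to clip space [-1, 1]
    // so the point rasterizes as exactly that one fragment
    vec2 ndc = ((targetPixel + 0.5) / viewportSize) * 2.0 - 1.0;
    gl_Position = vec4(ndc, 0.0, 1.0);
    gl_PointSize = 1.0;     // needs GL_PROGRAM_POINT_SIZE enabled
    vPayload = payload;
}

The fragment shader then just writes vPayload to its output.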

Does that help?

[QUOTE=x57;1261194]I am not entirely sure I understand correctly, but from within the fragment shader you cannot change the fragment's coordinates. However, you can change the target position of a fragment from the vertex shader. If you were, for example, to render GL_POINTS and ensure that every vertex is rasterized as exactly one fragment (point size 1, no MSAA or such), then you could move gl_Position to the desired coordinates from within the vertex shader. Thereby you could “address” framebuffer positions.

Does that help?[/QUOTE]

Thanks for your reply.

Considering the OpenGL rendering pipeline, the light rays can be imagined as the lines that pass through the “eye” and the centers of the fragments. What I want to do is: I would like to get the targets of the rays that originate from the “eye” and pass through other positions rather than the default centers of the fragments.

I am not sure what your idea is. gl_Position records the position of the vertex; how can it affect the fragment positions with respect to the near plane (or screen)?

Or did you assume that I want to record the positions of the targets of the default fragments? I don't want that; I need the positions of the targets of fragments at customized positions.

By “targets”, I mean the intersections of the imagined light rays with any geometry in the scene.

[QUOTE=shuiying;1261198]Considering the OpenGL rendering pipeline, the light rays can be imagined as the lines that pass through the “eye” and the centers of the fragments.

What I want to do is: I would like to get the targets of the rays that originate from the “eye” and pass through other positions rather than the default centers of the fragments.[/QUOTE]

You can change the VIEWING transform such that the rasterization samples cover the area of your scene that you’re interested in.

However, I suspect that isn’t what you want. It sounds like you want scattered write capability. You can do this, but it’s very expensive. It’s not what GPUs were built to do well.
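For reference, scattered writes from a shader are usually done with image load/store; a rough fragment-shader sketch (the binding, the format, and the targetTexel input are placeholders, not anything specific to your setup):

#version 420 core

// destination image bound with glBindImageTexture on the application side
layout(binding = 0, rgba32f) writeonly uniform image2D dstImage;

flat in ivec2 targetTexel;  // placeholder: the texel this fragment wants to write to
in vec4 vValue;             // placeholder: the value to write

void main()
{
    // write to an arbitrary texel instead of the fragment's own position;
    // concurrent writes to the same texel are unordered
    imageStore(dstImage, targetTexel, vValue);
}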

What exactly are you trying to do here? Ray tracing? Do you want scattered reads from your scene, scattered writes to your scene, or both? More detail on exactly what you’re trying to do would help us suggest the best ways to apply GPU rendering to your needs.

[QUOTE=Dark Photon;1261210]You can change the VIEWING transform such that the rasterization samples cover the area of your scene that you’re interested in.

However, I suspect that isn’t what you want. It sounds like you want scattered write capability. You can do this, but it’s very expensive. It’s not what GPUs were built to do well.

What exactly are you trying to do here? Ray tracing? Do you want scattered reads from your scene, scattered writes to your scene, or both? More detail on exactly what you’re trying to do would help us suggest the best ways to apply GPU rendering to your needs.[/QUOTE]

Thank you for your reply. I think I might need scattered writes.

What I want to do is similar to ray tracing, but much simpler. I would like to simulate a LiDAR. The LiDAR has four layers of lasers oscillating around a vertical axis. I use the OpenGL frustum to simulate the lasers. The lasers in a layer that is not perpendicular to the vertical axis should sweep out a conical surface. However, a layer of the “light rays” passing through the fragments of the OpenGL frustum sweeps out a planar surface. So I need to simulate one LiDAR with several frustums. I implement the frustums with the camera class of OpenSceneGraph. The cameras can only be rendered one after another. This slows the simulation down to 10 FPS (frames per second) or less.

So I want a strategy to simulate the LiDAR with one frustum. In that case, the fragments should be located where the real-life lasers are supposed to intersect the near plane (or screen). So I need the rasterization to happen at those intersections rather than at the default fragment positions.

So is there any way to give those positions to the GPU, telling it to rasterize at them?

What about rendering depth textures, just like you would a shadow map? Then (as others have discussed) draw points for every texel in the depth texture, sampling depth and projecting them back into the scene with Projection * View * lightViewInv * lightProjectionInv * vec4(texCoord.xy * 2 - 1, sampledDepth * 2 - 1, 1).
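To make that concrete, here is a rough vertex-shader sketch for those per-texel points (all names are illustrative; it assumes the depth texture holds standard window-space depth in [0,1], and lidarProjViewInv stands for lightViewInv * lightProjectionInv):

#version 330 core

uniform sampler2D lidarDepthTex;   // depth rendered from the LiDAR's point of view
uniform mat4 ProjView;             // Projection * View of the observing camera
uniform mat4 lidarProjViewInv;     // lightViewInv * lightProjectionInv

in ivec2 texelIndex;               // one point per depth-texture texel

void main()
{
    vec2 texSize  = vec2(textureSize(lidarDepthTex, 0));
    vec2 texCoord = (vec2(texelIndex) + 0.5) / texSize;
    float sampledDepth = texelFetch(lidarDepthTex, texelIndex, 0).r;

    // back-project from the LiDAR's clip space into world space,
    // then forward into the observing camera's clip space
    vec4 ndc   = vec4(texCoord * 2.0 - 1.0, sampledDepth * 2.0 - 1.0, 1.0);
    vec4 world = lidarProjViewInv * ndc;
    gl_Position = ProjView * (world / world.w);
}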

Thanks for your help. I am not familiar with shadow mapping. I had a quick look at it and don't understand how the texture pixels of the second pass are mapped to the depth texture. I assume that the depth of a pixel in the second pass is calculated by interpolating between nearby pixels of the depth texture. So that is still sampling based on a texture with a regular texel pattern.

I can record the positions of scene objects in the vertex shader and pass them to the fragment shader. I can do some sampling in the fragment shader.

At present, I am experimenting with this: set the resolution of the rendering camera to a very high value and make the rendering frustum contain all the real-life lasers. Then I calculate the nearest pixel to each real-life laser intersection on the near plane and record it. After rendering, I just read out the image fragments at those “nearest pixels”. This method works but suffers from inaccuracy.

I would like to interpolate the values at the real intersections from the “nearest samples” in the fragment shader. I use dFdx and dFdy and fetch the real intersections from a sampler2D, but the result is not good at all. I would like to know how to do interpolation in the fragment shader.
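Concretely, what I have in mind is something like this manual bilinear interpolation from a sampler2DRect (just a sketch; dataTex and samplePos are placeholder names and edge clamping is ignored):

#version 140

uniform sampler2DRect dataTex;   // unfiltered data texture

in vec2 samplePos;               // where the real intersection falls, in texel units
out vec4 outColor;

// bilinear interpolation at an arbitrary, non-integer texel position
vec4 bilinearFetch(vec2 pos)
{
    ivec2 i0 = ivec2(floor(pos - 0.5));   // lower-left of the four surrounding texels
    vec2  f  = (pos - 0.5) - vec2(i0);    // fractional position inside that cell

    vec4 t00 = texelFetch(dataTex, i0);
    vec4 t10 = texelFetch(dataTex, i0 + ivec2(1, 0));
    vec4 t01 = texelFetch(dataTex, i0 + ivec2(0, 1));
    vec4 t11 = texelFetch(dataTex, i0 + ivec2(1, 1));

    return mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
}

void main()
{
    outColor = bilinearFetch(samplePos);
}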

OK, after your description, I’m still struggling to get my head around exactly what you want to do, and what you hope to accomplish by it.

So if I understand correctly, you don’t want to render pre-captured LiDAR data (a bunch of points). You want to “simulate” a LiDAR by generating the point cloud across the surface of your scene objects. Do I understand correctly that you just want to render the point cloud? What requirements do you have on how it is sampled in eye space, how each point sample is rendered, and/or whether you want to capture and store that point cloud (e.g. on disk)?

Again I’m not sure exactly what you want to do, but to help facilitate discussion, how about rendering your scene and in your fragment shader do a computation of whether this fragment is illuminated by a LiDAR sample (based on whatever algorithm you’re using; statistics, etc.). If so, output a color (e.g. green). If not, output black.
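A very rough sketch of that test (the regular angular-grid beam model and every name here are placeholders for whatever LiDAR model you actually use):

#version 330 core

uniform vec3  lidarPos;        // LiDAR origin in world space
uniform float azimuthStep;     // angular spacing between beams, in radians
uniform float elevationStep;   // angular spacing between layers, in radians
uniform float beamHalfWidth;   // angular tolerance that counts as "hit"

in vec3 worldPos;              // world-space position passed from the vertex shader
out vec4 fragColor;

void main()
{
    vec3 d = normalize(worldPos - lidarPos);
    float azimuth   = atan(d.y, d.x);
    float elevation = asin(clamp(d.z, -1.0, 1.0));

    // angular distance to the nearest beam of a regular azimuth/elevation grid
    float da = abs(azimuth   - azimuthStep   * round(azimuth   / azimuthStep));
    float de = abs(elevation - elevationStep * round(elevation / elevationStep));

    bool hit = (da < beamHalfWidth) && (de < beamHalfWidth);
    fragColor = hit ? vec4(0.0, 1.0, 0.0, 1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}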

Or do you want to compute intersection positions from multiple vantage points (one for each laser layer), and then render them reprojected from another vantage point?

[QUOTE=Dark Photon;1261294]OK, after your description, I’m still struggling to get my head around exactly what you want to do, and what you hope to accomplish by it.

So if I understand correctly, you don’t want to render pre-captured LiDAR data (a bunch of points). You want to “simulate” a LiDAR by generating the point cloud across the surface of your scene objects. Do I understand correctly that you just want to render the point cloud? What requirements do you have on how it is sampled in eye space, how each point sample is rendered, and/or whether you want to capture and store that point cloud (e.g. on disk)?

Again I’m not sure exactly what you want to do, but to help facilitate discussion, how about rendering your scene and in your fragment shader do a computation of whether this fragment is illuminated by a LiDAR sample (based on whatever algorithm you’re using; statistics, etc.). If so, output a color (e.g. green). If not, output black.

Or do you want to compute intersection positions from multiple vantage points (one for each laser layer), and then render them reprojected from another vantage point?[/QUOTE]

Hi,

I have another problem in trying new methods.
I would like the fragment shader to read values from a uniform sampler2DRect.

The sampler2DRect stores some data that I want to give to the shader.

The problem is that the shader always seems to get a filtered value rather than the actual data. I want the shader to get the exact value without any filtering.

I use the following setup code and shader:


//set up the image that backs the sampler
osg::ref_ptr<osg::Image> imageSampler = new osg::Image();
imageSampler->allocateImage((int)XRes, (int)YRes, 1, GL_RGBA, GL_FLOAT);
osg::Vec4f * rgba = (osg::Vec4f *)(imageSampler->data());

// write data to the image ("data" is a placeholder for the value to store)
for (int row = 0; row < subYRes; row++) {
    for (int column = 0; column < subXRes; column++) {
        *rgba = osg::Vec4f(data, data, 0, 0);
        rgba++;
    }
}

// texture for the sampler
osg::ref_ptr<osg::TextureRectangle> textureRect = new osg::TextureRectangle;
textureRect->setTextureSize((int)subXRes, (int)subYRes);
textureRect->setInternalFormat(GL_RGBA);
textureRect->setImage(0, imageSampler);
textureRect->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::NEAREST);

// configure the shader
stateset->addUniform(new osg::Uniform("textureID0", 0));
stateset->setTextureAttributeAndModes(0, textureRect, osg::StateAttribute::ON);

// in the fragment shader:
#version 140

uniform sampler2DRect textureID0;
out vec4 Frag_Color;   // declared output written below

void main()
{
    // gl_FragCoord.xy is at pixel centres (x.5), so subtract 0.5 to get integer texel indices
    vec2 st = vec2(gl_FragCoord.x - 0.5, gl_FragCoord.y - 0.5);
    vec4 rgba = texelFetch(textureID0, ivec2(st));
    Frag_Color = vec4(rgba.r, rgba.g, 0, 0);
}

If I read the framebuffer, the values of rgba.r and rgba.g seem to be filtered values, not exactly the real values.

Where am I wrong here?

Thanks a lot!

It cannot be filtered since you are using texelFetch - double-check everything. Maybe the image was rescaled before uploading, or its size and the framebuffer size are different and the result is confusing you - but it is not possible that texelFetch extracts interpolated values!
BTW,

vec2 st = vec2(gl_FragCoord.x-0.5, gl_FragCoord.y-0.5);
ivec2(st);

is equivalent to

ivec2(gl_FragCoord.xy);

because the fractional part simply gets cut off.
