[QUOTE=Dark Photon;1261294]Ok, after your description, I’m still struggling to get my head around exactly what you want to do, and what you hope to accomplish by it.
So if I understand correctly, you don’t want to render pre-captured LiDAR data (a bunch of points). You want to “simulate” a LiDAR by generating the point cloud across the surface of your scene objects. Do I understand correctly that you just want to render the point cloud? What requirements do you have on how it is sampled in eye space, how each point sample is rendered, and/or whether you want to capture and store that point cloud (e.g. on disk)?
Again I’m not sure exactly what you want to do, but to help facilitate discussion, how about rendering your scene and in your fragment shader do a computation of whether this fragment is illuminated by a LiDAR sample (based on whatever algorithm you’re using; statistics, etc.). If so, output a color (e.g. green). If not, output black.
Or do you want to compute intersection positions from multiple vantage points (one for each laser layer), and then render them reprojected from another vantage point?[/QUOTE]
Hi,
I have run into another problem while trying a new approach.
I would like the fragment shader to read values from a uniform sampler2DRect.
The sampler2DRect stores some data that I want to give to the shader.
The problem is that the shader always gets a filtered value instead of the actual data. I want the shader to read the stored value exactly, without any filtering.
I use the following setup code and fragment shader:
//set image
osg::ref_ptr<osg::Image> imageSampler = new osg::Image();
imageSampler->allocateImage((int)XRes, (int)YRes, 1, GL_RGBA, GL_FLOAT);
osg::Vec4f * rgba = (osg::Vec4f *)(imageSampler->data());
// write data to the image
for (int row = 0; row < subYRes; row++) {
    for (int column = 0; column < subXRes; column++) {
        *rgba = osg::Vec4f(data, data, 0, 0);
        rgba++;
    }
}
// texture for sampler
osg::ref_ptr<osg::TextureRectangle> textureRect = new osg::TextureRectangle;
textureRect->setTextureSize((int)subXRes, (int)subYRes);
textureRect->setInternalFormat(GL_RGBA);
textureRect->setImage(0, imageSampler);
textureRect->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::NEAREST);
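// note: MIN_FILTER is not set here; as far as I know texelFetch() ignores the
// filter state anyway, but setting it to NEAREST as well would look like:
// textureRect->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);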
// configure shader
stateset->addUniform(new osg::Uniform("textureID0", 0));
stateset->setTextureAttributeAndModes(0, textureRect, osg::StateAttribute::ON);
// in fragment shader:
uniform sampler2DRect textureID0;
out vec4 Frag_Color;
void main()
{
    vec2 st = vec2(gl_FragCoord.x - 0.5, gl_FragCoord.y - 0.5);
    vec4 rgba = texelFetch(textureID0, ivec2(st));
    Frag_Color = vec4(rgba.r, rgba.g, 0, 0);
}
If I read back the frame buffer, the values of rgba.r and rgba.g seem to be filtered, not exactly the real values.
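In case it is relevant, this is roughly the kind of readback I mean (a minimal sketch, not my exact code; width and height are placeholders, and it has to run with a current GL context, e.g. in a camera final-draw callback):
// read the colour buffer back as floats
osg::ref_ptr<osg::Image> readback = new osg::Image;
readback->readPixels(0, 0, width, height, GL_RGBA, GL_FLOAT);
const osg::Vec4f* pixel = reinterpret_cast<const osg::Vec4f*>(readback->data());
// pixel[row * width + column] then holds one RGBA fragment as floats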
Where am I wrong here?
Thanks a lot!