Rendering method

Hi there,

I have to build an application in which surfaces have to be filled according to a mathematical function that depends on a source's position and orientation (think of drawing the sound intensity radiated everywhere by loudspeaker enclosures onto many surfaces).
I am searching for the best way to do this in OpenGL.
Can you help me with that, please?
Do you have better ideas or hints to point me towards the right approach?

Best regards,
Nicolas

I think you need to start by describing exactly what the input variables are and in what form (textures, uniforms, lookups).
Perhaps then we may understand what you are trying to do.

OK, so I will try to explain myself clearly.
I have some loudspeaker enclosures to display in a 3D environment.
I already have all the mathematical machinery to compute the sound levels they produce at any distance and angle.
Now, I would like some tips or hints on how to compute those levels so I can display them as a color gradient on some surfaces (walls, for example).

Edit:
Some hints: for now, I have thought about getting the pixels covered by a surface and using their coordinates to define their color according to my mathematical function. But I don't know whether that is possible or easy to do.

2nd Edit:
OK, since I really need some help here, I will try to be as clear as possible… Is it possible to get the pixel coordinates (X, Y, Z) of a surface (one or more polygons) in order to set their colors afterwards?

I’m still not sure what you are after.
You can get access to the geometry's x, y, z position in the fragment shader.
You also have access to the texture coordinates used to texture the geometry.
So which of these does your calculations need to use?

Yes. Create an FBO and attach a texture with a floating-point format, GL_RGBA32F or GL_RGB32F.

Render your object to it and from your fragment shader, output the xyz coordinate. Later on, you can use that texture. Of course, you need to know a little bit of GLSL programming.
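A minimal sketch of that idea (assuming the vertex shader forwards the interpolated world-space position in a varying I'm calling worldPos here):

// Fragment shader sketch: write the world-space position into a
// floating-point color attachment (e.g. a GL_RGB32F texture).
varying vec3 worldPos;   // assumed: filled in by the vertex shader

void main()
{
    gl_FragColor = vec4(worldPos, 1.0);   // store xyz as the "color"
}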

Hi!
I am back with new information but the same aim.
Remember, I have to visualize the sound pressure from enclosures onto surfaces.
Is that possible using GLSL, knowing that I can have many sources that may be translated and/or rotated, and many surfaces that may also be translated and/or rotated and can hide other surfaces behind them?
Simplest example: I have a single enclosure emitting sound along the z-axis towards two surfaces, one behind the other along that axis.
The mathematics is fine; I already have the sound pressures and the filtering for orientation/distance/absorption…
Now, how can I draw those dB levels (mapped to a color scale between 0 and 100 dB, for example) on those surfaces?
Can I do this with fragment shaders computing each pixel's color from its position?
And how can I know whether there is another object (surface) between the one being rendered and the sound source?
Best regards

You can color-code your sound pressure at every point of a target surface. Doing so just requires your mathematics and the shape of your surface.

Can I do this with fragment shaders computing each pixel's color from its position?

The fragment shader gives you access to the pixel's position. There you can compute the pressure at that location and color-code your pixel.
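For example, a rough fragment-shader sketch; the uniform names and the simple 1/r² falloff below are only placeholders for your real pressure model:

// Hypothetical sketch: compute a level at each fragment from its position
// and map it onto a color ramp. sourcePos/refPressure are made-up names.
varying vec3 worldPos;        // assumed: world-space position from the vertex shader
uniform vec3 sourcePos;       // position of the sound source
uniform float refPressure;    // reference pressure for the dB scale

void main()
{
    float d  = distance(worldPos, sourcePos);
    float p  = 1.0 / max(d * d, 0.0001);                 // placeholder 1/r^2 falloff
    float db = 10.0 * log(p / refPressure) / log(10.0);  // 10 * log10(...)
    float t  = clamp(db / 100.0, 0.0, 1.0);               // map 0..100 dB to 0..1
    gl_FragColor = vec4(t, 0.0, 1.0 - t, 1.0);            // blue (quiet) -> red (loud)
}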

And how can I know whether there is another object (surface) between the one being rendered and the sound source?

Ray casting is best.

Hope this helps you.

Hi there!
Thanks for your answers.

Here is where I am right now: basically, I have done some shadow mapping. I render my surfaces into a depth texture and use it later in the fragment shader to know whether a fragment is occluded or not.
Rendering the surfaces into one FBO is not really an issue, especially since that pass only needs a vertex shader.
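For reference, the occlusion test against one source's depth texture is the classic shadow-map comparison; a rough sketch, with illustrative names only:

// shadowCoord is assumed to be the fragment position transformed into the
// source's clip space (then scaled/biased to [0,1]) by the vertex shader.
varying vec4 shadowCoord;
uniform sampler2D depthFromSource;   // depth texture rendered from the source

void main()
{
    vec3 proj = shadowCoord.xyz / shadowCoord.w;
    float nearest = texture2D(depthFromSource, proj.xy).r;   // closest surface seen by the source
    float lit = proj.z <= nearest + 0.005 ? 1.0 : 0.0;       // small bias against acne
    gl_FragColor = vec4(vec3(lit), 1.0);                     // white = reached by the sound
}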

But when many sound sources come into play (just like many light sources), it may be relatively hard to deal with them all.

So, my new question is: do I have to compute a depth texture for each source and then use them all in the fragment shader later on?

It depends on how you're going to model sound propagation (just as for light it depends on how you model light propagation), and then on how you map that onto the API.

Even if (like much real-time rendering, RTR) we consider only single-bounce propagation with direct “lighting” and occlusion, this problem with many point sound sources looks much like illuminating with many point light sources, where instead of accumulating multi-band radiance you're accumulating multi-band sound pressure. In RTR you have a number of options. Here are a few:
Forward Shading (1-pass)


for o in objects:    (CPU loop)
  for l in lights:   (GPU loop)
    result += shade( o, l )

Forward Shading (1-pass-per-light)

for l in lights:     (CPU loop)
  for o in objects:  (CPU loop)
    result += shade( o, l )

Deferred Shading

for o in objects:    (CPU loop)
  gbuffer = sample( o's interaction properties )

for l in lights:     (CPU loop)
  result += shade( gbuffer, l )

Of course there are variants of even these, but I'm trying to keep it simple. Envision replacing “lights” with “sound sources” in the above.

As you can see, with Forward Shading (1-pass) you're applying all light (sound) sources together in one go, so if each of them has a shadow map, all of those maps need to be available before this pass (unless, of course, shadowing is performed recursively inside shade(), which doesn't map well to a GPU).

But with Forward Shading (1-pass-per-light), we've flipped it around and are only applying one light (sound) source at a time, so we can alternate between computing the shadow (occlusion) map for a source and applying that source's contribution to the scene. So there you don't need all the shadow maps up-front. Though to avoid swapping render targets back and forth, you might very well want to keep them all around anyway (if you have enough GPU memory for them).

Similar situation with Deferred Shading. The main thing here is we’ve gotten rid of the nasty nested loop which (at least in RTR) leads to shader permutation hell and needless duplicate work (we’ve changed O(l*o) to O(l+o)).

Also, in the tangentially related department, I recall that at the SIGGRAPH 2011 OpenCL (not OpenGL) BoF, Intel did a demo and presented some work on 3D audio propagation with OpenCL which you might want to check out. See the link for the presentation.

I have read your answer and the publications you point to.
Actually, I don't need auralization, so let me explain my goals a little more precisely:

  • I have a spatial structure (#triangles)
  • Some structures may occlude the signal (#occlusion)
  • I can have up to 200 sound sources with spherical emission (I already have sound-pressure data as a function of distance & angle)

What I need is:

  • Visualization of the sound pressure on surfaces, taking occlusions into account, but no reverb
  • The possibility to send/use each source's parameters for multiple wavelengths (about 200 float values per source)
  • Real-time rendering (RTR)

Actually, it seems that the fragment shader will limit how I can get at each source's parameters… I have to look into that point.

I was thinking of rendering a cube map for each source, but (with an 800x600 viewport and 200 sources) that would cost, at one byte per texel:
800 * 600 * 6 * 200 ≈ 550 MB!
And I can't pass 200 textures to the fragment shader.

Do you have some tips to help me? Please…

Do it in passes, and add-blend the pressure for each pass.

Can you explain your point of view or give me a sketch of your idea?
I am not clever enough to understand it from just 12 words.
I also lack some OpenGL tricks; I have only been learning it for three months at most.
And I'm French… :)

N.B.: I would add that I need a custom function for blending the sound pressures together, using a log(x) function.

I just skimmed through the topic so I might be way off.
But my idea is, instead of trying to render the 200 sound sources at once (which, as you calculated, would be too big for you), do a pass Px for each sound source (or maybe only 4 at a time, etc.) to render its result into an FBO. Then render that as a textured quad with additive blending into another framebuffer: R' = R + Px.
For the log(x), you should be able to do it while doing each pass, so Px is already scaled.
I hope that is clearer; in the meantime I will read the topic in more depth.
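In OpenGL terms, the accumulation step could look roughly like this; accumFBO, passFBO, passTexture, drawSourcePass() and drawFullscreenQuad() are hypothetical names standing in for your own setup, and accumFBO should have a floating-point attachment:

/* Clear the accumulation buffer once. */
glBindFramebuffer(GL_FRAMEBUFFER, accumFBO);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);

for (int i = 0; i < numSources; ++i)
{
    drawSourcePass(i, passFBO);            /* render source i's pressure into its own FBO */

    glBindFramebuffer(GL_FRAMEBUFFER, accumFBO);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);           /* additive blending: R' = R + Px */
    glBindTexture(GL_TEXTURE_2D, passTexture);
    drawFullscreenQuad();                  /* adds this pass to the accumulation buffer */
    glDisable(GL_BLEND);
}

Batching 4 (or 8) sources per pass, as suggested, just means drawSourcePass() would evaluate several sources before the quad is blended in.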

About the “And I can’t pass 200 textures to the fragment shader”:

=> you can group textures into an atlas (i.e. one big texture that stores a lot of little textures in a grid) and use a single texture-coordinate translation to select the right sub-texture inside this atlas/grid.
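For instance, a hypothetical GLSL helper that picks sub-texture number index from a square grid atlas (the names and the grid size are made up):

const float ATLAS_SIZE = 16.0;            // e.g. a 16 x 16 grid of sub-textures
uniform sampler2D atlas;

vec4 sampleAtlas(float index, vec2 uv)    // uv in [0,1] within the sub-texture
{
    vec2 cell = vec2(mod(index, ATLAS_SIZE), floor(index / ATLAS_SIZE));
    return texture2D(atlas, (cell + uv) / ATLAS_SIZE);
}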

If I have understood correctly, you want to “abstract/replace” the sound source with the color emitted by a light and handle the propagation of the lighting/sound using “standard” OpenGL lighting, or something like that?

Hi
I am coming back with some fresh info.
So, I have built a cube map for each enclosure in which I store the distance between the enclosure and each fragment. This is done when the enclosure is created and is kept in memory.
For rendering, I draw the surfaces once per enclosure, using its cube map to detect occlusion.
This works and is actually quite fast.
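A rough sketch of that cube-map comparison in the fragment shader (names are illustrative, and I'm assuming the stored distance is normalized by some maxDist):

varying vec3 worldPos;           // assumed: world-space position from the vertex shader
uniform vec3 enclosurePos;       // position of the enclosure
uniform samplerCube distCube;    // stored distance to the nearest surface, per direction
uniform float maxDist;

void main()
{
    vec3 toFrag = worldPos - enclosurePos;
    float nearest = textureCube(distCube, toFrag).r * maxDist;
    float visible = length(toFrag) <= nearest + 0.01 ? 1.0 : 0.0;   // small bias
    gl_FragColor = vec4(vec3(visible), 1.0);
}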

Now, I have to add up all of the rendered surfaces in order to evaluate the following equation: c(x) = 10 * log(c1(x)/ref + … + cn(x)/ref)
where c1 to cn are the colors of pixel x for each enclosure.

So, how can I just add the colors/values and apply the log AFTER having added them? (The log cannot be applied during each rendering pass, since the sum inside it cannot be separated.)

Do your sums, then use the result as a source texture and log() it.

Beware of precision issues however, because it may quickly degrade if you do not use floating point render buffers.
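Concretely, the final step could be a full-screen pass like this over the floating-point sum texture (sketch only; sumTexture and texCoord are placeholder names):

varying vec2 texCoord;
uniform sampler2D sumTexture;    // floating-point texture holding c1/ref + ... + cn/ref

void main()
{
    float sum = texture2D(sumTexture, texCoord).r;
    float db  = 10.0 * log(sum) / log(10.0);   // 10 * log10(sum), applied once, after the sum
    gl_FragColor = vec4(vec3(clamp(db / 100.0, 0.0, 1.0)), 1.0);   // grey ramp; swap in your color scale
}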

I am using GL_UNSIGNED_BYTE precision for the distance cube maps because the distance between enclosures and surfaces won't be higher than 255 metres (actually, I store the value dist(enc, frag) / maxDist, with maxDist equal to 500).

But for the accumulation texture I will indeed use GL_FLOAT precision. May I ask how I should add the color values per pixel? Using blending, right? Can you explain it to me?