Multiple VBOs

Is there any performance gain from using multiple VBOs? Right now I'm generating a GLfloat vertices[6291456] array (3 × 128 × 128 × 128) and putting it in one VBO. Would I see any performance gain from splitting the vertices across two VBOs? Three? Four?

Not really, especially if you are just storing vertex positions. With one VBO you can issue the rendering of all points with a single draw call, while with multiple VBOs you need to split the rendering into multiple draw calls and switch VBOs between them. So, especially in your case, it would rather be an overhead.
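
For reference, a minimal sketch of the one-VBO, one-draw-call path (assuming plain point positions and the fixed-function vertex-array API used elsewhere in this thread; "vertices" is the array from your post):

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 6291456, vertices, GL_STATIC_DRAW);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);     // 3 floats per point, tightly packed, offset 0
glDrawArrays(GL_POINTS, 0, 2097152);    // all 128*128*128 points in a single call
glDisableClientState(GL_VERTEX_ARRAY);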

Btw, are you rendering all your spheres now with a single DrawArrays call?

Yes, all the spheres are being rendered with a single DrawArrays call. They are rendered as point sprites, and I use a shader to make them look like spheres. The problem now is that the sphere is always facing where the user is looking. I'm working on trying to make it fixed.

Easy: change the light direction vector, rotating it as needed:
http://www.opengl.org/wiki/GLSL_Uniform
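
For instance, a sketch of that approach: declare lightDir as a uniform instead of a const in the shader, then rotate and re-upload it from the application each frame ("program" and "angle" here are placeholders for your linked program object and animation state; needs <math.h>):

// in the shader, replace the const declaration with:
//     uniform vec3 lightDir;
glUseProgram(program);
GLint loc = glGetUniformLocation(program, "lightDir");
// rotate the default direction (0.577, 0.577, 0.577) around the Y axis
float x =  0.577f * cosf(angle) + 0.577f * sinf(angle);
float z = -0.577f * sinf(angle) + 0.577f * cosf(angle);
glUniform3f(loc, x, 0.577f, z);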

There is a maximum number of vertices that can be submitted in a single draw call before you drop back to software emulation. This is, of course, implementation-dependent, so you'll need to do a glGet (look for GL_MAX_ELEMENTS_VERTICES, and GL_MAX_ELEMENTS_INDICES if you're also using an index buffer) and tune your drawing code as appropriate.

Unfortunately, the documentation for glGet (http://www.opengl.org/sdk/docs/man/xhtml/glGet.xml) is unclear on whether this applies only to client-side vertex arrays, only to VBOs, only to indexed drawing (and if so, whether to glDrawElements, glDrawRangeElements or both), or also to non-indexed drawing, but I think the safest approach is "when in doubt, err on the side of caution".
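
The query itself is straightforward, for what it's worth:

GLint maxVerts = 0, maxIndices = 0;
glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &maxVerts);
glGetIntegerv(GL_MAX_ELEMENTS_INDICES,  &maxIndices);
printf("recommended max vertices: %d, max indices: %d\n", maxVerts, maxIndices);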

There is a maximum number of vertices that can be submitted in a single draw call before you drop back to software emulation.

As far as I know, this is rather a theoretical limit. On all the ATI cards I've checked, the max index count is 16 million and the max vertex count is 2 billion, which is rather difficult to reach. Unless you really render more than 2 billion spheres with a single DrawArrays call you should be fine, as I don't think NVIDIA would have a tighter limit either.

I'm only working with vertex, normal and color elements, no indices.

Here is the code from the fragment shader (how should I set pointLight so the light stays fixed?)


// pixel shader for rendering points as shaded spheres
const char *spherePixelShader = STRINGIFY(
uniform float pointRadius;  // point size in world space
uniform float pointLight;
varying vec3 posEye;        // position of center in eye space
void main()
{
    const vec3 lightDir = vec3(0.577, 0.577, 0.577);
    const float shininess = 50.0;

    // calculate normal from texture coordinates
    vec3 N;
    N.xy = gl_TexCoord[0].xy * vec2(2.0, -2.0) + vec2(-1.0, 1.0);
    float mag = dot(N.xy, N.xy);
    if (mag > 1.0) discard;   // kill pixels outside circle
    N.z = sqrt(1.0 - mag);

    // point on surface of sphere in eye space
    vec3 spherePosEye = posEye + N * pointRadius;

    // calculate lighting
    float diffuse = max(0.0, dot(lightDir, N));
    //    gl_FragColor = gl_Color * diffuse;
    vec3 v = normalize(-spherePosEye);
    vec3 h = normalize(lightDir + v);
    float specular = pow(max(0.0, dot(N, h)), shininess);
    gl_FragColor = gl_Color * diffuse + specular;
}
);

I'm only working with vertex, normal and color elements, no indices.

Why do you need a normal for a point sprite? Even color is unnecessary unless you want to tint your spheres.

Or you could just read the OpenGL specification. The spec says these limits matter only for the draw-range calls, and since the non-range calls don't mention any limit, you can expect that exceeding the limits in a range call is simply no different from using a plain non-range call.
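
In other words, something like the following stays valid even past the limits, just possibly slower ("numVerts", "numIndices" and "indices" are placeholders for your data):

glDrawRangeElements(GL_POINTS,
                    0, numVerts - 1,   // [start, end]: range of vertex indices referenced
                    numIndices,        // number of indices to draw
                    GL_UNSIGNED_INT, indices);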

Actually, I have no normals GLfloat array. But I need to call glEnableClientState(GL_NORMAL_ARRAY); otherwise it crashes. I'm loading the colors from a txt file, so I need a GLfloat colors array.

Edit: Forget that, I just realized I made a mistake. The glEnableClientState call for GL_NORMAL_ARRAY is now removed and the program runs correctly.
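
For completeness, a sketch of the corrected client-state setup, positions and colors only, with no normal array ("numPoints" is a placeholder for your point count):

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);  // positions: 3 floats per point
glColorPointer(3, GL_FLOAT, 0, colors);     // colors loaded from the txt file
glDrawArrays(GL_POINTS, 0, numPoints);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);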

I will probably need an index array, though!

Why do you need an index array if you render point sprites? You can never reuse vertex data in the case of points, so I don't see the point.

Here is the code from the fragment shader (how should I set pointLight so the light stays fixed?)

Funny, I do not have that in the code from the link mentioned previously, in particles.zip (particles\NVIDIA CUDA SDK\projects\particles\shaders.cpp):


// pixel shader for rendering points as shaded spheres
const char *spherePixelShader = STRINGIFY(
uniform vec3 lightDir = vec3(0.577, 0.577, 0.577);
void main()
{
    // calculate normal from texture coordinates
    vec3 N;
    N.xy = gl_TexCoord[0].xy*vec2(2.0, -2.0) + vec2(-1.0, 1.0);
    float mag = dot(N.xy, N.xy);
    if (mag > 1) discard;   // kill pixels outside circle
    N.z = sqrt(1-mag);

    // calculate lighting
    float diffuse = max(0.0, dot(lightDir, N));

    gl_FragColor = gl_Color * diffuse;
}
);

There you see the uniform vec3 lightDir that will be changed each time the light moves relative to the camera.

uniform vec3 lightDir = vec3(0.577, 0.577, 0.577);

It's giving the same result: the specular highlight still follows the user's eye direction.

Did you actually read all my posts?
Sorry, but I must really be expressing myself badly.
A 'uniform' is a kind of global read-only variable for a shader which can be changed from your program between draw calls. The line with vec3(0.577, 0.577, 0.577) acts as a default value if you do not set the uniform from your code.

Search for “uniform variables” here :
http://zach.in.tu-clausthal.de/teaching/cg_literatur/glsl_tutorial/index.html
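
To keep the light fixed in the world rather than following the camera, one option is to transform a constant world-space direction into eye space each frame and upload that. A sketch, assuming "viewRotation" holds the upper 3x3 rotation part of your view matrix and "program" is your linked shader:

float worldDir[3] = { 0.577f, 0.577f, 0.577f };  // fixed light direction in world space
float eyeDir[3];
for (int i = 0; i < 3; ++i)                      // eyeDir = viewRotation * worldDir
    eyeDir[i] = viewRotation[i][0] * worldDir[0]
              + viewRotation[i][1] * worldDir[1]
              + viewRotation[i][2] * worldDir[2];
glUseProgram(program);
glUniform3f(glGetUniformLocation(program, "lightDir"),
            eyeDir[0], eyeDir[1], eyeDir[2]);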

I understood all this. What I'm trying to do is something like a uniform posEye, but for the light position. I'll check your link.

For a sun-like distant light, position is not needed, only light direction.
Unless you want a local light of course.

That's a good point, hehe! Would it be hard to create shadows with the shader I'm using?

In order to render shadows you have to use shadow mapping in this case. What you have to do is the following (a code sketch follows the list):

1. Render your scene from the light's direction, using an orthographic projection (since you are using a directional light source), into a depth texture (with no color buffers attached) via a framebuffer object.
2. The shader you use for creating the depth texture should be a simplified version of your sphere shader that keeps only the shape (the discards should still take effect) but not the lighting, as you don't need it and rendering will be faster this way.
3. Use the depth texture in the normal rendering pass to decide what is in shadow and what is not. For this you have to calculate the vertex positions in the light's orthographic view space and use those coordinates as texture coordinates to look up the depth texture; the comparison tells you whether your fragment is in shadow or not.

This is simple shadow mapping. In your case, because you want to use "volumetric" point sprites, it is not quite so simple, though not that hard either: after calculating the shadow-map texture coordinates per point, you have to apply the appropriate offset in the fragment shader based on the shape of the sphere.
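
A rough sketch of the depth-pass setup described above, assuming framebuffer-object support (the 1024x1024 size and the names are arbitrary):

GLuint shadowTex, shadowFbo;
glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &shadowFbo);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, shadowTex, 0);
glDrawBuffer(GL_NONE);   // depth only, no color buffers attached
glReadBuffer(GL_NONE);

// depth pass: orthographic projection from the light's point of view,
// drawn with the simplified shape-only sphere shader
glViewport(0, 0, 1024, 1024);
glClear(GL_DEPTH_BUFFER_BIT);
// ... set the light's orthographic projection and view, draw the points ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);

In the normal pass you would then bind shadowTex, compute each fragment's position in the light's clip space, and compare its depth against the stored value to decide whether it is lit.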