View Full Version : No Instancing support for GL 2.x ?



BionicBytes
09-30-2009, 03:29 AM
I would like to try to implement instancing for drawing multiple copies of the same object at different positions in the scene, preferably using just a single API call. My Windows laptop drivers are limited to OpenGL 3.0 (waiting for nVidia to release OpenGL 3.1 or 3.2 drivers!).

In my engine, the per-instance data for a model is a model matrix specifying position, scale and rotation (as you'd expect, really). I have been waiting for nVidia to implement ARB/EXT_instanced_arrays, which would be ideal for me: that API uses a divisor to break an array of model matrices into per-instance 'chunks'. This would be easy to implement (in theory) in the current engine, which is GL 2.1 based.
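Roughly what I have in mind on the shader side (GLSL 1.20 sketch; the attribute name is made up). With instanced arrays the model matrix arrives as an ordinary vertex attribute whose divisor is set to 1 on the API side — note a mat4 attribute occupies four consecutive vec4 slots, each of which needs its own divisor call:

```glsl
#version 120
// Sketch only: per-instance model matrix supplied via ARB_instanced_arrays.
// The API side would call glVertexAttribDivisorARB(loc + i, 1) for i = 0..3,
// since a mat4 attribute consumes four consecutive attribute locations.
attribute mat4 a_modelMatrix;   // assumed name, one matrix per instance

void main()
{
    // Treat the fixed-function modelview as the camera/view transform.
    gl_Position = gl_ModelViewProjectionMatrix * (a_modelMatrix * gl_Vertex);
}
```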

However, the instanced_arrays extension is not forthcoming and has probably been side-lined in favour of other methods. This is where I need help, because I can't see how to actually implement ARB_texture_buffer_object or ARB_uniform_buffer_object to hold the instance data and use ARB_draw_instanced to render the object instances.
My understanding of ARB_uniform_buffer_object and ARB_texture_buffer_object is that GLSL version 140/150 is required, and this is a problem because:

1) It requires a re-write of the model shaders to conform to version 140
1b) It requires re-architecting the engine to remove fixed-function uniforms and attributes for version 140 shader compatibility
2) GLSL version 140 requires a GL 3.1 / 3.2 context? The laptop is currently limited to GL 3.0

Additionally, it's not clear how to access the 'instance' data in the uniform buffer object. Using the DrawElementsInstanced API I can 'see' gl_InstanceID incrementing for each iteration, and in the shader I have access to the instance ID. However, how am I to get the model matrices (instance data) in the correct order for the current frame? For example, if I have 1000 instances I can pack all of them into a single UBO. Using view frustum culling, I now want to render only 50, say, so I need some way of telling the shaders to read the model matrices for just the visible instances. How? Do I need to create a new UBO each frame with just the visible instances? That would mean CPU overhead as the list is built, copied and then uploaded to GL.

In this scenario, UBOs are more desirable than texture buffer objects, as uniform buffers are constant over all vertices and thus processed faster than a per-pixel lookup into the TBO to read the model matrices (instance data). Although I suppose the TBO could instead be read in the vertex shader, I don't know whether that would be faster or slower than TBO lookups in the pixel shader.

What I need from you guys is the following:

1) Am I correct about the GLSL version requirements and re-writing shaders?
2) Has anyone actually implemented instanced model rendering as I am trying to? If so, what technique did you use?
3) Is UBO the way to go rather than TBO?
4) Do UBO buffers need to be re-populated every frame to contain a linear set of instance data, i.e. is there no way to 'skip' over instance data in the UBO for the visible set of instances?
5) Is an OpenGL 3.1 context needed to use uniform blocks?
6) Does supporting uniform blocks along with the UBO extension require a re-write of the application?

Any help would be appreciated!

Alfonse Reinheart
09-30-2009, 11:32 AM
ARB_texture_buffer_object should be implementable in GL 3.0. If NVIDIA hasn't done so, then there's really nothing you can do.


Do I need to create a new UBO each frame with just the visible instances?

You don't need to create a new buffer. Instead of using the buffer object as permanent storage for all the instances, use a streaming buffer that you update each frame, copying in only the data you intend to render with this frame. glMapBufferRange with GL_MAP_INVALIDATE_BUFFER_BIT is your friend here.

BionicBytes
09-30-2009, 02:31 PM
Thanks for the reply.
Please correct me if I'm wrong, but I thought texture buffer objects require a new texture fetch operation and sampler type in GLSL, and hence version 140. Does this not also mandate a GL 3.1 context to support GLSL 140?
Also, as the texture fetches will have to be performed per pixel (or perhaps per vertex), won't this be quite slow compared to uniform buffer objects?
Has anyone checked the speed difference? Of course I can't be picky, because currently the laptop drivers are holding me back. My point is that if UBOs are faster for what I need, then I'll have to wait for GL 3.2 drivers.
I think you are also saying the same thing as me: I will have to upload instance data to the buffer object every frame, and I would have thought this is quite costly. What does GL_MAP_INVALIDATE_BUFFER_BIT do?

kRogue
09-30-2009, 02:55 PM
GL_EXT_texture_buffer_object has been on nVidia hardware since the GeForce 8 was released; enable the extension in your shader and slap the EXT suffix on the texture buffer object calls, and you are good even with a driver from late 2007. Uniform buffer objects are also available as an old nVidia EXT extension, GL_EXT_bindable_uniform, but its usage is a little different from what is in GL 3.1.

nVidia's mainline driver does GL 3.1, but, ahem, are you getting your driver from nVidia or letting your distribution fetch it for you? I have found that Ubuntu quite often fetches a far-too-old driver. You can also pick up nVidia's GL 3.2 beta driver.

Alfonse Reinheart
09-30-2009, 02:56 PM
I thought Texture_Buffer_Objects requires a new texture fetch operation in GLSL and sampler type

Yes, it does. The extension provides this. It's part of the extension specification.

All you need to do is activate the extension in GLSL using the #extension directive.
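For example, a vertex shader that fetches one column-major mat4 per instance from a buffer texture might look like this (GLSL 1.20 sketch; the sampler name is made up, and texelFetchBuffer / gl_InstanceID come from EXT_gpu_shader4):

```glsl
#version 120
#extension GL_EXT_gpu_shader4 : require   // texelFetchBuffer, gl_InstanceID

uniform samplerBuffer instanceMatrices;   // TBO: four RGBA32F texels per matrix

void main()
{
    int base = gl_InstanceID * 4;         // one vec4 column per texel
    mat4 model = mat4(texelFetchBuffer(instanceMatrices, base + 0),
                      texelFetchBuffer(instanceMatrices, base + 1),
                      texelFetchBuffer(instanceMatrices, base + 2),
                      texelFetchBuffer(instanceMatrices, base + 3));
    // Treat the fixed-function modelview as the camera/view transform.
    gl_Position = gl_ModelViewProjectionMatrix * (model * gl_Vertex);
}
```

Doing the fetch in the vertex shader like this means four texel fetches per vertex rather than per pixel.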


Also, as texture fetches will have to be performed per pixel (or perhaps per vertex) won't this be quite slow compared to Uniform Buffer Objects?

Try it and see.

kRogue
09-30-2009, 03:21 PM
Just one more thing: when the GeForce 8 was released, nVidia made a LOT of extensions for GL 2.x that exposed its capabilities, and many of those extensions have found their way into the GL spec over the past year or so: GL_EXT_gpu_shader4, GL_EXT_geometry_shader4, GL_EXT_texture_buffer_object, GL_EXT_draw_instanced, GL_EXT_draw_buffers2, GL_EXT_bindable_uniform, GL_EXT_texture_integer. More or less any functionality in GL 3.2 had already been available in GL 2.1 via nVidia extensions for almost two years. (How else do you think they got GL 3.x drivers out the door so fast?)

When looking at "feature X" of the GL 3.0, 3.1 or 3.2 specification, there is a "what is new" section at the end of the spec, and 9 times out of 10 it names the extension the feature came from.

Edit: actually, most of these extensions are dated from the end of 2006, so they have been around for almost 3 years!!

BionicBytes
10-01-2009, 05:46 AM
nVidia's mainline driver does GL 3.1, but, ahem, are you getting your driver from nVidia or letting your distribution get it for you? I have found that Ubuntu quite often fetches a way too old driver, you can also pick up nVidia GL 3.2 beta driver too.


I am using nVidia's web site to view laptop GeForce 8 drivers. Currently only GL 3.0 drivers seem to be available.

I had always been waiting for ARB extensions rather than EXT, since previous experience has taught me that functionality may change, and this is true for bindable uniforms. Actually, I was holding out for the simplest form of instancing, ARB_instanced_arrays, as it is dead easy to add to legacy applications like mine.

I had forgotten about enabling the extension in the GLSL shader, so I don't need version 140 or a GL 3.1 context. I assume this is correct?

However, I am faced with adding support for the EXT_bindable_uniform API to my shader class, so I can only hope it's worth the (considerable) effort. I was kinda hoping someone else had actually done this kind of thing and could give a general recommendation!

Alfonse Reinheart
10-01-2009, 11:30 AM
this is true for Bindable Uniforms.

It's not just the functionality change. EXT_bindable_uniform is a terrible extension. The idea is nice and all, but it doesn't specify what the format of the data should be. And if you don't know that, you really can't use it.


I had forgotten about enabling the extension in the GLSL shader - so I don't need Version=140 or GL 3.1 contexts. I assume this is correct ?

As long as ARB_texture_buffer_object is defined, yes.

BionicBytes
10-01-2009, 12:51 PM
OK, since no other replies are forthcoming, I'll try to implement UBOs and instancing.
I'll post the results in this thread when I'm done, but it may take some time, so don't expect any updates on this for a few weeks!

BionicBytes
10-03-2009, 08:56 AM
Looks like my choices are even worse!
I was wrong: my laptop drivers, nVidia 186.13, do not include ARB_uniform_buffer_object, only texture buffer objects.
I was surprised by this, to say the least, but then again this extension and the new MapBufferRange API are part of OpenGL 3.1, and this driver is only GL 3.0.

Oh well... it hardly seems worth the effort attempting to get TBOs to work when I suspect the UBO approach is really the way to go. I'll just have to wait...

Alfonse Reinheart
10-03-2009, 12:52 PM
the new MapBufferRange API are part of OpenGL 3.1

MapBufferRange is 3.0. And it's available on my 2.1 cards as an extension.

BionicBytes
10-05-2009, 10:15 AM
Agreed that MapBufferRange is part of GL 3.0... but the issue is that uniform buffers are not, and the ARB extension is also not present in my laptop GeForce 8 drivers (186.13).

Alfonse Reinheart
10-05-2009, 11:32 AM
the issue is that uniform buffers are not, and the ARB extension is also not present in my laptop GeForce 8 drivers (186.13).

I know. I was just pointing out the mistake about MapBufferRange in case someone read it and got the wrong idea.