Longs Peak Object Model

--------------------------------------------
OK, the first of my observations is rather trivial (a warm-up?).

I’m used to 2 different naming styles:

  1. this_is_some_name
  2. thisIsSomeName / ThisIsSomeName
I prefer the second one, but I think I’ll have some problems getting used to this:
glTemplateAttrib<name type>_<value type>

I think I would prefer the ‘_’ character to be removed and the value-type letter to be capitalized. We could probably even get away without the capital letter, because it’s obvious that ‘ti’ means ‘template-integer’ and not ‘template index-nothing’, but in case a name conflict appears in the future, a capital letter is fine with me.
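
For example (GL_FORMAT and GL_RGBA8 below are used purely as illustration):

/* current proposal: glTemplateAttrib<name type>_<value type> */
glTemplateAttribt_i(template, GL_FORMAT, GL_RGBA8);
/* what I’d prefer - underscore removed, value-type letter capitalized: */
glTemplateAttribtI(template, GL_FORMAT, GL_RGBA8);
/* or even, if no name conflicts arise: */
glTemplateAttribti(template, GL_FORMAT, GL_RGBA8);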

--------------------------------------------
The second thing on my mind is generalization.

We create a texture starting with:

glCreateTemplate(GL_IMAGE_OBJECT)

This defines what set of attributes this object template will have. Now imagine I want to do render-to-vertex-buffer. Wouldn’t it be nice to have something like this:

glCreateTemplate(GL_IMAGE_OBJECT | GL_VERTEX_ARRAY_OBJECT)

Such an object would have the attributes of both a texture and a vertex array and could be used as both. It would be up to the programmer to ensure that the texels of the texture overlap with the vertices in the vertex array (it could actually be limited to 4-component, power-of-two formats - anything else would generate an error upon creation of such a combined object).
Of course there is a problem - it’s best to use interleaved arrays for vertex data. But for non-interleaved arrays I think it could be possible to implement this on existing hardware.

For interleaved arrays it would require interleaved textures or multi-component textures (it would not be possible to sample such a texture, and rendering to it would occupy several MRTs) - that would require a great deal of flexibility from the hardware, so I don’t consider it very reasonable - for more complex cases geometry shaders or vertex texture fetch are the way to go.

Such an object combination would also allow glImageData2D to be used on a vertex array organized as a grid, making it possible to pass, say, heightmaps from the CPU on the fly - especially useful for extremely large terrain data streamed from system memory or HDD (only the edges need to be updated when moving).
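
In today’s GL the analogous edge update is a plain sub-image call - something like this for one row of a heightmap (variable names invented):

/* upload only the newly exposed row of height values */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, newRow, terrainWidth, 1, GL_LUMINANCE, GL_FLOAT, edgeHeights);

The combined object would simply let that same upload feed the vertex array directly.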

My point here is: textures / arrays / uniform sets (like the environment uniforms mentioned) are all actually interpretations of some memory area on the GPU - so should we allow combining different interpretations (hehe, “render-to-uniform-array”?).

I think that would require one more parameter to glTemplateAttrib:
glTemplateAttribt_o(template, GL_IMAGE_OBJECT, GL_FORMAT, texture_format);
glTemplateAttribt_o(template, GL_VERTEX_ARRAY_OBJECT, GL_FORMAT, vertex_array_format);
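
Creating and then using such a combined object might then look like this (pure speculation - the creation call is invented):

GLtemplate template = glCreateTemplate(GL_IMAGE_OBJECT | GL_VERTEX_ARRAY_OBJECT);
/* ...the two glTemplateAttribt_o calls above... */
GLbuffer heightfield = glCreateObject(template); /* invented call */
/* render to it as an image, then source it as a vertex array for drawing */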

Yes, I know it can be done with geometry shaders or even vertex shaders (the VTF I mentioned before), but overlapping objects shouldn’t be very difficult to implement and would be easier to use than geometry shaders. It could also be faster, because you don’t need additional shaders when rendering - there is only the cost of updating.

Does anyone out there think this makes sense, or is it just me that should get some sleep?

Nah, I guess I’m a bit overworked lately, but I’ll post it anyway :smiley:

This defines what set of attributes this object template will have. Now imagine I want to do render-to-vertex-buffer. Wouldn’t it be nice to have something like this:
You’re making some fairly strong assumptions. Particularly:

1: That it doesn’t work that way already.
2: That even if it doesn’t, you can’t bind an image buffer object as a buffer.

For example, take this line:

Such an object combination would also allow glImageData2D to be used on a vertex array organized as a grid, making it possible to pass, say, heightmaps from the CPU on the fly - especially useful for extremely large terrain data streamed from system memory or HDD (only the edges need to be updated when moving).
Um, we already have that ability.

About the naming: really, I don’t care that much. Whether it starts with a capital letter or an underscore is, in my opinion, nothing that needs discussing. If the Khronos group (“the ARB” was much easier to type!) decided that this is good, so it will be.

About the generalization:
The template that you generate there is surely just a struct created in the driver, which you fill out afterwards. It merely hides the actual struct so it can stay extensible. What you want is not to generate a template/struct that is a combined thing (vertex/image…); rather, what you want to be a combination is the object that gets created from the parameters passed through that template.

As Korval already pointed out, you assume that this isn’t possible at the moment. I highly doubt that, because certain combinations might be very useful. However, the template is not what you want to merge. Instead you would most likely use a completely different template, one intended for creating a combined object. The advantage would be that templates can only be used for what they are intended; you cannot generate useless combinations which the driver doesn’t know how to interpret.

I’m sure the guys have thought this out very carefully. The small example was just a general introduction to how it looks. We cannot derive the full functionality from it.

Jan.

As far as I understood, image objects are derived from buffer objects (they made it rather clear in the article), so you should be able to use them as such. Also, the creation routine returns the GLbuffer type.

2: That even if it doesn’t, you can’t bind an image buffer object as a buffer.
If I bind a 4-component image as a buffer, then what kind of vertex data does it represent? vertex2D + texcoord2D? This is why you would need more than one template per object, but after some thought I think you could create an object with one template and then assign other templates to it, enabling additional properties. So you’re probably right - it could be implementable already.

It seems like the object model is pretty damn good, but there is a lot of information we don’t have, which makes it hard for us (me at least) to gauge whether the design is good.

Maybe it’s because we’ve never seen an example of VBOs in the new model - any chance of that for the next issue of The Pipeline? :slight_smile:

Originally posted by k_szczech:
If I bind a 4-component image as a buffer, then what kind of vertex data does it represent? vertex2D + texcoord2D?
In one of the slides from the BOF, I think, it was hinted that VBOs would get a state object; presumably this would contain the layout information for the VBO. I could quite easily see this in the new model:

glBindVertexBuffer(GLbuffer, GLVBOState);
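
The state object itself would presumably be built from a template first (all of these names are guessed):

GLtemplate t = glCreateTemplate(GL_VBO_STATE_OBJECT); /* guessed enum */
glTemplateAttribt_i(t, GL_VERTEX_STRIDE, 32);         /* guessed attributes */
glTemplateAttribt_i(t, GL_VERTEX_OFFSET, 0);
GLVBOState state = glCreateVBOState(t);               /* guessed creation call */
glBindVertexBuffer(buffer, state);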

Regards
elFarto

Originally posted by Korval:
Such an object combination would also allow glImageData2D to be used on a vertex array organized as a grid, making it possible to pass, say, heightmaps from the CPU on the fly - especially useful for extremely large terrain data streamed from system memory or HDD (only the edges need to be updated when moving).
Um, we already have that ability.

True - regarding this, it would be nice to have something like SUN_mesh_array make it into core sometime this millennium.

If the Khronos group (“the ARB” was much easier to type!)
They’re still called the ARB. The full name of the Khronos subgroup is the “OpenGL ARB Working Group”.

Maybe it’s because we’ve never seen an example of VBOs in the new model - any chance of that for the next issue of The Pipeline?
Actually, what I would like to see in the next issue is a fully-functional example. That is, taking the entire pipeline from the creation of the rendering context to the rendering of an object. I would like to see all the steps that are necessary to render something under the new object model. It’s obviously going to be more complex than standard GL, but I’d like to see what the whole thing looks like from beginning to end.

Originally posted by Korval:
It’s obviously going to be more complex than standard GL
More complex than current state? Which part exactly?

It’s pretty straightforward - it’s very much like D3D10, except that instead of using structs to define state blocks, you’re creating an object and initializing it with function calls.
Sure, it’s somewhat more verbose, but it’s also extensible (unlike D3D10), since you can add new enums/states to an existing state block through extensions.
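
For example, an extension could simply define a new attribute token for an existing state block - the token below is hypothetical:

/* hypothetical extension token added to an existing blend state template */
glTemplateAttribt_i(blendTemplate, GL_BLEND_NEW_EQUATION_XXX, GL_TRUE);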

So far so good. I also don’t like the underscore in the new naming convention - I’ll get over it for sure, but if it were changed to the current convention that would be just as good.

Basically we have buffers, and state blocks that tell the hardware how to interpret the buffer bound to a given entry point (right?).

Are we going to have state blocks for everything, just like D3D10 (rasterizer, blending, …)?

More complex than current state?
More complex in the sense that it isn’t as straightforward to get something running as in standard GL, but it’s better overall for getting something real running.

Originally posted by Korval:
Actually, what I would like to see in the next issue is a fully-functional example. That is, taking the entire pipeline from the creation of the rendering context to the rendering of an object.
I plan to do that for the next issue. There are still enough details being worked out that we can’t do it today - format objects, exactly what the drawing calls will look like and how VBOs will be constructed, how some remaining bits of non-programmable state will be represented, and a few others.

As far as the “ti_o” naming convention, we’re open to something better. This is just the least objectionable idea we’ve had so far.

On the multipurpose templates idea: basically the Longs Peak object model is a shallow tree. Buffer objects contain unformatted data; image buffers contain formatted data. There are parameters that affect use of buffers. For example, you say up front whether an image buffer can be used as a texture, as a renderbuffer, or both, which allows the driver to make intelligent decisions. But we are not planning to allow multiple inheritance :slight_smile: - so templates describe exactly and only the set of attributes that make sense for the type of object a template corresponds to. This also lets us do a certain amount of type- and range-checking on attributes on the client side, although some checks (combinations of attributes, resource limits) still can’t happen until actually creating an object on the server.
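
Purely for illustration (these attribute names are not actual API, and the details are still in flux), declaring usage up front might look like:

GLtemplate tmpl = glCreateTemplate(GL_IMAGE_OBJECT);
/* declare intended use so the driver can place the buffer intelligently */
glTemplateAttribt_i(tmpl, GL_IMAGE_USAGE, GL_TEXTURE_USE | GL_RENDERBUFFER_USE);
GLbuffer image = glCreateImage(tmpl); /* remaining checks happen server-side here */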

Really templates are just a generalization of attrib lists (the old { NAME, value, NAME, value } sort of thing you see in GLX and EGL). We started off trying to use attrib lists, but they weren’t flexible enough. Actually we used to call them “attribute objects”, but the resulting naming scheme wasn’t great. Michael Gold came up with “template”, which we like much more.
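
For comparison, a GLX attrib list next to the template style (the template calls are illustrative only):

/* attrib list: a flat { NAME, value, ... } array, terminated by None */
int attribs[] = { GLX_DOUBLEBUFFER, True, GLX_RED_SIZE, 8, None };
/* template: each attribute set by a typed call, checkable as it is set */
GLtemplate t = glCreateTemplate(GL_IMAGE_OBJECT);
glTemplateAttribt_i(t, GL_FORMAT, GL_RGBA8);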

This object model seems to be almost exactly the same as D3D10, but with C syntax. Very good, it’s a great design and the C syntax doesn’t obscure it much, and can easily be wrapped if desired.

But how is it possible that the ARB is about 2 years behind Microsoft in implementing the exact same thing? To me, it seems like something in the entire process is really wrong somewhere…

What about vertex arrays? Are they implemented as objects too? Will the old client-pointer vertex array remain (I hope not :-)?

I had the idea of using buffer objects for data transfer, with the ability to format the data in them. In this model, all data would be stored in buffer objects, which could then be bound to image/array objects. This would also allow the same buffer to be reused for more than one object, possibly with a different format. Did you consider such a design?
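
Conceptually something like this (all of these names are invented):

GLbuffer buf = glCreateBuffer(size);                /* raw, unformatted storage */
GLimage img = glCreateImageView(buf, image_format); /* the same memory seen as a 2D image */
GLarray arr = glCreateArrayView(buf, array_format); /* ...and as a vertex array */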

This object model seems to be almost exactly the same as D3D10, but with C syntax.
Do you have some insight that we don’t have? Because from the few bits we’ve seen I doubt you could come to the conclusion that it’s “the exact same thing”.

OK, both models are object-oriented. True, both models contain roughly the same kinds of objects, but hey, both are meant for the same hardware.

But the similarity ends there. From what I’ve seen, the GL model seems much more flexible and extensible, while the D3D10 model seems more like the usual “we’ll completely redesign it anyway next version, so don’t bother too much with extensibility”.

The only thing I hope they FINALLY implement is the ability to re-index indices sent to the GPU. This would be AWESOME. I believe there was a thread a couple of months ago about all the features we wanted. They need to read that thread.


But the similarity ends there. From what I’ve seen, the GL model seems much more flexible and extensible, while the D3D10 model seems more like the usual “we’ll completely redesign it anyway next version, so don’t bother too much with extensibility”.

Well, the major difference (OO vs binding) between D3D and OpenGL has been eliminated. The new policy of “create immutable, do not modify” is also the same as D3D10 (because it’s the right thing to do, of course).

The only obvious major difference left is the additional flexibility of the parameter lists. Nothing stops D3D10 from simply adding a new COM interface, where all the DESC structs are extended with new stuff, making a transition to the next subversion easy.

Still, nothing explains why D3D10 has been around for a year or so already, even though the hardware has only just arrived, while Longs Peak isn’t expected for quite a while yet. Why didn’t the OpenGL development start earlier?

Hi ector,

Any similarity between this object model and DX10 is purely coincidental. I have personally never seen D3D docs or code - not DX10, not DX3, nor anything in between. However, I do have twelve years of OpenGL implementation experience on which to base my ideas, and that doesn’t even count the considerable experience of the rest of the ARB.

The guiding principles for this design include runtime efficiency, flexibility and extensibility. Knowledge of upcoming hardware features played a minor role, but bear in mind that Longs Peak is intended to be implementable on shader model three hardware and newer. We feel this gives developers a larger target audience than had we designed the API around a single generation of hardware which is only just hitting the market (and is only available for a single operating system).

Why did it happen now instead of two years ago? You could just as easily ask why it didn’t happen four or six years ago. We are, in effect, breaking backward compatibility for the first time in the 15 year history of OpenGL. This is not a task we undertake lightly, as the burden on application vendors is, in some cases, considerable.

In hindsight I wish we had modernized OpenGL in the 2.0 timeframe. That was the original intent but the time wasn’t right, for various reasons I will not address here.

Our goal is not to copy DX10 or any other API. Our goal is to build on many years of experience on both sides of the interface to deliver a forward looking, efficient graphics standard which we can all enjoy for years to come.

Originally posted by Jon Leech (oddhack):
Michael Gold came up with “template”, which we like much more.
Jon is being modest. I’m not sure I was the first person to suggest the name “template”, but the concept was actually Jon’s idea, which solved a difficult dilemma: how to atomically specify all the immutable properties required for object creation while remaining extensible.