
View Full Version : Stateless texture specification and access



kRogue
11-30-2010, 02:56 AM
Hi all, below is an attempt to give a starting ground for
a specification that provides texture image specification and reads
without being affected by GL state.

Comments welcome. Flames mostly welcome.




Name
EXT_stateless_texture_access

Name Strings

GL_EXT_stateless_texture_access

Contributors

Kevin Rogovin, Nomovok

Contact

Kevin Rogovin, Nomovok (kevin.rogovin 'at' nomovok.com)

Status

Draft

Version

Last Modified Date: 11/30/2010
Author revision: 1
Version 0.1


Number

TBD, if ever.


Dependencies

This extension is written against the OpenGL 3.3 specification.


Overview


This extension introduces a set of commands for specifying texture
data that are not affected by global GL state. To this end, a
new object type is introduced so that specifying texture data depends
only on the state of those objects, not on current GL state.
A great deal of functionality reducing the effect of GL state
on GL commands can already be found in GL_EXT_direct_state_access. This
extension provides the following functions:


New Procedures and Functions

uint AllocateTexture1D(enum target, boolean allocate_mipmaps,
int internalformat,
sizei width);

uint AllocateTexture2D(enum target, boolean allocate_mipmaps,
int internalformat,
sizei width, sizei height);

uint AllocateTexture3D(enum target, boolean allocate_mipmaps,
int internalformat,
sizei width, sizei height, sizei depth);

uint AllocateTexture2DMultiSample(enum target, sizei samples,
int internalformat,
sizei width, sizei height);

uint AllocateTexture3DMultiSample(enum target, sizei samples,
int internalformat,
sizei width, sizei height, sizei depth);


void TextureData1D(uint texture,
uint mipmap,
int x, sizei width,
uint unpacking_specification,
uint buffer_object,
enum format, enum type, const void *pixels);


void TextureData2D(uint texture,
uint mipmap,
int x, int y, sizei width, sizei height,
uint unpacking_specification,
uint buffer_object,
enum format, enum type, const void *pixels);


void TextureData3D(uint texture,
uint mipmap,
int x, int y, int z,
sizei width, sizei height, sizei depth,
uint unpacking_specification,
uint buffer_object,
enum format, enum type, const void *pixels);

void GetTextureData1D(uint texture,
int x,
sizei width,
uint packing_specification,
uint buffer_object,
enum format, enum type, void *data);

void GetTextureData2D(uint texture,
int x, int y,
sizei width, sizei height,
uint packing_specification,
uint buffer_object,
enum format, enum type, void *data);

void GetTextureData3D(uint texture,
int x, int y, int z,
sizei width, sizei height, sizei depth,
uint packing_specification,
uint buffer_object,
enum format, enum type, void *data);

enum GetTextureTarget(uint texture);
void UseTexture(uint texture_unit, uint texture);
void UseBufferObjectAsTexture(uint texture_unit, uint buffer_object, sizei offset, GLenum format);

uint CreatePixelStoreSpecification(void);
void DeletePixelStoreSpecification(uint);
void PixelStoreSpecificationParamf(uint object, enum pname, float value);
void PixelStoreSpecificationParami(uint object, enum pname, int value);


The following functions from GL_EXT_direct_state_access are to be
imported by GL_EXT_stateless_texture_access (shamelessly copy-pasted from the
GL_EXT_direct_state_access extension):

<EXT_texture_integer: New integer texture object commands and queries
replace "Tex" in name with "Texture" and add initial "uint texture"
parameter>

void TextureParameterIivEXT(uint texture, enum target,
enum pname, const int *params);
void TextureParameterIuivEXT(uint texture, enum target,
enum pname, const uint *params);

void GetTextureParameterIivEXT(uint texture, enum target,
enum pname, int *params);
void GetTextureParameterIuivEXT(uint texture, enum target,
enum pname, uint *params);


<OpenGL 3.0: New texture commands add "Texture" within name and
replace "enum target" with "uint texture">

void GenerateTextureMipmapEXT(uint texture, enum target);

<OpenGL 3.0: New texture commands add "MultiTex" within name and
replace "enum target" with "enum texunit">

void GenerateMultiTexMipmapEXT(enum texunit, enum target);


Additions to Chapter 3 of the OpenGL 3.3 Specification (OpenGL Operation)


Memory for textures may be allocated through:


uint AllocateTexture1D(enum target, boolean allocate_mipmaps,
int internalformat,
sizei width);

uint AllocateTexture2D(enum target, boolean allocate_mipmaps,
int internalformat,
sizei width, sizei height);

uint AllocateTexture3D(enum target, boolean allocate_mipmaps,
int internalformat,
sizei width, sizei height, sizei depth);

uint AllocateTexture2DMultiSample(enum target, sizei samples,
int internalformat,
sizei width, sizei height);

uint AllocateTexture3DMultiSample(enum target, sizei samples,
int internalformat,
sizei width, sizei height, sizei depth);


where target refers to a texture target to which the allocated texture may
be bound, internalformat is the internal representation of the texture, and
width, height and depth are the dimensions of the texture. For non-multisample
textures, the parameter allocate_mipmaps indicates whether to allocate the memory
necessary to store mipmaps. For multisample textures, the parameter samples
gives the number of samples. Calls to TexImage1D, TexImage2D, or TexImage3D
will generate a GL_INVALID_SOMETHING error if a texture allocated with
one of the above calls is bound to the current texture unit.
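
As a sanity check on the design, the point that every property of such a texture is fixed at allocation can be modeled in plain C. This is a toy sketch, not GL code; every name in it is hypothetical:

```c
#include <assert.h>

/* Toy model of the allocation-time state the proposal fixes up
   front. No later call can change the shape, format, target or
   mip layout; only the texel contents may be modified. */
typedef struct {
    unsigned target;         /* the TEXTURE_2D-style type enum */
    int internalformat;      /* the RGBA8-style format enum    */
    int width, height, depth;
    int has_mipmaps;         /* decided once, at creation      */
} TextureAllocation;

static TextureAllocation allocate_texture_2d(unsigned target,
                                             int allocate_mipmaps,
                                             int internalformat,
                                             int width, int height)
{
    TextureAllocation t;
    t.target = target;
    t.internalformat = internalformat;
    t.width = width;
    t.height = height;
    t.depth = 1;
    t.has_mipmaps = allocate_mipmaps;
    return t; /* immutable thereafter: there is no respecify call */
}
```

The absence of any "respecify" operation on the struct is the whole point: TexImage-style reallocation simply has no analogue here.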


GL facilitates the creation and deletion of objects, called
PixelStoreSpecification objects, that store how GL is to
pack and unpack pixel data. They are created with:

uint CreatePixelStoreSpecification(void);

and destroyed with:

void DeletePixelStoreSpecification(uint);

When created, the PixelStoreSpecification state vector is initialized
with the default PixelStore values of GL.

The calls

void PixelStoreSpecificationParamf(uint object, enum pname, float value);
void PixelStoreSpecificationParami(uint object, enum pname, int value);

set the parameter named by pname of the named PixelStoreSpecification
object to the specified value; pname and value pairs are as in PixelStore.

The name 0 is used for the "default" PixelStoreSpecification of GL; its values
are unchangeable, and it is initialized with the default packing and unpacking
values of GL.
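
The state such an object captures has concrete layout consequences. For instance, GL's UNPACK_ALIGNMENT (default 4) pads each row of client pixel data to an alignment boundary; below is a short C sketch of that arithmetic, which a PixelStoreSpecification would freeze per object rather than leave in global state:

```c
#include <assert.h>

/* Row stride in bytes under GL's UNPACK_ALIGNMENT rule:
   each row of client pixel data starts on a multiple of
   'alignment' bytes. */
static int row_stride(int width, int bytes_per_pixel, int alignment)
{
    int unpadded = width * bytes_per_pixel;
    return ((unpadded + alignment - 1) / alignment) * alignment;
}
```

With the default alignment of 4, a 3-pixel RGB row occupies 12 bytes rather than 9; mismatches here are exactly the kind of global-state surprise that per-object packing state is meant to eliminate.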

Textures may have their texture data modified from the GL client with the commands:
<or should be only those allocated with AllocateTexture1D, AllocateTexture2D, AllocateTexture3D>?

void TextureData1D(uint texture,
uint mipmap,
int x, sizei width,
uint unpacking_specification,
uint buffer_object,
enum format, enum type, const void *pixels);


void TextureData2D(uint texture,
uint mipmap,
int x, int y, sizei width, sizei height,
uint unpacking_specification,
uint buffer_object,
enum format, enum type, const void *pixels);


void TextureData3D(uint texture,
uint mipmap,
int x, int y, int z,
sizei width, sizei height, sizei depth,
uint unpacking_specification,
uint buffer_object,
enum format, enum type, const void *pixels);


where texture is the name of the texture, mipmap is the mipmap LOD to be modified,
x, y and z give the location of the region to be modified (as in the TexSubImage family),
and width, height and depth give the size of that region (as in the TexSubImage family).
The unpacking_specification is the name of a PixelStoreSpecification object whose state
determines how pixels are to be unpacked by GL. The parameter buffer_object is the name
of a buffer object to copy from, with format and type specifying how the data is formatted;
pixels is then an offset within the buffer object from which to copy data. If buffer_object
is 0, then pixels is a pointer into client address space.
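
The dual reading of pixels (pointer when buffer_object is 0, byte offset otherwise) mirrors the long-standing pixel-buffer-object convention, where an offset is passed through a pointer parameter. A common helper macro for that cast, shown here as a sketch:

```c
#include <assert.h>
#include <stdint.h>

/* PBO-style idiom: encode a byte offset as a pointer value.
   When buffer_object != 0, TextureData* would interpret the
   'pixels' argument as this offset into the buffer's store. */
#define BUFFER_OFFSET(bytes) ((const void *)(uintptr_t)(bytes))
```

The macro makes the intent visible at the call site, e.g. passing BUFFER_OFFSET(0) to start reading from the beginning of the bound buffer's data store.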

Texture data may also be read back to a buffer object or client address space via:



void GetTextureData1D(uint texture,
int x,
sizei width,
uint packing_specification,
uint buffer_object,
enum format, enum type, void *data);

void GetTextureData2D(uint texture,
int x, int y,
sizei width, sizei height,
uint packing_specification,
uint buffer_object,
enum format, enum type, void *data);

void GetTextureData3D(uint texture,
int x, int y, int z,
sizei width, sizei height, sizei depth,
uint packing_specification,
uint buffer_object,
enum format, enum type, void *data);

Additionally, pixel data may be read back from the current framebuffer
by specifying texture as 0.


The command:

void UseTexture(uint texture_unit, uint texture);

specifies that the named texture unit, texture_unit, sources its data from the named
texture. The texture target to which a texture is bound via UseTexture is determined
at allocation of the texture object and can be queried with

enum GetTextureTarget(uint texture);

The command:

void UseBufferObjectAsTexture(uint texture_unit, uint buffer_object, sizei offset, GLenum format);

specifies that the named texture unit samples data (i.e. a samplerBuffer in GLSL)
from the named buffer object starting at the specified offset; format determines
the format of the buffer object data to sample.


Issues

(0) What is the status of this extension?

This extension is a proposal from a developer, not from an IHV, and as such it
should be taken with a huge grain of salt.

(1) What is the purpose of this extension?

The main purpose of the extension is to provide a means to specify texture
data without needing to be aware of any GL state.

(2) Is the state of the PixelStoreSpecification named 0 affected by command PixelStore?

No. The purpose of the PixelStoreSpecification named 0 is to provide a "default"
way for data to be packed and unpacked by GL. Additionally, by having that object
affected by PixelStore commands, then the interpretation of TextureData depends on
the current GL state, which this extension is aiming to remove.


(3) What is the point to UseBufferObjectAsTexture?

Its main purpose is for a developer to see directly in code the source of the data.
Additionally, it also provides an offset into the buffer object to... one can argue
that one could add such to GL_EXT_texture_buffer_object as well, but UseBufferObjectAsTexture
removes a layer of (I feel) unnecessary indirection. In brutal honesty, the call
is inspired by NV_shader_buffer_load's ability to set uniforms to "pointers".


(4) How does the GenerateMipmap family of functions interact with textures allocated
by the new entry points? I.e. what is the expected behaviour of calling GenerateMipmap
on a texture allocated with the new calls, but with allocate_mipmaps specifying that
mipmaps are not allocated?

Unresolved. There are several options:

i) Textures allocated with one of the new calls can also have
the glTexImage family of functions affect them. The question then
becomes, do GL implementors lose potential performance or does
the GL implementation burden increase?

ii) GenerateMipmap acts as always, i.e. if a texture was allocated
without mipmaps, then GenerateMipmap will allocate and generate them.
This though violates the idea that the texture allocation call
specifies the memory required by the texture object for its lifetime.

iii) GenerateMipmap generates an error for those textures allocated
with the new allocation calls that specify not to allocate mipmaps.
The issue with this solution is that GenerateMipmap then acts differently
depending on how the texture was created.

iv) GenerateMipmap cannot be used with such textures; instead a new call
to generate the mipmaps must be used. This has the advantage of being
self consistent. The disadvantage is that it again places textures into
2 classes: those allocated with the new calls and those allocated with
TexImage.

(5) Given that the texture target (which is really a texture type) is specified
at allocation, it seems unnecessary that the direct_state_access TexParameter
family of functions needs the texture target when manipulating a texture allocated
with one of the new calls.

The simple lazy answer is: "don't worry about it and use GetTextureTarget" for
such textures. Potentially, new TexParameter calls that do not require
a texture target parameter could be added.

(6) How does GetTextureTarget behave for those texture names retrieved
from GenTextures that have not been allocated via a TexImage call yet?

Returns GL_INVALID_ENUM.

(7) How does the TextureData family of functions behave for texture
names such as those asked about in Issue (6)?

They generate an error analogous to calling TexSubImage on such textures.
More generally speaking, one can view the TextureData entry points as
"stateless" versions of the TexSubImage calls.

Alfonse Reinheart
11-30-2010, 03:20 AM
uint AllocateTexture1D(enum target, boolean allocate_mipmaps,
int internalformat,
sizei width);

uint AllocateTexture2D(enum target, boolean allocate_mipmaps,
int internalformat,
sizei width, sizei height);

uint AllocateTexture3D(enum target, boolean allocate_mipmaps,
int internalformat,
sizei width, sizei height, sizei depth);

uint AllocateTexture2DMultiSample(enum target, sizei samples,
int internalformat,
sizei width, sizei height);

uint AllocateTexture3DMultiSample(enum target, sizei samples,
int internalformat,
sizei width, sizei height, sizei depth);

If you're going to effectively rewrite OpenGL texture code, it seems a terrible waste to keep things exactly the same. Particularly the terribly unnecessary 1D/2D/3D calls.

If you're going this route, rather than the Direct State Access route of just porting functions, it'd be better to do this:



uint glCreateTexture(enum target, enum format, sizei width, sizei height, sizei depth, uint samples);


Just have one function. The target parameter will tell the function which other parameters matter. If you're creating a GL_TEXTURE_2D_MULTISAMPLE, then width, height and samples matter.

Similarly, functions that take a texture should not need a target parameter. The whole notion of a "target" should be excised outside of the texture's creation. That's simply the texture's type.

I'm not a fan of the binary choice for mipmaps; I'd prefer a range.

Also, you forget functions for compressed texture uploading, which requires special handling since it comes pre-formatted.

If you are insistent on wrapping the pixel pack/unpack stuff into an object, consider folding in the "format" and "type" parameters that the texture upload/download functions take. Though personally, I'd prefer some form of immutable object creation, where the object creation takes an array containing the parameter data, the way that wglCreateContextAttribs does.

kRogue
11-30-2010, 04:00 AM
If you are insistent on wrapping the pixel pack/unpack stuff into an object, consider folding in the "format" and "type" parameters that the texture upload/download functions take. Though personally, I'd prefer some form of immutable object creation, where the object creation takes an array containing the parameter data, the way that wglCreateContextAttribs does.


The main purpose of the pixel pack/unpack objects is to avoid the most common source of texture image specification bugs in a GL application: the packing of data. Folding in the format and type is NOT a good idea, as it hides the data conversion.



I'm not a fan of the binary choice for mipmaps; I'd prefer a range.


HUH? Why in the world would you want only some of the mipmaps allocated and not all? Mipmap completion is an all-or-none affair. You could make an argument for naming the LOD with texelFetch, but it is a pretty weak one.



Similarly, functions that take a texture should not need a target parameter. The whole notion of a "target" should be excised outside of the texture's creation. That's simply the texture's type.


HUH? The texture target specifies the type of the texture, so it had better be there when the texture is allocated, just as in the TexImage calls. Additionally, the texture type likely influences how GL is to store a texture as well.



Also, you forget functions for compressed texture uploading, which requires special handling since it comes pre-formatted.


In truth I did not "forget"; rather, I deliberately left it out as I am not sure what is the best way to proceed. One option is, like the current GL, to add 3 new calls to allocate compressed textures: CreateCompressedTexture1D, CreateCompressedTexture2D and CreateCompressedTexture3D, which take parameters specifying the compressed texture data as well, i.e. CreateCompressedTexture allocates and sets the data. One can also argue for adding such a function for other textures too: CreateTexture1D, CreateTexture2D and CreateTexture3D, but that rubs against me a great deal: a function is now doing two things, allocating and setting, though it is arguable that is perfectly fine too. Debatable both ways :cool:




If you're going to effectively rewrite OpenGL texture code, it seems a terrible waste to keep things exactly the same. Particularly the terribly unnecessary 1D/2D/3D calls.

If you're going this route, rather than the Direct State Access route of just porting functions, it'd be better to do this:



uint glCreateTexture(enum target, enum format, sizei width, sizei height, sizei depth, uint samples);



Epic HUH? Firstly, I am not "effectively rewrit[ing] OpenGL texture code". As for the above, it is okay-ish until you talk about mipmap allocation, at which point one would have:




uint glCreateTexture(enum target, enum format, sizei width, sizei height, sizei depth, uint samples, bool generate_mipmaps);
//or give a range of lod's to generate in place of a bool, gross.


But that just smells bad, as there are more and more icky rules to determine if the call is valid. Besides, how readable is this:



tex=glCreateTexture(GL_TEXTURE_1D, GL_RGBA8, 100, 1, 1, 1, GL_TRUE);
//vs
tex=glAllocateTexture1D(GL_TEXTURE_1D, GL_TRUE, GL_RGBA8, 100);


I'd make a bet the second is epically easier to read.

One thing I epically hate: one function doing many things, with more and more complicated rules to determine if the arguments are valid. It is much easier to implement and use a family of functions, hence the 1D, 2D, 3D. The rules are simpler, and at any rate there is an epic chance that any GL implementation has to make a switch statement to an internal call anyway. By making separate calls, each with a more narrowly defined purpose, the GL implementation is not going to be any harder and the developer has an easier time too.

Alfonse Reinheart
11-30-2010, 04:49 AM
Firstly I am not "effectively rewrit[ing] OpenGL texture code", as for the above

Of course you are. You specifically state that it is an error to use objects created with your API as textures in the regular OpenGL API and vice-versa. This is a separate path that uses its own APIs. It does not create OpenGL texture objects, because OpenGL texture objects can be bound with glBindTexture. This proposal creates a new object type that cannot be used the way regular texture objects do.

You are making a new API to create and manage a new object that is in no way interchangeable with other OpenGL objects. I don't know how that can be called anything other than rewriting OpenGL's texture code.


Why in the world would you want only some of the mipmaps allocated and not all? Mipmap completion is an all or none affair.

Mipmap completion is controlled by the base level/max level settings. And those can very much be changed.

The reason to want only some of the mipmaps is so that you can load the lower mipmaps first, then load the higher ones in as you stream them.


The texture target specifies the type of the texture, so it had better be there when the texture is allocated, just as in the TexImage calls.

You may have missed the part where I said, "outside of the texture's creation". I was referring to the use of the target in the TextureParameterIivEXT calls.


I'd make a bet the second is epically easier to read.

Well, the second call contains the same information twice. You're allocating a 1D texture of type GL_TEXTURE_1D.

No. You allocate a texture of type GL_TEXTURE_1D. You only need to say it is a 1D texture once.

A counter-example. Which makes more sense:


tex = glCreateTexture(GL_TEXTURE_CUBE_MAP, GL_RGBA8, 256, 256, 0, 0, GL_TRUE);
or
tex = glAllocateTexture2D(GL_TEXTURE_CUBE_MAP, GL_TRUE, GL_RGBA8, 256, 256);

I'd say it's the one that isn't trying to confuse 2D texture creation with cube map texture creation. Same goes for using glAllocateTexture3D for array textures. It's one of those places in the API where they did the wrong thing just to have fewer entrypoints. They didn't make a glTexImageCubeMap because it would have had the same interface as glTexImage2D. So they just overloaded it.

I'm simply taking it to its logical conclusion. They concatenated glFramebufferTexture1D, 2D, 3D, and Layer all into a single glFramebufferTexture call (except for when you don't want to use layered rendering).

Speaking of cube maps, you forget a way to upload data to the different faces of a cube map.


One thing I epically hate: one function doing many things with more and more complicated rules to determine if the arguments are valid.

You must really hate OpenGL, because they do that everywhere ;)


Much easier to implement and use in having a family of functions, hence the 1D, 2D, 3D.

But there is already a family of functions behind the scenes. Cubemaps certainly do not use the same allocator as 2D textures, even though they're created with glTexImage2D. Same with rectangle textures. The functions have already been overloaded, so you may as well do it fully.

kRogue
11-30-2010, 05:21 AM
Well, the second call contains the same information twice. You're allocating a 1D texture of type GL_TEXTURE_1D.

No. You allocate a texture of type GL_TEXTURE_1D. You only need to say it is a 1D texture once.

A counter-example. Which makes more sense:



tex = glCreateTexture(GL_TEXTURE_CUBE_MAP, GL_RGBA8, 256, 256, 0, 0, GL_TRUE);
or
tex = glAllocateTexture2D(GL_TEXTURE_CUBE_MAP, GL_TRUE, GL_RGBA8, 256, 256);



The first call is fishy: 0 for samples and 0 for depth?? A cubemap texture is
a 3D texture with depth 6 so the second (correct) call should be



tex = glAllocateTexture3D(GL_TEXTURE_CUBE_MAP, GL_TRUE, GL_RGBA8, 256, 256, 6);







Mipmap completion is controlled by the base level/max level settings. And those can very much be changed.

I do concede that those levels change, but let's be honest: how often does anyone really do that? At any rate, allocating the data just means allocating it; if you change the base/max levels, that just means you will not refer to uninitialized allocated memory.




Speaking of cube maps, you forget a way to upload data to the different faces of a cube map.

A cubemap is a 3D texture; it is an array of 6 2D textures. Allocating and specifying a cubemap texture would be done via the 3D calls.



You must really hate OpenGL, because they do that everywhere

Drifting into flame/troll territory here.



But there already are a family of functions behind the scenes. Cubemaps certainly do not use the same allocator as 2D textures, even though they're created with glTexImage2D. Same with rectangle textures. The functions have already been overloaded, so you may as well do it fully.


You miss my point, which is not shocking. If it is a non-trivial task to check that a combination of arguments is valid, then that kind of overloading makes life harder for the developer and possibly for the implementor.



I'm simply taking it to its logical conclusion. They concatenated glFramebufferTexture1D, 2D, 3D, and Layer all into a single glFramebufferTexture call (except for when you don't want to use layered rendering).


AND they kept the 1D, 2D and 3D calls too! It is a debatable point whether to also provide AllocateTexture and TextureData calls without a dimension suffix that check the dimension values against the texture type. The main use case for that, though, is middleware template C++ code.



You may have missed the part where I said, "outside of the texture's creation". I was referring to the use of the target in the TextureParameterIivEXT calls.

See issue (5).



You are making a new API to create and manage a new object that is in no way interchangeable with other OpenGL objects. I don't know how that can be called anything other than rewriting OpenGL's texture code.


Could you freaking read it a touch closer?? In it, it says that it is not clear whether these texture objects should be regarded as different or the same. I freely admit to tweaking the original post with edits, but those were before your bile.

Lastly Alfonse, you really need to quit being this way; it takes effort to sift through your posts to find something of value. There was one or two things, but beyond that it just seems that your posts are by someone who has to say something, anything, to criticize. Genuine constructive criticism is a good thing, but much of what comes out of you is just noise.

Groovounet
11-30-2010, 05:32 AM
Lastly Alfonse, you really need to quit being this way; it takes effort to sift through your posts to find something of value. There was one or two things, but beyond that it just seems that your posts are by someone who has to say something, anything, to criticize. Genuine constructive criticism is a good thing, but much of what comes out of you is just noise.

Speak your reader's language, kRogue.

kRogue
11-30-2010, 05:41 AM
Lastly Alfonse, you really need to quit being this way; it takes effort to sift through your posts to find something of value. There was one or two things, but beyond that it just seems that your posts are by someone who has to say something, anything, to criticize. Genuine constructive criticism is a good thing, but much of what comes out of you is just noise.

Speak your reader's language, kRogue.


Let's not let my temper :o at Alfonse ruin this thread. I must be some kind of online social misfit; I can't tell if you think my behavior is acceptable or unacceptable :o ... or worse, that what I wrote in that last bit falls under exactly what it said... shudders... self-referring irony.

Groovounet
11-30-2010, 06:19 AM
Sorry, I am spoiling your post... Looks like it's my mood today. I don't even think anything about Alfonse being an a****** all the time, nor that being an a****** is fundamentally wrong... actually, on the contrary. Anyway, it certainly [censored] people off.

Sorry again, I am off spoiling...

aqnuep
11-30-2010, 06:27 AM
My notes about the extension (and Alfonse's comments ;) ):

1. I agree with Alfonse that we should try to depart from the several commands for 1D, 2D, 3D, etc. and rather have a generic texture creation function.

2. I disagree with Alfonse's statement about the type and format arguments being included in the pixel pack/unpack objects. I agree that this information is somewhat related, but type and format change much more often than pixel unpack rules, so I would not tie them together.

3. I agree with Alfonse about mipmap generation. It should not be an all-or-nothing decision, as by playing with base and max levels you can spare some memory.

4. I think this could be a great extension as this is the one major problem that the DSA extension doesn't even seem to care about. So thanks for the proposal!

5. This one is a minor, subjective and cosmetic note: I would change the names as they don't really fit into the GL language (actually this is true also for some of the functions introduced by the DSA extension) and not use too long words like "Specification" :)

kRogue
11-30-2010, 06:48 AM
I am trying to think of a good rule/something on the mipmap allocation issue. One of the core ideas of the extension was that allocation happens at creation, so that included mipmaps. If we want to change what mipmaps are to be allocated, then there are a few choices:


Pass a maximum and minimum mipmap level to allocate
OR Allow for texture data to be re-allocated after creation.

I freely admit I do not like the second option, but potentially just because I am stubborn :o With that in mind, let's look at how the spec would look using the second option. Doing the second option means that AllocateTexture is essentially GenNames and TexImage rolled into one call, passing NULL as pixel data (and not having a buffer object bound to PIXEL_WHATEVER). That has the advantage that it completely removes the difference between textures made with the new calls and textures made with the old style of calls. In terms of consistency that is great. Though I am still beyond hesitant about having the memory allocated for a texture change. Additionally, the memory savings of not allocating mipmap levels seem tiny... after all, a full pyramid of mipmaps only increases the total memory consumption by (for 2D textures) 33%, and the first mipmap by itself is 25% (for 2D textures), so the saving from not allocating the remaining levels once the base texture and mipmap level 1 are allocated is quite small, at about 8% of the size of the base texture (for example, for a 1024x1024 RGBA8 texture we are talking about 341KB of savings once the base and mipmap level 1 are allocated, which together take up 5120KB).
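
The arithmetic behind those percentages is easy to verify; here is a small C sketch summing the mip levels of a square RGBA8 texture (4 bytes per texel):

```c
#include <assert.h>

/* Total bytes in the mip levels of a square RGBA8 texture,
   counting from 'from_level' (whose side is size >> from_level)
   down to the 1x1 level. */
static long mip_bytes(int size, int from_level)
{
    long total = 0;
    for (int s = size >> from_level; s >= 1; s >>= 1)
        total += 4L * s * s;
    return total;
}
```

For a 1024x1024 base, the base level alone is 4194304 bytes (4096 KiB), base plus level 1 come to 5120 KiB, and all remaining levels together are only about 341 KiB, i.e. roughly 8% of the base, matching the figures above.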

On the issue of the overloading, I can see the want for the overloaded calls, so I see no harm in adding them, but I definitely want to keep the not-so-overloaded calls too :)



This one is a minor, subjective and cosmetic note: I would change the names as they don't really fit into the GL language (actually this is true also for some of the functions introduced by the DSA extension) and not use too long words like "Specification"


My names most definitely do suck.

I will most likely wait a few days and see the comments (and flames) that collect here and post a version from that feedback.

Alfonse Reinheart
11-30-2010, 07:18 AM
A cubemap texture is a 3D texture with depth 6 so the second (correct) call should be

That's even more confusing than calling glTexImage2D. A cubemap texture is no more a 3D texture than it is a 2D texture. It is its own texture type, which is fundamentally different from any other texture type. And while yes, it certainly can be used in layered rendering, and it may actually be implemented these days as a modified form of a 3D texture, to the user it has nothing to do with the concept of a 3D texture.

Also, I would point out that calling it a 3D texture violates your own rule about function parameters. Calling glAllocateTexture3D(GL_TEXTURE_CUBE_MAP, ...) with anything except a 6 as the depth is an error. So if you don't like functions that change the meaning and validity of parameters based on other parameters, then by your own rules, this is a bad function and must be split into a separate glAllocateTextureCubeMap call ;)

So basically, you're going to have parameters whose validity depends on other parameters one way or the other. At least with one function, it's simple and direct.

The principal reason for my opposition to having multiple functions lies here. (http://www.opengl.org/wiki/Texture) I wrote that page. And in doing so, I had to explain the convoluted logic that says that cubemaps, though they are a distinct texture type, must be uploaded with glTexImage2D just like 2D textures.

Here is what my explanation would have been like with one function:



uint glCreateTexture( GLenum target, GLenum internalformat, GLsizei width, GLsizei height, GLsizei depth, GLsizei samples, GLboolean hasMipmaps)

This function creates a texture of the type ''target''. It gives it the ImageFormat specified by ''internalformat''. The ''width'', ''height'', ''depth'', ''samples'', and ''hasMipmaps'' parameters are used only as appropriate for the given texture type being created. Inappropriate values, such as depth for 2D textures or hasMipmaps for rectangle textures, are ignored; they may contain any value and will not affect the results. The acceptable values for ''target'' are:

* GL_TEXTURE_1D: Width and mipmaps only.
* GL_TEXTURE_2D: Width, height, and mipmaps only.
* GL_TEXTURE_3D: Width, height, depth, and mipmaps.
* GL_TEXTURE_CUBE_MAP: Width, height, and mipmaps only.
* GL_TEXTURE_RECTANGLE: Width and height only.
* GL_TEXTURE_BUFFER: Width only.
* GL_TEXTURE_1D_ARRAY: Width, height, and mipmaps only. Height is the number of array entries (though personally, I'd prefer depth).
* GL_TEXTURE_2D_ARRAY: Width, height, depth, and mipmaps. Depth is the number of array entries.
* GL_TEXTURE_2D_MULTISAMPLE: Width, height, and samples.
* GL_TEXTURE_2D_MULTISAMPLE_ARRAY: Width, height, depth, and samples. Depth is the number of array entries.


One paragraph and a simple table is all it takes. It is easily understood by anyone who reads it.

Having both sets of functions is rather antithetical to this utility.
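The "used only as appropriate" rule in that hypothetical description can be captured as a per-target bitmask. The sketch below is illustrative only: the enum names are invented, not real GL tokens, and no such query exists in GL.

```c
#include <assert.h>

/* Hypothetical bitmask of which glCreateTexture parameters each target
   consumes. All names here are made up for illustration. */
enum { USES_WIDTH = 1, USES_HEIGHT = 2, USES_DEPTH = 4,
       USES_SAMPLES = 8, USES_MIPMAPS = 16 };

enum target { TEX_1D, TEX_2D, TEX_3D, TEX_CUBE_MAP, TEX_RECTANGLE,
              TEX_BUFFER, TEX_1D_ARRAY, TEX_2D_ARRAY,
              TEX_2D_MULTISAMPLE, TEX_2D_MULTISAMPLE_ARRAY };

static unsigned used_params(enum target t)
{
    switch (t) {
    case TEX_1D:         return USES_WIDTH | USES_MIPMAPS;
    case TEX_2D:         /* cube maps and 1D arrays take the same trio */
    case TEX_CUBE_MAP:
    case TEX_1D_ARRAY:   return USES_WIDTH | USES_HEIGHT | USES_MIPMAPS;
    case TEX_3D:
    case TEX_2D_ARRAY:   return USES_WIDTH | USES_HEIGHT | USES_DEPTH
                                | USES_MIPMAPS;
    case TEX_RECTANGLE:  return USES_WIDTH | USES_HEIGHT;
    case TEX_BUFFER:     return USES_WIDTH;
    case TEX_2D_MULTISAMPLE:
        return USES_WIDTH | USES_HEIGHT | USES_SAMPLES;
    case TEX_2D_MULTISAMPLE_ARRAY:
        return USES_WIDTH | USES_HEIGHT | USES_DEPTH | USES_SAMPLES;
    }
    return 0;
}
```

Everything the driver ignores is simply absent from the mask, which is the one-function trade-off in a nutshell: one entry point, one table of per-target rules.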


Additionally, the memory savings of not allocating mipmap levels seems tiny... after all, a full pyramid of mipmaps only increases the total memory consumption (for 2D textures) by 33%, and the first mipmap by itself is 25% of the base. So the saving from not allocating the remaining levels, once the base texture and mipmap level 1 are allocated, is quite small, at about 8% of the size of the base texture (for example, for a 1024x1024 RGBA8 texture we are talking 335KB of savings, while the base and mipmap level 1 together take up 5248KB).

It depends on what part of the mipmap pyramid you're not allocating. As you point out, the lion's share of the data is in the upper mip levels.

When you're doing serious texture streaming, you load the low levels first. That way, you can draw something even if it all isn't there yet. Once those are all in, then you load the big mip levels.

The real question that needs to be answered is this:

What do OpenGL implementations do when the first glTexImage call is not for the 0 mipmap level?

For example, if I allocate a 64x64 texture as level 1, does that automatically cause the allocation of a 128x128 texture and the full mipmap pyramid?

I don't think it can. If I recall the spec correctly, thanks to NPOTs, a 129x129 texture at the 0 mipmap level can have a 64x64 level 1. So the OpenGL implementation cannot know yet whether you want a 129x129 or 128x128 (or 129x128, etc) texture at the 0 mipmap level. So it can't really allocate anything but all of the lower mipmaps.
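The sizing rule behind that ambiguity is the spec's per-level formula, size_i = max(1, floor(size_0 / 2^i)). A minimal sketch shows that two different base sizes collapse onto the same level-1 size:

```c
#include <assert.h>

/* Extent of mipmap level 'level' for base extent 'base', following the
   GL rule max(1, floor(base / 2^level)). */
static int mip_size(int base, int level)
{
    int s = base >> level;
    return s > 0 ? s : 1;
}
```

Both mip_size(128, 1) and mip_size(129, 1) come out to 64, so a 64x64 level 1 alone cannot tell the driver whether level 0 will be 128 or 129 texels wide.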

It is entirely possible that drivers simply guess. So it would pick 128x128. And if later you try to put a 129x129 in there, it will stop and do some unfortunate reallocation gymnastics behind the scenes.

If that is indeed the case, then the API most certainly should not expose a range. That would be making a promise that could not be kept. But if that is not the case, if it is widely implemented (meaning across ATI and NVIDIA hardware and drivers) that you can have the smaller mipmaps in one area and then allocate a big one without moving things around in graphics memory behind the scenes, then the API should expose that behavior.

So it's really predicated on things we don't know.

kRogue
11-30-2010, 07:58 AM
It is entirely possible that drivers simply guess. So it would pick 128x128. And if later you try to put a 129x129 in there, it will stop and do some unfortunate reallocation gymnastics behind the scenes.

If that is indeed the case, then the API most certainly should not expose a range. That would be making a promise that could not be kept. But if that is not the case, if it is widely implemented (meaning across ATI and NVIDIA hardware and drivers) that you can have the smaller mipmaps in one area and then allocate a big one without moving things around in graphics memory behind the scenes, then the API should expose that behavior.


Though I freely admit that I do not have first-hand data, I would expect that a driver would want to keep the mipmap data allocated close together, since in the typical use case neighboring pixels may sample different LODs. However, the point is this: I polish this up, take user feedback into account, and then hopefully an IHV sees the spec, takes a look at it and at their driver, and proceeds to write their own spec, which in turn hopefully finds its way into GL core after much debate by those who have been dealing with GL and its implementations the longest.

For the texture streaming use case you describe, the developer already knows how big the textures need to be, so separating allocation from setting the image data works well here. One could allocate, but not set the data, and set the max and base LOD accordingly. I have to admit, I find it awfully fishy to not want to allocate image level 0; at that point, why not just make the texture's base size the size of the first mipmap you are going to use?

For the mipmap allocation, I would just like to see a real use case where only allocating a range of mipmap levels instead of all makes a real difference. One simple way out is to add another API entry point:



void
AllocateMipmapLevel(uint texture, int mipmap_level);


which would allocate the named mipmap level; if that mipmap_level is already allocated, then the function is a no-op. I want to avoid having TexImage calls interact, since such calls allocate, and one motivation for this thing was to have the memory needed for a texture determined in one call. With that in mind, the suggested function AllocateMipmapLevel is not a good idea either :whistle:.

To repeat the question: are there any use cases where one wishes to allocate only a range of mipmaps instead of all or none of them? Keep in mind that the proposal breaks up allocation and setting of texel values into two separate calls. I just keep thinking that once you have the base image and the first mipmap level, you have already allocated (for 2D textures) over 93% of the memory used versus allocating all mipmaps (for 1D textures it is 75% and for 3D textures it is over 98%). If one wants to start the pyramid base at a level besides zero, why not just make the image size the dimensions of that level? These are my thoughts on the all-or-none mipmap allocation.
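Those percentages are easy to sanity-check. The sketch below assumes 4 bytes per texel, power-of-two bases, and a full chain down to 1x1 (the exact byte figures it produces for the 1024x1024 case differ slightly from the KB numbers quoted earlier in the thread):

```c
#include <assert.h>
#include <stddef.h>

/* Bytes in one RGBA8 (4 bytes per texel) 2D mip level. */
static size_t level_bytes(size_t w, size_t h)
{
    return w * h * 4;
}

/* Total bytes of the full mip chain from w x h down to 1 x 1. */
static size_t chain_bytes(size_t w, size_t h)
{
    size_t total = 0;
    for (;;) {
        total += level_bytes(w, h);
        if (w == 1 && h == 1)
            return total;
        if (w > 1) w /= 2;
        if (h > 1) h /= 2;
    }
}

/* Percent (truncated) of the full chain occupied by levels 0 and 1
   together for a d-dimensional power-of-two texture: each level is 2^d
   times smaller, so the chain sums to about base * 2^d / (2^d - 1). */
static int base_plus_level1_percent(int d)
{
    double shrink = (double)(1 << d);
    double chain  = shrink / (shrink - 1.0);   /* full chain / base    */
    double first2 = 1.0 + 1.0 / shrink;        /* (level0+level1)/base */
    return (int)(100.0 * first2 / chain);
}
```

base_plus_level1_percent gives 75, 93, and 98 for 1, 2, and 3 dimensions, matching the claim; for a 1024x1024 RGBA8 texture, the levels beyond 1 add up to 349524 bytes, roughly 8% of the 4 MiB base level.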



So basically, you're going to have parameters whose validity depends on other parameters one way or the other. At least with one function, it's simple and direct.


Or more like the one function has more bits to check; as I posted before, it is debatable. Additionally, having arguments that are sometimes ignored smells bad. At any rate, like I said before, I will be adding a dimensionless overloaded call, but I will also keep the dimensioned calls, since in production code those are more readable. This spec does NOT add any real functionality anyway, just as direct_state_access does not really; it essentially makes the API more manageable and calls understandable without needing to know any global state. For what it is worth, I debated having a dimensionless call which takes a pointer to dimensions and a number of dimensions:




uint
AllocateTexture(enum texture_target, bool generate_mipmaps, enum internal_format,
sizei number_dimensions, const sizei *dimensions);

void
TextureData(uint texture,
uint mipmap,
sizei number_dimensions,
const int *region_location, const sizei *region_size,
uint packing_specification,
enum format, enum type,
uint buffer_object,
const void *pixels);

void
GetTextureData(uint texture,
sizei number_dimensions,
const int *region_location, const sizei *region_size,
uint unpacking_specification,
uint buffer_object,
enum format, enum type, void *data);



Even though such API points are more future-proof, in case texture data ever goes beyond 3 dimensions, I doubt their usefulness, and it just looks so awkward to me :D




That's even more confusing than calling glTexImage2D. A cubemap texture is no more a 3D texture than it is a 2D texture. It is its own texture type which is fundamentally different from any other texture type. And while yes, it certainly can be used in layered rendering, and it may actually be implemented these days as a modified form of a 3D texture, to the user it has nothing to do with the concept of a 3D texture.


Maybe to you ;). A cube map texture is a 3D texture: it is 6 2D textures in layers. Looking at the traditional GL calls, the 1D, 2D, 3D are about the number of dimensions of an array, that array being the raw texture data. How that data is used is determined by the texture target type. With that in mind, a cube map texture is a 3-dimensional array of pixels with dimensions NxNx6. Likewise a cube map array is a 3-dimensional array with dimensions NxNx(6L) where L is the number of layers.

Also note that the Get calls of the proposal can read from the currently bound FBO, which means one can select from which render layer to read.

elanthis
04-18-2011, 01:07 AM
Maybe to you ;). A cube map texture is a 3D texture, it is 6 2D textures in layers.

No, it isn't, not at all. You don't seem to understand what the difference between a 2D texture and a 3D texture really is. In particular, think about what the mipmap levels mean for a 3D texture, or more specifically what magnification/minification means for a 3D texture. You'll quickly realize that a cubemap is not in any way at all equivalent to a 3D texture.

What does make sense is to equate a cubemap with a texture array of 6 2D textures. That is in fact exactly how D3D 10 does it, which indicates that's exactly how hardware actually works with cubemaps these days.

V-man
04-27-2011, 07:22 PM
I would do away with the let's pass many parameters to "CreateTexture".
For example :


void TextureData2D(uint texture,
uint mipmap,
int x, int y, sizei width, sizei height,
uint unpacking_specification,
uint buffer_object,
enum format, enum type, const void *pixels);


would turn into


GLTextureObject1 object;
object.Version=GL_TEXTURE_STRUCT_VERSION_1;
object.Type=GL_TEXTURE_2D;
object.TextureID=0;
object.MinMipmap=0;
object.MaxMipmap=0;
object.Width=256;
object.Height=128;
object.Pixels=NULL;
object.otherstuff=otherstuff;
glCreateTexture(&object);


The driver checks object.Version and it will know what variables to expect. Later version of GL can have newer versions like GLTextureObject2 and GLTextureObject3, etc.

You can also have a function to allocate mipmaps, one by one


AllocateMipmap(&object, 5); //Allocate mipmap 5
AllocateMipmap(&object, 4); //Allocate mipmap 4


object.MinMipmap and object.MaxMipmap get updated by the driver. The driver can also store whether the texture state is valid.

It needs more work, but I prefer the passing of a structure.
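This versioned-struct dispatch can be mocked up in plain C. Everything below is hypothetical (none of these names exist in GL), and the "driver" is just a stub that inspects the leading Version field, which every version of the struct would have to keep first:

```c
#include <assert.h>
#include <stddef.h>

enum { TEXTURE_STRUCT_VERSION_1 = 1, TEXTURE_STRUCT_VERSION_2 = 2 };

struct TextureDesc1 {
    int Version;                 /* must stay the first member forever */
    int Type;
    unsigned TextureID;
    int MinMipmap, MaxMipmap;
    int Width, Height;
    const void *Pixels;
};

struct TextureDesc2 {            /* a later GL version appends fields */
    struct TextureDesc1 v1;
    int Samples;
};

/* Stub "driver" entry point: dispatch on the version tag. */
static int create_texture(const void *desc)
{
    switch (*(const int *)desc) {
    case TEXTURE_STRUCT_VERSION_1: return 1;
    case TEXTURE_STRUCT_VERSION_2: return 2;
    default:                       return 0;  /* unknown version: error */
    }
}
```

Note the contract this creates: the binary layout is frozen per version, which is exactly the extension-mixing problem raised later in the thread.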

mfort
04-27-2011, 11:53 PM
The OpenGL API does not use C-style structures, mostly for portability reasons. But something like this would probably be feasible:


GLint params[] = {
GL_TEXTURE_WIDTH, w,
GL_TEXTURE_HEIGHT, h,
GL_TEXTURE_INTERNAL_FORMAT, i,
...
0 };
glCreateTexture(textureObject, GL_TEXTURE_2D, params);
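Such a zero-terminated key/value list parses in a few lines. The attribute tokens below are invented for illustration, but the pattern is the same one EGL and GLX use for their attribute lists, and it gives default values for free:

```c
#include <assert.h>

/* Hypothetical attribute tokens; the numeric values are arbitrary. */
enum { ATTR_WIDTH = 0x1001, ATTR_HEIGHT = 0x1002,
       ATTR_INTERNAL_FORMAT = 0x1003 };

/* Scan a zero-terminated {key, value, key, value, ..., 0} list and
   return the value for 'key', or 'fallback' when the key is absent. */
static int attr_lookup(const int *params, int key, int fallback)
{
    for (; params[0] != 0; params += 2)
        if (params[0] == key)
            return params[1];
    return fallback;
}
```

Unspecified keys simply fall back to defaults, and new tokens can be added in later versions without changing any function signature.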

V-man
04-28-2011, 08:11 AM
What portability issues? I imagine that some compilers might pad the structure which can be turned off in the compiler.

Also, why are there 1D textures? They seem useless because they are basically a 2D texture. If we are going to have 1D textures, then why not have a 0D texture whose dimension is 0. It is a singularity, a hole in space and time. If you try to access it, you get sucked into it.

arekkusu
04-28-2011, 09:48 AM
Why have 2D textures if you already have 3D textures?

Because it saves an interpolator.

kRogue
04-29-2011, 11:10 PM
No, it isn't, not at all. You don't seem to understand what the difference between a 2D texture and a 3D texture really is. In particular, think about what the mipmap levels mean for a 3D texture, or more specifically what magnification/minification means for a 3D texture. You'll quickly realize that a cubemap is not in any way at all equivalent to a 3D texture.


I likely should have been more clear and succinct: when specifying only mipmap level 0 (and letting glGenerateMipmap generate all the other mipmaps), then from a data point of view a 3D texture, a 2D texture array, and a cube map texture are the same, a 3-dimensional array. Going one step further to specifying mipmaps, it is the dimensions of the mipmaps that are not the same: for a "usual" 3D texture all dimensions are cut in half, for 2D texture arrays (and cube maps) only the first two dimensions are (and likewise only the 1st dimension for 1D texture arrays). So even looking at setting the image data of mipmaps, the API points are the same; only the rules for the expected values are different.
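The difference in halving rules is easy to pin down in code. A sketch, assuming the usual max(1, floor(s/2)) rule per level:

```c
#include <assert.h>

static int halve(int s) { int t = s / 2; return t > 0 ? t : 1; }

/* Dimensions of mipmap level 'level', given base dims[3]. A 3D texture
   halves all three extents per level; a 2D array texture (and, by the
   6-layer analogy, a cube map) halves only width and height, keeping
   the layer count fixed. */
static void mip_dims(int is_array, int level, int dims[3])
{
    for (int i = 0; i < level; ++i) {
        dims[0] = halve(dims[0]);
        dims[1] = halve(dims[1]);
        if (!is_array)
            dims[2] = halve(dims[2]);
    }
}
```

So a 256x256x6 cube map at level 2 is 64x64x6, while a true 256x256x8 3D texture at level 2 is 64x64x2: same entry point, different sizing rules.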

What I really had in mind was exactly what you said at the end:


What does make sense is to equate a cubemap with a texture array of 6 2D textures. That is in fact exactly how D3D 10 does it, which indicates that's exactly how hardware actually works with cubemaps these days.



Going on:


What portability issues? I imagine that some compilers might pad the structure which can be turned off in the compiler.

My main worry is not in the realm of 32 bits, but in the realm of 64 bits... if memory serves correctly, what is meant by long depends on the compiler in the 64-bit world (though one can use int32_t, etc. instead). Naturally, padding is a compiler-specific thing, which can be a big deal on some boxes. Under x86, unaligned access is OK (but horribly slow), whereas on ARM unaligned access crashes (with a bus error, if memory serves).
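A minimal illustration of the padding concern: the compiler is free to insert padding between members, so a struct's binary layout is an ABI property rather than something the header alone pins down.

```c
#include <assert.h>
#include <stddef.h>

/* On most ABIs 'value' is aligned to _Alignof(int), leaving a padding
   gap after 'tag'; how big that gap is depends on compiler and target. */
struct Mixed {
    char tag;
    int  value;
};
```

A driver compiled with one layout and an application compiled with another would silently read each other's fields at the wrong offsets.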

Lastly, by not giving a structure, but doing it the X11 way, one gains the following: default values for unspecified arguments, and one can add entries without updating any header files.

Though I freely admit that it sucks to use the X11 way.

kyle_
04-30-2011, 02:43 PM
Why have 2D textures if you already have 3D textures?

Because it saves an interpolator.

Saves what?

V-man
04-30-2011, 03:39 PM
My main worry is not in the realm of 32 bits, but in the realm of 64 bits... if memory serves correctly, what is meant by long depends on the compiler in the 64-bit world (though one can use int32_t, etc. instead). Naturally, padding is a compiler-specific thing, which can be a big deal on some boxes. Under x86, unaligned access is OK (but horribly slow), whereas on ARM unaligned access crashes (with a bus error, if memory serves).

Lastly, by not giving a structure, but doing it the X11 way, one gains the following: default values for unspecified arguments, and one can add entries without updating any header files.

Though I freely admit that it sucks to use the X11 way.

What do you mean by "long" and 32 bit and 64 bit?
A structure is a structure in C and C++. Ditto for VB and Java.

Default values are not a good idea. It is lazy programming, IMO.

Structures are the DirectX way and it is pretty good (although they don't use them everywhere).
Even if you aren't doing 3D programming, most likely a program will have a structure to group together values for something. You will likely pass your object to some function for processing.

Alfonse Reinheart
04-30-2011, 05:29 PM
Default values are not a good idea. It is lazy programming, IMO.

All programming is lazy. The drive towards laziness is why programming languages other than assembly exist. It's why scripting languages exist. It's why garbage collection exists.

Having sensible default parameters, particularly for a function that effectively takes dozens of them, is entirely reasonable. It saves unnecessary keystrokes, and prevents a class of runtime errors (namely, using the wrong defaults).

kRogue
05-01-2011, 06:17 AM
What do you mean by "long" and 32 bit and 64 bit?

If memory serves correctly, and that is a BIG FREAKING IF, the keyword "long" meant a 32 bit integer under some 64-bit compilers and under others it meant a 64-bit integer.



A structure is a structure in C and C++. Ditto for VB and Java.

All a structure is, is a bunch of bytes, right? The ugly part is the padding. Admittedly, for a fixed hardware platform, how something should be padded is pretty well fixed, but alas... I am paranoid. The main icky I have with a struct is that if one wants to extend it... though that is no concern at all if not using structs to begin with :whistle: Also, if memory serves correctly, a long, long time ago, like when GL was at version 1.1, some C compilers did not let one pass structs to a function (though at this day and age that is a moot point entirely, since we've got a spec for the C programming language that mandates letting one pass structs to functions). At any rate, the tipping point between using several arguments and a struct is a matter of taste; for my taste, I freely admit that the current jazz I wrote at the start of this post does not get there, but is awfully close.

On the other hand, if someone really wanted to use structs, then one could make macros or inline functions that take the values from a struct and make the GL call.






Default values is not a good idea. It is lazy programming, IMO.

I can say I am on the fence on this; it just depends on the situation to me. Often enough, a fair number of parameters are going to be the same, so those are good candidates for default values (witness much of the GLX and EGL APIs, for example).

kyle_
05-01-2011, 06:33 AM
What do you mean by "long" and 32 bit and 64 bit?

If memory serves correctly, and that is a BIG FREAKING IF, the keyword "long" meant a 32 bit integer under some 64-bit compilers and under others it meant a 64-bit integer.
Why would you care about "long" in OpenGL when it defines its own types?

kRogue
05-01-2011, 12:14 PM
Why would you care about "long" in OpenGL when it defines its own types?


You are right: the GL header files have a system of macro magic to detect the right type for 64-bit integers, and even at that there is no "GLLong" type; the GLint types are to be 32-bit integers, the 64-bit integer types have the labels GLint64/GLuint64, and GLsizeiptr is (correctly) typedefed as ptrdiff_t. So, yes, epic idiot post on my part on the 64-bit thing...

Stephen A
05-03-2011, 02:58 PM
Naked structs are evil because they are nigh impossible to port across architectures, compilers, programming languages and operating systems. The issues are very real and impossible to solve in a general manner - which is why every programming manual worth its salt will explicitly warn against naked fields in structures.

The only sane solution would be to define a new object type with get/set functions to hold the necessary data:


GLTexObject* texture = glGenTextureObject(); // or a plain int for symmetry with older APIs
glSetTextureType(texture, GL_TEXTURE_2D);
glSetTextureWidth(texture, 256);
glSetTextureHeight(texture, 256);
...
glCreateTexture(texture);

This translates perfectly to most object-oriented languages in use today. E.g. C#:


var texture = new Texture
{
Type = TextureType.Texture2d,
Width = 256,
Height = 256
};

elanthis
05-03-2011, 04:30 PM
The only sane solution would be to define a new object type with get/set functions to hold the necessary data:

I wholeheartedly agree. ALL object types should be opaque pointers. It's not just object sizes and portability, but also future extension support, guaranteeing users can't attempt to construct objects manually, and implementation flexibility.

I mean, NVIDIA's idea of a texture object will probably look different than AMD's, which will look different than Mesa 3D's, which will look different than Intel's, which will look different than Imagination Technology's, etc. etc.

If you try to define the object as some "obvious" common fields then you're basically forcing them to be near-useless proxy objects with a ton of overhead, and you're basically right back to having the retarded GLint-based object ids except now you're stuck with a single definition of the object's properties forever. Using accessor functions gives the implementation the ability to structure and supplement the core object members however it needs to.

Opaque pointer types and accessor functions is the only way to go.

The only big issue with the opaque pointer types versus object ids is that in C there is no way to represent inheritance of object types. It would be great if you could have GLtexture* variables automatically convert to GLresource* variables so you could have a single set of ref/unref/lock/unlock/delete functions rather than needing GLTextureDelete, GLBufferDelete, etc. Especially for textures it'd be nice to have GLtexture2D that converts automatically to GLtexture. But that's just not possible in C, and at best you need macros/functions that convert from one type to another, e.g. GL_TEXTURE2D_TO_RESOURCE() and the like. Or you just need to duplicate functions. However, since OpenGL already forces you to duplicate functions because it uses absolutely zero of the advantages that the GLint id system could have bought, it's kind of a moot point. Get rid of the disadvantages of the GLint ids and just use opaque pointers. Please.
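The opaque-pointer-plus-accessors style being argued for looks like this in C. Every name below is invented for illustration; in a real driver the struct body would live only in the implementation file, so each vendor could lay it out and supplement it however it likes:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct GLtexture GLtexture;   /* users only ever see the pointer */

struct GLtexture {                    /* "driver-side" body, normally hidden */
    int width, height;
};

static GLtexture *texture_create(void)
{
    return calloc(1, sizeof(GLtexture));
}

static void texture_set_width(GLtexture *t, int w) { t->width = w; }
static int  texture_get_width(const GLtexture *t)  { return t->width; }
static void texture_destroy(GLtexture *t)          { free(t); }
```

Since the application never sees sizeof(GLtexture), the implementation can add or reorder fields freely without breaking compiled programs, which is the extensibility argument in miniature.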




Default values are not a good idea. It is lazy programming, IMO.

All programming is lazy. The drive towards laziness is why programming languages other than assembly exist. It's why scripting languages exist. It's why garbage collection exists.

No, it isn't.

This reminds me of the argument I once heard from an idiot who claimed that object inheritance was a useless feature because it was just language bloat to work around cut-n-paste.

The problem is, we're far less concerned about "saving programmer time and effort" and far more concerned about "increasing code quality and efficiency."

The more crap the programmer has to type in, the more room for mistakes and bugs there is. The more code that is duplicated, the larger the code size of the application and the slower it runs on modern systems. The more duplication of logic, the more likelihood of duplicated bugs being fixed only in some places and not all of them. Etc.

Same goes for techniques like garbage collection. The idea that garbage collection does nothing more than remove the need to manually manage memory is completely incorrect. It can be used in places where manual memory management is feasible, but it can also be used in places where manual memory management just isn't possible, or is so error-prone as to be idiotic to attempt. Most applications written in C/C++ have some form of garbage collection anyway, although note that "automatic garbage collection" does not mean "mark and sweep garbage collector built into the language." Reference counting smart pointers in C++ are an automatic garbage collection implementation, after all.

That said, I agree that defaults are good and make sense. I don't like default arguments to functions in languages like C++, but then I don't like functions that have enough arguments to need defaults. If we're talking about constructing objects and multi-function APIs for configuring values before construction, then all values that possibly can have defaults should have defaults.

Alfonse Reinheart
05-03-2011, 05:20 PM
Naked structs are evil because they are nigh impossible to port across architectures, compilers, programming languages and operating systems. The issues are very real and impossible to solve in a general manner - which is why every programming manual worth its salt will explicitly warn against naked fields in structures.

I've never seen this advice in a book before.

Oh, they'll tell you not to write them to disk or send them over the Internet. But that's the only portability problem with structs; using "naked fields" in a struct within a codebase is perfectly functional.

What you may be confusing is encapsulation, which has nothing to do with portability. It has to do with maintenance (which is at issue here). It allows you to change the representation of an object without changing its interface. You shouldn't expose members of real objects because at some later date you might want to change those fields. And then everyone using them would be screwed.

Also, OpenGL has not been ported to non-C languages. Oh, the API has adapters for many languages, but they're all just layers on top of the actual C API. There is no native Python OpenGL API; IHVs don't have hooks for getting at a C# implementation of OpenGL.

If OpenGL had structs, it would work no differently. Other languages would simply find ways to adapt. JavaScript had to have new objects created for WebGL to be used as source arrays for buffer objects.


The only sane solution would be to define a new object type with get/set functions to hold the necessary data:

And now, finally, we've come full circle. Because this is exactly the sort of thing, these attribute objects, that was going to be a key feature of Longs Peak.

Before the ARB put a bullet into it.

Or, to put it another way, it's not going to happen. The ARB will not rewrite the texture creation and allocation API. They've tried it twice, and both times it died.

Wanting it to happen will not change that. Writing large posts that detail specific ideas will not change that. Writing your own OpenGL 5.0 specification will not change that.

We all want a better API. But the ARB has made it plainly evident that we're not getting one. All we will ever get are incremental improvements to functionality. The absolute most we might get is DSA, but even that's pushing it.


The problem is, we're far less concerned about "saving programmer time and effort" and far more concerned about "increasing code quality and efficiency."

Except that we're not. Code isn't getting more efficient; it's getting less efficient. Things like Java/C#, scripting languages, etc are all less efficient than C/C++. But people use them. Because it's easier.

Why does "code quality" matter? Because someone has to maintain that code, and it is easier to maintain clean code than ugly, difficult-to-understand code. It's easier to write and debug clean code as well.

In short: laziness: wanting to make things easier on ourselves.

Now perhaps you have a problem with the term "laziness," equating it with a negative. But it is still accurate: we want to have to do as little work as possible, so we use the languages that allow us to do as little work as possible.


Reference counting smart pointers in C++ are an automatic garbage collection implementation, after all.

No, they really aren't. I've never heard anyone equate reference-counted smart-pointers to actual garbage collection before. They are not the same thing.

Garbage collection is not a generic term for any automatic memory management system. It refers to a specific scheme for dealing with memory automatically, one that is not intrusive and generally does not require writing extra code (with the exception of possibly having weak references).

kRogue
05-04-2011, 02:16 AM
And now, finally, we've come full circle. Because this exact the sort of thing, these attribute objects, that was going to be a key feature of Longs Peak.

Before the ARB put a bullet into it.

Or, to put it another way, it's not going to happen. The ARB will not rewrite the texture creation and allocation API. They've tried it twice, and both times it died.

Wanting it to happen will not change that. Writing large posts that detail specific ideas will not change that. Writing you own OpenGL 5.0 specification will not change that.

We all want a better API. But the ARB has made it plainly evident that we're not getting one. All we will ever get are incremental improvements to functionality. The absolute most we might get is DSA, but even that's pushing it.


*OUCH*. Might be true too; that is what makes the *OUCH* that much more painful. Though the only ones that know what happened to Longs Peak are the ARB/Khronos. Maybe it was shot down because it was too much too soon, or maybe something else. We can only guess :sick:

Oh well, I will still (eventually) clean the spec up.. just so horribly busy now.
Sighs.

Dark Photon
05-04-2011, 05:50 AM
I wouldn't take it so hard kRogue. There will always be two camps of developers. Those that are happy to flush the API down the toilet every few years in search of "the next great thing" (just because Microsoft says so) and those that are too busy adding useful features and perf enhancements for the latest GPUs to their company's product lines to waste time/money/effort with that nonsense and the expensive rewrites and maintenance costs that result (not to mention orphaning customers on older hardware).

Direct3D is perfect for the former group. OpenGL is perfect for the latter.

For the latter to put up with a full API flush and restart, it has to offer something really revolutionary (different and compelling, worthy of a totally new model). Besides OpenCL/CUDA/Compute (which are their own new API), GPU tech has pretty much just been evolutionary since SM4/GL3.

(Re compelling: Don't just tell me, but show me that this new model halves my frame times [or better] without a hardware change, letting me push a bunch more and/or more realistic content to our users, and you've got my attention. ...Hmmm... reminds me of NVidia bindless...)

V-man
05-04-2011, 08:27 AM
I wholeheartedly agree. ALL object types should be opaque pointers. It's not just object sizes and portability, but also future extension support, guaranteeing users can't attempt to construct objects manually, and implementation flexibility.

I mean, NVIDIA's idea of a texture object will probably look different than AMD's, which will look different than Mesa 3D's, which will look different than Intel's, which will look different than Imagination Technology's, etc. etc.


Why would users construct objects manually?

No, they won't look different. We aren't going to access some objects that are behind the scenes inside the driver. That's not the point of having a "struct" for textures. You missed the point entirely.



If you try to define the object as some "obvious" common fields then you're basically forcing them to be near-useless proxy objects with a ton of overhead, and you're basically right back to having the retarded GLint-based object ids except now you're stuck with a single definition of the object's properties forever. Using accessor functions gives the implementation the ability to structure and supplement the core object members however it needs to.


Overhead? Well, we certainly would not want our programs to run at 1 FPS just because we introduce the concept of "struct".
Accessor functions? Look at Alfonse's post.



The only big issue with the opaque pointer types versus object ids is that in C there is no way to represent inheritance of object types. It would be great if you could have GLtexture* variables ...snip


You are over complicating it.

Also, like I said in my post, we would have different versions of the texture struct if a new feature is to be introduced with a new GL version. See my older post for details.

As for defaults, they would disappear if a struct is introduced. You would be forced to set up each member. IMO, that is a good thing: there isn't a huge number of variables associated with a texture, but there is a good number of them, enough to warrant grouping them together and making a single call to a glCreateTexture function.

skynet
05-04-2011, 11:48 AM
I have actually been involved in designing a C rendering API. When we started, the first slides on Longs Peak came out, which introduced Templates plus Get/Set functions. We embraced this mechanism and use it with great success.


Also, like I said in my post, we would have different versions of the texture struct if a new feature is to be introduced with a new GL version. See my older post for details.

A "versioned" struct would be pure hell. How do you mix and match extensions with versioned structs? Mind you, a specific version of the struct would refer to a _specific_ (binary) layout. Each combination of mixed extensions would need its own struct version. The binary layout of the struct becomes a contract that both the driver and the application have to match, otherwise you get crashes. You certainly don't want that.

Extensibility is one of _the_ key features that opaque structs offer. OpenGL is all about using extensions and mixing them, sometimes even runtime-configurable. Binary layouts of structs are not runtime-configurable.

The Get/Set mechanism also allows the driver to check for valid values _while_ you are filling the Template, which greatly helps with error detection.


As for defaults, they would disappear if struct is introduced. You would be forced to set up each member. IMO, it is a good thing since there isn't a huge number of variables associated with a texture but on the other hand, there is a good number to of variables to warrant grouping them together and making a single call to a glCreateTexture function.

Defaults are A Good Thing. Don't force the user to write needless boilerplate code just to fill in default values. An API needs to be easy to use but hard to misuse - you propose the other way around.

Alfonse Reinheart
05-04-2011, 04:26 PM
There will always be two camps of developers. Those that are happy to flush the API down the toilet every few years in search of "the next great thing" (just because Microsoft says so) and those that are too busy adding useful features and perf enhancements for the latest GPUs to their company's product lines to waste time/money/effort with that nonsense and the expensive rewrites and maintenance costs that result (not to mention orphaning customers on older hardware).

This is a gross mischaracterization of reality.

First, code maintenance has "time/money/effort" costs too. And if your codebase is a 20-year-old pile of hacks built on top of an API that looks like it came out of someone's colon, then throwing it out or just rewriting major parts of it will likely be cheaper in the long-run.

Second, Microsoft did not frequently change the API in D3D versions due to capriciousness. They did it out of necessity. Before the general stabilization of D3D 8/9, D3D was something of a mess. D3D 3.0 was utter garbage that Microsoft bought from the programming equivalent of a hobo living in an alley. D3D 5 actually looked like a rendering API, but it provided no mechanism for developers to take advantage of hardware T&L. They therefore had to alter the API to allow for that. Hence D3D 7 and the first vertex buffers. Shaders came along, so once again they had to make room for it. Thus D3D 8.

Since then, Microsoft has been fairly consistent with things. The 8-9 era lasted a rather long time, and the API differences were less changes and more additions. D3D 10 was a big change certainly, as they left behind all the legacy cruft and embraced uniform buffers wholeheartedly. But D3D 11 is just more, rather than different. Not just in terms of functionality, but API as well.

Third, the biggest problem with your statements is your binary thinking (http://en.wikipedia.org/wiki/False_dichotomy): either the API changes "every few years", or it never changes. This is a strawman: a deliberate simplification of reality designed to promote one's own viewpoint while simultaneously denigrating the opposition as being clearly deranged.

Asking for an API revision is not the same thing as asking for constant API revisions. These aren't even in the same ballpark.

Wanting the API to be reasonably easy to understand is not the same thing as wanting to "flush the API down the toilet every few years". All it would have taken is the success of one of the API revision proposals in OpenGL history. The original 3DLabs OpenGL 2.0. Or the Longs Peak proposal. Either one could have fixed this.

There would not have been a prolonged string of constant API breakages. There would be a single compatibility gap. And if we had used the original 3DLabs proposal (obviously improved and modified), that compatibility gap would have passed into irrelevance by now for pretty much everyone. In 3 years, nobody would care about the Longs Peak compatibility gap either, especially when DX11-class hardware can at this very moment be found on CPUs.

Or to put it another way, short-term thinking is short-term.

V-man
05-05-2011, 09:39 AM
A "versioned" struct would be pure hell. How do you mix and match extensions with versioned structs? Mind you, a specific version of the struct would refer to a _specific_ (binary) layout. Each combination of mixed extensions would need its own struct version. The binary layout of the struct became a contract that both the driver and the application have to match, otherwise you get crashes. You certainly don't want that.


Which extension? Or should I ask, what's new?
There is nothing new.
A 2D texture needs the following parameters:
width
height
format
border
and whether it has mipmaps (a full mipmap chain, or some range like 0 to 4)
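That parameter list could be sketched as a single descriptor struct. The names below are illustrative, not from any real GL header, and glCreateTexture2D is hypothetical:

```c
#include <assert.h>

/* Hypothetical descriptor grouping exactly the parameters listed above.
 * With a struct there are no hidden defaults: every member must be
 * filled in before the single creation call. */
typedef struct {
    int width;
    int height;
    int internalFormat;   /* a GL_RGBA8-style enum value */
    int border;
    int baseMipLevel;     /* mipmap range, e.g. levels 0 .. 4 */
    int maxMipLevel;
} GLTextureDesc2D;

/* intended usage (glCreateTexture2D is hypothetical):
 *   GLTextureDesc2D desc = { 256, 256, RGBA8, 0, 0, 4 };
 *   GLuint tex = glCreateTexture2D(&desc);
 */
```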




//Assuming something new came along for GL 5.1 and 5.2
if(GL_VERSION_5_2) // 5.2 and up
{
use_struct_version_3();
}
else if(GL_VERSION_5_1)
{
use_struct_version_2();
}
else if(GL_VERSION_5_0)
{
use_struct_version_1();
}
else
{
use_old_style();
}


or perhaps


//Since I don't know that GL 5.1 and 5.2 exist
if(GL_VERSION_5_0) // 5.0 and up
{
use_struct_version_1();
}
else
{
use_old_style();
}

kyle_
05-05-2011, 12:49 PM
Yeah, version checks will be there.

As I see it, the main advantage of using a struct is the implicit immutability of the object right after its creation.

With separate functions specifying each object, we have what we have with textures in GL now. Now, it wouldn't hurt much to have an entry point that would cause an object to become immutable (and perform some sanity checks at the same time).
This I would see as 'almost good' (if the user doesn't provide data for the texture right away).

Realistically speaking, I think this is the most we can get out of GL - structs will never make it to the spec, and I think that adding a specialized texture creation function may have its problems too.

Don't know if there's much to win on the driver side. If it's smart enough, it can probably avoid validating texture state too much, so the main difference is ease of API use, which will be hidden deep in the application anyway.
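The "entry point that makes the object immutable" idea can be sketched like this. All names here are hypothetical; as it happens, ARB_texture_storage's glTexStorage2D later took a similar immutable-format approach:

```c
#include <assert.h>

/* Sketch of a seal call: setup calls mutate the object, then one call
 * runs the sanity checks and locks further state changes. */
typedef struct {
    int width, height, format;
    int sealed;
} Texture;

static int textureSetWidth(Texture *t, int w)
{
    if (t->sealed)
        return 0;                   /* immutable after sealing */
    t->width = w;
    return 1;
}

static int textureSeal(Texture *t)
{
    /* validation happens once, here, instead of at every draw call */
    if (t->width <= 0 || t->height <= 0 || t->format == 0)
        return 0;                   /* sanity checks failed */
    t->sealed = 1;
    return 1;
}
```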

skynet
05-05-2011, 01:22 PM
//Assuming something new came along for GL 5.1 and 5.2
if(GL_VERSION_5_2) // 5.2 and up
{
use_struct_version_3();
}
else if(GL_VERSION_5_1)
{
use_struct_version_2();
}
else if(GL_VERSION_5_0)
{
use_struct_version_1();
}
else
{
use_old_style();
}

And this is only for core versions, now think of extensions:



if(GL_VERSION_5_2 && EXT_A)
{
use_struct_version_3_extA();
}
if(GL_VERSION_5_2 && EXT_B)
{
use_struct_version_3_extB();
}
if(GL_VERSION_5_2 && EXT_A && EXT_B)
{
use_struct_version_3_extA_B();
}
if(GL_VERSION_5_2) // 5.2 and up
{
use_struct_version_3();
}
else if(GL_VERSION_5_1)
{
use_struct_version_2();
}
else if(GL_VERSION_5_0)
{
use_struct_version_1();
}
else
{
use_old_style();
}


See my point? And that is not even covering the code inside use_struct_version_X() (and all the default-value setting code).

Mind you, I am not only thinking about creating textures here, but about all sorts of objects using this mechanism.