PDA

View Full Version : Higher C++ interface for OpenGL



thAAAnos
12-19-2003, 03:57 AM
I'm thinking of creating a basic but higher-level interface for OpenGL in C++. The main idea is
to handle the OpenGL context and calls as
a stream, so one could write something like
gl::rc << gl::begin(gl::POINTS)
<< p << q << gl::end;
where, say, p and q are points that overload the
<< operator.

My question is: does any of you know of something like that already (and is it open source)?
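For illustration, here is a minimal sketch of what such a stream wrapper could look like. Nothing below calls real OpenGL: the "render context" just records commands into a log so the operator<< plumbing can be shown without a GL context, and all the names (gl::rc, gl::begin, gl::end, Point) are invented from the shape of the example above, not taken from an existing library.

```cpp
#include <cassert>
#include <vector>

namespace gl {

enum Mode { POINTS, LINES };

struct BeginTag { Mode mode; };
inline BeginTag begin(Mode m) { return BeginTag{m}; }
struct EndTag {};
static const EndTag end = {};

struct Point { float x, y, z; };

class RenderContext {
public:
    enum Op { BEGIN = -1, VERTEX = 1, END = -2 };
    std::vector<int> log;  // what "reached the pipeline", in order

    RenderContext& operator<<(BeginTag)     { log.push_back(BEGIN);  return *this; } // would call glBegin
    RenderContext& operator<<(const Point&) { log.push_back(VERTEX); return *this; } // would call glVertex3f
    RenderContext& operator<<(EndTag)       { log.push_back(END);    return *this; } // would call glEnd
};

static RenderContext rc;  // stands in for the gl::rc above

} // namespace gl
```

With that in place, the example compiles as written: gl::rc << gl::begin(gl::POINTS) << p << q << gl::end;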

3k0j
12-19-2003, 05:35 AM
foo(bar, x, y, z);

hmmm..., we need something higher...

bar->foo(x, y, z);

Good! But we can get higher than that..

fubar << foo(bar, x, y, z);

Great!! But why stop now? Let's go even HIGHER!!

fubar << foo << bar << x << y << z;

AWESOME!!!!!
Now that's a higher C++ interface for OpenGL \o/ !!

:)

[This message has been edited by 3k0j (edited 12-19-2003).]

thAAAnos
12-19-2003, 08:15 AM
Ok, my example is lame, but if I had worked
all the details out, I wouldn't need to post,
right?
Something like...



template< typename T, class Stream >
void somedrawfunc(Stream& out)
{
    ...
    Container< Point<T> > &points = ...;
    out << "Here are the points" << points;
    ...
}
and then something like
gl::RCstream<DC> rc(dc);
gl::DLstream< RCstream<DC> > list(rc);
gl::CSstream< RCstream<DC> > clien(rc);
...
somedrawfunc< float >(rc);
somedrawfunc< double >(list);
somedrawfunc< mpz_class >(clien);
somedrawfunc< int >( std::cout );
somedrawfunc< int >( std::fstream("point.bmp"));


And yes, << could be syntactic sugar, but it
makes more sense to me at least like that...

Won
12-19-2003, 08:43 AM
I'm working on something similar. I'm calling it "Ozone".

Object-Oriented OpenGL -> O^3 -> Ozone.
If I get permission from the employer, I'd like to open source it.

You might want to check out XEngine, too.

-Won

Tom Nuydens
12-19-2003, 10:10 AM
But... why?!?

Isn't your energy better spent writing useful and cool applications with the existing C interface, rather than trying to write ugly and contrived C++ wrappers that no sane person will want to use?

-- Tom

Won
12-19-2003, 10:23 AM
This is a good point. It is very difficult to write a C++ wrapper that doesn't somehow sacrifice performance or readability.

So what's the point?

First of all, I DO plan on using my own wrapper, otherwise why would I bother?

Second, to say that it is merely a "contrivance" tacitly denies the possibility that a wrapper can improve productivity/code legibility etc. Quite the presumption. There are many benefits to having that extra layer of indirection.

Third, immediate mode is something of an abomination, anyway. I wouldn't really bother too much wrapping that in C++; it would be more for the object-oriented aspects of OpenGL that are crippled with a C-like interface.

Obviously, the wrapper won't be as familiar as straight-up OpenGL code, but I'm hoping the advantages will outweigh this disadvantage enough that someone besides the authors will use it.

-Won

cass
12-19-2003, 11:49 AM
In general, the only place I've found an OO wrapper useful in OpenGL is for managing the lifespan of "objects". Specifically texture objects, display lists, and program objects.

And with those, I would probably do it differently today than I did then.

The class of code that I have found very useful to keep is for doing things that OpenGL doesn't do for you, like linear algebra and object interaction. It would be really great if OpenGL included more support in this arena (via a major GLU update or another *standard* companion library). OpenGL does not compete well with D3D in this regard, and it is a barrier to entry for OpenGL developers.

Thanks -
Cass

Korval
12-19-2003, 11:59 AM
Something like...

Well, that has to be the worst interface for drawing triangles imaginable. Not only does it put horrific overhead in a performance-critical section of code, it reduces the readability of that section of code by an order-of-magnitude. Additionally, it continues the poor C++ paradigm of overloading the bitshift operator for non-bitshift tasks.


Second, to say that it is merely a "contrivance" tacitly denies the possibility that a wrapper can improve productivity/code legibility etc. Quite the presumption. There are many benefits to having that extra layer of indirection.

Such as? Besides slowing down performance-critical sections of code?


Third, immediate mode is something of an abomination, anyway. I wouldn't really bother too much wrapping that in C++; it would be more for the object-oriented aspects of OpenGL that are crippled with a C-like interface.

You don't have the ability to "un-cripple" them. The way to uncripple them would be to have functions that, effectively, took an object as a parameter. However, the OpenGL interface doesn't take the object as a parameter, so you have to keep re-binding the object each time you want to call a function. I don't know what kind of penalty object binding costs, but I can't imagine that it is cheap in all cases. At which point, once again, you're dropping performance in performance-critical sections of code.

[This message has been edited by Korval (edited 12-19-2003).]

Tom Nuydens
12-19-2003, 02:03 PM
Just to make myself clear: I think that putting a thin C++ wrapper around the GL is a complete waste of time. Especially if it looks as ugly as what you're proposing.

But I'm not at all opposed to the idea of having, say, a reusable texture class. You could add in some functionality to load images from disk, or you could have separate classes for disk-based textures and dynamically rendered textures, and share some code between them through inheritance. Similarly, encapsulating shaders or vertex arrays in classes might facilitate writing multiple code paths for different HW. Nobody will argue with you if you do any of this. I wouldn't call this a "C++ interface for OpenGL" or a "wrapper" anymore though.
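One way that suggestion could be sketched (a hedged guess, not anyone's actual code): a base class owning the GL texture name, with separate subclasses for disk-based and dynamically rendered textures. The gl* calls are replaced by counting stubs so the structure compiles without a context; every name here is invented for illustration.

```cpp
#include <cassert>
#include <string>

// Counting stubs in place of glGenTextures/glDeleteTextures.
static unsigned g_next_name = 0;
static int g_textures_alive = 0;
static void stub_glGenTextures(unsigned* name) { *name = ++g_next_name; ++g_textures_alive; }
static void stub_glDeleteTextures(unsigned)    { --g_textures_alive; }

class Texture {
public:
    Texture() { stub_glGenTextures(&name_); }
    virtual ~Texture() { stub_glDeleteTextures(name_); }
    Texture(const Texture&) = delete;             // a GL name must not be freed twice
    Texture& operator=(const Texture&) = delete;
    unsigned name() const { return name_; }
private:
    unsigned name_ = 0;
};

class DiskTexture : public Texture {
public:
    explicit DiskTexture(std::string path) : path_(std::move(path)) {
        // real code would decode the image and upload it with glTexImage2D
    }
private:
    std::string path_;
};

class RenderTexture : public Texture {
public:
    RenderTexture(int w, int h) : w_(w), h_(h) {
        // real code would allocate storage for render-to-texture here
    }
private:
    int w_, h_;
};
```

The shared base gives exactly the lifespan management and code sharing described above, without wrapping any per-vertex calls.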

-- Tom

JanHH
12-19-2003, 03:43 PM
I think the fact that OpenGL is an absolutely non-object-oriented state machine is a pain a lot of the time, when you are trying to write a rather large application with a neat class structure that uses OpenGL. Keeping track of "which objects need which OpenGL states in which condition, and where are all these states at the moment one certain object is called to render itself" is chaos, again and again. So a clear object-oriented solution to this would be a good thing (and again, it would not have to wrap time-critical code parts like the stuff that happens between glBegin() and glEnd()), but it is also probably impossible without something like a scene graph.

davepermen
12-19-2003, 04:11 PM
well.. something I've written once which was rather useful was a glDraw routine, which worked like this:




glDraw(GL_POINTS) {
    glVertex3f(...);
    glVertex3f(...);
}


which wrapped glBegin/glEnd.. I've continued this, to wrap other stuff into {}.

This was essentially useful during the time I got hardlocks if I forgot a glEnd somewhere.. that way my compiler took care of the lifetime of different GL states. If I forgot to glEnd, I got a compiler error that I'm missing a }..

_that_ is at least useful: making GL errors language errors => compiler errors. Much easier to catch the bug :D (compared to a bluescreen and restarting, for sure :D)
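For what it's worth, one plausible reconstruction of such a glDraw is a for-loop macro: the init expression calls glBegin, the attached block runs once, and the loop increment calls glEnd, so a forgotten closing brace really is a compile error. The macro shape and the counting stubs below are a guess, not the original code.

```cpp
#include <cassert>

// Counting stubs in place of the real glBegin/glEnd.
static int g_begins = 0, g_ends = 0;
static void stub_glBegin(int /*mode*/) { ++g_begins; }
static void stub_glEnd()               { ++g_ends; }

// Runs the attached { ... } block exactly once, bracketed by begin/end.
#define glDraw(mode)                                      \
    for (bool once = (stub_glBegin(mode), true); once;    \
         once = (stub_glEnd(), false))

static void draw_something() {
    glDraw(/* GL_POINTS */ 0) {
        // stub glVertex calls would go here
    }
}
```

Because the block is the body of a for statement, omitting the closing } is a syntax error rather than a runtime hardlock, which is exactly the benefit described above.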


btw, I like the stream idea, at least sort of.. you could get me to reinstall VC again, just for the fun of C++ hacking for funny interfaces :D

I could do it in D, too.. then again, the {} hack isn't possible there.. or... hm.. we'll see :D

Why do something like that? Because it's fun.. and it can make your code more rapid, less to type, safer, and you can move runtime errors to compile-time errors. Something _VERY_ useful imho. C++ is an awesome language to "teach the compiler OpenGL" :D. Is it a hack? Yes. Is it useful? Well.. yes, it helps :D but it would need to go into the next C++ standard (including OpenGL) to get widely adopted..

But that would be cool.. std::gl :D

zeckensack
12-19-2003, 04:46 PM
Funny how people equate OO design to "lots of dereference operators" :)

glBindTexture

There's your OOD. A texture object encapsulates and hides implementation details, it has defined behaviour. The only thing missing is the dereference operator and there are reasons for that.

You'll never see "this pointers", you'll see object names. It's the only way to guarantee fail-safe behaviour, and separate namespaces for different types of objects.

A robust GL implementation must be able to maintain sparsely populated arrays of names over the whole integer range. Finding the actual pointer to the implementation details and state belonging to the named object takes time.

If nothing else, object selection reduces function call overhead as soon as you use the bound object more than once in a row (where "use" includes changing object properties, eg glTexParameteri).

PS:
C bindings are portable. C++ bindings aren't even portable between different compilers on the same platform.

Korval
12-19-2003, 05:05 PM
//Object-based
glTexParameteri(texObj, Params);
glTexParameterf(texObj, OtherParams);

//vs. state-based
glBindTexture(texObj);
glTexParameteri(Params);
glTexParameterf(OtherParams);


Seems to me that the state-based approach requires 3 function calls, and thus has more overhead than just 2. Granted, you're not passing as many parameters, but you've potentially blown the instruction cache by going to an entirely different function in the glBind* case.


C bindings are portable. C++ bindings aren't even portable between different compilers on the same platform.

I don't think anyone is asking that OpenGL be made as a C++ API.

[This message has been edited by Korval (edited 12-19-2003).]

zeckensack
12-19-2003, 05:36 PM
Originally posted by Korval:



//Object-based
glTexParameteri(texObj, Params);
glTexParameterf(texObj, OtherParams);

//vs. state-based
glBindTexture(texObj);
glTexParameteri(Params);
glTexParameterf(OtherParams);


Seems to me that the state-based approach requires 3 function calls, and thus has more overhead than just 2. Granted, you're not passing as many parameters, but you've potentially blown the instruction cache by going to an entirely different function in the glBind* case.

Somewhat agreed. You forgot the targets, but that in fact makes the following easier :)
The point I was trying to make is that the "object based" approach requires other functions to do name to object lookups, or object creation.

Somewhat optimized version:


//internal texture objects
std::map<GLuint, Texture*> texture_container;

//cache last object for quick reuse
GLuint current_tex_name = 0;
Texture* current_tex_pointer = &default_texture_object;

void
glTexParameteri(GLuint tex_name, GLenum params)
{
    //potential redundancy starts here :-)
    if (tex_name != current_tex_name)
    {
        //find internal data in container
        //this is potentially very slow
        texture_container.find(<...>);
        ...
        if (not_found)
        {
            current_tex_pointer = create_new_object();
            add_it_to_container_as(tex_name);
        }
        else
        {
            current_tex_pointer = what_we_found;
        }
        current_tex_name = tex_name;
    }
    //actually do something
    current_tex_pointer->apply_stuff(params);
}
You'd end up doing something like that for all functions acting on texture objects. So by your own argument, to preserve code cache (and branch predictor resources ...), it would IMO be preferable to have this:



void
glBindTexture(GLuint tex_name)
{
    if (tex_name != current_tex_name)
    {
        //find internal data in container
        //this is potentially very slow
        texture_container.find(<...>);
        ...
        if (not_found)
        {
            current_tex_pointer = create_new_object();
            add_it_to_container_as(tex_name);
        }
        else
        {
            current_tex_pointer = what_we_found;
        }
        current_tex_name = tex_name;
    }
}

void
glTexParameteri(GLuint tex_name, GLenum params)
{
    //potential redundancy delegated
    glBindTexture(tex_name);
    //actually do something
    current_tex_pointer->apply_stuff(params);
}

Now we've come full circle. The current GL model moves the burden to bind to the client application, and thus provides more opportunity for efficient code. That's all.

Korval
12-20-2003, 07:16 PM
The only reason that problem arises is because somebody had the bone-headed idea to make glGen* calls optional, and to allow the application to just pick a 32-bit integer and call glBind* on it. That means that the implementation has to internally keep around some collection of all the 32-bit integers that have ever been bound.

Once again, take a look at shader objects. Their equivalent glGen* calls are mandatory, and do not necessarily return an integral value. They could return a pointer (potentially bad for 64-bit conversion), or some kind of 32-bit hash index that can be quickly used to look up a value.

One problem with a state-based system is that it is a bit harder to track down bugs. If you have some code that sets some parameter on a texture, you can't be certain which texture it is setting if a glBindTexture call isn't sitting in front of it. As such, it is entirely possible that the given line of code is corrupting some other texture. This is not possible with an object-based approach (that being different from object-oriented, which deals with issues like inheritance and so forth, things that no low-level graphics API needs to deal with).

V-man
12-20-2003, 08:04 PM
glTexParameteri(texObj, Params);
glTexParameterf(texObj, OtherParams);
------------------------------

is the D3D way of doing things. Do you think it will be noticeable in terms of performance? I'm sure that you'll be calling a whole set of functions, so another one in the bunch won't be a big deal.

How would you solve the push/pop issue? If you forget one of them, finding the source is a problem. Perhaps giving each a unique ID.

Also, I guess it can happen that the driver won't report the error???

I wish there was a "how to code the smart way for GL" document when I started.

zeckensack
12-20-2003, 09:45 PM
Originally posted by Korval:
The only reason that problem arises is because somebody had the bone-headed idea to make glGen* calls optional, and to allow the application to just pick a 32-bit integer and call glBind* on it. That means that the implementation has to internally keep around some collection of all the 32-bit integers that have ever been bound.

Once again, take a look at shader objects. Their equivalent glGen* calls are mandatory, and do not necessarily return an integral value. They could return a pointer (potentially bad for 64-bit conversion), or some kind of 32-bit hash index that can be quickly used to look up a value.

You need established names. Making object creation automatic is just icing on the cake; removing that capability doesn't make it much easier. Instead of creating the object, you'd throw an error.

But you still need to check whether you 'know' the object the client app wants to act on. "Integer or hash index" isn't much of a distinction either. If you've never seen the hash ... well, you get the idea.



One problem with a state-based system is that it is a bit harder to track down bugs. If you have some code that sets some parameter on a texture, you can't be certain which texture it is setting if a glBindTexture call isn't sitting in front of it. As such, it is entirely possible that the given line of code is corrupting some other texture. This is not possible with an object-based approach (that being different from object-oriented, which deals with issues like inheritance and so forth, things that no low-level graphics API needs to deal with).

If you really don't know the current binding, you can always stick one in front of the function calls. So you can express "object based" in terms of "state based", but not vice versa.



class
ParanoidTexture
{
private:
    GLuint gl_name;
public:
    ParanoidTexture()
    {
        glGenTextures(1, &gl_name);
    }
    void Parameteri(target, <...>)
    {
        glBindTexture(target, gl_name);
        glTexParameteri(<...>);
    }
};

ParanoidTexture texture;
texture.Parameteri(<...>);

There you go. You can wrap everything up and end up with the object based approach.

Korval
12-20-2003, 11:26 PM
is D3D way of doing things.

You seem to say that as though the D3D way of doing things is categorically bad.


How would you solve the push/pop issue? If you forget one of them, finding the source is a problem.

Pushing and popping can be dealt with relatively easily in C++. If you encapsulate the push/pop calls in a C++ object (constructor pushes, destructor pops), you've got nothing to worry about. Not that I'm suggesting that the internal GL expose such an API, especially since it takes all of 2 minutes to write such a class.
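For what it's worth, a minimal sketch of that two-minute class, with glPushAttrib/glPopAttrib replaced by counting stubs so it can be shown without a context (the class name and stubs are invented):

```cpp
#include <cassert>

// Counting stubs in place of glPushAttrib/glPopAttrib.
static int g_attrib_depth = 0;
static void stub_glPushAttrib(unsigned) { ++g_attrib_depth; }
static void stub_glPopAttrib()          { --g_attrib_depth; }

// Constructor pushes, destructor pops: an early return or exception
// can no longer unbalance the attribute stack.
class ScopedAttrib {
public:
    explicit ScopedAttrib(unsigned mask) { stub_glPushAttrib(mask); }
    ~ScopedAttrib()                      { stub_glPopAttrib(); }
    ScopedAttrib(const ScopedAttrib&) = delete;
    ScopedAttrib& operator=(const ScopedAttrib&) = delete;
};

static void draw_with_state() {
    ScopedAttrib guard(0xFFFFFFFFu);  // e.g. GL_ALL_ATTRIB_BITS
    if (true) return;                 // early exit still pops
}
```

This is the same RAII idea as davepermen's glDraw trick earlier in the thread, applied to push/pop.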


Also, I guess it can happen that the driver won't report the error???

Setting a texture parameter on the wrong object is not something that the driver can detect as an error. Setting the parameter on any object is legal; it is a semantic error, not an API error.


You need established names. Making object creation automatic is just icing on the cake, removing that capability doesn't make it much easier. Instead of creating the object you'd throw an error.

No, if glGen* had control over the "names", then it could generate names as it saw fit. The names in question could be actual pointers, or something that converts into a pointer after one quick memory access, or a relatively short lookup.

Because GL is forced to accept any object name regardless, the implementation must have some way to map any arbitrary object to a pointer to the internal object. Rather than having a simple function or even a cast operation, it becomes a complex search operation.


But you still need to check whether you 'know' the object the client app wants to act on.

Kind-of. I would actually prefer a debug and release version of the implementation. In debug, it can do glError and so-forth checks to make sure that the texture object name really exists. However, in release, it should not even bother; just produce undefined behavior/crashes. Granted, that's somewhat wishful thinking, but it would provide a negligible speed increase in situations where bindable object state is constantly in flux (which is, admittedly, not that frequent).

Think about Win32 programming (if you've ever done any). HWNDs are handles to a window. You have to call a function to create a valid one. If you call a Win32 function with an invalid HWND, it will fail (in debug, with an error of some kind). You don't call a "bindHWnd" function; you just use the current one. It doesn't impose much overhead in terms of searching because the contents of a HWND are controlled by the OS. It can put whatever info in a HWND that it takes for search/validation times to be low.


The real question is, if you had OpenGL to write all over again, from scratch (as a C-based API), with no consideration as to backwards compatibility, would you continue to use the current paradigm or would you switch to the one used by shader objects? I think, if the ARB had it to do over again, they'd go for the shader object (ie, object-based) version.

thAAAnos
12-21-2003, 03:17 AM
1. First of all, there is no overhead when using a well-designed wrapper; there are ways to be fast and still have some abstraction, see Blitz++.
2. Function overloading would cut down a lot of names, now wouldn't it?
3. I see the pipeline as a stream, so << it is; it might sound strange to those who don't program in C++ and the STL, but for those who do, I think it will make sense...
4. A lot of state management could be handled on the client side, and you wouldn't pay the overhead of glGet functions, not to mention easier debugging.
5. Be able to switch implementations directly:
you want lists? create a stream with lists
you want arrays? create a buffered stream
6. I do not expect programmers in C to like the idea, I expect programmers in C++ to like the idea.
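Point 2 can be sketched as follows: one overloaded name dispatches to the typed GL entry points, so the float/double distinction (which matters for performance) is still decided at compile time by the argument types. The gl* calls are replaced by counting stubs here, and all the names are invented for illustration.

```cpp
#include <cassert>

// Counting stubs in place of glVertex3f/glVertex3d.
static int g_float_calls = 0, g_double_calls = 0;
static void stub_glVertex3f(float, float, float)    { ++g_float_calls; }
static void stub_glVertex3d(double, double, double) { ++g_double_calls; }

// One name, many typed entry points: overload resolution picks
// the right one from the argument types at compile time.
inline void vertex(float x, float y, float z)    { stub_glVertex3f(x, y, z); }
inline void vertex(double x, double y, double z) { stub_glVertex3d(x, y, z); }
```

Note that a call with double literals silently picks the double path; whether that implicitness is a feature or a trap is exactly what the replies below debate.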

GPSnoopy
12-21-2003, 04:07 AM
I don't think most low level C++ wrappers make sense with OpenGL 1.x.
And if it's a higher level wrapper, then it's not an OpenGL wrapper anymore. ;)

However, I think that a low level C++ wrapper would be very useful with the proposed 3Dlabs interface for pure OpenGL 2.0; as there is a lot of abstraction that can be made to make life easier without any performance drop.



The class of code that I have found very useful to keep is for doing things that OpenGL doesn't do for you, like linear algebra and object interaction. It would be really great if OpenGL included more support in this arena (via a major GLU update or another *standard* companion library). OpenGL does not compete well with D3D in this regard, and it is a barrier to entry for OpenGL developers.

Ditto.
Plus, GLU is totally obsolete in most areas, as a lot of functionality is now handled more efficiently directly via OpenGL (e.g. mipmap generation), or simply because other functions are too slow to be useful IRL as they use the immediate mode with a lot of overhead (e.g. Quadrics & Co).
Making a new lib would be a good thing. In C++ there are ways to make things very nice while being faster, but of course there is the problem of C compatibility.



2. Function overloading would cut down a lot of names now wouldn't it?

Yes. You're not the first one to have the idea of using overloading with OpenGL, and you won't be the last.
But it adds lots of confusion!
Types are very important in OpenGL. Depending on what you're using, double or float, short or int, performance may vary a lot. Abstracting these details away is NOT wise; you still want absolute control over them, and overloading makes that difficult.

The only point where overloading makes sense is when it's used within templates; but again, OpenGL is too low-level to need most of these paradigms.



3. I see the pipeline as a stream so << it is, it might sound strange to those who don't programm in C++ and stl but for those who do, I think it will make sense...

It can make sense, sometimes.
I once used an output iterator pattern to send OpenGL commands, but it was working at a higher level than a simple wrapper around OpenGL. I still think that in that particular case it was one of the right things to do.




// send to standard output
std::copy(Graftal.begin(), Graftal.end(), std::ostream_iterator<symbol_type>(std::cout, 0));

// send to OpenGL plotter
gl_graftal_plotter Plotter;
std::copy(Graftal.begin(), Graftal.end(), gl_graftal_output_iterator(Plotter));
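A generic sketch of that output-iterator pattern: an iterator whose assignment forwards each element to a "plotter". The plotter here just records symbols so the pattern can be shown without OpenGL; all names are invented, not GPSnoopy's actual classes.

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Stand-in for a GL plotter: records what it is asked to draw.
struct RecordingPlotter {
    std::vector<char> drawn;
    void plot(char symbol) { drawn.push_back(symbol); }
};

// Output iterator adapter: assigning through it forwards to the plotter.
class plotter_output_iterator {
public:
    using iterator_category = std::output_iterator_tag;
    using value_type = void;
    using difference_type = void;
    using pointer = void;
    using reference = void;

    explicit plotter_output_iterator(RecordingPlotter& p) : p_(&p) {}
    plotter_output_iterator& operator=(char symbol) { p_->plot(symbol); return *this; }
    plotter_output_iterator& operator*()     { return *this; }
    plotter_output_iterator& operator++()    { return *this; }
    plotter_output_iterator  operator++(int) { return *this; }
private:
    RecordingPlotter* p_;
};
```

Any algorithm that writes through an output iterator (std::copy, std::transform, ...) can then target the plotter unchanged.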


But IMHO in most cases using streaming operators or output iterators is just a clumsy syntactic change.



4. A lot of state management could be handled in the client side, and you wouldn't pay the overhead of glGet functions, not to mention easier debugging

You're talking about a scenegraph. There is no way a simple C++ wrapper can easily and elegantly take care of all these details.


EDIT: buggy code tags :(

[This message has been edited by GPSnoopy (edited 12-21-2003).]

TheSillyJester
12-21-2003, 09:53 AM
A more OO design (mostly removing binding where it's not needed) may allow easier multithreading, no?
One thread can upload some textures while another renders other textures at the same time.

zeckensack
12-21-2003, 10:13 AM
Originally posted by TheSillyJester:
A more OO design (mostly removing binding where it's not needed) may allow easier multithreading, no?
One thread can upload some textures while another renders other textures at the same time.

Once again, your graphics chip isn't multithreaded, and neither is your AGP. You may be able to execute two CPU threads truly simultaneously, but you'll have to serialize for the graphics hardware. Doing these things "at the same time" isn't possible.

zeckensack
12-21-2003, 11:06 AM
Originally posted by Korval:
No, if glGen* had control over the "names", then it could generate names as it saw fit. The names in question could be actual pointers, or something that converts into a pointer after one quick memory access, or a relatively short lookup.

Because GL is forced to accept any object name regardless, the implementation must have some way to map any arbitrary object to a pointer to the internal object. Rather than having a simple function or even a cast operation, it becomes a complex search operation.

You're trading in robustness. It's easy to shoot down a direct pointer conversion, if you're so inclined. GL's texture object model can't be shot down.

Pointers also limit your ability to dynamically reallocate memory. A GL implementation might have a live context for hours, or even days, and see millions of object creations/destructions during that time.

There are only few restrictions you can put on pointers for "validation" purposes. You can enforce an alignment, you can do memory range checks but that's all very limited. And ...

You can't safely move an object in memory if there are pointers still referencing it in its old location, and the GL implementation has no knowledge of client references. I.e. you must keep it there until it's properly destroyed. You can't do garbage collection anymore (an implementation might want to do that once per frame or so). You can badly fragment your memory. I wouldn't want to impose these restrictions. Name lookup can be strictly O(log n) and is robust. Funneling pointers through the API doesn't offer enough benefit for its drawbacks, IMO.

I won't complain if automatic object creation is removed; I never really needed it. But I'm unsure whether there's a real benefit. And I'm all against pointers.


Kind-of. I would actually prefer a debug and release version of the implementation. In debug, it can do glError and so-forth checks to make sure that the texture object name really exists. However, in release, it should not even bother; just produce undefined behavior/crashes. Granted, that's somewhat wishful thinking, but it would provide a negligible speed increase in situations where bindable object state is constantly in flux (which is, admittedly, not that frequent).

Think about Win32 programming (if you've ever done any). HWNDs are handles to a window. You have to call a function to create a valid one. If you call a Win32 function with an invalid HWND, it will fail (in debug, with an error of some kind). You don't call a "bindHWnd" function; you just use the current one. It doesn't impose much overhead in terms of searching because the contents of a HWND are controlled by the OS. It can put whatever info in a HWND that it takes for search/validation times to be low.

Yes, I've done Win32 programming. I don't think the comparison is fair, because typical Win32 programs aren't particularly dynamic and, frankly, Win32 hasn't been designed for speed. Creating a dialog might get you a few hundred HWNDs (lots of controls and stuff). The vast majority of programs will keep it there and destroy the whole dialog at once; they won't destroy and replace single controls over and over again.

HWND is in fact a plain pointer to struct. All I've said above about pointers applies. It doesn't matter much for a window manager, but IMO it sure does matter for a low levelish graphics API.

The real question is, if you had OpenGL to write all over again, from scratch (as a C-based API), with no consideration as to backwards compatibility, would you continue to use the current paradigm or would you switch to the one used by shader objects? I think, if the ARB had it to do over again, they'd go for the shader object (ie, object-based) version.

I'd choose the current paradigm for sure. Maybe ditch the automatic creation of objects. Otherwise I think it's the perfect blend of performance and stability.

Korval
12-21-2003, 12:16 PM
2. Function overloading would cut down a lot of names now wouldn't it?


Syntactical sugar, at best. Once again, it'd be nice, but then you lose everything that a C-based library gets you. Namely, portability.


3. I see the pipeline as a stream so << it is, it might sound strange to those who don't programm in C++ and stl but for those who do, I think it will make sense...

The pipeline may seem to you like a stream, but it isn't. Especially when you start getting into vertex arrays and so forth.


4. A lot of state management could be handled in the client side, and you wouldn't pay the overhead of glGet functions, not to mention easier debugging

Which would make the implementation slower, since the state isn't server-side where it needs to be.


5. Be able to switch implementation directly
you want lists? create a stream with lists
you want arrays? create a buffered stream


And neither interface is appropriate for performance rendering.


6. I do not expect programmers in C to like the idea, I expect programmers in C++ to like the idea.

OK, then as a C++ programmer, I think it is a bad idea. BTW, I never use the iostream API; I prefer printf-esque functions. You don't have to use all of C++ to be considered a C++ programmer.


However, I think that a low level C++ wrapper would be very useful with the proposed 3Dlabs interface for pure OpenGL 2.0; as there is a lot of abstraction that can be made to make life easier without any performance drop.

But most of GL 2.0 will likely never see the light of day. The only real thing we're likely to see out of GL 2.0 is glslang.


Pointers also limit your ability to dynamically reallocate memory. A GL implementation might have a live context for hours, or even days, and see millions of object creations/destructions during that time.

Not really. The pointer points to an object that contains whatever client-side data the implementation needs to refer to a server-side object. For a texture, it would be the location of the texture's data. For a VBO, it would be the location and type of the buffer. And so on.

You can still move the server-side data around without having to change the actual client-side object.


There are only few restrictions you can put on pointers for "validation" purposes. You can enforce an alignment, you can do memory range checks but that's all very limited.

Or you can keep a table around in memory that contains a list of all allocated objects. Which is, basically, no different than it is now, except that the table isn't used to convert a name into a pointer, but just for validation. So, in a hypothetical "release" mode, this table wouldn't need to exist.


Creating a dialog might get you a few hundred HWNDs (lots of controls and stuff). The vast majority of programs will keep it there and destroy the whole dialog at once, they won't destroy and replace single controls over and over again.

The same is true for OpenGL programs. You load a block of textures and VBOs. For a game, this might represent a level. Once the level is over, they all get destroyed. Even games that stream levels destroy their objects in blocks, rather than individually.

TheSillyJester
12-21-2003, 02:15 PM
Originally posted by zeckensack:

Originally posted by TheSillyJester:
A more OO design (mostly removing binding where it's not needed) may allow easier multithreading, no?
One thread can upload some textures while another renders other textures at the same time.

Once again, your graphics chip isn't multithreaded, and neither is your AGP. You may be able to execute two CPU threads truly simultaneously, but you'll have to serialize for the graphics hardware. Doing these things "at the same time" isn't possible.

I didn't say it's faster; the driver can serialize the way it wants (and I'm not sure there isn't room for some optimizations), but it's more convenient to program. And if there isn't room on the GPU, putting a texture in CPU memory can be done asynchronously, right?

jwatte
12-22-2003, 11:07 AM
Thread synchronization is expensive. If you force drivers to synchronize, you will never get peak performance out of your hardware.

Anyway, regarding the initial question: what specific problem are you trying to solve?

Nobody should be pushing OpenGL state management into a number of different objects that would step on each other's toes. Visiting your scene graph to extract geometry can already be done with all kinds of C++ template pattern goodness. How to turn geometry + state into actual rendering is best handled with a separate strategy. At the bottom, there's a renderer, but a new abstraction doesn't really seem worthwhile at that point.

If you want a new, higher-level API, you probably want to define your own scene graph (extracting geometry + rendering strategy) and implement it on top of OpenGL, rather than just re-mapping all the current OpenGL bindings to a higher-overhead, still low-level API.

Overengineering for overengineering's sake is seldom useful.

SirKnight
12-22-2003, 12:54 PM
I'm with cass on this subject. Also, putting unnecessary overhead on top of your OpenGL functions is never a good thing. I don't see where we need something on a higher level than glVertex4f(...), for example, anyway. One thing that has always bugged me is when someone creates an "OpenGL class" whose member functions are exactly the same as their OpenGL counterparts. Like this, for example:




class OGL
{
public:
    void Vertex3f(float x, float y, float z);
    void Vertex2f(float x, float y);
    void Color3ub(unsigned char r, unsigned char g, unsigned char b);
};


etc...

Then:
OGL myOpenGL;
myOpenGL.Vertex3f(x, y, z);

I mean, wtf is the point of this? It's more to type, for one thing. And if you start doing weird things with operator overloading... well, say goodbye to decent performance. Forget about it. To me it seems like, unless I'm wrong, the point of OpenGL is to draw 3D graphics fast, not to make everything oversimplified while sacrificing half of my rendering speed.

Anyway, carry on. http://www.opengl.org/discussion_boards/ubb/biggrin.gif


-SirKnight

[This message has been edited by SirKnight (edited 12-22-2003).]

JanHH
12-22-2003, 04:01 PM
In 1997, when I was studying computer science, in the first few hours of the practical programming course where we were supposed to learn Smalltalk, I asked the professor whether the way Smalltalk operates doesn't slow down performance. He nearly became angry and said that I should not think and talk so small-mindedly about this. That's the way those people think: performance (in terms of really highly optimized inner loops) is not an issue, good software design is MUCH more important.

In general this is right, but OpenGL is a good example where optimized design and optimized speed are contradictory. So in order to get optimized frame rates, we have to live with the fact (and we should at least know that we do so) that we are not writing the best OO-designed software.

In this forum there was once a flame thread about the very bad Quake 2 source code, but does anyone who plays the game care about that?

An example of OO OpenGL (OOOpenGL *g*) is the gl4java bindings, which go like

GL myGL;

myGL.glBegin(...)

and so on.

Another issue (as I already said) is the state machine chaos; an object-oriented solution to this would be great, and a functional overhead on state-changing calls would not cost much performance. But this is a logical contradiction, as an object structure encapsulating this would have to know about all OpenGL states that could be changed, including extensions, and including all that are yet to come (or it would have to be updated very, very often).

Jan

JanHH
12-22-2003, 04:11 PM
Hm, I am just thinking about something like a GLStateManager that keeps track of state changes and helps me with the source code chaos in my terrain engine.

It would have a default state, and when a part of the OpenGL scene sets some states, it would automatically reset them to the default state when this part is finished. It would look like this:

myGLStateManager.BeginSection();
ground->Draw();
myGLStateManager.EndSection();

myGLStateManager.BeginSection();
clouds->Draw();
myGLStateManager.EndSection();

and inside of ground->Draw and clouds->Draw, OpenGL states can be changed wildly without having to memorize what was done and having to reset it.

Maybe this is a good idea, at least for my program..
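Such a section guard could also be sketched as a C++ RAII class, so EndSection can never be forgotten. This is a hypothetical illustration only: the attribute stack here is a plain std::vector standing in for what a real version would do with glPushAttrib/glPopAttrib or its own tracked state.

```cpp
#include <cassert>
#include <vector>

// Stand-ins for driver state; a real version would save/restore GL
// server-side attribute state via glPushAttrib/glPopAttrib instead.
static std::vector<int> g_attribStack;
static int g_currentState = 0;   // the "default state"

class StateSection {
public:
    StateSection() { g_attribStack.push_back(g_currentState); }   // BeginSection
    ~StateSection() {                                             // EndSection
        g_currentState = g_attribStack.back();                    // restore default
        g_attribStack.pop_back();
    }
    StateSection(const StateSection&) = delete;
    StateSection& operator=(const StateSection&) = delete;
};

// Usage:
// {
//     StateSection section;     // BeginSection
//     g_currentState = 42;      // ground->Draw() changes state wildly
// }                             // EndSection runs automatically on scope exit
```

The advantage over explicit Begin/End calls is that the reset also happens on early returns and exceptions.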

Jan

zeckensack
12-22-2003, 09:47 PM
Originally posted by JanHH:
Hm, I am just thinking about something like a GLStateManager that keeps track of state changes and helps me with the source code chaos in my terrain engine.

It would have a default state, and when a part of the OpenGL scene sets some states, it would automatically reset them to the default state when this part is finished. It would look like this:

myGLStateManager.BeginSection();
ground->Draw();
myGLStateManager.EndSection();

myGLStateManager.BeginSection();
clouds->Draw();
myGLStateManager.EndSection();

and inside of ground->Draw and clouds->Draw, OpenGL states can be changed wildly without having to memorize what was done and having to reset it.

Maybe this is a good idea, at least for my program..

Jan

Ummm ...


glPushAttrib(myGround->GetStateBitsModifiedByDraw());
myGround->Draw();
glPopAttrib();

That's very brute force, just like your approach, but it can be done.

My gut tells me that the GL can Push/Pop state more efficiently than you can manage it externally, though that depends on implementation quality, obviously.

Changing state redundantly isn't a good idea anyway. What you're really looking for is a good scene graph that can render objects in a "minimum state change" order of sorts. Basically the same thing as the travelling salesman problem.

JustHanging
12-22-2003, 10:06 PM
I think a simple state wrapper with early redundancy checks is a good idea anyway. During development you could also have it report the redundant state changes, so that where possible you can clear them out of the code too.
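A minimal sketch of such a wrapper (the names are hypothetical, and the real GL call is replaced by a counter so the early-out redundancy check can be seen in isolation):

```cpp
#include <cassert>

// Hypothetical early-out state wrapper. A real version would call
// glEnable/glDisable(GL_DEPTH_TEST); here we just count the "API
// calls" that actually go through, and the redundant ones we skipped.
class StateCache {
public:
    void setDepthTest(bool enabled) {
        if (known_ && enabled == depthTest_) {
            ++redundant_;          // redundant change: report and skip it
            return;
        }
        known_ = true;
        depthTest_ = enabled;
        ++apiCalls_;               // stand-in for the real GL call
    }
    int apiCalls() const { return apiCalls_; }
    int redundantCalls() const { return redundant_; }
private:
    bool known_ = false;           // GL state is unknown until first set
    bool depthTest_ = false;
    int apiCalls_ = 0;
    int redundant_ = 0;
};
```

During development, redundantCalls() is the report Ilkka mentions; in a release build the check could stay, since it is far cheaper than the driver call it avoids.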

-Ilkka

thAAAnos
12-23-2003, 01:28 AM
Why do you keep saying that any wrapper will have an impact on performance?
OpenGL is fast when it comes to rendering, transforming, clipping and so forth. But are you certain that OpenGL calls are lighter than any wrapper function, which could well be inlined? The performance impact would be no more than 1%, to be sure...



My gut tells me that the GL can Push/Pop state more efficiently than you can manage it externally, though that depends on implementation quality, obviously.


Have you forgotten that your CPUs run at 3GHz?
I think of OpenGL calls as I/O calls; correct me if I am wrong, please...
Doesn't it pay to avoid some calls when possible?

Zengar
12-23-2003, 02:14 AM
OpenGL has such a simple API that I see no point in writing wrappers for it. Helper classes, yes, but not a wrapper.
So far I agree with Tom.

Tom Nuydens
12-23-2003, 02:45 AM
Originally posted by thAAAnos:
Doesn't it pay to avoid some calls when possible?

Yes, but the way to do it IMHO should be by clever application design, not by putting an extra layer between the app and the GL.

-- Tom

zeckensack
12-23-2003, 02:57 AM
Originally posted by thAAAnos:
Why do you keep saying that any wrapper will have an impact on performance? OpenGL is fast when it comes to rendering, transforming, clipping and so forth. But are you certain that OpenGL calls are lighter than any wrapper function, which could well be inlined? The performance impact would be no more than 1%, to be sure...

Sure, a straight inline wrapping is just as fast as the pure API. No doubt about it. It only gets interesting if you layer something on top, instead of just wrapping it up. In particular if you add functionality that the GL itself can potentially do better.

Have you forgotten that your CPUs run at 3GHz?

No, I haven't. I only said that the GL implementation may be able to do it faster, not by how much, nor did I claim that Jan's idea would be "too slow". I just felt it was reinventing the wheel.

I think of OpenGL calls as I/O calls; correct me if I am wrong, please... Doesn't it pay to avoid some calls when possible?

I'd rather think that OpenGL hides the actual I/O calls from the programmer, but that's only a matter of interpretation http://www.opengl.org/discussion_boards/ubb/smile.gif
Sure it pays to avoid calls. That's the reason I suggested letting the GL do it in the first place.

E.g.:

glPushAttrib(GL_DEPTH_BUFFER_BIT);
<...> // do something
glPopAttrib();

That pushes/pops state set through these:
glDepthFunc
glDepthMask
glClearDepth <- let's ignore this one for a moment
glEnable/glDisable(GL_DEPTH_TEST);

These individual functions all need error checking on their parameters, while PopAttrib doesn't need to do this. Only "current state" is pushed, and that's guaranteed to be error free (almost; AFAIK WGL_ARB_make_current_read is the *only* exception where PopAttrib can possibly produce an error).

Also, there isn't necessarily a direct mapping from these functions to hardware registers. Suppose your hardware has three registers for managing its z-buffer/depth testing:

1) depth test function
2) z-buffer reads
3) z-buffer writes

With glDepthFunc(GL_ALWAYS) you never need to read the depth buffer, so you could turn the reads off in hardware (saving precious bandwidth).
With glDepthFunc(GL_EQUAL) you never need to write the depth buffer. You 'usually' control that with glDepthMask, so you see that distinct GL states can overlap and affect *a single* state in the hardware domain.

The opposite is also true: some GL states can affect *multiple* hardware registers. With glDisable(GL_DEPTH_TEST), you never need to write (nor read) depth; it would affect all three of our supposed hardware registers (setting the function to "always"). We don't even have a register for "enable/disable depth test".

To figure out the hardware register values, we need to look at all three states together. This sort of interaction may be managed more efficiently by Push/Pop than by calling the individual entry points. And that was a simple example http://www.opengl.org/discussion_boards/ubb/smile.gif
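The interaction described above can be sketched as a function from the three GL-visible depth states to the three supposed hardware registers (the hardware model is entirely hypothetical, following this post's example):

```cpp
#include <cassert>

// GL-visible depth state (simplified to the cases discussed above).
enum class DepthFunc { Always, Equal, Less };

struct GLDepthState {
    bool testEnabled;   // glEnable/glDisable(GL_DEPTH_TEST)
    DepthFunc func;     // glDepthFunc
    bool writeMask;     // glDepthMask
};

// Our three supposed hardware registers.
struct HwDepthRegs {
    DepthFunc func;
    bool reads;
    bool writes;
};

// All three GL states must be considered *together* to derive the
// register values -- exactly the interaction described above.
HwDepthRegs resolve(const GLDepthState& s) {
    if (!s.testEnabled)
        return {DepthFunc::Always, false, false};   // affects all three registers
    HwDepthRegs r;
    r.func = s.func;
    r.reads = (s.func != DepthFunc::Always);        // GL_ALWAYS never needs reads
    r.writes = s.writeMask &&
               (s.func != DepthFunc::Equal);        // GL_EQUAL never needs writes
    return r;
}
```

PopAttrib can restore a whole pushed snapshot and run this resolution once, instead of once per individual entry point.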

Texture-object name lookup is another thing that can possibly be faster to push/pop than to set through the API. Instead of redoing the "search for the pointer to the object's data", the implementation is free to "cheat" and push the pointer along with the texture name (but care must be taken to handle the case where the object has moved in memory or has been destroyed).
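That "cheat" could be sketched with a generation counter per name, so a pushed cached pointer can be recognized as stale when the object has been destroyed or replaced (a hypothetical illustration, not any actual driver's scheme):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

struct Texture { int id; };

// Hypothetical object table with a generation counter per name. A
// cached (name, pointer, generation) triple pushed on an attribute
// stack can then detect that the object changed in the meantime.
struct Slot { Texture* ptr; std::uint64_t gen; };
static std::unordered_map<int, Slot> g_table;

struct CachedBinding {
    int name;
    Texture* cached;        // pointer pushed along with the name
    std::uint64_t gen;      // generation at push time
};

Texture* restore(const CachedBinding& b) {
    auto it = g_table.find(b.name);
    if (it != g_table.end() && it->second.gen == b.gen)
        return b.cached;                 // fast path: reuse the pushed pointer
    // slow path: object destroyed or replaced, redo the name lookup
    return it == g_table.end() ? nullptr : it->second.ptr;
}
```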

Etc and so forth http://www.opengl.org/discussion_boards/ubb/smile.gif

thAAAnos
12-23-2003, 04:11 AM
As a conclusion some thoughts...

Maybe there should be a clear distinction between client-side and server-side API calls, and maybe not everything should be in the server. I don't know how the hardware guys feel about that, but I have a feeling that they will end up having the program executed by the GPU http://www.opengl.org/discussion_boards/ubb/smile.gif

I think it pays to have a wrapper even if it does nothing more than simply call the API functions; it would be a start for a better interface and a different view of things, and it might evolve into an API in itself.

Maybe we need C for portability, but I don't deem C++ programs written as if in C portable; i.e., using printf instead of iostream is, for me, *not* portable. C and C++ are different languages in my mind. And without a C++ wrapper/interface/API I cannot program with OpenGL and feel safe that, no matter what future changes come to the API, I will not have to redesign my program to get the best performance available.

And in the end, if we care only about performance, then we'd better forget about APIs and instead develop a GL language that gets compiled and executed on the GPU. I don't know what that would do to design, though...

JanHH
12-23-2003, 05:42 AM
I didn't know of the push/pop attrib commands. They of course make my idea rather superfluous. I need only two calls,

myGLStateManager.InitDefaultState()

and

myGLStateManager.ResetToDefaultState().

I think this would make my program less buggy and easier to develop, and it will not cost any performance (as state changes are not done in inner drawing loops).

Jan

V-man
12-24-2003, 03:38 PM
>>>And in the end, if we care only about performance, then we'd better forget about APIs and instead develop a GL language that gets compiled and executed on the GPU. I don't know what that would do to design, though...<<<

You need an API.
No API means you can't program for it.
The alternative is support at the language level.

That won't happen, because APIs already exist that do the job well. Of course, there is always someone who will go the distance and do crazy things.

The idea of running a program completely on the GPU has come up before.

I think that in the very long term this will happen, and I think NVIDIA will be the first. I think of them as the Microsoft of hardware (meaning they have the resources).