3D & 2D??

I think that one of the biggest missing features of
OpenGL (as a “Graphics Library”) is 2D graphics support (surfaces and other stuff, like DirectDraw).
For instance, Linux now has quite good 3D support with OpenGL direct rendering, but to make use of that 3D a decent 2D API is also required (the X server is not good for games :frowning: ).
A common API for 2D would be killer!!!

There is support for overlays, underlays, auxiliary buffers, stereo buffers, front and back buffers, vsyncing, glDrawPixels, glReadPixels, glBitmap, and the imaging subset.
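For instance, here is a rough sketch of blitting a CPU-side image with glDrawPixels (the 640x480 window size and the pixel buffer are placeholders of mine, and it assumes a current GL context):

    /* A sketch of blitting a CPU-side RGBA image with glDrawPixels.
       Assumes a current GL context and a 640x480 window; "pixels" is a
       hypothetical buffer you filled yourself. */
    #include <GL/gl.h>

    void blit_image(const unsigned char *pixels, int img_w, int img_h)
    {
        /* Make raster coordinates map 1:1 onto window pixels */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, 640.0, 0.0, 480.0, -1.0, 1.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glRasterPos2i(0, 0);  /* lower-left corner of the window */
        glDrawPixels(img_w, img_h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }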

Unless you can be more specific…

V-man

OpenGL is too high level compared to DirectDraw. Overlays are textures, you cannot create hardware surfaces, and I think you should at least be able to get a pointer to the backbuffer. It would not lose a lot of portability, at least less than it would with programmable vertex shading, and OpenGL would be a lot better for 2D games. OpenGL should lose a few of its client-server features: people don’t buy a GeForce2 to play Quake3 over the net!!!

Two things:

“OpenGL should lose a few of its client-server features: people don’t buy a GeForce2 to play Quake3 over the net!!!”

Yes they do. Who plays Quake 3 by themselves? It’s not that great of a game non-multiplayer.

Second, OpenGL isn’t designed for games, nor should it be. It is designed as a fairly general-purpose graphics library.

If you want a good cross platform 2D API, look into Allegro on www.sourceforge.net.

Originally posted by tiammazzo:
OpenGL is too high level compared to DirectDraw. Overlays are textures, you cannot create hardware surfaces, and I think you should at least be able to get a pointer to the backbuffer. It would not lose a lot of portability, at least less than it would with programmable vertex shading, and OpenGL would be a lot better for 2D games. OpenGL should lose a few of its client-server features: people don’t buy a GeForce2 to play Quake3 over the net!!!

Wow. I think I disagreed with every single sentence of that.

  1. I think GL is about right; if anything I’d say it’s slightly too low level, and am looking forward to abstracted object management in OpenGL 2. You may think that specifying everything right down to the metal is cool and hardcore, but every time an API exposes this kind of implementation detail it rules out a huge set of technical possibilities, often including the optimal ones. A lot of hardware folks got very annoyed by the early DirectX versions for just this reason, and MS has backed down in most cases.

  2. Case in point; the optimal format for a backbuffer is almost certainly NOT linear. Exposing a “pointer” would cripple both portability and progress. And re your comparison: if you can’t do vertex programming in hardware, you can do it in software and still rasterize in hardware. If you don’t have a linear backbuffer in hardware and your API demands one, you’re screwed. Software all the way.

  3. 2D is irrelevant. Don’t get me wrong; I still play and like many 2D games. But I can’t think of any 2D game app that will cause current hardware to even break a sweat, so why bother optimizing for it? 2D will never go away, but it’ll never drive technology direction again.

If you want a straight-to-the-metal 2D pixel API, you’re in the wrong place. Try something like OpenPTC; you’ll be much happier.

Howdy,

A side note: the client-server model in OpenGL is not there to support multiplayer network games.

There is a difference between Quake3 encapsulating the entire state of the game and exchanging information about the players, and the network model that the OpenGL pipe exposes. The client/server model is arguably based on the X Window System client/server model, where one machine generates graphics commands for another ~machine~ to render; i.e. the machine that calls glBegin/glEnd is a separate entity from the computer on the other end of a network cable that processes these glBegin/glEnd commands. It’s useful, for example, when you have a high-end graphics computer traversing a complicated data set so a user on a lower-class machine can visualise the information.
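To make that concrete, the split between the two is decided when the GLX context is created; a minimal sketch (the helper name is mine, and it assumes you already have an X display and visual):

    /* A sketch: the last argument of glXCreateContext picks direct vs.
       indirect rendering.  With False the GL calls are encoded as GLX
       protocol and shipped to whatever X server DISPLAY points at,
       which can be a different machine entirely. */
    #include <GL/glx.h>

    GLXContext make_indirect_context(Display *dpy, XVisualInfo *vis)
    {
        /* False = don't bypass the X server; commands go over the wire */
        return glXCreateContext(dpy, vis, NULL, False);
    }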

cheers,
John

I think the point remains. You can do 2D with OpenGL and it’s plenty fast with recent hardware. There are other ways, such as just using GDI, and that’s plenty fast too.

As for “why a pointer to any buffer is useless”, there is a very long discussion (50+ replies) in this very suggestion board.

V-man

2D is actually very easy to do in 3D space… You can mix it with 3D just fine, and it will act like 3D, but you have to draw it last, or set up your depth and z position. Use ortho mode, convert your screen coordinates to world coordinates, and draw away. As far as I know, uploading a texture each frame is a lot faster than glDrawPixels, so why don’t you try it? And how can you deny something like hardware bilinear filtering? Mipping? The ability to move your 2D around in 3D (for something like a really cool UI) just as easily as you would anything else? There’s every reason to use ortho mode for all of your 2D needs… Too bad you can’t draw a texture to a specific buffer, though… that would be pretty cool.
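If it helps, here is a rough sketch of that ortho-mode sprite approach (the texture id, 640x480 window size, and function name are assumptions of mine, not anything standard):

    /* A sketch of drawing one textured quad as a 2D sprite in ortho mode.
       "tex" is a texture object you already created.  Depth testing is
       off, so draw order decides what ends up on top. */
    #include <GL/gl.h>

    void draw_sprite(GLuint tex, float x, float y, float w, float h)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);  /* y grows downward, like screen coords */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glDisable(GL_DEPTH_TEST);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex);

        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
            glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
            glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
        glEnd();
    }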

I think there is nothing wrong with using 3D to do 2D, but I think the imaging subset is not complete: I’m greatly interested in 2D image manipulation, and my GeForce2 has a lot more power than my CPU and is more optimized for it. 3D already requires matrix multiplication (though with small matrices). I think it would be great to add general matrix multiplication (using a texture as a matrix that we multiply with a buffer, plus a block matrix multiplication algorithm), allowing an FFT or a wavelet transform to be done in hardware in a single shot. The same issue comes up with convolution: imagine a blur effect (a 3×3 matrix that replaces the current pixel value by the sum of its neighbours divided by 9, applied to each pixel). That’s not possible to do with the convolution tools, right? A lot of effects can be obtained by matrix multiplication or convolution…
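For what it’s worth, that kind of 3×3 kernel can be expressed through the imaging subset’s convolution calls where a driver exposes ARB_imaging; a rough sketch (it may well fall back to software on consumer cards of this era):

    /* A sketch of a 3x3 box blur via the imaging subset (ARB_imaging).
       Each destination pixel becomes the average of its 3x3 neighbourhood.
       Only works where the driver exposes the subset. */
    #include <GL/gl.h>

    void enable_box_blur(void)
    {
        static const GLfloat kernel[9] = {
            1.0f/9, 1.0f/9, 1.0f/9,
            1.0f/9, 1.0f/9, 1.0f/9,
            1.0f/9, 1.0f/9, 1.0f/9,
        };

        glConvolutionFilter2D(GL_CONVOLUTION_2D, GL_LUMINANCE,
                              3, 3, GL_LUMINANCE, GL_FLOAT, kernel);
        glEnable(GL_CONVOLUTION_2D);
        /* Subsequent glDrawPixels / glCopyPixels calls are filtered */
    }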