
View Full Version : OpenGL 3 Updates




knackered
07-17-2008, 09:42 AM
piclens?

Mark Shaxted
07-17-2008, 09:58 AM
parlez vous anglais?

Rob Barris
07-17-2008, 10:02 AM
Surely, even with virtualised vram, the driver must know when a texture is about to be paged out. All I want is for the driver to say "no need, I'll just delete it", and return an error when you try to use it. Then *I* can deal with the consequences.

One thing to consider is that, to boost throughput and concurrency, drivers may be (and likely are) deferring your draw commands in a queue and executing batches of them via an alternate thread or via hardware consumption of bulk command buffers; i.e. the draw command you issued didn't actually happen when you issued it.

If you are checking some status prior to or after each draw, those pipes generally need to be drained, and performance drops. The desire to propagate accurate information back to the app, about what the driver is doing w.r.t. texture residency, is going to create sync points in a lot of places where they didn't happen before. Restated, the most efficient path is when the app is sending commands and data downhill to the driver and not asking to haul any back up.

An unmanaged texture concept potentially relieves main memory pressure but leaves all kinds of potholes for an application to deal with (ask any game developer about "Alt-Tab bugs").

edit:

Doing some more digging on this idea, I remembered that Apple did an extension something like this: "APPLE_object_purgeable".

http://developer.apple.com/graphicsimaging/opengl/extensions/apple_object_purgeability.html

It looks like a pretty tricky ext to use, read it and see if this matches up with what you would want to do.
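From a quick read, the usage pattern looks roughly like this (an untested sketch only; 'tex' is a placeholder and the exact entry points and enums should be double-checked against the extension text):

// Mark the texture as purgeable: the GL may discard its storage under memory pressure.
glObjectPurgeableAPPLE(GL_TEXTURE, tex, GL_VOLATILE_APPLE);

// Later, before using it again, reclaim it and ask whether the contents survived.
GLenum state = glObjectUnpurgeableAPPLE(GL_TEXTURE, tex, GL_RETAINED_APPLE);
if (state == GL_UNDEFINED_APPLE) {
    // Contents were discarded; the app must re-upload the texture data here.
}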

Korval
07-17-2008, 10:24 AM
The current priority mechanism is either not really enforced or outright ignored by the driver. At the very least, can we please _enforce_ it in the spec (not just provide idiotic "hints" that the driver can ignore!!) so a higher priority texture is never kicked out of VRAM when a lower priority one is not currently used?

And how exactly would you go about enforcing it? Especially when the very concept of VRAM may not exist in some graphics cards (particularly the upcoming combined CPU/GPU chips from AMD, where everything is just main memory).


One way or the other, don't you agree that not being able to easily pinpoint the source of a glitch is unacceptable?

What do you mean by "glitch"? If you mean what that usually means, which is an undesired visual rendering artifact, then yes. But we're talking about performance, not rendering quality. If the implementation is not behaving visually as expected, that's a problem.

If you're talking about a sudden loss of framerate, that's just something you deal with. The price you pay for having abstractions is that you don't always get the exact performance you want. So you drop the resolution of some textures, or poke at shaders or whatever to restore the lost performance.


Also, imagine a 1024x1024 texture is divided into 4 512x512 tiles and each of those is further divided down to some limit size. It would be nice to be able to take advantage of this and be able to create 4 512x512 textures from a 1024x1024 texture.

And what if the hardware can't do that? Or, rather, it can't do it in the way that you mean (where no VRAM allocations happen)? That is, a texture can have VRAM memory overhead that you aren't aware of, so 4 512x512 textures simply can't fit in the space that 1 1024x1024 did. With that, your entire algorithm falls apart.

Rob Barris
07-17-2008, 10:31 AM
Actually it would be interesting if you could create a large texture in VRAM with uninitialized contents, and then feed it segments of image via texsubimage over time, for people trying to really smooth out the frame to frame cost.
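Something along these lines, as a minimal sketch (the sizes and the names 'tex', 'pixels', 'rowsPerFrame' and 'nextRow' are placeholders, and error checking is omitted):

// Allocate the full texture up front with no data; contents are undefined until filled.
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Then, each frame, upload the next horizontal band so no single frame pays the whole cost.
glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, nextRow,               // x and y offset of the band
                2048, rowsPerFrame,       // width and height of the band
                GL_RGBA, GL_UNSIGNED_BYTE,
                pixels + nextRow * 2048 * 4);
nextRow += rowsPerFrame;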

knackered
07-17-2008, 11:18 AM
parlez vous anglais?
avez-vous entendu parler de google?

Mark Shaxted
07-17-2008, 11:20 AM
APPLE_object_purgeable...

That's certainly a step in the right direction.

But I've just had an idea. What would suit *me* perfectly would be to enable textures to have additional properties - volatile, non-volatile, and normal.

Normal is per current behaviour (page in/out when required)
Volatile means driver must delete rather than page.
Non-volatile means driver must keep in vram at all costs.

BUT - for all volatile textures, the developer can also specify an alternate texture - in other words there is a preferred texture and, in the event it's been deleted, an alternate texture (maybe a default 'fuchsia pink' debugging texture, or in my case a compressed tiny thumbnail).

Let's call the concept 'texture chaining'. Although I have no interest in game development, I could envisage many situations where this would be beneficial performance-wise - a game could load a whole series of low-res (non-volatile) textures which get called in the event of a missing (from vram) resource. This could even be extended such that a paged-out texture would be replaced by its alternate for as many frames as it takes to DMA the original back into vram. No more jerky games?

I have no idea if this is do-able. But it would certainly help me enormously.



BTW - my motivation for volatile textures is simply to effectively see vram as a usable extension to system ram - not just a cache. Consider a win XP machine with 512mb of ram and 1gb of vram - the vram is wasted (unless you're willing to take a performance hit via the swap file). I'd like to see the vram as an extra memory resource to play with.

knackered
07-17-2008, 11:23 AM
Actually it would be interesting if you could create a large texture in VRAM with uninitialized contents, and then feed it segments of image via texsubimage over time, for people trying to really smooth out the frame to frame cost.

You can. I do it all the time. Except I don't create the texture object during a frame, I create a cache at startup. There's no way of casting to another format, but I've never had the need to.

knackered
07-17-2008, 11:25 AM
Mark, have you not heard of megatextures or clipmapping? You've pretty much just described it, and there was no need to change the rendering API.

Mark Shaxted
07-17-2008, 11:27 AM
avez-vous entendu parler de google?

err... oui ma bonne pomme de terre. Je parlais du google avec langue fourchue et ensuite à dîner.


Yeah - I was the man who was forced to take french O-level at school, until I managed a staggering 7% in the mock exams, at which point it was suggested that perhaps I shouldn't bother :D

Looks interesting though.

Jan
07-17-2008, 11:44 AM
Well, instead of a "fallback-texture", one could add a flag indicating that the resolution of a texture is not critical. The result could be that, when VRAM is full, the driver removes part of the texture: not the full mipmap chain, but only as many high-res levels as necessary.

Now the driver "informs" the application that n mipmap levels of some texture were destroyed (lost surface...). This does not need to be immediate; even if it is done with several frames of delay, it would be absolutely fine, since the low-res levels (at least level n) would definitely stay in memory and can be used for rendering. Now the application can decide whether to reload the high-res versions (and up to which level), or maybe to wait a bit if the current framerate isn't good enough.

If such a flag is not set, all textures are "managed" as they are right now.

Just an idea; I don't argue that it should be like this, but I definitely like it better than having a feature to specify a (completely different) fallback texture.

Jan.

knackered
07-17-2008, 12:17 PM
This does not need to be immediate; even if it is done with several frames of delay, it would be absolutely fine
would it even be fine if that information were no longer current, jan?

bertgp
07-17-2008, 12:29 PM
What do you mean by "glitch"? If you mean what that usually means, which is an undesired visual rendering artifact, then yes. But we're talking about performance, not rendering quality. If the implementation is not behaving visually as expected, that's a problem.

If you're talking about a sudden loss of framerate, that's just something you deal with. The price you pay for having abstractions is that you don't always get the exact performance you want. So you drop the resolution of some textures, or poke at shaders or whatever to restore the lost performance.


In this case, I am talking about a sudden spike in the frame time (drop in frame rate). I need to be able to guarantee a consistent 60 Hz. No drops are acceptable, not even for a frame. Dropping whatever level of detail for the next frame will not cut it. That is why an efficient mechanism for subloading is necessary. See the thread "Texture subload" in the "OpenGL coding: advanced" section for a preview of the nightmare it is right now.


And how exactly would you go about enforcing it? Especially when the very concept of VRAM may not exist in some graphics cards (particularly the upcoming combined CPU/GPU chips from AMD, where everything is just main memory).

Good point; I hadn't considered those... However, imagine that VRAM is like cache memory for the GPU. The priority mechanism could be enforced by saying that a higher priority texture is located wherever the memory is the fastest, unless it must absolutely be kicked out. I concur that this does seem a bit hard to specify coherently when the VRAM concept disappears. The real issue I am after is being able to subload stuff in small chunks without performance hiccups, to be able to guarantee consistent 60 Hz performance.



And what if the hardware can't do that? Or, rather, it can't do it in the way that you mean (where no VRAM allocations happen)? That is, a texture can have VRAM memory overhead that you aren't aware of, so 4 512x512 textures simply can't fit in the space that 1 1024x1024 did. With that, your entire algorithm falls apart.

Yes, you are right. Still, some cases could exist where such a scheme or a similar one could work. As Komat pointed out, some texture types can be translated into other types. If the spec specified a texture memory layout, more opportunities like this could show up. However, maybe some underlying hardware design choices make this kind of specification impossible.



Actually it would be interesting if you could create a large texture in VRAM with uninitialized contents, and then feed it segments of image via texsubimage over time, for people trying to really smooth out the frame to frame cost.


This is my concern exactly. I would also extend this mechanism to VBOs since subloading large meshes is also desirable. Currently, creating an uninitialized buffer or texture is possible (giving a NULL data pointer). glBufferSubData or texsubimage can be used to send small chunks afterwards, but the _whole_ content (for textures at least) is only pushed to VRAM the first time it is used, resulting in a sudden performance drop.
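For the VBO side, the pattern I mean is roughly this (a sketch only; 'vbo', 'totalBytes', 'chunkBytes', 'meshData' and 'offset' are placeholders):

// Reserve the full buffer with no data, then trickle the mesh in over several frames.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, totalBytes, NULL, GL_STATIC_DRAW);

// Per frame:
glBufferSubData(GL_ARRAY_BUFFER, offset, chunkBytes, meshData + offset);
offset += chunkBytes;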

knackered
07-17-2008, 12:49 PM
but the _whole_ content (for textures at least) is only pushed to VRAM the first time it is used, resulting in a sudden performance drop.
Plot a point with it then. A black point. With zero alpha.

Mars_999
07-17-2008, 12:51 PM
I would like to see support in GLSL in GL3.0 for

unpack_4ubyte()

unpack functions for various types... And it would be nice, if possible, to allow filtering other than point filtering.

Rob Barris
07-17-2008, 12:53 PM
Well, instead of a "fallback-texture", one could add a flag indicating that the resolution of a texture is not critical. The result could be that, when VRAM is full, the driver removes part of the texture: not the full mipmap chain, but only as many high-res levels as necessary.


One issue here is that some hardware requires all the mips to be loaded adjacently in VRAM. They aren't fluidly allocatable in disjoint areas of video memory. So trying to make this happen could involve some really gnarly gymnastics in the VRAM allocation on those implementations, as mips "come and go".

Korval
07-17-2008, 01:26 PM
I need to be able to guarantee a consistent 60 Hz. No drops are acceptable, not even for a frame. Dropping whatever level of detail for the next frame will not cut it.

I wasn't talking about dynamically deducing that a frame is running too slowly. I meant testing your application to find out where the places are that cause performance hitches, and then fixing the art to remove those hitches.

BTW, what kind of application is considered fundamentally broken if it isn't running consistently at 60fps? Everyone's pretty tolerant of videogames dropping a frame here and there, though obviously they'd rather not.


If the spec specified a texture memory layout

Absolutely not. The freedom to lay out texture memory as the hardware sees fit is one of the very first basic optimizations that any texture system does. You will never get the specification to require a particular layout, because that will always favor one piece of hardware over another (or it favors no hardware at all).


Actually it would be interesting if you could create a large texture in VRAM with uninitialized contents, and then feed it segments of image via texsubimage over time, for people trying to really smooth out the frame to frame cost.

And how would you do that, exactly? The layout of the texture in VRAM is not available, and certainly will not be made available. Without knowing how the driver is going to play with the bits of your texture before it uploads them, a single sub-image may require a dozen separate DMA operations. And that's not what either of you want.

The only real way to get async uploads is with something like PBO. That is, an honest-to-god feature of the API that says that the uploading will be async, rather than trying to trick the API into doing what you want. Something where you just hand the driver your bits and say, "Put this in the texture, but do so in the background. Make sure it's there if I ever use it."
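For reference, the existing PBO-sourced upload looks roughly like this (a sketch; 'pbo', 'tex', 'pixels', 'size', 'w' and 'h' are placeholders and error handling is omitted):

// Stage the pixels in a pixel-unpack buffer; the TexSubImage call then sources from it,
// so the driver is free to schedule the transfer instead of copying from app memory.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);
void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(dst, pixels, size);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                GL_RGBA, GL_UNSIGNED_BYTE, (const void*)0);   // offset into the PBO, not a pointer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);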

Brianj
07-17-2008, 01:27 PM
A couple of pages ago someone said to list your top 10 issues with OpenGL 3. One that I have is: will Khronos update the OpenGL SDK with tutorials of their own? The links listed in the SDK are to websites that haven't been updated in ages, and I doubt they'll be updated to reflect the changes that OpenGL 3 brings. Also, could they be actual tutorials - not just source code like in the nvidia SDK - but tutorials that explain things with code (NeHe-style tutorials, not just // comments in the source)? Trying to learn just from source is hard, and there is a site floating around on the web that recommends against it.

Also will there be new books (red book / superbible)? I really hope so for the latter. OpenGL Superbible 4 was a godsend.

Komat
07-17-2008, 01:42 PM
BTW, what kind of application is considered fundamentally broken if it isn't running consistently at 60fps?

Something generating image for TV broadcast?

bertgp
07-17-2008, 02:03 PM
BTW, what kind of application is considered fundamentally broken if it isn't running consistently at 60fps? Everyone's pretty tolerant of videogames dropping a frame here and there, though obviously they'd rather not.


I am making a 3D engine for aircraft simulators to train pilots. To be certifiable, you have to stay at 60 Hz all the time. We are often fighting against optimizations/assumptions made for games that hamper us. When you see a texture for the first time in a game, you lose a few fps and nobody cares too much. That is why the IHVs haven't put much effort into providing glitch-free performance. However, it does annoy me when I play a first person shooter and I get killed because of a big texture upload that drops my fps suddenly.



but the _whole_ content (for textures at least) is only pushed to VRAM the first time it is used, resulting in a sudden performance drop.
Plot a point with it then. A black point. With zero alpha.

I am sorry but I do not see what this would solve. The source of the performance drop is the actual transfer of the texture over the PCI express bus. The problem is not the fill rate cost.



The only real way to get async uploads is with something like PBO. That is, an honest-to-god feature of the API that says that the uploading will be async, rather than trying to trick the API into doing what you want. Something where you just hand the driver your bits and say, "Put this in the texture, but do so in the background. Make sure it's there if I ever use it."


The PBO could fill this role. However, the PBO is just a linear data buffer, and when you create a texture using a PBO, the texture must be tiled, so the PBO's contents are copied entirely and tiled into the texture in one big transfer.

One way or the other, there must be a mechanism to let the driver know that we want the texture to _really_ be ready to draw, without a big hit on first use. Mind you, the idea of only sending a texture to VRAM once it is needed is good in general because it could be destroyed before it was ever used (think of a texture only used for a particular area and the player never gets there), but when you know what you are doing and you really really want the texture to be ready, this is a real pain.

Why not do it this way: have an API where I can schedule an upload (via PBO perhaps) and it happens with DMA in the background. Since PCI Express has such insane bandwidth, the bus is idle during most of a frame anyway, except for the commands and state sent to the GPU; that is, unless there are lots of readbacks to the CPU of course. This unused bandwidth could be consumed by a low-priority "driver thread" to do the async transfers. Another command would exist to query whether an upload was completed and maybe how much data was still left to upload. This also requires a way to say "make sure this texture is really ready to draw when the upload is complete".

Ilian Dinev
07-17-2008, 02:15 PM
I am sorry but I do not see what this would solve. The source of the performance drop is the actual transfer of the texture over the PCI express bus. The problem is not the fill rate cost.
This will force the driver to upload the texture to VRAM. Draw a few points with each of your textures, right after loading the textures - and you'll have them all in vram.
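Something like this, as a rough fixed-function sketch (one warm-up draw per freshly loaded texture; 'tex' is a placeholder and the point can land anywhere since it is invisible):

// Bind the new texture and draw one invisible point so the driver moves it into VRAM now,
// instead of stalling on the first real use.
glBindTexture(GL_TEXTURE_2D, tex);
glEnable(GL_TEXTURE_2D);
glColor4f(0.0f, 0.0f, 0.0f, 0.0f);   // black, zero alpha
glBegin(GL_POINTS);
glTexCoord2f(0.5f, 0.5f);
glVertex3f(0.0f, 0.0f, 0.0f);
glEnd();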

Jan
07-17-2008, 02:17 PM
"would it even be fine if that information were no longer current, jan? "

The information would always be valid, because the driver would only remove data, not reload any, so once it is removed there will be no changes and thus the information will stay current.

"One issue here is that some hardware requires all the mips to be loaded adjacently in VRAM."

Yes, I already thought that might be a problem. As I said, it was only a quick idea; I don't think it would really be very useful. Just a bit of brainstorming.

Jan.

Mark Shaxted
07-17-2008, 02:21 PM
How about creating a 'dummy' rendering thread (with wglShareLists) which draws the new textures to a 1x1 FBO to force an upload. Unless you're seriously CPU/GPU bound I doubt you'd see any relevant performance drop.
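Roughly this kind of setup is what I mean (a WGL-only, untested sketch; the FBO setup and thread handling are elided, and all names are placeholders):

// Main thread, at startup: create a second context that shares texture objects.
HDC   dc     = wglGetCurrentDC();
HGLRC mainRC = wglGetCurrentContext();
HGLRC warmRC = wglCreateContext(dc);
wglShareLists(mainRC, warmRC);

// Worker thread: make the shared context current, then for each newly loaded texture
// render one tiny textured draw into the 1x1 FBO and glFinish() to force the upload
// off the main thread's timeline.
// wglMakeCurrent(dc, warmRC);
// drawOneTexturedPoint(tex);   // placeholder helper, not a real API
// glFinish();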

Jan
07-17-2008, 02:27 PM
"However, it does annoy me when I play a first person shooter and I get killed because of a big texture upload that drops my fps suddenly."

Yeah, that happens to me ALL THE TIME ! (no, honestly, that's one of the worst excuses for simply being a bad gamer :-P )

Well, I do not understand why you would need a constant 60 Hz to be certified, but we are not here to discuss such strange requirements. I do however feel your pain, because with GPUs and drivers being targeted towards gaming purposes, glitches could happen anytime, anywhere, without warning, simply because the driver might decide that it has to do something that might take a while (milliseconds), which won't be any problem in a game (sudden drop to 15 FPS, who cares?!). I mean, every driver update could basically "break" your application, although visually nothing changes. Tough luck.

Jan.

CatDog
07-17-2008, 02:32 PM
I do however feel your pain, because with GPUs and drivers being targeted towards gaming purposes
Yeah. Go and buy one of those Quadros if you want your problems solved.

Haha.

CatDog

obirsoy
07-17-2008, 02:37 PM
How about workstation cards (quadro, etc)? Do their drivers perform game oriented optimizations as well?

I guess if they target visual correctness, glitches might be more pronounced (due to different reasons though).

bertgp
07-17-2008, 03:37 PM
This will force the driver to upload the texture to VRAM. Draw a few points with each of your textures, right after loading the textures - and you'll have them all in vram.

Ok sure, but my whole problem is that textures are loaded from disk all the time, with varying formats. I can't upload them all at once when I load a level like in a game. What you propose will indeed make sure they are loaded in VRAM, but at the cost of a full texture upload.



How about creating a 'dummy' rendering thread (with wglShareLists) which draws the new textures to a 1x1 FBO to force an upload. Unless you're seriously CPU/GPU bound I doubt you'd see any relevant performance drop.

Yes, that is a good idea which should work in theory. However, my experience with shared OpenGL contexts has been that there is some kind of global lock taken by a context when it is in use, preventing another context from issuing commands. I tried this two-context idea to compile shaders in a secondary context (also to avoid glitches) before they are needed by the main drawing context. My results were that some OpenGL calls would be stalled for a while when the other thread was busy compiling a shader. Compiling a shader doesn't even require the GPU, and it still stalled the other drawing context. I have very little hope that uploading textures will not cause stalls.



(no, honestly, that's one of the worst excuses for simply being a bad gamer :-P )

How can you aim when you suddenly drop from 20-30 fps to maybe 2-5 fps? The mouse jumps all over the place. Maybe it's just me...



Well, I do not understand why you would need a constant 60 Hz to be certified, but we are not here to discuss such strange requirements.


When simulators are certified, 1h of piloting in the sim == 1h of piloting a real airplane for training purposes. Imagine the savings of using a sim instead of a big 747. Not to mention that crashing a sim is a hell of a lot less painful! Now, in order for this to work, there must be no distracting artifacts. One of these artifacts is a drop in framerate.



Yeah. Go and buy one of those Quadros if you want your problems solved.


Could you elaborate on this a little more? To my knowledge, there isn't any mechanism in the OpenGL spec to force a texture in VRAM nor load it in small chunks to VRAM.

Rob Barris
07-17-2008, 03:58 PM
An application usually lives in one of two regimes, under- or over-committed on GPU resources.

If you are under-committed (you use fewer textures than the card can hold) then problem solved: you preload everything and they should stay warmed up if you keep referencing them.

If you are over-committed as many apps are, you have two ways to go..

1) try to get back to under-committed. :)

2) carefully manage resources to avoid expensive hiccups.

I'm not sure there are general purpose API-level solutions for case 2.

With regards to smoother texture staging, I'm able to use PBO in conjunction with TexSubImage to replace bands of a large texture incrementally over multiple frames. Theoretically one could make a fixed set of large textures and then use subimage techniques to explicitly manage residency of tiles, while making the app appear to be under-committed again. Ultimately if the working set for a frame is too big, there is a problem that may not be tractable.
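Roughly like this, as a sketch of the per-frame scheduling (all the names here, 'frameIndex', 'NUM_BANDS', 'bandHeight', 'texWidth', 'bigTex' and 'pboForBand', are placeholders; filling the PBO itself works as in the earlier snippet):

// Each frame, replace exactly one horizontal band of the big texture from a PBO,
// cycling through the bands so the full texture is refreshed every NUM_BANDS frames.
int band = frameIndex % NUM_BANDS;
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboForBand[band]);
glBindTexture(GL_TEXTURE_2D, bigTex);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, band * bandHeight,          // y offset of this band
                texWidth, bandHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, (const void*)0);  // data comes from the bound PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);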

Korval
07-17-2008, 03:58 PM
Could you elaborate on this a little more? To my knowledge, there isn't any mechanism in the OpenGL spec to force a texture in VRAM nor load it in small chunks to VRAM.

You pointed out that the drivers are doing things that are acceptable for videogames, but not for others. Drivers for gaming cards are optimized for games. Drivers for Quadros and such are optimized for different purposes. They may have better behavior for your application.

bertgp
07-17-2008, 04:21 PM
With regards to smoother texture staging, I'm able to use PBO in conjunction with TexSubImage to replace bands of a large texture incrementally over multiple frames. Theoretically one could make a fixed set of large textures and then use subimage techniques to explicitly manage residency of tiles, while making the app appear to be under-committed again. Ultimately if the working set for a frame is too big, there is a problem that may not be tractable.

What you propose is exactly what we had to do. Some textures are big and are continuously updated (terrain) and some pop up semi-randomly with differing texture sizes and types. We have to waste a bunch of memory to allocate textures of different sizes and formats (RGB, RGBA, various DXTs, etc.). This is one area where surface casting would have been very nice. Some reserved textures might never be used depending on the loaded content, which is a shame.

All this, however, still relies on unspecified and undocumented driver behavior. There is nothing stopping the driver from swapping out of VRAM (even if we are under-committed) one reserved texture slot that hasn't been used in a while, to maximize the amount of free VRAM, causing a glitch when we finally do use it.

So, will OpenGL 3.0 offer a way to take the guesswork (and praying in some cases!) out of the whole memory management issue? I am not necessarily asking for a full-blown all inclusive solution; just some building blocks to build my own memory management scheme.

To answer the other question, no I will not attend Siggraph. I went last year (got the GL3 t-shirt!) and a colleague will go this year.

bobvodka
07-17-2008, 04:38 PM
I'm willing to bet it won't to the degree you want to have it.

Lord crc
07-17-2008, 05:14 PM
To me it sounds like you're trying to screw in a screw using a hammer. Switch to software rendering using a farm of rendering nodes.

Simon Arbon
07-17-2008, 10:58 PM
Well, instead of a "fallback-texture", one could add a flag indicating that the resolution of a texture is not critical. The result could be that, when VRAM is full, the driver removes part of the texture: not the full mipmap chain, but only as many high-res levels as necessary.
This idea could also be extended to LOD textures.
At the moment I do LOD MipMaps by keeping the current texture MipMap in RAM, so that when the object using that texture comes twice as close to the camera I can stream the next level off disk, build the new MipMap by combining the two, load it to OpenGL, and use it as the new object texture.
This would work much better if I could pass the new texture level to OpenGL and ask it to add it to the existing MipMap, preferably asynchronously, so it continues to use the low-res level until the high-res level is fully loaded.

One issue here is that some hardware requires all the mips to be loaded adjacently in VRAM. They aren't fluidly allocatable in disjoint areas of video memory. So trying to make this happen could involve some really gnarly gymnastics in the VRAM allocation on those implementations, as mips "come and go".
It shouldn't be any worse than loading a new higher-resolution MipMap, switching to it, then deleting the old one.
If the hardware requires a specific format then the card's firmware can allocate memory for a new properly formatted MipMap, copy the old MipMap data with 3 VRAM block-copies, then load the new data over the PCI bus.
This has the advantage that half as much data is copied over the bus, we don't need to keep a RAM copy, and we have no glitches as the low-res MipMap is used until the new one is ready.
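Something close to this can be approximated today with the base-level clamp (a sketch; the level numbers, sizes and 'level1Pixels' are made up for illustration):

// Only levels 2..N are resident to begin with; tell GL not to sample the missing levels.
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 2);

// Later, when the object comes closer, upload level 1 (possibly sourced from a PBO)...
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, level1Pixels);

// ...and only once it is in place, widen the range the sampler is allowed to use.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 1);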

Simon Arbon
07-18-2008, 12:08 AM
My OpenGL3 BOF Question #1:

For those of us who live 10,000km away and can't make it to the BOF,
can we please get downloadable audio files of the presentations, as well as the slides, whitepapers, OpenGL3.0 spec. and OpenGL3.1 spec.

Chris Lux
07-18-2008, 04:29 AM
My OpenGL3 BOF Question #1:

For those of us who live 10,000km away and can't make it to the BOF,
can we please get downloadable audio files of the presentations, as well as the slides, whitepapers, OpenGL3.0 spec. and OpenGL3.1 spec.
i second that question!

LogicalError
07-18-2008, 10:11 AM
My OpenGL3 BOF Question #1:

For those of us who live 10,000km away and can't make it to the BOF,
can we please get downloadable audio files of the presentations, as well as the slides, whitepapers, OpenGL3.0 spec. and OpenGL3.1 spec.
i second that question!

i .. uh.. third that question!
Oh, and OpenGL 3.0, is it done yet?

Korval
07-18-2008, 10:29 AM
can we please get downloadable audio files of the presentations, as well as the slides, whitepapers, OpenGL3.0 spec. and OpenGL3.1 spec.

If there's actually a finished spec there, it will be made available, just like the GL 2.1 spec is available. Slides have always been made available, and they don't do whitepapers for BoF presentations.

MikeC
07-18-2008, 07:36 PM
If you are over-committed as many apps are, you have two ways to go..

1) try to get back to under-committed. :)


Is there a reliable way for an app to determine at runtime that it's overcommitted? I don't remember one, but it's a long time since I did much of anything with OGL.

ZbuffeR
07-19-2008, 01:01 AM
a reliable way for an app to determine at runtime that it's overcommitted

I have a way, not sure if it is reliable: use vsync and log each frame time, so you will see if some frames are slower than the refresh rate.
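A minimal version of that logging, as a sketch (Windows timing shown; 'running', 'renderFrame' and 'hdc' are placeholders):

// With vsync on, any frame that takes noticeably longer than one refresh
// interval (~16.7 ms at 60 Hz) shows up as a spike in this log.
LARGE_INTEGER freq, prev, now;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&prev);

while (running) {
    renderFrame();
    SwapBuffers(hdc);                 // blocks on vsync if it is enabled
    QueryPerformanceCounter(&now);
    double ms = 1000.0 * (now.QuadPart - prev.QuadPart) / freq.QuadPart;
    if (ms > 17.0)                    // a bit over one 60 Hz interval
        fprintf(stderr, "slow frame: %.2f ms\n", ms);
    prev = now;
}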

JoeDoe
07-20-2008, 11:30 AM
Do you really think that there is a finished spec that will be made available, with an NVidia hardware implementation, right after Siggraph? What about conformance tests, SDK samples, headers and libs for C/C++ languages and compilers? Or will we just see more stupid slides about the OpenGL 3.0 architecture? If so, maybe a beta version of the spec should be made publicly available to the community to start a discussion?

JoeDoe
07-20-2008, 11:44 AM
Interestingly enough, will we see an explanation of the one-year delay of the new API, like a post-mortem, or will this never be disclosed to the community? What about the future evolution of the OpenGL API, given that MS DirectX 11 and new hardware are on the way? Will the API be redesigned again, or will just a few new extensions be made for things like tessellation etc.? Will the new GLSL be similar to Cg/HLSL, but with pure C syntax? Or will it receive a template-style syntax, such as texture.Load(), stream.RestartStrip() etc.?

Brolingstanz
07-21-2008, 02:30 AM
Personally I don't give a hoot about the delay. Water under the bridge...

(T-23 days and counting to Siggraph.)

Don't Disturb
07-21-2008, 03:01 AM
I'm hearing reports of NVidia marketing people advertising their Quadro cards as OpenGL 3 ready. As OGL3 can work on any hardware that supports 2.1 this may just be PR nonsense, but it could mean that gl3 support will be in drivers very soon. Fingers crossed...

ZbuffeR
07-21-2008, 06:14 AM
"Ready for OpenGL3" does not means that it will contain GL3, but rather it can/will support it. Like any GL 2.1 card, as you pointed out...

pudman
07-21-2008, 10:03 AM
"Ready for OpenGL3" does not means that it will contain GL3, but rather it can/will support it.

Indeed. I've been "Ready for OpenGL3" for years.

Jan
07-21-2008, 02:04 PM
Hell, i'm ready for OpenGL 4 !
And a vacation...

Mars_999
07-21-2008, 09:31 PM
Damn, with a new GL3.0 and new programming model, now I will need a new Red Book! :) Hopefully they have one out day one to match the new GL3.0 standard!

Brolingstanz
07-21-2008, 11:25 PM
There might be a lot less incentive for folks to make the trip to Siggraph if they know they can watch a pod-cast from the comfort of their couches. Probably another good reason not to spill the beans on GL3 just yet, if OpenGL BOF attendance matters at all...

knackered
07-22-2008, 01:40 AM
So if 3 million people wanted to watch it, and they couldn't book a room big enough, then no videos for us? is that your idea of fair, modus?

Jan
07-22-2008, 01:42 AM
Well, I think most of the people that ask for a pod-cast don't have the chance to visit Siggraph at all. You know, some people tend to forget that the world doesn't end at the US borders. Actually there's quite a lot of "world" out there, and it's mostly inhabited by humans, not dragons; some of them are even programmers, using OpenGL...

bobvodka
07-22-2008, 02:30 AM
Wait... so i'm the only one here working in a company full of dragons? o.O

Mark Shaxted
07-22-2008, 02:55 AM
That's why we have St. George!

Brolingstanz
07-22-2008, 03:09 AM
knackered, is it fair that movie theater seats are too small? The last flick I saw was No Country for Old Men, and I cried through the whole thing (not because it was a sad movie mind you).

knackered
07-22-2008, 05:26 AM
a discussion about an open standard should be charged for like a film now? is that the world we're living in?
stop the world, I'm getting off.

pudman
07-22-2008, 07:45 AM
is it fair that movie theater seats are too small?

Come to the US where the movie theater seats are as large as our big asses. Speaking of fat, what is GL3's BMI? So far it seems tall on expectations and light on reality... I guess that means it's in the healthy category!

Don't Disturb
07-22-2008, 08:30 AM
No, I think it's in the "morbidly underweight" category.

Korval
07-22-2008, 10:26 AM
knackered, is it fair that movie theater seats are too small?

You might recall that after films sit in the theaters for a while, they come out in convenient home versions.

Madyasiwi
07-24-2008, 09:21 AM
Have you guys seen this?

http://www.tweaktown.com/news/9858/nvidia_s_next_big_bang_coming_soon/index.html

*Apologies if it has been posted

Mars_999
07-24-2008, 10:01 AM
Nice, looks like September for drivers! WOOT

ZbuffeR
07-24-2008, 12:08 PM
Looks nice, indeed.

Korval
07-24-2008, 12:32 PM
Nice, looks like September for drivers! WOOT

Yeah, but what kind of drivers will they be? I mean, I know it's nVidia; they usually do a good job. But are they going to be solid or buggy?

Mars_999
07-24-2008, 02:15 PM
Come on now, have some faith, besides it's better than nothing at all! ;)

Jan
07-24-2008, 02:15 PM
"Yeah, but what kind of drivers will they be?"

They will be better than no drivers at all! That's much more than most people expected.

knackered
07-24-2008, 03:15 PM
one of the main reasons for the original GL3 design was to make it easier to write stable drivers, so theoretically they should be rock solid.

Korval
07-24-2008, 03:37 PM
one of the main reasons for the original GL3 design was to make it easier to write stable drivers, so theoretically they should be rock solid.

I guess this will be the test of that theory.

Zengar
07-24-2008, 09:12 PM
Nice, looks like September for drivers! WOOT

Yeah, but what kind of drivers will they be? I mean, I know it's nVidia; they usually do a good job. But are they going to be solid or buggy?

Let me think... first API implementation of its kind? Buggy as hell :)

Of course, the driver itself may be easy to write, but there is always the shading language, which adds immense complexity to the whole stability issue...

CatDog
07-25-2008, 07:21 AM
They will be better than no drivers at all! That's much more than most people expected.
Absolutely.

Has anybody noticed that arrow "GF200 <---> Big Bang"? What's going to be revealed seems directly related to GF200 hardware. This could mean that nVidia will support GL3 on GF200 only.

And it would explain the delay: hardware wasn't finished.

But maybe this screen was just a hoax. :)

CatDog

Mars_999
07-25-2008, 05:45 PM
I hope not, and there isn't any technical reason for it: GL3.1+ is supported hardware-wise on the GF8 series and newer, unless Nvidia is just out to sell more G200 GPUs....

ebray99
07-25-2008, 11:23 PM
Well, I suppose it's entirely possible that they could skip previous cards other than the G200, but I somehow doubt they would do this. In the past, they've typically made extensions available on all hardware that supported them (or within 2 or 3 generations... see framebuffer objects and pixel buffer objects). That said, I would bet that they'd expose the new version of GL on at least the 8 series. However, this is all pointless speculation until we see what they end up with. No sense worrying about it until we see what they do.

ZbuffeR
07-26-2008, 02:23 AM
It will depend on whether GL3 brings new features, whereas the initial idea was to only rewrite the API, with the features being those of GL2.1.

bobvodka
07-26-2008, 05:36 AM
unless Nvidia is just out to sell more G200 GPUs....

Well, that would be one reason to buy them... of course for gamers there are so many more reasons to buy the HD4 series ;)

Korval
07-26-2008, 12:29 PM
It will depend on whether GL3 brings new features, whereas the initial idea was to only rewrite the API, with the features being those of GL2.1.

The "initial idea" was to ship in September too.

All I care about is that a GL 3.0 implementation can be done on any GL 2.1 supporting card. And that there is some way to know if a particular feature or level of support is available.

Mars_999
07-28-2008, 05:13 PM
I had to post, just to keep it alive I say, 13 days and counting. It's like X-Mas! ;) I just hope I get what I asked for!

Korval
07-28-2008, 05:54 PM
I had to post, just to keep it alive I say, 13 days and counting.

Thirteen days? What are you talking about? The BoF is on the 13th of August. That's two and a half weeks.

Rick Yorgason
07-28-2008, 07:47 PM
It's only 13 days on the Martian calendar.

Mars_999
07-28-2008, 11:13 PM
It's only 13 days on the Martian calendar.

Yep! ;)

http://www.siggraph.org/s2008/ look at the top the counter is counting down. It says what.... ;)

Jan
07-29-2008, 01:15 AM
Honestly, our hopes are so high that I don't think we can avoid being disappointed in some way. The question is only whether it will be a minor disappointment or a major one.

MZ
07-29-2008, 06:33 AM
On the other hand, morale has dropped so low during the period of silence that any bone they throw us at SIGGRAPH will make us full of enthusiasm again. I mean, those of us who didn't leave :)

Rob Barris
07-29-2008, 09:43 AM
If all goes according to plan there should be a non trivial number of bones thrown to GL developers at the BOF.

dletozeun
07-29-2008, 10:13 AM
Now, let's hope there will be enough developers to eat all these bones... but I heard that they are really hungry. :)

Mark Shaxted
07-29-2008, 11:23 AM
If all goes according to plan there should be a non trivial number of bones thrown to GL developers at the BOF.



Like the 'plans' for last october ? :p

Rob Barris
07-29-2008, 12:09 PM
Alas I can't start posting SIGGRAPH slides here :(

Zengar
07-29-2008, 01:50 PM
Alas I can't start posting SIGGRAPH slides here :(



Why that?

MZ
07-29-2008, 02:06 PM
Because this would spoil the party?

Zengar
07-29-2008, 02:25 PM
Because this would spoil the party?

Oh, maybe I just misunderstood... I thought he could "never" post any SIGGRAPH slides here; if he means here and now, I understand ;)

zed
07-29-2008, 02:37 PM
way to go, having siggraph during the olympics
then again based on stereotypes it may not be so disastrous

CatDog
07-29-2008, 02:38 PM
It'll be interesting to compare the 2008 <s>slides</s> bones to those from 2007 (http://www.khronos.org/library/detail/siggraph_2007_opengl_birds_of_a_feather_bof_presentation).

CatDog

Brolingstanz
07-29-2008, 03:27 PM
As long as they're not going to be slinging chicken bones... ;-)

P.S. Prayers go out to our friends in LA and surrounding areas.

Jan
07-30-2008, 02:51 AM
way to go, having siggraph during the Olympics


Yes, there was such a thing. But the Olympics are every 4 years; when was the last time you got a really shiny new graphics API? 15 years ago?

Apart from that, yes, I fulfill the stereotype: I don't care one bit about the Olympics. Though what's happening around it, now that it is in China, might not be so boring after all.

Jan.

JoeDoe
07-30-2008, 04:54 AM
It will be interesting to compare slides from one year to another, and so on :)

What about the f..ked 30-day approval period?

Carl Jokl
07-30-2008, 05:07 AM
If SIGGRAPH is on the 13th of August, maybe I could have OpenGL 3.0 as just a day-late birthday present from the ARB... pretty please with sugar on top?

I think it was interesting when there was discussion of having some API to access the stream processors of the GPU, to run the kind of code that stream processors are optimised to run efficiently.

This is of some interest to me now, considering I am working with AutoDesk MapGuide, where performance issues relating to spatial querying are quite apparent. I wonder if the stream processors of a GPU could be used to more efficiently run spatial queries on spatial data. After all, spatial operations are what GPUs were designed for, though for rendering as opposed to queries. Occlusion queries, though, are an example of using a GPU to figure out if an object is visible via a 'query'.

MapGuide is pretty flaky to begin with, but one thing we noted was that when using the AJAX viewer, where all the rendering happens on the server, displaying tool tips has very poor performance: when a mouse click is detected on a particular area, a query has to be done layer by layer to see what object, if any, is under the mouse cursor. When dealing with a lot of map layers this can be a real performance problem.

I did think that a GPU might be better suited to figuring out what the first object under a given screen coordinate is, if any. Many 3D applications deal with selection of items in 3D by clicking on something and figuring out what was clicked on.

I wonder if the worlds of databases and 3D graphics are seen as so different that linking them has not been attempted. Or maybe it has been attempted but I just don't know about it. Your average database server, though, is not normally equipped with a powerful GPU. Spatial databases are still pretty specialised and seem to be very much under development and not very mature at this point.

Maybe I am just waffling now but I find this conceptually interesting....though not as fun as rendering graphics certainly.

V-man
07-30-2008, 06:28 AM
....
You probably are better off with something that is more for computing rather than graphics, like OpenCL or Cuda.

knackered
07-30-2008, 03:52 PM
i'm keeping an eye on this thread for news, but every time I see the post number go up and scroll through it's just off-topic waffle.
Oh bugger, now I've just done it.

bobvodka
07-30-2008, 04:40 PM
hey! stop that!

pudman
07-30-2008, 08:25 PM
There's a link on opengl3.org that reads "Click here for all the details". I just want to point out that it's a bit misleading, as I didn't get ALL the details, just those about the SIGGRAPH event.


If all goes according to plan there should be a non trivial number of bones thrown to GL developers at the BOF.

I prefer meat, not bones.
Is that what you've been doing? Eating all the meat and you'll just give the devs the bones?

Boredom strikes again!

Carl Jokl
07-31-2008, 02:43 AM
Perhaps best to check back on here after SIGGRAPH. OpenCL...I will have a read up about that.

I think that OpenGL 3.0 is interesting in the context of another discussion thread about OpenGL becoming a second-class citizen on Windows Vista and potentially beyond. As I understand it, OpenGL 3.0 just brings the core API up to the DirectX 9 standard, with 3.1 being more at the DirectX 10 level.

It is all well and good to complain about Microsoft, of whom I am personally not a fan. I like the idea of a non-Microsoft technology. That extends to everything from Unix to Java and OpenGL. It does not help, though, when the competition is so slow to progress. It also seems the latest graphics cards tend to tout their DirectX capabilities, with OpenGL becoming a footnote by comparison. It might not be so unfair though. OpenGL isn't easily able to demonstrate the new features the graphics card vendor has been working on. It seems also that DirectX is even driving and dictating what features the graphics hardware is to support. I.e. Microsoft says that to be able to support DirectX version ? you must support these features. Graphics card vendors then rush to try and comply.

I think there are some positives about having a guaranteed platform. I wonder, if DirectX were not an exclusively Microsoft thing, whether the general developer community would be opposed to it.

bobvodka
07-31-2008, 03:04 AM
OpenGL2.x is already at the DX9 standard wrt features exposed; the original OpenGL3.0 plan was to clean up and improve the interface. Beyond that no other version numbers were specified and at least two more versions were 'planned': "Longs Peak Reloaded" and "Mt. Evans", with the former being some updates to 3.0 and the latter adding DX10-level features.

The wording of your final sentence is odd... do you mean that you think the general developer community is opposed to DX (simply not true), or that the general developer community wouldn't be opposed to DX on platforms other than Windows and the 360?

Jan
07-31-2008, 03:24 AM
We should have a sticky thread "facts known about OpenGL 3", which will be a must-read for anyone, before posting in this thread. Otherwise every 10th post is some explanation for someone who just assumes this and that.

But now it's too late; if we had known, we could have set up such a thread a year ago.

Zengar
07-31-2008, 08:40 AM
Carl, no offence meant, but you should stop writing and start reading.

Korval
07-31-2008, 10:55 AM
We should have a sticky thread "facts known about OpenGL 3", which will be a must-read for anyone, before posting in this thread.

Unfortunately, the only "fact" we could state is that there are no facts. It's been a year since the last accurate information, and there was a rumor that GL 3 was redesigned. Basically, even stuff they gave us a year ago is in doubt.

We won't actually know anything until SIGGRAPH.

Mark Shaxted
07-31-2008, 11:32 AM
We won't actually know anything until SIGGRAPH.

Not true - we know the new API will, most likely, definitely be called OpenGL 3...

JoeDoe
07-31-2008, 12:11 PM
Probably it is better to redesign OpenGL 3.0, even at the cost of a one-year delay, rather than make it clumsy... But the absence of information is strange for us.

santyhamer
07-31-2008, 02:45 PM
OpenGL 3.0 is imminent (drivers will be presented in September in the Big Bang 2 release), as this image from a leaked internal NVIDIA document shows:

http://www.chw.net/images/breves/200807/1216872708_NVIDIA_Big_Bang_II.png

The last line says OpenGL 3.0

More info on
http://www.chw.net/foro/nvidia-lanzara-big-bang-ii-en-septiembre-t170487.html

Korval
07-31-2008, 04:31 PM
Welcome to last week; glad you could make it...

knackered
07-31-2008, 04:51 PM
that document is dated January 2008.
there's no way they could have known what state GL3 would be in 9 months later. It was probably an if-all-goes-according-to-plan guesstimate based on the original Khronos plan.
...and things haven't exactly gone according to plan.

dor00
07-31-2008, 11:46 PM
Whatever you want to say, people... I can't wait until the 13th :)

Carl Jokl
08-01-2008, 01:56 AM
I have gone away and done some reading on OpenCL, CUDA / CTM GPGPU and such on the spatial query front. OpenCL looks promising, but as it was only launched this year it is very early days.

Perhaps there isn't opposition to DirectX per se, apart from it meaning that cross-platform applications would have to juggle two APIs. It is a shame that we cannot have just one graphics API to work with, but this duality has existed for some time. What are the implications of DirectX for OpenGL 3.0? Microsoft controls the lion's share of the desktop market along with a large chunk of the console market too. I think it is fair to say (though Microsoft would be unlikely to admit it directly) that if they could, they would happily kill off OpenGL. The future of the desktop market has implications for OpenGL 3.0 too. If Microsoft were to remove OpenGL, or just make it work poorly on their platform to the point that it were no longer desirable to use OpenGL on Windows, where would that leave OpenGL 3.0? Certainly there are other platforms like the Mac which use OpenGL, but the economics could well change. There is inherent cost in developing OpenGL drivers. If the use of OpenGL were to diminish, the incentive for the graphics card vendors to develop high-quality OpenGL drivers would also diminish. It is a shame that the effort could not be focused and unified in promoting one standard, but this is just the way things are. Even if OpenGL 3.0 is launched and proves to be a very capable API, do you think it would attract people to use OpenGL rather than DirectX on platforms where both are available, or will things just stay the same? Is DirectX going to be the platform which drives the new features the graphics card vendors support?

I will endeavour to take on board doing more reading and less writing, but it seems a bit annoying that I get chastised for writing when there are so many off-topic posts on here. As regards reading, a common theme is the lack of information and updates. A year on from the original announcement of OpenGL 3, and without updates, it is by no means certain that what was going to be included in OpenGL 3.0 is still going to be there.

I am also aware of the extension mechanism of OpenGL, which means features can be accessed if supported through extensions. This mechanism has drawn some criticism in that it can be messy to code with. Therefore the more that can be included as part of the standard API the better.

Dark Photon
08-01-2008, 04:49 AM
I will endevor to take on board doing more reading and less writing but it seems a bit annoying that I get chastised for writing when there are so many off topic posts on here. As regards reading a common theme is the lack of information and updates....
Don't worry about it. That's only one person's take. Cluelessness is the theme of this thread. :p Welcome!
...just don't try that way off-topic stuff in any of the other boards. Great way to get shot down. :cool:

V-man
08-01-2008, 08:00 AM
Perhaps there isn't opposition to DirectX per se appart from meaning that cross platform applications would have to juggle two API's.

It depends on what you mean by platform:
Wii, Nintendo DS, PlayStation 3, XBox 360, Linux, Apple, FreeBSD, Windows, various cellphones, various PDAs.

Carl Jokl
08-01-2008, 09:29 AM
There tends to be a common theme in terms of platforms. Any Microsoft platform will have some form of DirectX: Windows, Windows Mobile, XBox 360. By contrast, as far as I am aware, only the Windows desktop operating system supports OpenGL, and even that seems somewhat begrudging. In principle, part of the attractiveness of OpenGL in the first place was that it was supposed to be cross-platform. For many platforms it is. Virtually any non Microsoft 3D Accelerated platform uses OpenGL. Unfortunately Microsoft's market share is so big on the desktop that they are pretty much bigger than all the other desktop OSs put together. Phones are a very different scenario, where JavaME and OpenGL ES have a dominant position. Microsoft seem to have the bulk of the business PDA market, I think, though I am not completely sure how Palm-based PDAs are faring in volume vs Pocket PC ones; everyone I know has a Pocket PC based model.

The two platforms are something we would have to live with for the foreseeable future. It does annoy me though in the wider context that Microsoft seem to always put a spanner in the works when it comes to the industry trying to settle on a standard.

When Java starts getting popular MS remove it from Windows and start pushing .Net instead. When there is a move to create a standard open document format Microsoft creates their own and puts it into competition to be an industry standard.

How does this relate to OpenGL 3.0? I just think if the whole 3D graphics industry had just one standard to focus their energy on developing then OpenGL would likely be making faster progress as a platform than it is.

Mark Shaxted
08-01-2008, 10:19 AM
Check out my <s>dimentions</s> dimensions, length, width and for a limited time only..depth!

;)

Korval
08-01-2008, 10:40 AM
It does annoy me though in the wider context that Microsoft seem to always put a spanner in the works when it comes to the industry trying to settle on a standard.

Maybe the industry should actually be trying to settle on a standard.

Microsoft developed Direct3D because they had to. OpenGL, quite simply, was not getting the job done. DirectX was conceived of as a low-level API to various kinds of hardware: graphics, audio, network, etc. OpenGL wasn't low-level enough. And of course, writing a full GL implementation was a stupidly huge undertaking.

So Microsoft developed (purchased) D3D v3.0. Which was God-awful, but it led to the much less awful v5.0. Which eventually begot the not-entirely-unreasonable v7.0, which begot the perfectly useful v8.0. D3D evolved while OpenGL stagnated. D3D's API may have been atrocious and stupidly so, but it improved and it exposed features. OpenGL stayed at version 1.2 for so long, without bringing any of the increasingly large number of extensions into the core. It wasn't even until GL 2.0, a few years ago, that the "standard" got any form of shading language.

D3D, for all its initial ugliness, did a much better job of catering to the needs of its users: game developers. A world bereft of D3D would have been much worse on PC game developers.


When Java starts getting popular MS remove it from Windows and start pushing .Net instead.

Fact Check: Microsoft did not remove Java from Windows.

The Java situation with Microsoft went like this:

1: Sun makes Java.

2: Microsoft implements Java in Visual J++. They like it, but they see some weaknesses in it (and its class library), so they implement unauthorized extensions to it. Not just classes, but basic language stuff.

3: Sun sues Microsoft for those unauthorized extensions.

4: Microsoft decides not to support Visual J++ thanks to the lawsuit. So rather than deal with Sun's unwillingness to compromise, Microsoft starts working on their own equivalent.

5: Microsoft releases .NET.

Throughout all of this, and perhaps more, Java has remained perfectly functional on Windows.

Also, calling Java a "standard" is laughable. Putting "open" in front of "standard" only makes it more so.

Lindley
08-01-2008, 10:48 AM
2: Microsoft implements Java in Visual J++. They like it, but they see some weaknesses in it (and its class library), so they implement unauthorized extensions to it. Not just classes, but basic language stuff.

Embrace and extend. The Microsoft way of making sure as much software as possible only works with their stuff.....

bobvodka
08-01-2008, 10:59 AM
A position I've never seen a problem with, and it's not like they force you to use it... although some of those things were handy to play around with.

Now, maybe if Sun hadn't overreacted and instead looked at what MS had done and discussed things about it, then Java wouldn't suck as much as it does :D

bobvodka
08-01-2008, 11:02 AM
Virtually any non Microsoft 3D Accelerated platform uses OpenGL.

NDS : nope.
Wii : not afaik
PS2 : not that i've seen
PS3 : yeah, but no one uses OpenGL|ES, they use the native lib instead
Linux : yep
OS X : yep

So, out of those platforms only Linux and OS X really use OpenGL and, from a gaming pov, those platforms aren't important (Wii, NDS, PS3, XBox360 are the main ones which count). So, what was your point again? :)

JoeDoe
08-01-2008, 01:08 PM
Probably OpenGL 3.0 will make Linux and Mac OS more attractive to developers, and Linux will be more popular than it is now. Also, GL3 may be a better fit for PS3-like hardware. And what about GL3 on mobile devices? Will it replace OpenGL ES?

knackered
08-01-2008, 01:14 PM
there you go again, making sense and stuff.
on the plus side, the GL extension mechanism gave us the joy of register combiners, while d3d users struggled waiting for the next API rewrite from microsoft.

knackered
08-01-2008, 01:21 PM
while (i==alive)
linuxpopularity = 0.0;

look at me making a computer joke.

MZ
08-01-2008, 01:23 PM
Maybe the industry should actually be trying to settle on a standard.
"A standard" already existed before D3D had been conceived.


Microsoft developed Direct3D because they had to. OpenGL, quite simply, was not getting the job done.
What are you talking about?


DirectX was conceived of as a low-level API to various kinds of hardware: graphics, audio, network, etc.
I don't see how the need for conceiving DirectSound, DirectInput, DirectPlay etc. implies the need for conceiving Direct3D, Mr. logic Mastermind.

These APIs are totally independent of each other. The only thing they share is code convention.

Let me express my strong disbelief that a shared code convention justifies developing a completely new 3D API from scratch, as an alternative to an already entrenched, superior, portable one developed by industry experts in CG.

The picture you're trying to paint is deceitful.


OpenGL wasn't low-level enough.

Sure. That's why nobody licensed those silly Id engines. This OGL thingy just wasn't low level enough for a game...

You keep mentioning this low-level thing. How about being less vague? Please elaborate on what in OGL isn't (or wasn't) low-level enough.


And of course, writing a full GL implementation was a stupidly huge undertaking.

There are many ways to address this problem without the need to develop a completely new, unportable, and (as you admitted) cumbersome 3D API: there used to be MCD, there is OGL ES, and there are extensions.

With Microsoft's muscle behind it, any initiative to improve OGL would have succeeded long ago, if they had only wanted it to. Microsoft went a different way, and it's obvious they had their reasons. But those reasons have nothing to do with the apologetic nonsense you're posting here. So please, abandon those partisan attempts to rewrite history.

PaladinOfKaos
08-01-2008, 01:25 PM
while (i==alive)
linuxpopularity = 0.0;

Wait, does that mean I don't exist?

Mark Shaxted
08-01-2008, 02:26 PM
microsoft = winning;

while( knackered == alive )
{
DoStuff();

if( linuxPopularity > 0.0 )
knackered = dead;
}

microsoft = losing;

Korval
08-01-2008, 02:30 PM
These APIs are totally independent of each other. The only thing they share is code convention.

Not initially, but D3D and DDraw started to get some cross-pollination. Now DDraw doesn't even exist anymore, having been subsumed by D3D.


Sure. That's why nobody licensed those silly Id engines. This OGL thingy just wasn't low level enough for a game...

That worked out OK... for Id. No other game company could tell IHVs, "Hey, we're using this other API. You will support just enough of the API to run our game and run it fast." So that's what IHVs did: they supported just enough of OpenGL to run Id games fast. Anything else, they ignored for a long time.

Imagine if the API were actually a minefield, such that the only safe way through was to do exactly what your competitor did. Why would you use that API instead of one where you can render how you want to?


You keep mentioning this low-level thing. How about being less vague? Please elaborate on what in OGL isn't (or wasn't) low-level enough.

Um, everything? Everything's a thing, right? Basically, almost every reason to make GL 3.0 a complete and total rewrite.

Only more so, because we're talking back in the late 90's/early 2000's. Back when most of OpenGL was software. Back when GL_CLAMP wasn't hardware or didn't do what it said. Back when the only fast path in OpenGL was the path that Id was using, and everyone who did something different was screwed.

What exactly was the time difference between OpenGL getting a standard extension (let alone core feature) for vertex buffers compared to D3D? 2 Years? 3? What about something as brain-dead simple as render to texture?

Basically, early OpenGL tried to do too much. Early D3D tried to do too little. D3D has already converged on its "just right" place; GL 3.0 is supposed to do that for OpenGL.


With Microsoft's muscle behind it, any initiative to improve OGL would have succeeded long ago.

How? The ARB is made of many companies, all of whom have to vote. nVidia had as much power in the ARB as Microsoft, and look how well they've done at improving GL. Not to mention that Apple and Microsoft are competitors, thus making the ARB just another battleground with OpenGL in the balance.

That's another reason for Microsoft to take their ball and go home. If they want to launch a new API revision by date X, they can and nobody can say anything against it. They can't do that with the ARB's slow decision making weighing them down.

knackered
08-01-2008, 02:59 PM
it's hard for me to think in terms of core opengl, as I don't think I've ever written anything using just the core features. Everything has always been vendor or OS specific extensions (VAR and pbuffers, for example). You just got used to it, it wasn't that much of a problem. One thing I do remember though, was that it was nigh impossible to have a balanced discussion about the shortfalls of OpenGL on these forums - everybody seemed evangelistic about how great GL was. Like you linux bods have always been. How things have changed - now the flood gates of years of pent up frustration have been opened.
Let's all try to remember how bad direct3d was before 8.0 - maybe we can still get some mileage out of that. I just have to look at my existing d3d7 abstraction to know that.

V-man
08-01-2008, 03:22 PM
One thing I do remember though, was that it was nigh impossible to have a balanced discussion about the shortfalls of OpenGL on these forums - everybody seemed evangelistic about how great GL was. Like you linux bods have always been. How things have changed - now the flood gates of years of pent up frustration have been opened.

There can only be one God and his name is OpenGL :)

Rob Barris
08-01-2008, 03:24 PM
Yay, August is here. 12 days to the BOF.

Mars_999
08-01-2008, 04:13 PM
Yay, August is here. 12 days to the BOF.


Thank God, then the pain and suffering will end.

Lord crc
08-01-2008, 04:59 PM
So either you get what you want or you commit hara-kiri?

knackered
08-01-2008, 04:59 PM
ever the optimist.
we have the drivers to come yet...the ati drivers, probably written by out-sourced eskimos on minimum wage.

Mark Shaxted
08-01-2008, 05:55 PM
...and will intel even bother?

Ilian Dinev
08-01-2008, 06:42 PM
Intel doesn't bother even with D3D. It's only simple 2D desktop "acceleration" they're after, it seems.

Korval
08-01-2008, 06:45 PM
Supposedly, Intel will be part of the BoF, so I'm hoping.

Also, they've got Larrabee coming up, so they don't really have a choice (at least for D3D).

Carl Jokl
08-02-2008, 03:26 AM
Fixed that spelling error in my signature... :o

Carl Jokl
08-02-2008, 04:08 AM
It looks like quite a can of worms has been opened here. In terms of the introduction of DirectX, I am going on what I read in the OpenGL SuperBible (so the source is biblical). That went over the history of OpenGL on Windows and the different implementations. It talked about the Windows 95/98 era when DirectX was introduced, and about Microsoft's marketing saying OpenGL was only suitable for CAD work and that it lacked the performance to do games. This ignored the fact that OpenGL is an API and the speed is only as good as its implementation. Continuing with the OpenGL SuperBible's version of history, Microsoft's benchmark showed DirectX outperformed OpenGL on Windows for speed. SGI then produced an OpenGL implementation for Windows and it outperformed DirectX.

The version of events was that many games vendors started work on releasing games using OpenGL because they preferred the API. Microsoft responded by removing OpenGL from the Windows 9X family. Those vendors who had started writing their games for OpenGL, suddenly faced with the potential of their games not working on Windows anymore, did a U-turn and used DirectX. The justification for taking OpenGL out of the Windows 9X family of O/Ss was that OpenGL was only suitable for CAD and not for games, so the CAD people would use Windows NT anyway and gamers would be using 9X. In the longer term, once the graphics card vendors started producing their own implementations of OpenGL, it did not matter so much, but by then DirectX had become entrenched in the market.

This version of events could be wrong or biased (given that its source is an OpenGL book). The implication was that many game developers wanted to use OpenGL but Microsoft pulled the rug out from under them. To say it was giving the developers what they wanted, I don't know about that. It may be true or it may not be. Certainly keeping people using a proprietary Microsoft API is beneficial to Microsoft. I agree that there seems to be a lack of activity and innovation in the OpenGL camp, with things getting stagnated. OpenGL 3.0 has held lots of promise.

From a programming perspective I have done the basics of 3D programming using OpenGL and DirectX 9 and to be honest when it comes to the basics there seems to be not that much of a difference. DirectX seems to hide some of the funky rendering context binding that you have to do with OpenGL. In that sense on Windows it is probably a bit easier to use than OpenGL but given that Direct X was developed specifically for the Microsoft platform it is probably not that surprising.

Given the amount of anti-OpenGL, pro-DirectX feeling that some have expressed, I do have to wonder why those people care enough about OpenGL 3.0 to be involved with this site at all. Is it begrudgingly, because they would rather use DirectX but their line of work requires some OpenGL? Do they just want to boo the OpenGL loyalists?

Also, if the only OpenGL-strong platforms are Mac OS, Linux and the PS3 (albeit it was stated that no one uses it on the PS3), are the Mac and Linux communities the only people who would care about OpenGL 3.0?

MZ
08-02-2008, 04:46 AM
You keep mentioning this low-level thing. How about being less vague? Please elaborate on what in OGL isn't (or wasn't) low-level enough.
Um, everything? Everything's a thing, right? Basically, almost every reason to make GL 3.0 a complete and total rewrite.

(...)
The question wasn't "I claim OpenGL is flawless. Prove me wrong". The question was specifically what your meaning of "low-level enough" was, in the context of the motives you hypothesized for Microsoft conceiving of D3D.

So, you drifted away to more general waters. Everybody knows both APIs have their flaws, in different areas. It's known that neither always keeps up with HW. Direct3D isn't a saint here; it has its fair share of late bloomers: user-definable vertex formats (DX8), 3D textures (DX8), 1D textures (DX10), stencil buffer (DX6), scissor test (DX9), slope-scaled polygon offset (DX10), occlusion query (DX9). And by the way, DirectX 11 is probably going to have display lists.

Let's set this one thing straight:

I'm not arguing that OpenGL has no age-old shortcomings. I'm arguing that your little story about Microsoft's motives for D3D, is bullshit.

This one was priceless:
What exactly was the time difference between OpenGL getting a standard extension (let alone core feature) for vertex buffers compared to D3D? 2 Years? 3? What about something as brain-dead simple as render to texture?

Your claim: OGL wasn't "getting the job done" and "wasn't low level enough", so poor little Microsoft just had to roll their own.

Fact: initially, in Direct3D there were no vertex buffers and no render-to-texture.

*sitcom laughter track*



With Microsoft's muscle behind it, any initiative to improve OGL would have succeeded long ago.
How? The ARB is made of many companies, all of whom have to vote. nVidia had as much power in the ARB as Microsoft, and look how well they've done at improving GL. Not to mention that Apple and Microsoft are competitors, thus making the ARB just another battleground with OpenGL in the balance.
Yeah, sure, because Microsoft is just another company. They managed to persuade Intel to adopt AMD's 64-bit ISA, but no, they couldn't possibly have persuaded those pesky nVidia & Ati to adopt a hypothetical render-to-texture extension.

MZ
08-02-2008, 04:50 AM
It looks like quite a can of worms has been opened here.
You must be new here ;)

Don't worry, this little argument is nothing compared to the dark, barbaric past of this forum.

Jan
08-02-2008, 06:34 AM
we have the drivers to come yet...the ati drivers, probably written by out-sourced eskimos on minimum wage.

This comment would be so damned funny, if it wasn't so close to the truth.

Carl Jokl
08-02-2008, 11:44 AM
I hope AMD / ATI manage to turn their fortunes around soon. It has been a pretty poor year for them, albeit it seems like both have been seen as underdogs to their competitors (AMD more so in the processor market than ATI in the graphics market). That said, my last card was NVidia, which seemed a good choice given that my motherboard uses an NVidia SLI chipset. I think I would rather be with ATI again, but Nvidia is better supported on Linux.

Perhaps I had better not back OpenGL. I feel like I have the kiss of death on any platform or technology I back.

I am pretty used to the negative reactions I get for liking Java. I am getting to the point, though, where it doesn't matter what technology or platform you back in this industry; there will be a line of people telling you why you picked the wrong one.

For someone with so much bad feeling towards Microsoft, it seems a cruel irony that I have now ended up as a .Net developer.

I feel a certain need to excuse myself. Many on here are veteran 3D graphics programmers. For me it is a hobby and I haven't progressed as far as advanced areas like shaders. I find 3D graphics quite fascinating. I like the discipline of it. Without the in-depth grounding many others on here have, I am going to get things wrong sometimes... (I am not great at spelling either).

Korval
08-02-2008, 04:34 PM
I do have to wonder from a logical perspective given a lot of anti OpenGL pro Direct X feeling that some have espressed I am not sure why those people really care about OpenGL 3.0 to be involved in this site anyway?

So pointing out that Microsoft had good reasons for making D3D in the face of OpenGL is considered "anti-OpenGL pro-DirectX" now?

If the facts happen to have an anti-OpenGL bias, then so be it.


The question was specifically what your meaning of "low-level enough" was, in the context of the motives you hypothesized for Microsoft conceiving of D3D.

I thought I was pretty clear. Low-level enough that the quality of implementation is fairly irrelevant to the performance of any particular graphics card. Low-level enough that there aren't parts of the API that are implemented in software outside of the thinnest of hardware wrappers.

Basically GLide, only it works on other people's hardware too. In the pre-DX7-8 years, that was really all game developers wanted. Anything more than that was either useless or actively getting in the way.

If we're talking about the actual decision to make/buy/support Direct3D, then it stems from the conditions of the time. And those conditions, for OpenGL, were not good.

MiniGL implementations (ie: enough of OpenGL to run Quake) were the order of the day. Honest-to-God full hardware OpenGL implementations were in far worse condition than they are today (if you think ATi's drivers suck now, look back at OpenGL drivers pre-nVidia). Microsoft needed to get a cross-GPU 3D rendering API that could actually be implemented. OpenGL simply required too much effort to implement, so Microsoft got an API that required far less effort to implement.

Was the ARB going to radically strip OpenGL down to the basics? No; the ARB was barely an organization at the time. It took years to go from 1.1 to 1.2, and from 1.2 to 1.3. Microsoft certainly isn't going to wait for some bureaucracy to get something done.

Once the decision was made to get a Direct3D, the rest was simply the result of that decision. As horrible as D3D 3.0 was as an API, it did the most important thing an API could: it worked. You may have hated coding to the API, but the functions did what they said they would, and hardware rendering happened. That was better than a lot of GL implementations of the day.

If OpenGL implementations had the quality then that they do now, then Microsoft would have less of a leg to stand on with regard to getting D3D. But back then, D3D was a godsend for game developers who were trying to make their games (rather than Carmack's games).


They managed to persuade Intel to adopt AMD's 64-bit ISA

I don't see how Microsoft had much effect on that. It's more like the marketplace as a whole did it. After all, there was an Itanium WinXP version long before an x86-64 version.

bobvodka
08-02-2008, 05:41 PM
Yeah, sure, because Microsoft is just another company. They managed to persuade Intel to adopt AMD's 64-bit ISA, but no, they wouldn't persuade those pesky nVidia & Ati to adopt hypothetical render-to-texture extension.

No, the market did that.
As Korval pointed out there were already Itanium versions of Windows around, however the Itanium chips weren't great (iirc they didn't do 32bit in hardware, requiring software emulation) and were expensive, putting them out of the range of your average home user.

Then AMD gave the industry both barrels of awesome with its x64 chips, which did x86 and x64 in hardware and were considerably cheaper. The market jumped ship, AMD were on top and Intel had to swap over to follow the leader. (Of course, since then Intel have turned things around with the Core series of chips, but at that point everyone was committed and a fair amount of x86-64 was in the wild already.)

Jan
08-02-2008, 05:52 PM
They managed to persuade Intel to adopt AMD's 64-bit ISA

That's indeed a strange "accusation", since Intel never released any mainstream-CPU with that ISA. It was/is really only the Itanium Server CPUs. If anyone persuaded anybody, it was AMD who created an ISA that Intel liked well enough to copy/license, instead of creating their own 64 Bit mainstream desktop CPU.

One might add that the Itanium is a fully fledged 64 Bit CPU, whereas the x64 ISA is really stripped to the bare minimum that is necessary to allow 64 Bit addressing. Some might see this as a clearly inferior design. But some, including Intel (otherwise they wouldn't have copied it), see it as the most clever way to introduce 64 Bit computing (well, addressing). I do not believe that Microsoft needed to actively persuade Intel. Nor anyone else.

Jan.

V-man
08-03-2008, 08:24 AM
If OpenGL implementations had the quality then that they do now, then Microsoft would have less of a leg to stand on with regard to getting D3D. But back then, D3D was a godsend for game developers who were trying to make their games (rather than Carmack's games).

If Carmack's code worked then I don't see why anyone else wouldn't be able to come along and write code that did a similar job.

Perhaps it has more to do with marketing rather than technological merit. People chose DirectX because it offered DirectSound, DirectPlay, DirectInput for their games. Windows was a popular OS and was about to become a gaming platform, so why not choose a Microsoft technology like DirectX?

Or perhaps back in the day, on some graphics cards, there was no GL driver at all while there was some D3D acceleration.

Korval
08-03-2008, 02:41 PM
If Carmack's code worked then I don't see why anyone else wouldn't be able to come along and write code that did a similar job.

And how do you know what Carmack's code was? This was in the days before Quake was open-sourced. Nobody except licensed Quake-engine users (who are behind an NDA of silence) knows how Quake rendering works. Everyone else would basically have to spend weeks or months guessing and checking.


Or perhaps back in the day, on some graphics cards, there was no GL driver at all while there was some D3D acceleration.

And whose fault is that? OpenGL was a very complicated API, 75% of which could not be implemented in the hardware of the day. D3D, by contrast, was just a graphics driver with some new, well defined functions. The D3D hardware abstraction layer did most of the hard work of turning user function calls into driver commands.

That is my point. OpenGL, in those days, was primarily concerned with making the user's life easier. You had features like accumulation buffers, selection buffers, etc. Features that need to be implemented to call it a GL implementation, but were not going to be implemented in hardware anytime soon. D3D, by contrast, was primarily concerned with abstracting hardware. Being as close to the metal as possible while still providing a real abstraction.

Game developers care if an API is obtuse or ugly, just like any other programmers. But if the choice is between ugly and non-functional/minefield, they'll take ugly any day of the week. Ugly can be made pretty with an abstraction, or you can just learn to live with it; non-functional cannot be made functional.

V-man
08-03-2008, 04:12 PM
I don't think there is much of a mystery to it. The Quake 2 renderer is basically glClear, glColor, glBindTexture, glBegin, glVertex, glTexCoord, glEnd. It's not going to take months to figure out how to render 1 textured polygon. If you can do it for 1, you can do it for a few thousand.

Microsoft made this MCD thing available I believe so that vendors could write drivers easily.

Probably what happened back then is that the game software giants were encouraged to use DirectX. It was a full suite for making games and this was a plus. Little game developers just followed along.
This kind of thing happens all the time. Id games makes a Linux version of Quake 3 and then a few other equally large game companies think, "we already have nearly multiplatform code, let's show the world we can do it too."

Korval
08-03-2008, 04:57 PM
I don't think there is much of a mystery to it. The Quake 2 renderer is basically glClear, glColor, glBindTexture, glBegin, glVertex, glTexCoord, glEnd. It's not going to take months to figure out how to render 1 textured polygon. If you can do it for 1, you can do it for a few thousand.

Nobody wants per-vertex colors, right?

If that was the only fast path in OpenGL at the time, then that alone was reason enough to abandon it. Even D3D 3.0 allowed you a per-vertex color.

Not only that, which is damning enough, it doesn't get into the specifics of how Quake renders. How many triangles does it draw in one batch? Implementations may not be able to handle more than this. And things like that.


Microsoft made this MCD thing available I believe so that vendors could write drivers easily.

Except that the MCD wasn't OpenGL; it was a part of OpenGL. It was certainly not sanctioned by the ARB, and it didn't get much play from IHVs.

In short, it was of no value.


Probably what happened back then is that the game software giants were encouraged to use DirectX. It was a full-suit to make games and this was a plus. Little game developers just followed along.

Yes, it was Microsoft's encouragement. Forget the fact that relying on a GL implementation unless you were working at Id was basically suicide for your game. Blame Microsoft.

Jan
08-03-2008, 06:54 PM
"Nobody wants per-vertex colors, right?

If that was the only fast path in OpenGL at the time, then that alone was reason enough to abandon it. Even D3D 3.0 allowed you a per-vertex color."

That doesn't make sense to me.

"Not only that, which is damning enough, it doesn't get into the specifics of how Quake renders. How many triangles does it draw in one batch? "

Take a look at the Q2 code. Back then OpenGL already allowed you to use vertex arrays, but Q2 doesn't do so. It is really only glBegin (GL_POLYGON), vertex, ...., vertex, glEnd. It is so damned primitive! There was simply no notion of "batches" as we are concerned with today. Every polygon was its own batch.
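
For illustration, here is a minimal sketch of that per-polygon immediate-mode pattern. The GL calls are the real GL 1.x API, but the Vert/Poly structs and the draw_world function are made up for the example and only guess at what an engine of that era might have looked like:

#include <GL/gl.h>

/* Hypothetical scene data; these structs are invented for the example. */
typedef struct { float x, y, z, s, t; } Vert;
typedef struct { GLuint texture; float light; int num_verts; const Vert *verts; } Poly;

void draw_world(const Poly *polys, int count)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (int i = 0; i < count; ++i) {
        const Poly *p = &polys[i];
        glBindTexture(GL_TEXTURE_2D, p->texture);
        glColor3f(p->light, p->light, p->light);  /* software lighting folded into a colour */
        glBegin(GL_POLYGON);                      /* every polygon is its own "batch" */
        for (int v = 0; v < p->num_verts; ++v) {
            glTexCoord2f(p->verts[v].s, p->verts[v].t);
            glVertex3f(p->verts[v].x, p->verts[v].y, p->verts[v].z);
        }
        glEnd();
    }
}

No vertex arrays and no state sorting: just one glBegin/glEnd pair per polygon, which is exactly the kind of path early miniGL drivers were tuned for.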


Having said that, I do agree that OpenGL, at that time, might have been more complex than needed and therefore too much work to implement. One thing one might also consider is that the DOS era was only just coming to its end. In DOS it was the ISV's duty to program the hardware. Drivers were not provided by IHVs. However, OpenGL was to be implemented by the IHVs, so that was something they were not used to. D3D implemented the biggest part itself, forcing IHVs to implement only one small subset of the API. Back then they were even lazier than today, so that was certainly one more reason to favor D3D.

In the end it all comes down to politics. MS wanted an API that they had full control over, which is understandable, especially when you intend to make your platform the dominant gaming platform (competing mostly with DOS at that time). Software developers wanted something that freed them from having to write their own software renderer AND was guaranteed to be available on every computer running Windows.

Let's just hope the future is a bit brighter for OpenGL.

Jan.

zed
08-03-2008, 09:39 PM
>>That doesn't make sense to me.

me neither

u could use glintercept or something similar to find out what calls are being made, IIRC there was a document about quake3 that described the fast path

WRT the quake2 polygon method, remember this was in the days before hardware T&L, IIRC i think it used tri_strips + tri_fans which whilst messy were prolly less work as fewer vertices needed to be transformed

Rob Barris
08-03-2008, 10:11 PM
We've been making a lot of progress in speeding up World of Warcraft in OpenGL mode on Windows, and our drawing paths look nothing like Quake's. The meme of "you have to draw just like Id's engines to go fast in GL" seems outdated from where I sit.

pudman
08-03-2008, 10:37 PM
I think Korval was talking about Quake at the time of D3D's introduction. At that time Quake code had not been released and so knowing which path id used (and therefore was the fast path in OpenGL) was not easy. (And GLIntercept didn't exist back in '97)

Regarding Quake 2, it was the last id project that also included a software renderer, and that might have driven the GL usage pattern. Or it was simply targeting the lowest common denominator of hardware acceleration (w/ miniGL driver).


The meme of "you have to draw just like Id's engines to go fast in GL" seems outdated from where I sit.

Yes it is. But we were talking about the reason for DirectX's emergence, and a decade ago it was true concerning GL for games on Windows 95/98.

Korval
08-03-2008, 11:13 PM
That doesn't make sense to me.

Wanting per-vertex colors makes no sense to you? I'm not sure how to respond to that.


u could use glintercept or something similar to find out what calls are being made, IIRC there was a document about quake3 that described the fast path

Quake 3 was long after D3D was made/purchased, and glIntercept was longer still.

And even if you knew how Quake rendered, you still had to follow that path. Even if you wanted to render better than what Quake could do (which apparently meant per-vertex colors was not possible).

Jan
08-04-2008, 03:57 AM
"Wanting per-vertex colors makes no sense to you? I'm not sure how to respond to that."

No, your sentence doesn't make sense. Unless OpenGL did not support per-vertex colors at that time. So please enlighten me whether that was the case back then, because I do not know of any such limitation.

bobvodka
08-04-2008, 05:43 AM
I suspect what Korval is driving at is that if Quake didn't use it then it wasn't going to be fast. So, if Quake didn't do per-vertex colours, anyone who tried to use per-vertex colours would find themselves in a performance world of pain.

bertgp
08-04-2008, 07:49 AM
I suspect what Korval is driving at is that if Quake didn't use it then it wasn't going to be fast. So, if Quake didn't do per-vertex colours, anyone who tried to use per-vertex colours would find themselves in a performance world of pain.

It is also the case for quite a few other OpenGL features. For instance, when I started with GLSL using the Orange Book and the spec, I naively thought that the noise() methods were implemented in hardware. Guess what? Nobody supports a hardware accelerated version of them... Why put in a rendering path which will either be slow as hell (software rendering) or not implemented by anybody?

This, I think, is the kind of feature Korval was referring to. Don't make it hard for developers to find the fast path, and don't put in features that don't work in hardware. Postpone them to a later spec version.

It is however ironic that OpenGL now lags behind DirectX when it comes to supporting new GPU features in the core spec. Hopefully, OpenGL 3 will remedy this...

Brolingstanz
08-04-2008, 08:26 AM
I don't think noise is necessarily a good example of that, in that I don't see any harm or confusion in reserving/exposing keywords in tandem with documentation that clearly states hw support.

Then there's the multi-pronged approach to getting a rendering job done, where the API exposes several usage paths that are seemingly equivalent. Therein lies the potential for confusion and the need for extensive testing. Frankly I don't see how to avoid this completely, but the situation could be improved, and likely will be in GL3, as they continue to whittle this redwood of an API down to the proverbial nub ;-) (9 days left!)

Carl Jokl
08-04-2008, 08:56 AM
I wonder if we couldn't all just be friends.

Perhaps we should take a minute's silence to remember all those who died in the API wars.

V-man
08-04-2008, 09:47 AM
I suspect what Korval is driving at is that if Quake didn't use it then it wasn't going to be fast. So, if Quake didn't do per-vertex colours, anyone who tried to use per-vertex colours would find themselves in a performance world of pain.

The problem I have with discussions like this is that there isn't a cold hard fact. Perhaps if Korval can get some people who chose DirectX back in 1997 here to tell us what problems they were having, then we can get a clear picture.
It is very much possible that the major houses went with DirectX because quite a few of them just needed DirectDraw. Quite a few of them liked the "Direct3D, DirectPlay, DirectSound, DirectInput" package.
Quite a few programmers didn't learn GL at all.

Brolingstanz
08-04-2008, 09:53 AM
Carl, I think we're all friends here.

I love everyone, but then I'm also a peace loving tree hugger in my spare time.

Mars_999
08-04-2008, 10:13 AM
As for the noise(), IIRC HLSL doesn't have support either. So a moot point. As the hardware doesn't support it. IIRC 3DLabs pushed for it, as their hardware had support for it, and possibly Quadro/FireGL cards do?

bertgp
08-04-2008, 10:38 AM
As for the noise(), IIRC HLSL doesn't have support either. So a moot point. As the hardware doesn't support it. IIRC 3DLabs pushed for it, as their hardware had support for it, and possibly Quadro/FireGL cards do?

I didn't mean to imply that HLSL supports it either. My point is that the function is in the spec and that nowhere does the latter mention that (hardware) support for this function is somehow optional. Reserving the keyword would be fine however (as for the #define, etc.)

@Modus : your example is indeed much better than mine.

bobvodka
08-04-2008, 11:02 AM
The GLSL Noise() issue however does bring up a "problem" with OpenGL as it stands; the spec is just that, a spec. It has no notion of how it is going to be implemented and as such there is no 'hardware or not' clause put in; as far as the spec is concerned the final destination for these commands could be a bunch of ants running around an ant farm with a sheep skull in it.

Of course, this is for the most part a load of rubbish in the real world as, aside from Mesa which is an OpenGL 'work-a-like', OpenGL DOES end up running in hardware right now and so such considerations should be given.

Who knows, maybe the new spec will fix this problem as well...

Brolingstanz
08-04-2008, 11:09 AM
Bertgp, actually I was just following through on something Korval brought up earlier.

I'd probably wait and see what happens with GL3 but the noise thing might be the stuff of the suggestions forum. My GLSL compiler doesn't even raise an eyebrow when given noise() calls, just silently produces zeros. By comparison my DX compiler fails to compile and dumps a message to an error blob. Something in the info log would be nice and/or a way to specify a noise requirement or something, something that'll fail compilation or validation on actual hardware...
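
In the absence of such a validation mechanism, about the only option today is a runtime probe. The sketch below is my own suggestion, not anything from the spec: it compiles a tiny fragment shader that writes noise1() output, draws a small quad, and reads a pixel back; a constant-zero result suggests the stubbed-out noise() most drivers ship. It assumes a current GL 2.0 compatibility context with a drawable back buffer, uses GLEW only to reach the entry points, and a real check would sample more than one pixel to avoid false negatives:

#include <GL/glew.h>

/* Sketch only: returns 1 if noise1() appears to produce non-zero output, 0 otherwise. */
static int noise_seems_implemented(void)
{
    const char *src =
        "void main() {"
        "  gl_FragColor = vec4(abs(noise1(gl_FragCoord.xy * 0.37)));"
        "}";
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &src, NULL);
    glCompileShader(fs);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glUseProgram(prog);

    /* Draw a quad over the lower-left quadrant of the back buffer and read one pixel. */
    glDisable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
    glBegin(GL_QUADS);
    glVertex2f(-1.0f, -1.0f); glVertex2f(0.0f, -1.0f);
    glVertex2f(0.0f, 0.0f);   glVertex2f(-1.0f, 0.0f);
    glEnd();

    GLubyte px[4] = { 0, 0, 0, 0 };
    glReadPixels(1, 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);

    glUseProgram(0);
    glDeleteProgram(prog);
    glDeleteShader(fs);
    return px[0] != 0 || px[1] != 0 || px[2] != 0;
}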

Mars_999
08-04-2008, 11:50 AM
http://www.tomshardware.com/news/nvidia-driver-big-bang-geforce,6039.html

WOOT
OpenGL 3.0 drivers are out!

bobvodka
08-04-2008, 12:23 PM
Really? Is it september already?

Leadwerks
08-04-2008, 12:59 PM
Now to see what ATI does.

bobvodka
08-04-2008, 01:45 PM
Brands don't tend to do much of anything... now, AMD on the other hand...

(edit: and yes, I'm being picky, but if technical people can't even be bothered to get the details right then we might as well give up now.)

knackered
08-04-2008, 02:13 PM
for heaven's sake, colour per vertex has always been supported in hardware accelerators - how else do you think they did software lighting in the days before T&L?
I'm disappointed in Tom's Hardware.

From our side, it is sad to see how OpenGL ended up.......these features were introduced with first DX10 GPU in November 2006
That sort of gives the wrong impression of OpenGL, I think. My customers might think they've been short changed in some way, when in reality they've had the benefit of all the latest hardware features - even if we do use OpenGL.
Irresponsible, ignorant mutts.

Rob Barris
08-04-2008, 03:13 PM
Knackered, your quote (link?) implies that said features aren't available on OpenGL, but NVIDIA was shipping them with OpenGL 2.x extensions at just about the same time Vista was getting released with DX10.

The bigger problem isn't that said capabilities weren't available on OpenGL, but rather that they were not available broadly and in a multi-vendor standardized way. Do we agree on that point?

Mars_999
08-04-2008, 03:57 PM
I agree with Rob. I hated not having access to DX10 features on ATI and being stuck using Nvidia hardware, which I have no complaints about, but it still sucks that my code base is only runnable on a GF8 gfx card, since ATI has been absent with GL2.1+ if you will... But hopefully they will just use GL3 to get up to speed, and if I have to recode some of my current code to use GL3, so be it! ;)

MZ
08-04-2008, 04:15 PM
That's indeed a strange "accusation", (...)

Whatever. If I had known this reference would attract this level of people's attention, and turn it away from what I was really trying to point out, I wouldn't have used it. Forget Intel and AMD. Korval's idea of "toothless Microsoft" vs. "obstructionist ARB" is ridiculous enough on its own.

bobvodka
08-04-2008, 04:17 PM
All of which only applies if OpenGL3.0 introduces DX10 features from the off.

Also; why is it that no one gets that it's AMD now, not ATI, honestly is it that hard? or am I just some sort of super genius? (honestly; all evidence from my life points to the latter conclusion...)

knackered
08-04-2008, 04:20 PM
i have to admit to being ignorant about non-nvidia implementations of OpenGL. I got burned severely about 6 years ago by ATI cards, and haven't been back since (I've read nothing but bad things about their GL implementation in the past 6 years, except from the slightly biased Humus). It's been nvidia and the now defunct 3dlabs for me. OpenGL has been more than acceptable on nvidia hardware.
I realise others don't have this luxury.

MZ
08-04-2008, 04:22 PM
The bigger problem isn't that said capabilities weren't available on OpenGL, but rather that they were not available broadly and in a multi-vendor standardized way.

What's even more sad, the majority of G80 extensions were EXT, not vendor-specific as often happened in the past. They were just lying there, ready to be picked up by Ati.

Brolingstanz
08-04-2008, 04:25 PM
Hehe... well some old habits do die hard. Heck I'm still referring to the ARB, though I think it's officially the Khronos OpenGL Working Group now, or something equally long and difficult to remember. ;)

bobvodka
08-04-2008, 04:26 PM
That's indeed a strange "accusation", (...)

Whatever. If I had known this reference would attract this level of people's attention, and turn it away from what I was really trying to point out, I wouldn't have used it. Forget Intel and AMD. Korval's idea of "toothless Microsoft" vs. "obstructionist ARB" is ridiculous enough on its own.

Well, it helps if the example you use is actually true.

The ARB and MS were a different matter; certainly if you go back even a short while it was dogged with companies blocking other companies because things wouldn't work well on their hardware, and if you go back to the days when D3D was just starting to appear you didn't have a few major people pushing it; hell, they probably didn't even think D3D was a 'threat' back then. Maybe if they had seen that 12 years later MS would have withdrawn from the ARB and that, outside of Linux and OSX, OpenGL hardly matters (I consider OpenGL|ES a separate issue because... well, it is a separate API and hasn't been horribly mismanaged), they might have got their collective arses in gear.

bobvodka
08-04-2008, 04:28 PM
Hehe... well some old habits do die hard. Heck I'm still referring to the ARB, though I think it's officially the Khronos OpenGL Working Group now, or something equally long and difficult to remember. ;)


Well, in that case it's same faces, different name; although I'm pretty sure the last Siggraph's slides still had ARB on them.

Still, I know all about habits as I have a mild OCD condition, it just seems like people don't even bother to try to be correct sometimes :|

Korval
08-04-2008, 04:57 PM
All of which only applies if OpenGL3.0 introduces DX10 features from the off.

They don't have a choice on this point. By now, Longs Peak was supposed to be well-established, and Mt Evans would have been released a few months ago. They don't have a choice in releasing this thing with DX10 functionality; I just want to preserve DX9 as the minimum basic requirements.

bobvodka
08-04-2008, 05:21 PM
In that case I hope we are getting OpenGL3.0 and OpenGL3.1 to enforce that difference as, if I use it, I really don't want to have to check more than a version number to tell the two classes of hardware apart.

Mars_999
08-04-2008, 06:47 PM
Honestly I couldn't care less about DX9 support anymore. I want the extensions Nvidia has on the GF8 series; I use them all the time. But there isn't any reason why they couldn't have support for DX9 features in GL3. Why couldn't it be backwards compatible? I want DX10 features to be included with GL3. Now if they want to emulate DX9 features, go ahead and do that also. Everything you can do in DX9 you can do in DX10.

Leadwerks
08-04-2008, 07:50 PM
I know NVidia knows they can't support OpenGL 3 without ATI's cooperation, but I sure would like to hear something from ATI as well on this.

Stable drivers and standard support for Shader Model 4 extensions = me happy.

Korval
08-04-2008, 08:05 PM
In that case I hope we are getting OpenGL3.0 and OpenGL3.1 to enforce that difference as, if I use it, I really don't want to have to check more than a version number to tell the two classes of hardware apart.

I'd be happy if the design that they were talking about comes to fruition. That is, an object-based design where, if the object constructs correctly, the feature is supported in hardware, and if it doesn't, it isn't. A quick compilation of a geometry shader should be sufficient to test that.
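
For what it's worth, here is a rough sketch of that kind of test as it might look today, under the assumption that the context exposes the GL 2.0 entry points plus the GL_EXT_geometry_shader4 enums from glext.h (in practice, simply scanning the GL_EXTENSIONS string does the same job):

#include <GL/glew.h>  /* any loader exposing GL 2.0 + EXT_geometry_shader4 will do */

/* Sketch: detect DX10-class hardware by compiling a trivial geometry shader. */
static int has_geometry_shaders(void)
{
    const char *src =
        "#version 120\n"
        "#extension GL_EXT_geometry_shader4 : require\n"
        "void main() {\n"
        "  for (int i = 0; i < gl_VerticesIn; ++i) {\n"
        "    gl_Position = gl_PositionIn[i];\n"
        "    EmitVertex();\n"
        "  }\n"
        "  EndPrimitive();\n"
        "}\n";
    GLint ok = GL_FALSE;
    GLuint gs = glCreateShader(GL_GEOMETRY_SHADER_EXT);
    if (gs == 0)            /* enum rejected: the extension isn't there at all */
        return 0;
    glShaderSource(gs, 1, &src, NULL);
    glCompileShader(gs);
    glGetShaderiv(gs, GL_COMPILE_STATUS, &ok);
    glDeleteShader(gs);
    return ok == GL_TRUE;
}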


I know NVidia knows they can't support OpenGL 3 without ATI's cooperations

Feh. nVidia has been supporting OpenGL without ATi's cooperation before. Like, constantly. So what would it matter for GL 3.0?

Leadwerks
08-04-2008, 10:54 PM
Extensions are one thing, but nobody is going to move to an NVidia-only API.

Korval
08-04-2008, 11:59 PM
Extensions are one thing, but nobody is going to move to an NVidia-only API.

I was referring to the fact that ATi drivers are not terribly trustworthy.

Carl Jokl
08-05-2008, 01:22 AM
I don't know what the real problem is with referring to ATI as ATI rather than AMD. I know AMD bought them out, but they still maintain the ATI branding and identity, and as long as they continue to do so I will call them ATI.

Jan
08-05-2008, 01:47 AM
I agree with Carl.

Also, "back then" i actually asked, on this board, how we should call the ARB now, after Khronos took over, and someone from the ARB responded, that they are still called the OpenGL ARB. Referring to them only as the Khronos group would be imprecise, since they are not OpenGL only. Just as with AMD/ATI, when you refer to the manufacturer of graphics-cards, you should be specific and call them ATI. When you talk about the company in general, call them AMD.

Jan.

bobvodka
08-05-2008, 02:42 AM
Except AMD are the manufacturer of the graphics cards; ATI is just a brand. It would be like referring to NV as 'GeForce' or 'NForce'; it's nonsensical.

bobvodka
08-05-2008, 02:48 AM
In that case I hope we are getting OpenGL3.0 and OpenGL3.1 to enforce that difference as, if I use it, I really don't want to have to check more than a version number to tell the two classes of hardware apart.

I'd be happy if the design that they were talking about comes to fruition. That is, an object-based design where, if the object constructs correctly, the feature is supported in hardware, and if it doesn't, it isn't. A quick compilation of a geometry shader should be sufficient to test that.


Tbh, this is one of those areas where I personally feel that being able to check a simple version number would be better. It seems a waste to do a shader compile just to check if we are on DX10 or DX9 hardware.
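
A version check of that sort is indeed only a few lines; a minimal sketch, assuming the usual "major.minor" prefix that GL version strings start with (the helper name is just made up for the example):

#include <GL/gl.h>
#include <stdio.h>

/* Sketch: parse the "major.minor" prefix of the version string.
   Requires a current context; returns 0 if the string can't be parsed. */
static int gl_version(int *major, int *minor)
{
    const GLubyte *s = glGetString(GL_VERSION);
    if (s == NULL)
        return 0;
    return sscanf((const char *)s, "%d.%d", major, minor) == 2;
}

/* e.g. if (gl_version(&maj, &min) && maj >= 3) { ... DX10-class path ... } */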

Mars;
As much as you and I might not care about DX9 hardware, the truth of the matter is there is still a lot of it out there, so if GL3 wants to see any adoption from a commercial pov then it's going to need to support it. Plus, as I see it, if you can get a DX9 path effectively 'free' you might as well take advantage of it.

knackered
08-05-2008, 02:53 AM
Sorry, I didn't realise we were taking bobvodka seriously. Bob, I think you're due your medication.

CatDog
08-05-2008, 03:17 AM
Extensions are one thing, but nobody is going to move to an NVidia-only API.
In fact I did something like this two years ago. I even stopped testing on ATI hardware. Of course, this doesn't work for games and mainstream software, but if you're doing some specialized application for a niche market, you simply tell your customers to use nVidia only.

And my impression is that most of the customers agree on this, since ATI stands for malfunction and missing features.

So I'm sharing knackered's ignorance concerning OpenGL and ATI hardware (or whatever you might call it).

CatDog

ffish
08-05-2008, 03:36 AM
Everything you can do in DX9 you can do in DX10.

Not triangle fans. I'm currently writing a DX10 renderer to emulate our rendering interface (which currently uses OpenGL). We use(d) triangle fans in a few places, so it's a PITA having to rewrite them as triangle lists.

I'm sure there are other differences too but that's the most obvious example that kept me busy recently.
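
The rewrite ffish describes is at least mechanical; here is a hedged sketch of expanding a fan's index list into an equivalent triangle list (the 16-bit index type and the function name are just assumptions for the example):

#include <stddef.h>

/* Expands a triangle fan fan[0..fan_count-1] into a triangle list.
   'out' must have room for 3 * (fan_count - 2) indices; returns the number written. */
static size_t fan_to_list(const unsigned short *fan, size_t fan_count, unsigned short *out)
{
    size_t written = 0;
    for (size_t i = 1; i + 1 < fan_count; ++i) {
        out[written++] = fan[0];       /* every triangle shares the fan's first vertex */
        out[written++] = fan[i];
        out[written++] = fan[i + 1];
    }
    return written;
}

The winding matches the usual fan convention, at the cost of roughly tripling the index count.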

Carl Jokl
08-05-2008, 04:00 AM
This could work out pretty well for me if it goes according to plan. In theory I will have access to OpenGL 3.0 by some time next month. One thing I wonder is how quickly developer resources will be available for OpenGL 3.0, as it is all well and good having access to the platform, but without some documentation on how to use it I would still be a bit stuck. Hopefully in the longer term I will be able to replace my trusty OpenGL SuperBible with a new edition which covers OpenGL 3.0 when one becomes available, but in the meantime I will have to rely on the internet. It will be interesting to see how it compares, as this was going to be quite an overhaul of the API from what I had heard. Potentially it may be object oriented (though I am not sure to what degree).

bobvodka
08-05-2008, 04:05 AM
And my impression is, that most of the customers agree on this, since ATI stands for malfunction and missing features.

If we are talking OpenGL, then sure, but in general my GT8800 has given me more trouble in the last 7 months than ANY ATI card did from the 9700 to the X1900, which is why I'll be jumping ship back to AMD for the HD4870, as in my experience AMD drivers have been more stable.

Stephen A
08-05-2008, 04:15 AM
And my impression is, that most of the customers agree on this, since ATI stands for malfunction and missing features.

If we are talking OpenGL, then sure, but in general my GT8800 has given me more trouble in the last 7 months than ANY ATI card did from the 9700 to the X1900, which is why I'll be jumping ship back to AMD for the HD4870, as in my experience AMD drivers have been more stable.
I share the same experience: my nvidia 6800 and 7600 cards have given me trouble (driver updates generally broke as much stuff as they fixed), while my ati 9600 and X1950 have been much better. The difference was especially evident on Vista, where it took close to a year for nvidia to provide some form of stability.

That said, the nvidia cards did (and still do) have better support on Linux, although even that is starting to change now.

It will be interesting to see who will provide stable GL3 support first. It's a fairly safe bet that intel will trail far behind the other competitors (they *still* don't support GL2.0), but hopefully we won't have to wait 6-12 months to see anything useful.

Carl Jokl
08-05-2008, 04:20 AM
The ATI / AMD naming thing is a bit more complicated in that ATI was a manufacturer in itself. In a way that component of the organisation still exists as part of AMD. We don't refer to GeForce as a manufacturer, but ATI is not equivalent to that; the equivalent to GeForce is Radeon. AMD has not wanted to trample all over the ATI identity, even though that team and division is part of AMD. I like to think of ATI as a division of AMD, a child company or component. I can't really feel right calling them AMD.

Carl Jokl
08-05-2008, 04:41 AM
I think the NVidia vs ATI (or AMD) thing is just not going to be an argument which has a conclusion. My experience has been that each side has problems. I have had problems (but quite different problems) with both of them. I have generally preferred ATI, though I have felt like the only one sometimes.

I am personally ready for a continuation of the Ruby saga. Ruby...where are you?

(to the music of Alice Cooper's 'Poison')

ATI's... Mascot...
You like... to kick butt...
Taking down... Optico...
Mountain boarding... Through Snow...

I want to love you but I know you're not real
I want to hold you but there's nothing there to feel
I want to kiss you through the monitor's seal
I want to taste you but your lips are shaded POLYGONS!!!
You're polygons rendered with texture and light!!
You're POLYGONS!!!
I don't wanna leave ATI!

CatDog
08-05-2008, 04:52 AM
It will be interesting to see who will provide stable GL3 support first.
Yes. Maybe my opinion about ATI is a little outdated... if it turns out that they (AMD) make it better, I will not hesitate to act accordingly. But for me, nVidia starts this race from the pole position.

CatDog

bobvodka
08-05-2008, 06:21 AM
Well, AMD have made improvements and a fair few extensions did appear of late, but not the DX10-level stuff. My theory on this is that someone decided that with GL3.0 etc. 'coming soon' the development effort to make the EXTs for the current driver wasn't worth it, certainly when hardly anyone in the commercial world uses OpenGL, holding off for a GL3 release instead.

The Vista OpenGL driver from ATI and then AMD was better than the legacy XP version of it, so they were making improvements.

But as you say, we'll see.

Carl Jokl
08-05-2008, 06:49 AM
Will OpenGL 3.0 be available for XP / XP x64 from the outset, or would I have to use Vista to get it? XP is soon to be unsupported, but I don't want to switch if I can help it.

Korval
08-05-2008, 10:38 AM
Will OpenGL 3.0 be available for XP / XP x64 from the outset, or would I have to use Vista to get it? XP is soon to be unsupported, but I don't want to switch if I can help it.

XP may be unsupported by Microsoft, but that doesn't mean that other people can't support their parts of it.

nVidia and ATi still have Win2000 drivers. I doubt they will require anyone to upgrade to Vista just to get GL 3.

Rick Yorgason
08-05-2008, 11:06 AM
Except AMD are the manufacture of the graphics cards, ATI is just a brand. It would be like referring to NV as 'GeForce' or 'NForce'; it's nonsensical.
You're being silly. ATI is a subsidiary of AMD. In other words, ATI is still a company, and AMD is their parent company.

It's kind of like how Bungie used to be a subsidiary of Microsoft, but everybody kept referring to the studio as Bungie. If anything, it's more accurate that way!

Leadwerks
08-05-2008, 11:39 AM
ATI/AMD/The Ruby People/Those guys are pretty good about fixing driver bugs. I got something fixed just in the last couple weeks. You need to submit a demo showing the error and a complete report, and they will get right back to you.

Let's compromise and call them ATD.

Mars_999
08-05-2008, 01:02 PM
People, let's drop it. Who gives a crap what AMD/ATI is called? Use either one; they don't care and neither should we! ;)

Really, DX10 has no triangle fans? Didn't know that. Tells you how much coding I have done in DX10!

I bet GL3 will still have fans, no pun intended. :)

And as for GF8 series stability, I honestly haven't had any issues on Vista64 at all. I did find one bug with glGenerateMipmapEXT and they are going to fix it with the next driver release. And I use a lot of the newer DX10 extensions.

Correct Bod, there are a lot of DX9 cards out there, but the DX10 cards' market share I bet would surprise you. IIRC Nvidia has tens of millions of GF8 cards out.

Seth Hoffert
08-05-2008, 01:31 PM
I can attest to having few issues as well. I've reported two bugs regarding the geometry shader to NVIDIA and one was fixed promptly, and the other is being inspected.

tanzanite
08-05-2008, 01:36 PM
... which is why I'll be jumping ship back to AMD for the HD4870, as in my experience AMD drivers have been more stable.

That stability has been my experience too - unfortunately, it can't do [censored]. It seems that stability is easy to achieve when it doesn't do anything (that I care about). And that is why I have (atm completely) abandoned ATI.

... sort of funny.

bobvodka
08-05-2008, 03:27 PM
I should point out, I don't just mean OpenGL stability; I mean NV's drivers have bluescreened x64 Vista two or three times while doing nothing more than displaying the output from my TV card. Something ATI/AMD never did, and previously only done by Creative's first two attempts at drivers (it's been fine since June last year).

bobvodka
08-05-2008, 03:32 PM
Correct Bod, there are a lot of DX9 cards out there, but the DX10 cards' market share I bet would surprise you. IIRC Nvidia has tens of millions of GF8 cards out.

I'd be very VERY surprised if that was the case; the last Valve hardware survey (http://www.steampowered.com/status/survey.html) shows a combined total of 8800, 8600, 8500 and 8400 cards of around 310,000, and the 8600M and 8400M at around 26,000. Granted, the 8800 had the highest single percentage of the results, but even that pales when compared to the 'unknown' category, not to mention the total of DX9 cards out there.

Korval
08-05-2008, 08:02 PM
I'd be very VERY surprised if that was the case

Tens of millions is clearly hyperbole, but modern DX10-capable cards that are actually quite fast are pretty cheap. I wouldn't be surprised if that number went up by a good 50-100%.

That's not to say that GL 3.0 should forget about DX9-level cards.

ffish
08-05-2008, 08:56 PM
Really, DX10 has no triangle fans? Didn't know that. Tells you how much coding I have done in DX10!

I bet GL3 will still have fans, no pun intended. :)

I seem to remember reading that there's a reason why DX10 excluded them (makes sense that there would need to be a good reason). Google doesn't turn up much, but apparently GPUs don't deal with them as well as they might other primitives. Maybe an IHV can comment on why that might be. So I wouldn't be surprised if GL3 did drop them, since modern APIs tend to just mirror the hardware's capabilities.

Mars_999
08-05-2008, 09:56 PM
Yesterday during their conference call NVIDIA's CEO Jen-Hsun Huang announced his company has shipped more than 2 million GeForce 8800 GT graphics cards in just four months of production.



And that was a while ago! And that was just the 8800 alone; it doesn't include GF8400, 8500, 8600, 9500, 9600, 9800 cards, and all of ATI's DX10-class hardware. All in all I think there are plenty of GPUs out there now that are capable of using GL3.0 as a baseline. And as for no support: come on, people, for $50 you can get a cheap DX10 card, and most likely if you are that cheap you probably couldn't care less whether the app or game you are playing runs on GL3 for now anyway.

Simon Arbon
08-05-2008, 10:27 PM
Getting back to OpenGL3 for a moment, the siggraph 2008 schedule (http://www.khronos.org/news/events/detail/siggraph_2008_los_angeles_california//) implies that the OpenGL3.0 Specification will be released Wednesday, 13 August at 6:00pm (California time).
But nobody seems to be prepared to say how many mountains will be included in this release (i.e. the cut-down Longs Peak, or the full specification complete with Mt Evans).
Personally I will be very disappointed if this isn't the full specification with all the advanced features.

Addison-Wesley are announcing a new range of OpenGL books at the BOF, and it would be pretty pointless to publish books for LP, as nobody would buy them; everyone would wait for the full Mt Evans version before buying a complete new set of OpenGL reference books.
The GLSL 1.3 specification is also being released, so time to order a new copy of the orange book, I suppose.

Someone said that we would probably have to wait a long time for drivers after the spec was released, but the IHVs must have had the final spec for at least the 1-month final review period, and have probably been experimenting with incomplete alpha drivers for months, so I don't see why we shouldn't get a beta version at least.
It's only going to take a few days to read the spec, and then I want to start writing and testing some OpenGL3 code.
Interestingly, the GeForce driver Release 180 seems to be timed for release close to Siggraph.

The ARB Ecosystem group is giving a talk, so we hopefully will also be getting the promised SDK, documentation, sample programs, libraries, tools and conformance tests soon.

There is also a mention of the impact of OpenCL on OpenGL, but aren't these meant to be completely separate APIs?
Perhaps OpenCL and GLSL are being combined into a single language?
Microsoft has a "DirectX 11 Compute Shader" session on Thursday directly before the OpenCL session from Apple, so OpenCL could be OpenGL's compute shader rather than something separate.

I also noticed that Intel are having several sessions about Larrabee on Thursday, including one called "Fully Programmable Graphics".
Could this possibly mean that EVERY part of the pipeline is going to be programmable in the near future? Even the clipping and rasterisation stages?

bobvodka
08-05-2008, 10:50 PM
I also noticed that Intel are having several sessions about Larrabee on Thursday, including one called "Fully Programmable Graphics".
Could this possibly mean that EVERY part of the pipeline is going to be programmable in the near future? Even the clipping and rasterisation stages?


For Larrabee; yes.
Afaik the only graphics-specialised hardware will be for texture sampling. Aside from that, Larrabee is basically a group of vector processors. Check out anandtech.com for some initial details.

Carl Jokl
08-06-2008, 12:32 AM
On the O/S front, I believe that last I heard 70% of the desktop market was still running XP. I think Vista was only 10%, but these figures could be wrong or outdated. There are plenty of people I know who used Vista, hated it, and so downgraded to XP, some with new computers. Once XP goes out of support there will be a bit of a strange position where Vista will be the only supported client O/S (apart from maybe the XP variants like Media Center Edition or x64). As for the gaming market, I know a lot of PC gamers have hated Vista because the same games run slower.

The jump from Windows 9X to XP or Windows 2000 to XP was a lot more positive. It may have been a bit slower going from 9X to XP, but XP was rock solid compared to the 9X family (sometimes I can forget how much more unstable they were). Going from 2000 to XP I believe there was actually a bit of a speed increase, as XP had been tweaked a bit over 2000 to make it go a bit faster.

Going from XP to Vista from a gaming perspective, Vista just seems to use copious amounts of RAM. Even if Aero is turned off it still seems to run slower than XP. Stability-wise there have been problems, though perhaps not all down to Vista itself; the new driver architecture, particularly for graphics, has meant lots of problems as the more immature drivers cause Vista to crash or fall over. At one point at work on my Vista machine I was getting the BSOD about once a day.

I think in the end some will move to Vista only because they have to and not because it is percieved as a benerfit. I don't know that new games have pushed a requirement for DirectX 10 because perhaps this would be unwise since many of their target audience are avoiding Vista.

As for me personally. I was going to migrate at some point but at this rate I might just skip Visa and Migrate from XP to Windows 7.

I think XP offers perhaps in my opinion the best OpenGL development platform as in XP it is kept on pretty equal footing with DirectX. The Aero interface on Vista from what I have read on other threads on this forum can have issues interoperating with OpenGL or OpenGL performance can be poorer than it should be.

zed
08-06-2008, 01:26 AM
Someone said that we would probably have to wait a long time for drivers after the spec was released
IIRC, from ogl1.2 to 2.1 nvidia have always had capable drivers within the month, so based on their track record I'd say we're very likely to see something not long after Siggraph.


I also noticed that Intel are having several sessions about Larrabee on Thursday, including one called "Fully Programmable Graphics".
Could this possibly mean that EVERY part of the pipeline is going to be programmable in the near future? Even the clipping and rasterisation stages?
I've read a bit about Larrabee over the last couple of days; it seems they want to do what I proposed for the next version of OpenGL (+ got told in this thread it was a bad idea).
Though I don't have much faith WRT Intel; I still remember when they were telling all and sundry that the i740 was gonna revolutionize graphics hardware, + we all know how well that went :)

Stephen A
08-06-2008, 01:40 AM
Larrabee is going to be an interesting experiment. If it gets any traction, it will bring a revolution to graphics programming: the return to software rendering, as predicted by Tim Sweeney (of Unreal fame) almost a decade ago!

August 13, then - right when I won't be having any internet access for a week. Damn... :)

Just hope the .spec files are released soon after, to bring Tao and OpenTK up to date before the new drivers start shipping.

PkK
08-06-2008, 02:12 AM
Getting back to OpenGL3 for a moment, the siggraph 2008 schedule (http://www.khronos.org/news/events/detail/siggraph_2008_los_angeles_california//) implies that the OpenGL3.0 Specification will be released Wednesday, 13 August at 6:00pm (California time).


From the information on the schedule it could just be another update on the current state of their specification draft. Something like "GL3 will do X, Y and Z", "the specification is nearly complete", etc. But after the long silence that would still be some progress (maybe combined with the promise to give an update on the GL3 spec every Siggraph from now on).

Philipp

Jan
08-06-2008, 02:14 AM
August 13, then - right when I won't be having any internet access for a week. Damn... :)


Me too :-(

Mars_999
08-06-2008, 06:24 AM
Larrabee will be awesome if they can pull it off. Why? Well, you will be able to code it however you want to, and no more waiting around for GF10 or GF15 to get feature X or Y: if Larrabee doesn't have it, code it up on the C/C++ side of the Larrabee hardware, drop it back into GL3, and voila, you're running. Your imagination will be your limitation, not the hardware. The only reason to get a new gfx card after Larrabee will be that you want more FPS. I love this, and the Larrabee software side will be pure C/C++, which is great considering MOST coders who know GL know C or C++. From what I read this could be the biggest thing for guys like Sweeney or Carmack since the first time they coded 3D games, and could usher in a new era of graphics. Great time to be a gamer/graphics coder!

Rob Barris
08-06-2008, 06:42 AM
Getting back to OpenGL3 for a moment, the siggraph 2008 schedule (http://www.khronos.org/news/events/detail/siggraph_2008_los_angeles_california//) implies that the OpenGL3.0 Specification will be released Wednesday, 13 August at 6:00pm (California time).


From the information on the schedule it could just be another update on the current state of their specification draft. Something like "GL3 will do X, Y and Z", "the specification is nearly complete", etc. But after the long silence that would still be some progress (maybe combined with the promise to give an update on the GL3 spec every Siggraph from now on).

Philipp


I see the BOF speaker list is up. Between BartholdL, JeremyS and BillLK's segments I expect attendees to have a much clearer picture of OpenGL 3.0 before DanielK and I get to chat.

http://www.khronos.org/news/events/detail/siggraph_2008_los_angeles_california/

We should reserve future events for higher revision numbers only!

Mars_999
08-06-2008, 07:09 AM
Hey Rob, is glFX the shader file framework that everyone has been waiting for, like .fx files for DX but for GL?

Rob Barris
08-06-2008, 07:32 AM
Sorry, I don't have any details I can post here other than what I can find by googling:

http://www.khronos.org/glfx/

http://www.khronos.org/news/press/releases/new_glfx_and_composition_working_groups/

Brolingstanz
08-06-2008, 10:37 AM
Great time to be a gamer/graphics coder!

you can say that again... especially if you grew up on pong ;-)

*beep..doop...... beep...*

knackered
08-06-2008, 10:46 AM
if they've been wasting time working on a glfx framework I'll scream.
API first, fluff for the hobbyists second.

Korval
08-06-2008, 12:04 PM
if they've been wasting time working on a glfx framework I'll scream.

You say that as if it would be the same people working on both.

I would also point out that the press release from the glFX people came out in March of last year. So they've had plenty of nothing to do while GL 3.0 was delayed.

Rob Barris
08-06-2008, 02:22 PM
I would say that the glFX effort hasn't affected the GL 3.0 work in any measurable way, they have been very active but also very independent. I'm looking forward to seeing what they have to talk about at SIGGRAPH next week.

knackered
08-06-2008, 02:47 PM
So the job was given to the inexperienced fresh graduates, I take it? :)
I suppose if it hasn't been discussed at any ARB meetings, all is well.

magwe
08-06-2008, 02:49 PM
I've read a bit about Larrabee over the last couple of days; it seems they want to do what I proposed for the next version of OpenGL (+ got told in this thread it was a bad idea).


It's not a bad idea to expose the full hardware to developers. But it just doesn't make sense to do it in OpenGL. OpenGL is for rasterized graphics. If you want to do something else you should use some other interface to the hardware.

bobvodka
08-06-2008, 03:12 PM
I've read a bit about Larrabee over the last couple of days; it seems they want to do what I proposed for the next version of OpenGL (+ got told in this thread it was a bad idea).

No, you said (paraphrased) the next version of OpenGL should just let you play directly with the hardware.
Intel, on the other hand, are going to support D3D, OpenGL AND their own method of talking to the hardware via C or C++ (or whatever, I guess) code. That's a separate API, which no one was against; the argument was (and still remains) that with current and near-future hardware, just saying 'here is how the hardware works, go write your own APIs' is a bad idea, something even Intel aren't going to do, since they are going to be writing the D3D and OGL backends for us. If anything this is simply a more advanced form of OpenGL+CUDA, more advanced because unlike NV and AMD chips it's ALL programmable.

knackered
08-06-2008, 03:12 PM
Even in the old days of software renderers, when we had complete access to the device doing the drawing (the CPU), we all used (and wrote) APIs to abstract the details. Such as the lovely BGI from Borland.

magwe
08-06-2008, 04:13 PM
Even in the old days of software renderers, when we had complete access to the device doing the drawing (the CPU), we all used (and wrote) APIs to abstract the details. Such as the lovely BGI from Borland.

I'm in no way against APIs. I'm just saying that if it isn't rasterized graphics, that API shouldn't be OpenGL. I have done my share of coding in Turbo Pascal ;)

pudman
08-06-2008, 06:13 PM
I'm trying to guess where in the BOF they're going to talk about the Big Silence. I see two openings for that discussion: the introduction and the "ecosystem" update. I did a search to recall what this Ecosystem Working Group did and found this (http://www.opengl.org/pipeline/article/vol001_2). So they seem to be the people on point to discuss the issue.

As seen in that link, they "took a poll" and found the community wanted 1) an OpenGL SDK and 2) Better communication. Well, they gave us a kind of SDK. And they gave us four exciting newsletters. Then what? They forgot that the community still wanted to communicate?

Curious... How many people even found the "SDK" useful?

Korval
08-06-2008, 07:55 PM
I'm trying to guess where in the BOF they're going to talk about the Big Silence.

I don't expect them to. I expect that they'll just provide a statement, maybe some small justification about making a "better" API or somesuch, and get straight to GL 3.0.

An appropriate mea culpa would take a good 10-20 minutes.


How many people even found the "SDK" useful?

I do. But only in the sense that it is accurate documentation for OpenGL.

pudman
08-06-2008, 09:08 PM
But only in the sense that it is accurate documentation for OpenGL.

When I think of an SDK I don't think of "place I can get documentation online". It should at least be downloadable. And consistent: the API reference "doc" is the only actual documentation on the SDK page, the "spec" is on another page, and the GLSL quickref is a PDF. Where's the "Kit" in Software Development Kit? Even the SDK tutorials never seem to get mentioned on this forum; it's usually "check out NeHe's tutorials".

However, instead of complaining more (why is the SDK menu bar pink?!) I'll try to be more constructive. My concept of an SDK would be: A downloadable package containing documents, examples, utilities and debuggers. Which documents? Why not all of them! Which utilities? Take a community poll to see if there are preferences. Which examples? Take some initiative and ask prominent demo makers if their stuff can be made available and possibly tweaked to conform to some "SDK tutorial standard".

Anyway, as psyched as I am to hear about GL3.0 finally, I'm now quite interested in what this "ecosystem" guy has to say.

Korval
08-06-2008, 10:18 PM
When I think of an SDK I don't think of "place I can get documentation online".

Oh, I don't disagree. I'm not saying that what's there is good; I just pointed out that it wasn't fundamentally useless.

dor00
08-07-2008, 01:28 AM
Well, at least OpenGL3 got a new site... there must be something behind it..

I hope we will not be disappointed on the 13th. Six days to go :)

Eddy Luten
08-07-2008, 06:31 AM
OpenGL3.ORG will be a marketing website for the new API, don't expect anything else.

Brolingstanz
08-07-2008, 08:41 AM
I find the SDK helpful, in particular the GLSL cheat sheet is the bee's knees. All in all it's a valiant effort to make GL documentation more accessible and convenient.

But let's face it, the old timers will be interested primarily in the core and extension specs, which collectively form what is hands down among the best API documentation one could reasonably hope for.

Korval
08-07-2008, 10:45 AM
which collectively form what is hands down among the best API documentation one could reasonably hope for.

Specifications are not documentation and never have been. Specifications are exactly that: they explain in minute detail what entrypoints do. Because of that, they are needlessly complicated and arcane. Yes, you can deduce what a function does by reading it, but that doesn't mean that real documentation isn't useful.

After all, the "SDK" has documentation that is separate from the specification. The "SDK" documentation is a much better way of finding out what a function does, rather than picking through the arcanum inherent in the specification. Yes, the spec is nice to have, when that arcanum becomes important (or for finding out if certain function behavior is correct), but it's better to have real documentation for the other 95% of the time.

Eddy Luten
08-07-2008, 11:11 AM
As far as games and simulations go:

A proper SDK will be a major deciding factor for OpenGL's future. Many developers (even here on this board), myself included, have stepped over to Direct3D as their primary graphics API because of its documentation and provided samples. Even though the Direct3D API is not the best, to say the least, I find it more comfortable to work with since I can reference the SDK on my computer's hard drive whenever I want. This is a huge plus for Microsoft.

The biggest decider for OpenGL is community support, which is heavily diminishing. The current OpenGL SDK is a joke and not really an SDK. And how many games are still being made with OpenGL? Two per year?

If anything, the NVIDIA SDK for OpenGL is more of an OpenGL SDK than OpenGL's own. Linking to other people's websites is risky since there is no control over the contents of those pages, and to me it projects laziness on Khronos' part.

Brolingstanz
08-07-2008, 11:41 AM
I'll grant you that calling a specification "documentation" in the classical sense may be stretching things a bit, depending on your point of view; but it gets the job done in a pinch ;-)

PaladinOfKaos
08-07-2008, 01:34 PM
As I said over here (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=242950#Post242950) (shameless plug), we can't expect the ARB to create a full-fledged vendor-neutral OpenGL SDK. It's way beyond what they do, and way beyond what the vendors are capable of cooperation-wise[1]. The only way OpenGL is going to get a proper SDK is if those who know about it write the examples and utilities themselves. Yeah, it's a lot of work, but working together it'll be much easier.

[1] No offense meant to those of you here that work for the companies. I'm sure you all want to cooperate. But your legal and marketing teams...

Korval
08-07-2008, 04:06 PM
we can't expect the ARB to create a full-fledged vendor-neutral OpenGL SDK.

Then they shouldn't act like they're going to create one and then create something that's only slightly useful. Just say that they can't do it, that they don't have the resources to do it, and leave it at that.

The current SDK is a marketing gimmick; it's a way for them to say, "See? We do have an SDK."

bobvodka
08-07-2008, 04:56 PM
Agreed; I would have said it's pretty much universally accepted that an SDK is more than 'a few web pages'.

Leadwerks
08-07-2008, 05:30 PM
I am increasingly interested in Intel's Larrabee technology, for the purpose of writing a software rasterizer. The vendors' failure to provide working drivers, and Khronos' failure to provide an explanation of why OpenGL3 was delayed for a whole year, will not be without consequences. The minute we devs have a viable alternative platform, a lot of us will take it, remembering all those years of driver hell and renderer fallbacks.

FYI.

pudman
08-07-2008, 06:05 PM
Then they shouldn't act like they're going to create one and then create something that's only slightly useful.

I agree with this. If the community wants it then it's a great opportunity for IHVs to fill the gap, like nvidia (http://developer.nvidia.com/object/sdk_home.html).

Documentation-wise, I would still prefer a bit more consistency in the presentation. And they could definitely throw in a crapload more links to tutorials. But call it an SDK? I'm sure the "ecosystem" guys have better things to do with their time.

ZbuffeR
08-07-2008, 07:12 PM
I would not expect something fast from Intel on the 3D front.
Repackaging a bunch of old first-generation Pentium CPUs and calling it a Larrabee GPU will not be enough. It will have to show real power, not just flexibility.

pudman
08-07-2008, 08:12 PM
It will have to show real power, not just flexibility.

If it showed "adequate" power but far surpassed nvidia/ati in flexibility, allowing cool stuff just not possible on their hardware, then it could be a game changer. However, the longer they take to get to market (2010?) the more time nv/ati has to extend their own programability/flexibility.

The analysis of the Larabee paper at anandtech is an interesting read but we just won't know anything real until the first hardware has been delivered.

Korval
08-07-2008, 08:48 PM
If it showed "adequate" power but far surpassed nvidia/ati in flexibility, allowing cool stuff just not possible on their hardware, then it could be a game changer.

Not necessarily. What "cool stuff" it allows would be the determining factor. It would have to be something very cool, especially if ATi starts offering true virtual textures in its Fusion products (thus allowing for effectively infinite texture space).
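For the curious: today "virtual texturing" is usually faked in the shader with an indirection lookup into a page table, with the application streaming pages in and out of a physical cache texture; the appeal of doing it in hardware is that the page walk and residency management move out of the shader. Here is a rough GLSL-style sketch of the software approach — all of the names, the page-table encoding (it assumes a floating-point page-table texture) and the lack of border/filtering handling are made up purely for illustration, so don't treat it as anyone's actual implementation:

uniform sampler2D pageTable;      // one texel per virtual page: xy = physical page origin, in pages (illustrative encoding)
uniform sampler2D physicalCache;  // the currently resident pages packed into one large texture
uniform vec2 pageCount;           // size of the virtual texture, in pages
uniform vec2 cachePageCount;      // size of the physical cache, in pages

vec4 virtualTexture2D(vec2 virtCoord)
{
    vec4 entry  = texture2D(pageTable, virtCoord);          // which physical page holds this virtual page
    vec2 inPage = fract(virtCoord * pageCount);              // position inside that page, 0..1
    vec2 physUV = (entry.xy + inPage) / cachePageCount;      // remap into the physical cache
    return texture2D(physicalCache, physUV);
}

With hardware page tables the indirection, and the fallback to a lower-resolution page when the data isn't resident, would come essentially for free, which is what would make the texture space effectively infinite.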

mfort
08-08-2008, 12:40 AM
Interesting paper:

Tessellation of Displaced Subdivision Surfaces in DX11 (10MB PDF)
http://developer.download.nvidia.com/pre...tion-Slides.PDF (http://developer.download.nvidia.com/presentations/2008/Gamefest/Gamefest2008-DisplacedSubdivisionSurfaceTessellation-Slides.PDF)

Notice page 30: OpenGL stuff in a DirectX 11 presentation.

V-man
08-08-2008, 11:16 AM
Interesting paper:

Tessellation of Displaced Subdivision Surfaces in DX11 (10MB PDF)
http://developer.download.nvidia.com/pre...tion-Slides.PDF (http://developer.download.nvidia.com/presentations/2008/Gamefest/Gamefest2008-DisplacedSubdivisionSurfaceTessellation-Slides.PDF)

Notice page 30: OpenGL stuff in a DirectX 11 presentation.



There is no OpenGL stuff there. If it were GL, float3 would be vec3.
gl_ThreadID probably just means "global thread ID".



uniform int vertexIndex[K];
global float w[K][16];
in float3 V[K];
out float3 pos[16];

void main() {
    float3 p = 0.0;
    for (int i = 0; i < K; i++) {
        int idx = vertexIndex[i];
        p += V[i] * w[idx][gl_ThreadID];
    }
    pos[gl_ThreadID] = p;
}
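
For comparison, a GLSL-flavoured version of the same kernel might look something like this. This is purely hypothetical syntax: gl_ThreadID, a "global" storage qualifier and per-thread output arrays don't exist in any released GLSL, and K is just a placeholder constant from the slide; the point is only the float3/vec3 spelling and the general shape.

uniform int vertexIndex[K];
uniform float w[K][16];        // "global" has no GLSL equivalent; uniform is the closest guess
in vec3 V[K];
out vec3 pos[16];

void main() {
    vec3 p = vec3(0.0);
    for (int i = 0; i < K; i++) {
        int idx = vertexIndex[i];
        p += V[i] * w[idx][gl_ThreadID];   // gl_ThreadID: hypothetical per-invocation thread index
    }
    pos[gl_ThreadID] = p;
}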

Dark Photon
08-08-2008, 11:59 AM
I am increasingly interested in Intel's Larrabee technology, for the purpose of writing a software rasterizer.... why OpenGL3 was delayed for a whole year will not be without consequences. The minute we devs have a viable alternative platform, a lot of us will take it
Also eyeing Larrabee. However...

Larrabee and OpenGL are <u>not</u> mutually exclusive. Intel is allegedly developing both OpenGL and DirectX drivers for Larrabee (http://www.fudzilla.com/index.php?option=com_content&task=view&id=8533&Itemid=34). That'll get it out there and used (we'll be using the OpenGL path on Linux, if Intel comes through). And if/when they catch on, they've got smart guys like Pharr, Forsyth, the old Neoptica clan to dev next-gen rendering pipes and methods for this new system. Here's hoping they succeed. Decent industry competition is really great for consumers.

SIGGRAPH '08 Larrabee paper: here (http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf)

Leadwerks
08-08-2008, 01:54 PM
I don't have any interest in Intel's OpenGL and DX drivers. I don't see any point in that. I am interested in writing my own software rasterizer and having complete control of the renderer.

Mars_999
08-08-2008, 03:48 PM
ATI has an SDK for OpenGL? Wow, when did that happen?