Pointer to Framebuffer



Sheepie
05-14-2001, 12:14 PM
I'm sure there are many technical reasons why this would be a tough thing to do, but it would be nice if you had a way of retrieving a pointer to the framebuffer memory. That way we wouldn't be forced into glReadPixels/glDrawPixels, which prove to be much too slow IMHO.

I realize that GL is designed to work across a network, but what about when the server and client are the same machine? In that case, why not make the function available? Possibly by implementing some flag that can be read to indicate whether it is possible to retrieve a pointer to the framebuffer.
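
Purely as illustration, the interface might look something like this (neither the query token nor the map/unmap calls exist in OpenGL; the names are made up):

GLboolean mappable;
glGetBooleanv(GL_FRAMEBUFFER_MAPPABLE, &mappable);   /* made-up token */
if (mappable) {
    void *fb = glMapFramebuffer(GL_BACK);            /* made-up entry point */
    /* ... write pixels directly ... */
    glUnmapFramebuffer(GL_BACK);                     /* made-up entry point */
}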

ET3D
05-14-2001, 12:53 PM
The question is, what would you do with this pointer? Reading directly from the card would always be slow, no matter how you do it.

Maybe a better choice would be a "raw framebuffer" extension, similar to NVIDIA's Z+stencil extension.

j
05-14-2001, 01:44 PM
There have been several discussions about this before.
http://www.opengl.org/discussion_boards/ubb/Forum7/HTML/000077.html

I think this one covered the topic pretty well.

j

Sheepie
05-14-2001, 06:33 PM
Unfortunately the other topic seemed to wander off into getting pointers to Texture memory.

I read all of the posts and have gone back and forth on the issue. On the one hand, you have D3D's miserable experience with pointers; however, I don't see why D3D was even brought up, since it's usually bashed in these forums.

Providing a pointer in and of itself is not the problem; it's the fear of what will be done with this pointer, or that it won't be handled correctly by applications.

I have programmed VGA and SVGA using the VESA BIOS extensions back in ye olde DOS days, and it was complicated indeed, but I studied how to do it and got it working. Then I started Windows programming and tried my hand at DirectDraw. It was complicated, but I learned it and got it working; it was slow, though. Then I tried creating my own buffer and doing the swaps via fast assembly language routines. That worked out about twice as fast. My video card isn't the hottest thing since sliced bread (an ATI Rage Pro), and yet writing my own back-buffer and blitting routines was much faster than DirectDraw's Flip.

Now along comes OpenGL. I have no hardware acceleration, as I have to use non-OpenGL-compliant drivers. I learned to use glReadPixels and glDrawPixels; it wasn't hard, but it was sloooooooow. I'm talking orders of magnitude slower than DirectDraw's Flip. So slow, in fact, that I would not even consider using OpenGL for 2D games or applications.

I resent the comment in the last thread that we should all run out and buy GeForce 2s if we want speed. If that isn't getting hardware-specific, then tell me what is. I mean, hell, if everyone goes out and buys a G2, then we can all get any features we want under the sun, as there will be a single card manufacturer left.

So until all of us go out and buy G2s (never gonna happen), how's about a pointer? Yes, it may be hard to program, and many may abuse it, but that is the case with anything. And lest everyone forget, not all of us actually get improved performance by letting the hardware take over; sometimes it is faster, much faster, to write your own buffer routines.

john
05-14-2001, 07:18 PM
Hello,

there are a number of reasons why ptrs are bad. Some even take the view (and can convincingly argue) that pointers are bad even in languages like C, because they make it impossible to know about side effects. (I can't remember the quote exactly, but someone once said "pointers are like gotos, an error from which we may never recover", or something.)

Anyway, the merits and potential pitfalls of pointers in languages isn't what this is about.

I just wanted to make the observation:

Silicon Graphics machines have never exposed a memory mapped frame buffer. I appreciate that not everyone has an Onyx; but, then, that isn't my point...

cheers,
John

Sheepie
05-14-2001, 07:44 PM
And... as far as pointers being bad per se, or not working correctly, well, that is not entirely correct. I have a commercial software program that uses direct writes (via assembly language routines) to the front buffer created in DirectDraw. This program has been used successfully on thousands of different setups with graphics cards from all manufacturers. I have yet to receive a single complaint of any lockups, BSODs, or irregular display results. This program has been out for over two years now. It works in 16/24/32-bit depths and multiple resolutions. It even works on an ancient 486 with no video card.

I didn't do anything magical, just obtained a pointer to the buffer and blasted data into it using a method very familiar to assembly programmers:

rep movsd

Now I realize that is a pretty sophisticated line of code there, and I'm sure it could be trimmed down a bit.

I'm sure that it could be implemented in OpenGL, and it would allow GL to finally have some worthwhile 2D aspects. DirectX has DirectDraw, but as of yet, OpenGL has:

glReadPixels
glDrawPixels
glCopyPixels

So, I need to allocate some *system memory* for my data, then use glDrawPixels to copy that to the framebuffer? Sounds slow to me. So how is that avoiding system RAM?

I think the reason DirectX is so popular is all of the capability it has: DirectDraw, D3D, DirectSound, DirectPlay, DirectMusic... have I forgotten some? How about a little of that goodness in OpenGL?

How about a pointer?

Korval
05-14-2001, 09:34 PM
OpenGL, despite the name's implication of being a graphics library, is mainly a 3D library. It is designed as an abstraction of the hardware, so that you neither have to, nor are permitted to, directly access the hardware in any way.

Besides, there's more than one way to solve a problem. Ignoring page flipping for a moment, OpenGL's texture mapping facilities are quite good for dealing with the issue of getting a bitmap on screen (assuming texturing is hardware accelerated, and virtually any OpenGL-compliant card provides this). It does good rotation and scaling (as good as the card, at any rate).

And, as long as information goes in one direction, i.e. towards OpenGL, it is reasonably fast. True, OpenGL makes it intentionally difficult to interface with things by yourself, but if you did that, what would you need OpenGL for?
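
For instance, something like this puts a 256x256 RGBA image (pixels) on screen as a textured quad - a rough sketch, assuming a plain GL 1.1 context with identity matrices so the quad fills the viewport:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();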

ET3D
05-15-2001, 06:04 AM
Accessing a frame buffer directly is more complicated for 3D than it would be for 2D, and while 2D buffer formats are likely to remain the same in the future, 3D ones change more easily. For example, it may be that the back buffer is 4 times the size of the front buffer, because the user has enabled antialiasing. Or the same can be done by creating four different buffers, which are later combined by the hardware when outputting to the screen. Using OpenGL functions can hide such complexity from you.

> So until all of us go out and buy G2's(never gonna happen) hows about a pointer.

The problem is, even if this becomes part of a future OpenGL, it won't help you a bit in getting more performance out of older chips. Chip makers will have to provide a compatible ICD for their older chips, and they don't even provide OpenGL 1.2 compatible ICDs now, so they're unlikely to be compatible with an even newer version.

BTW, DirectDraw is also pretty much dead. It's not even in the DirectX documentation these days, even though it's still lurking in the background. DirectGraphics supports 2D sprites using the 3D hardware, which provides more flexibility. To paraphrase Korval, "DirectGraphics, despite the name's implications of being a graphics library, is mainly a 3D library." You'll find very few 2D games these days, and fewer still as time passes. So while OpenGL may not be suitable for 2D games, who wants 2D games??

And about the "never gonna happen": believe me, in four years you'll find it hard to come up with a chip that's not as powerful as a GeForce.

> I have no hardware acceleration as I have to use non-OpenGL compliant drivers

Why do you have to use non-OpenGL compliant drivers?

Sheepie
05-15-2001, 10:07 AM
There are many cases where it would be nice to have direct access to the framebuffer. I'll use a screen saver I wrote as an example. The program calculates interference patterns using highly optimized assembly routines and buffers. I would like to be able to incorporate 3D shapes as well, but that puts me in a bind. I can't use OpenGL's DrawPixels, as it is far too slow, but rendering the 3D shapes manually in software would be too slow as well. So I need a pointer. I can do this using DirectX, and that is what I feel is missing in OpenGL.


Using OpenGL functions can hide such complexity from you

I am perfectly happy learning whatever it is I need to access the framebuffer. OpenGL is supposed to be 'low-level'.


The problem is, even if this becomes part of a future OpenGL, it won't help you a bit in getting more performance out of older chips. Chip makers will have to provide a compatible ICD for their older chips, and they don't even provide OpenGL 1.2 compatible ICDs now, so they're unlikely to be compatible with an even newer version.

I do understand that previous boards will not recognize this. That is why one would need to query the particular card at runtime to find out. I believe this needs to be done with extensions currently, so how is this any different?


So while OpenGL may not be suitable for 2D games, who wants 2D games??

That is a narrow viewpoint and really not a good argument against having pointers. There are many cases where a pointer would help, not only in 2D games.


Why do you have to use non-OpenGL compliant drivers?

The computer I have uses an ATI Rage Pro, but the latest OpenGL 1.1 'compliant' drivers are not at all functional. I get BSODs, lockups, no textures in some apps, and scanning problems. I have tried several different drivers and none work save the driver from '98; therefore I am stuck in software-emulation mode. I don't mind, as I have found many little tricks for getting OpenGL faster, and I can readily achieve 10-12 fps in the model editor I am finishing up. I look at many demos, example programs and such, and find them running at 1-2 fps.

It seems that the cards are a double-edged sword. On the one hand, they have brought many gains in speed and made programmers' jobs easier; on the other hand, they have also created lazier programmers. I guess I'm really old school, where squeezing a byte or two out of some optimized assembly code was the thing to do. Efficiency was the call of the day. Nowadays, it's 'shut up and let the card do it for you.' I believe I remember those very words when Windows came out.

I still want my pointer.

ET3D
05-15-2001, 01:01 PM
The point of "using OpenGL functions can hide such complexity from you" was that, unlike 2D, you can't "happily learn" what you need, because there can always be another method of display that you didn't program for. So you took a certain method of antialiasing into account (for 2x, 3x, and 4x AA) - but what if the driver has a switch that enables automatic stereo support?

The comment about old chips was because you referred to them as a target for programs, and used the "we can't all buy a GeForce" argument. My point was that even if such an extension is approved, it will likely be only the GeForce level cards that implement this extension anyway.

And I don't think that the speed gains made programmers lazier. If you start doing some 3D programming, you'll learn that tweaking code at the byte level is not the most effective way to optimize 3D programs. I think that it will be hard for you to realize this, because you're using software rendering.

Don't get me wrong, I can still see how a faster read/write pixels routine could be helpful, but I think that you're overrating the pointer thing.

Korval
05-15-2001, 01:26 PM
I still don't understand exactly what you are trying to do that requires glDrawPixels. If you generate, through an optimized assembly routine, some 2D image, you can get it onto the screen easily (and quickly) enough by texturing it onto a quad and rendering that.

Also, don't forget OpenGL does have functionality for running on other machines and piping the results to yours. In this case, the pointer you seek is far beyond the reach of your program.

Lastly, I've always preferred the "shut up and let the card (or somebody else in general) do it for you" approach. That way, you spend more time on the actual application rather than optimizing fast block copies and the like.

ET3D
05-15-2001, 01:46 PM
Korval, my guess is that he's changing the image every frame, which is why he needs to upload an image to the card each frame.

Sheepie, assuming that this is what you need, it'd still be better to do with a texture. It may even be faster than using a pointer.

The point is (stupid pun), when you get a pointer, you have to prevent the card from doing any rendering at that time. That's one good way to slow down a card. If you put your image into a texture, you could integrate it into the normal rendering process, which is what a 3D card does best. This will also add a lot of possibilities for you, such as warping that background, or using it as a texture, or a bump map, or whatever.

Perhaps you should try using glTexSubImage2D to get that image of yours into a texture. Could well be faster and more interesting than glDrawPixels. Maybe not on a software implementation, though.
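
For example (a sketch, assuming a 256x256 RGBA texture tex created once at startup, and frame holding the newly computed image each frame):

glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                GL_RGBA, GL_UNSIGNED_BYTE, frame);
/* then draw a textured quad covering the screen, as usual */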

Sheepie
05-15-2001, 02:16 PM
I think the point I was trying to make is that I am NOT concerned with the specific implementation on the card itself. I have a device context for the window already, although you obviously can't use it unless you want it to be overdrawn.

I made the G2 observation because of the other post that made the point that everyone should go out and buy G2s.

Where did you learn that speed gains don't make programmers lazier? They most certainly do. Case in point... Windows. Am I the only one who notices the added bloat and slowdown associated with each new release? And yes, if I can make a 3D modeling program that kicks out 10-12 fps with texturing and blending, why is it that every demo or GL application I have run into runs something on the order of 1-2 fps? My guess is that no thought was put into software-only setups, and that is being lazy, however you sugarcoat it.

I cannot use a texture, for the reasons mentioned. Each image is calculated on the fly and stored in a buffer. That buffer is the one given to me by DirectDraw. Your way would require writing the data first to a separate buffer and then using glTexSubImage2D to get it to the card. I can't even begin to imagine how slow that would be for a full-screen 1024x768 32-bit resolution. I'll bet it'll be far slower than the 100 clocks/pixel I get currently. I imagine an order of magnitude slower, as transferring all that data would be fierce.

As far as the pointer not being available from another machine goes, I specifically mentioned that in my post above.

You may prefer the 'shut up and let the card do it' approach, but I like to know the ins and outs of the things I program. If everyone took that approach, it would be a sorry state of affairs in the programming world.

As for preventing the card from drawing to the framebuffer, how about this:

Draw 3D scene
glFinish
Draw 2D stuff
SwapBuffers

Now, I must have misread the manuals when I read this:


The glFinish function does not return until the effects of all previously called OpenGL functions are complete. Such effects include all changes to the OpenGL state, all changes to the connection state, and all changes to the framebuffer contents.

Sounds to me like I could get my *pointer* and start drawing.

ET3D
05-15-2001, 02:49 PM
Sheepie, I think that you're putting a strange meaning to the word "lazy". If one programmer brings out a great looking product with a lot of features that runs slowly, while another programmer brings out a limited product that runs very quickly, does that make the first programmer lazy? Quite the opposite. I think that it can often be more difficult and more work to produce a fully featured product that takes advantage of modern hardware than to tweak code to make it run more quickly.

How is not optimising for software OpenGL being lazy? The vast majority of users have hardware acceleration. IMO it would be a waste to invest a lot of time for a minority of users, when you could invest your time in making your software better. I would also suspect that those users who run software OpenGL don't buy products that need OpenGL, since for the price of a game (or less), you can buy a 3D card that does support OpenGL in hardware.

I disagree with you about the state of affairs. I think that it's much better when programmers take full advantage of the hardware, instead of thinking of ways to bypass it.

Regarding your specific app, I don't understand why you'd use the DirectDraw buffer. Why not allocate your own buffer, do the calculations in it, and write it to a texture? That should be easier, less limiting, and faster. From your description, I'd guess that you have to copy to another buffer for glDrawPixels too, which would explain part of the slowness.

As for glFinish(), yes, that's exactly what it does, even though I'd think that you would want to draw the 2D first (I thought you were drawing on top of it). The point was, you're stalling the card to do your 2D. If you let the drivers/card do it with texturing, it could perhaps go something like this: you tell the driver to load the texture, the driver copies it to AGP memory (meanwhile the card is drawing your objects) and tells the card it's there. The card can then texture the background directly from AGP. I'm not saying that this is the exact way every (or any) card would do it, but you can see how you're not stalling the card this way.

Sheepie
05-15-2001, 09:31 PM
Sheepie, I think that you're putting a strange meaning to the word "lazy". If one programmer brings out a great looking product with a lot of features that runs slowly, while another programmer brings out a limited product that runs very quickly, does that make the first programmer lazy? Quite the opposite. I think that it can often be more difficult and more work to produce a fully featured product that takes advantage of modern hardware than to tweak code to make it run more quickly

I have developed both, so I'll wait to stand corrected. If you don't think it takes work to optimize code, then maybe you haven't coded assembly. I am not going to try to incite a flame war, as that would be unproductive, but as of late, many programmers simply view programming as features first and add speed later (if at all). It only takes a small amount of planning and a state of mind to write efficient code from the get-go. I'm not suggesting you should go as far as unrolling loops (that is no longer a proper optimization method), but there are many ways to write the same piece of code.


How is not optimising for software OpenGL being lazy? The vast majority of users have hardware accelaration.

OK, maybe you think we should all go out and buy G2s. You'd make a great salesperson for NVIDIA. I think you believe everyone and his uncle has a great system with great specs. Sorry to break it to ya, but most of the people with high-end systems are gamers. While the gaming community may be large, it is not the majority. Maybe you think everyone has a high-speed internet connection as well?

And as far as the lazy factor, yes, it's a strong word, but unfortunately true. They may have some fast card in their system, and the 'it runs OK on my system, buy a new card' attitude may work fine if you're only talking demos and such, but it does not work if you are selling your software. And you know what's funny? My software-only-optimized GL modeling program runs twice as fast on hardware as a competitive program.


IMO it would be a waste to invest a lot of time for a minority of users, when you could invest your time in making your software better.

Your opinion is noted, again. Making your software better? I thought making things faster was part of that.


I disagree with you about the state of affairs. I think that it's much better when programmers take full advantage of the hardware, instead thinking of ways to bypass it.

I kinda figured you disagreed from the huge number of replies. What if the hardware isn't there or doesn't support a feature? Fall back to non-optimized software.


Regarding your specific app, I don't understand why you'd use the DirectDraw buffer. Why not allocate your own buffer, do the calculations in it, and write it to a texture? Should be easier, less limiting, and faster. From your description, I'd guess that you have to copy to another buffer for glDrawPixel, too, which would explain part of the slowness

So I would need to allocate a buffer, draw to it, and create a texture or modify an existing one with glTexSubImage2D. Sounds like two writes to me. Am I mistaken?


As for the glFinish(), yes, that's exactly what it does, even though I'd think that you would want to draw 2D first (I thought you were drawing on it). The point was, you're stalling the card to do your 2D. If you let the drivers/card do it with texturing, it could perhaps go something like this: you tell the driver to load the texture, the driver copies it to AGP memory (meanwhile the card is drawing your objects), and tells the card it's there. The card can then textures the background directly from AGP. I'm not saying that this is the exact way every (or any) card would do it, but you can see how you're not stalling the card in this way

I would still need to do two writes: one to my buffer, and another when the driver uploads it to the video card. Then the whole process repeats on the next frame. I cannot see how that is going to be any quicker than having a *pointer*.

Please explain to me how your method is any faster than direct writes to the framebuffer, or even to the display buffer, when it has to be written each and every frame. If the answer is that it won't be, then where is the harm in providing a pointer? If you don't wish to use it, don't. I personally am sick of big companies telling us what we need. It is a vicious circle. They add a feature in hardware, then take a feature away in software under the guise of 'it's good for you.' The only ones it's good for are those who sell the hardware. We will eventually become dependent on hardware to do every little thing. This is called the 'dumbing down of programmers' and is a well-known phenomenon that has been happening ever since the first computers.

ET3D
05-16-2001, 04:15 AM
I have optimized assembly (when it was still a useful thing to do), and it is thoughtless work. It's applying a set of rules and measurements. True, it can be tricky, but IMO there's much less thinking involved than getting a good-looking 3D effect to work, or using the right space subdivision and LOD algorithms to speed up your code (which, believe me, is much more effective than low-level optimising a bad algorithm). You can still gain some speed by using assembly code (like SSE) for specific things, but it's the last thing a programmer should do, IMO.

I'm not saying get a GeForce2 (although the MX versions are getting pretty cheap). Just buy an 8MB TNT2 M64, for heaven's sake. It can be found for under $30, and it will accelerate OpenGL for you. If you can't afford it, then most likely you can't afford any software that uses OpenGL, unless it's freeware, in which case you've got no right to complain :)

In case the hardware doesn't support a feature, the fallback is to use fewer features. I don't plan to support software OpenGL at all, and I'd have loved to be able to detect software rendering and adjust the features I'm using accordingly. If I did want to support software rendering, I'd program (or buy) a software engine, since it'd be much faster than a generic OpenGL implementation.

But I don't think that a software engine is a real necessity these days. I don't assume that everyone and his uncle has a great system, but I'll probably be not too far off the mark assuming that most users have rendering power at least the equivalent of a 300MHz CPU and the i810 integrated chipset.

About your program: in the current state of affairs (no pointer), you always need to first create your data in a buffer, and then copy it to OpenGL. My suggestion was to copy to a texture instead of to the display buffer. The number of copies is the same, so this method shouldn't be any worse.

Why is the texture method faster than the pointer method? It's called parallelism. Read my description again. With the texturing method, all the time that you're calculating your image and copying it to AGP, the 3D card can do work.

john
05-16-2001, 05:14 AM
Hello,

mcraighead has eloquently explained why pointers to video memory are a proverbially bad thing. It won't happen, and your arguments FOR it and your arguments against not having it (!?) are misdirected, IMHO.

The motivating argument seems to be that glRead/DrawPixels is too slow, and that a stubborn refusal to use texture mapping as a method of uploading bitmap data to the card is a good reason to get a pointer to video memory, because it will be (somehow) faster. Why? You might equally suggest that your hard disk is too slow, so you *must* have a memory-mapped hard disk pointer, or that you're unhappy with your laser printer's printing speed and so want a pointer to that, too.

Why will a pointer make transferring bitmap data to the card faster? The consensus seems to be that having a ptr to video memory will eliminate the need for a duplicate copy of the image in system memory, and it'll get around calling glDrawPixels(). This can, arguably, reduce bus traffic, since the data goes straight to the video card and therefore only needs to use the bus once. (Although the second transfer from system memory can be done with DMA.)

It is not just as simple as getting a pointer to video memory and merrily reading/writing to it at whim, however. Video access has notoriously been slow. The original poster would remember this from his DOS VGA/SVGA coding days. Remember how reading from the video card seemed to be a feature the h/w vendors only added for backwards compatibility? Reading/writing to video memory isn't going to run at the same speed as reading/writing to system memory. Your argument for a speed-up is that your combined reads/writes to system memory plus the transfer to the video card are slower than reads/writes to a slower resource.

Another argument proposed by some is that since OpenGL is *meant* to be low level, it *should* provide a pointer. This isn't the point of OpenGL. OpenGL is a low-level *abstraction* of the graphics hardware. Abstraction, by its very definition, hides away the implementation details. (So, by the very definition of abstraction, OpenGL *SHOULDN'T* expose a pointer to the programmer.)

Suppose that you could get a pointer to the graphics display. How is the programmer going to be able to USE this pointer? Someone talked about using a string instruction to blit the image across, but this is fairly naive. Who says that the video memory is linearly addressed, or that the data is encoded in RGB triples on byte-aligned boundaries, or any of a myriad other schemes?

Someone retorted that learning pixel formats is no different from learning new techniques off the advanced graphics forum, but this is naive also. Device drivers exist so programmers don't NEED to learn how different hardware is structured. Doesn't anyone remember the pain of non-standard SVGA cards? How some games only supported three major SVGA cards because they were all different until VESA brought out a common interface?

Suppose, however, that a pointer was available, and (as someone suggested) the programmer could figure out the pixel format through some interface extension string. Then what happens when a NEW pixel format comes out? Does the application suddenly break because it can't understand that this new pixel format isn't linearly addressed, but uses bitplanes stored in separate buffers for each of R, G, and B, or something equally perplexing?

Ultimately, however, graphics cards are another shared resource, and they need to be controlled by the operating system. In the same way that an application can't have direct access to a disk because the o/s doesn't trust it to respect the file system, and in the same way a program's printer output must be spooled so multiple applications don't try to print simultaneously, the o/s has to know how the graphics resource is being used. Just because an application might request a window doesn't mean that it owns that window all the time. It's fairly easy to show (under IRIX, for example, at least) that buffers are shared between windows. (If, for example, an application just captures even the back buffer and writes it to disk, then you can see windows materialise in the files as they're moved over the capturing program's window.) Most users would find it unacceptable for an application to get hold of a window but write over other windows when they overlap. Not only can a graphics resource be given to another application, but that resource might change and make pointers to it invalid (if a window is moved, for instance, or the virtual desktop panning thing kicks in).

Pointers will also stall the card, because suddenly the graphics card doesn't know what you are changing about the frame buffer.

Pointers are bad. But there might be alternatives.

This is just a wacky idea. Truly wacky. Freaked out. On drugs, man =) But what if there were a pixel buffer extension which you COULD get a pointer to? That way you could write to memory on the video card (and it could be segregated from the frame buffer memory, so the monitor doesn't soak the read port on the chips all the time) in whatever format you wanted, just like system memory. Then you could use glDrawPixels from this memory into the frame buffer. It'd save on bus bandwidth because the data is already ON the card.
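
Something like this, perhaps (every one of these names is invented, just to make the idea concrete):

GLpixelbuf pb = glCreatePixelBuffer(width, height, GL_RGBA);  /* invented API */
void *mem = glMapPixelBuffer(pb);        /* pointer to card-side memory */
generate_image(mem, width, height);      /* app code writes the pixels */
glUnmapPixelBuffer(pb);
glDrawPixelsFromBuffer(pb, x, y);        /* on-card copy, no extra bus traffic */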

Well, this ought to do it for now.

Some final quick things I wanted to elaborate on, but can't be bothered:

- leaving it to the h/w is good. it's abstraction. the h/w vendors can make improvements "behind your back". This is *exactly* why object oriented languages have private member functions.

- get over assembly. the days of hand-optimising assembly code are dead. algorithmic optimisations are the way to go. balancing the pipeline for one processor is all very well and good, but what happens when you want to run your code on another processor? I mean, do you know ANYTHING about memory latencies, the number of pipeline stages, how superscalar the chip is (how many pipes it has, what set of instructions certain pipes have), how the branch prediction mechanism works, instruction timings and so on and so on? So, you spend 9 months of your life figuring that out, and then someone brings out a new processor with more pipeline stages, and all that carefully balanced code is out the window.

- and i disagree about coders writing FOR the h/w. they shouldn't be writing to FIT h/w. [snipped]

cheers,
John

mcraighead
05-16-2001, 07:49 AM
Read what john said. After that post, there's really nothing more to say.

- Matt

ET3D
05-16-2001, 10:17 AM
What do you mean, nothing else to say? :)

Nicely said, John. You saved me from making a few points :)

About the idea of an accessible pixel buffer in video memory, I think that it still suffers from the format problems that you mention - what if it's interlaced, etc. Although I think that it would work if this extension were limited to certain formats that every card implementing it had to support. This would of course not guarantee good performance, since the copy to the frame buffer would possibly be done by copying to main memory, changing the format, and writing back to the card. There's sadly no way to guarantee that an OpenGL operation will be done purely in hardware.

I still think that reading and writing pixels using functions could work well, as long as the function doesn't need to do conversion. Unfortunately, the only way to know which format has the best performance is to hope that there's documentation about that for a specific chip, and gear your routine for that chip. If you had such data for all chips, you could possibly find something that works decently well for all of them (my guess would be RGB 565 and RGBA 8888). Unfortunately AFAIK there's no 565 internal format definition for textures (although GL_RGB5 will likely select this).

I think that some information from the chip makers would be helpful here. For example, while NVIDIA mentions the fastest formats for texture loading, there's no information about the fastest way to read back the frame buffer. Although again I'd imagine that 565 and 8888 are good formats to use for 16 bit and 32 bit modes, respectively.

mcraighead
05-16-2001, 11:56 AM
The fastest formats for reading back the different framebuffers are as follows:

16-bit color: GL_UNSIGNED_SHORT_5_6_5/GL_RGB
32-bit color: GL_UNSIGNED_BYTE/GL_BGRA

16-bit depth: GL_UNSIGNED_SHORT/GL_DEPTH_COMPONENT
32-bit depth/stencil: GL_UNSIGNED_INT_24_8_NV/GL_DEPTH_STENCIL_NV

If you want depth only, GL_UNSIGNED_INT/GL_DEPTH_COMPONENT and GL_UNSIGNED_SHORT/GL_DEPTH_COMPONENT are both all right. If you want stencil only, GL_UNSIGNED_BYTE/GL_STENCIL_INDEX is probably best.
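
For instance, reading back a w x h block of a 32-bit color buffer would look like this (a sketch; w and h assumed):

GLubyte *buf = (GLubyte *) malloc(w * h * 4);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, buf);
/* GL_BGRA comes from EXT_bgra / OpenGL 1.2 */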

DrawPixels is a bit trickier... I won't get into it here.

- Matt

ET3D
05-16-2001, 12:58 PM
Thanks, Matt!

So, where if not here are you planning to get into DrawPixels? ;)

Sheepie
05-16-2001, 07:26 PM
Well, in the end, I really won't be using OpenGL for my applications, and the reaction and endless negative posts have sealed it for me.

Although my assembly language programming of late is mainly reserved for speed-critical areas, I have indeed done the optimizations you mentioned. And I disagree vehemently with the idea that optimizing assembly is

thoughtless

...thoughtless work. I have actually optimized the BTC opcode to be twice as fast as Intel's *hardware* implementation. I'd love to see you do that using your 'rules'.

I never made any mention of where I would use my optimized assembly, and yet you seem to presume where I will. As the program does the calculations at the same time as storing them to video RAM, your entire theory of using the parallel nature of the card is moot. Why, then, would I need to wait until I was finished calculating and filling my buffer to swap it to the screen? Maybe those things happen instantly.

You know, ET3D, you're right: just because I can't afford a video card, I can't complain. What was I thinking? Only those with good hardware setups have that right.

Yeah, those are some good comparisons, John. A printer and a hard disk. Yeah, be a dear and get me pointers for those too while you're at it.

VESA was unfortunately not as widely adopted as it should have been, but yes, I had learned it and hoped it would be adopted. Thank god we have advanced to the point where hardware determines what we need. Now the people selling the cards get to determine our fate.

Shared resources are nothing new, and yet still...some applications DO mess up other programs. You'd think with all the restrictions in place, nobody would have gotten through, so the obvious solution is more restrictions.

I am *over* assembly and only used it as an example of a program that utilized software means to provide the speed, and yes, assembly is dead.

[MORE sarcasm]Gee, I'll bet you all can't wait till they make CISC chips that process C commands and completely do away with machine code/assembly altogether. Of course, C would then become just as unused, because it would be *too hard* and *too unstable*.[/MORE sarcasm]

And lastly, this is a forum for posting suggestions for future implementations of OpenGL; however, it seems to me more a bashing of ideas presented by others. One thing I have noticed about this board as a whole: are there nice people on this board? Lots, and I have gotten a lot of great help from users here, yet there seems to be an arrogance and smugness that emanates throughout. Why no posts in here supporting my opinion? Who would argue against this kind of resistance? I know I'm through.

I'm finished with the model editor now anyways, and not a moment too soon.

Korval
05-16-2001, 10:22 PM
I think it's more about design ideology.

If you want a pointer to the frame buffer, fine. I'm sure there is a way to get it. There may even be libraries to help you do so. But, by the design ideology of OpenGL, it should not provide one. Doing so would kill any platform/hardware/implementation independence that OpenGL provides.

I'm not arguing whether it is right or wrong for you to want or have that pointer. I'm just saying that, by the design ideology of OpenGL, it has no business providing one. OpenGL is supposed to hide these things.

Wanting an extension for OpenGL that provides direct video memory access is like trying to use a hammer to screw in a screw. Screws and nails are similar, but a hammer makes a poor screwdriver.

Bob
05-16-2001, 10:54 PM
If you really want a pointer to the framebuffer, you can use another API. Since you are stuck with the software implementation of OpenGL, you may as well use another software API that is specifically designed for software rendering. OpenGL is known to be a slow software renderer because it's supposed to give you visual quality over performance.

Second, yes, this is a forum where you can post new ideas. But we argue because we can't just toss anything into OpenGL. In this discussion, you posted the idea of a way to retrieve a pointer to the framebuffer. We disagree because we don't think it would fit into what OpenGL is supposed to be.

ET3D
05-17-2001, 07:16 AM
Sheepie, I'll refrain from arguing the more meaningless points (such as the thought put into low-level optimisation), and just try to explain again the speed benefit of not writing directly to the frame buffer. You seem not to have understood the idea of doing things in parallel, so I'll give you an example.

Suppose you're using your extremely efficient code that "does the calculations at the same time as storing it to video ram". Suppose it takes your code 20ms to do these things. Now you want to draw some 3D shapes on that background. You make some 3D call, and the driver digests your data and passes it to the card to draw, and the card then takes 30ms to draw them. Total: 50ms (ignoring the driver work).

Now suppose that you do it another way. You make your OpenGL calls to draw the data. The card now starts its 30ms of work. While it does that, you draw your image into a buffer and call the driver, which copies it into AGP memory. All this is done within those 30ms when the card is working. Now the card still has to read your texture and draw it. Even if it did read all your data, it would still be faster than your direct writing to vidmem, since it will use AGP bursts. But it doesn't even need to read all your data - if you're using the image as a background, and there are already shapes drawn (which was done in the 30ms of work), AGP texturing can just read the data it needs to fill the places where the background will show. So this work will take a lot less than the 20ms it took you before, and you end up with a speed gain.

A note about the attitudes in this forum: I believe that the reason that nobody supports your opinion is that nobody agrees with it. This specific forum is not a support forum (unlike the others here at OpenGL.org). People mostly get bashed in this forum (including me) for a reason - it's your job to make a convincing argument that a feature should be part of OpenGL. If you can show how the feature you're suggesting will be useful to a lot of people (give examples of real applications), and address the concerns raised (for example the inefficiency, as I've explained in the previous paragraphs, your inability to address memory layouts that you haven't encountered before, and other things), then you may be able to convince people. About the smugness, well, programmers are typically both arrogant and helpful. You're extremely arrogant.

Sheepie
05-17-2001, 10:48 AM
First off, I didn't come in here bashing your examples. I provided examples. What would you have me do? Just say I want a pointer and not give examples?

I read every word of every reply in this topic and I understand your point. You, however, do not stop making it. I cannot use a texture. What would you have me do with a 256x256 texture to get it to full screen and still maintain the quality? The only way would be to use separate textures and tile the screen, which of course means determining the screen resolution, breaking that down into power-of-2 textures, and breaking the algorithm into sections (one for each tile). Sounds like a lot of effort, a lot of code, and a lot of speed lost.

You continue to maintain that the stall would be bad enough that using textures would be far faster. The example program that I mentioned has very high CPU/bus utilization. How would this affect the transfers that would inevitably need to be made to the graphics card? For a simple scene, it may not be an issue, but when the graphics card starts using system memory, there will be timesharing.

Nobody supports my opinion? A simple search for glDrawPixels yielded numerous posts, and I myself have seen many posts asking why glDrawPixels was so slow and whether there was a way around it, and invariably the answer came back to textures. When it was pointed out that a full screen would require more than a single texture, the argument turned towards a higher-end card (something that would utilize glDrawPixels better). But hey, it sounds good to say things that promote your side of the argument even if they are incorrect.

Arrogance is making unwarranted claims to superior performance or rights. My claims are warranted, as I have actually done the examples I listed. I do not claim that you or anyone else needs to use this pointer, merely that one could be used, and I give reasons why. The people who have posted here have made it quite clear that they have made their minds up, and have gone so far as to suggest that I am some dumbfounded person who needs pointers for, of all things... a printer. You (ET3D) have even referred to assembly optimization as thoughtless. But I expected the reaction, as my post was heavily worded. I have no personal animosity towards any individuals on this board, but I think having an atmosphere where people are immediately put into a defensive posture will only serve to decrease the attitude of teamwork.

It is my responsibility to promote the idea of a pointer. I have given examples of where it could be a benefit and have given the reasons why it would not be effective using textures. I don't really expect it will be implemented, as the vicious circle is already well underway: nobody uses glDrawPixels because it is too slow, and nobody optimizes glDrawPixels because nobody uses it. At least, that's my opinion; I'll leave you to yours.

ET3D
05-17-2001, 12:42 PM
Okay, Sheepie, sorry. I apologize about the "thoughtless" thing. I was just retorting to the generalisation that programmers who put importance on other things are lazy (and that's arrogance, BTW). I'm really not here to attack you personally, but to try to understand your point of view, and make you understand mine.

First of all, most chips, and I believe the software implementation as well, support textures of at least 1024x1024 (correct me if I'm wrong - you can query that). The main exceptions are the Voodoo family (3 and lower), which is indeed important enough, but for our purposes would not matter.

As I mentioned before, even if your pointer feature does get adopted, it will only be implemented in newer chips, and most likely won't be implemented for the Voodoo family (since 3dfx is dead). So you'll still have to use other means for these cards. The same goes for other limited cards. You will pretty much be able to take for granted that you have a powerful card.


I can't give you a good answer about how a program with high CPU/bus utilization would affect performance, but that's something that you may want to check. It highly depends on the scene and what you're doing. My guess would be that parallelism would still work (even though your guess may be the opposite). In any case, you at least acknowledge that in some cases this method will be faster.

BTW, I was a little surprised to read that your program has a high bus utilisation. But I guess that you're doing enough pre-fetching to offset the high latency of reading from main memory.

Yes, people do find DrawPixels and ReadPixels slow, but it's possible to want them faster and still be against a pointer. If you read back my posts, I always say "yes, a faster way to read/write would be better, but I don't think a pointer is good". So I'm not arguing. Matt did post which formats are best to read from GeForce hardware, and NVIDIA docs say which texture formats are fastest to transfer. This is not a complete solution, but it's better than nothing. Other people suggested ideas other than a pointer to the frame buffer. A frame buffer pointer is not the only way.

About examples, the only one you gave, AFAICR, is your screen saver. That's not a good enough argument for me. I just don't have high regard for screen savers. Do you have other examples where your idea may be helpful?

BTW, you'll probably love programming a GeForce3. That should put your talent to good use optimising vertex programs and pixel shaders.

j
05-18-2001, 08:29 AM
Dang, lost my post.

Ok, let's try this again.

As far as I can see, the reason Sheepie wants a pointer to the framebuffer is because glDrawPixels is considered too slow for what he wants to do, and he doesn't want to go to the trouble of using textures and/or they would be slow on his software implementation.

So, why not a compromise? Add a function to OpenGL called glFastDrawPixels or something like that. The user would specify the pixel location of the rectangle to be drawn, the height and width of the data, a data format, and a pointer to the data. Basically a 2D blit. This should be faster and easier to optimize than DrawPixels because there would be no transformation, fogging, texturing, scaling, or RGBA color manipulation, all of which can happen under DrawPixels.
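
The signature might look something like this (hypothetical, of course - no such entry point exists):

void glFastDrawPixels(GLint x, GLint y,
                      GLsizei width, GLsizei height,
                      GLenum format, GLenum type,
                      const GLvoid *pixels);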

And if this isn't fast enough, there could be an extension similar to VAR whereby the pixel data can be put into AGP or video memory and the card could pull it through DMA.

In Sheepie's case, he could put the results of his calculation straight into an AGP buffer, and the video card could pull it from there. Or if he still wanted to, he could copy it straight into video memory and have only one transfer of data, which is what he wanted. Although I have a feeling the AGP would be faster.

The advantages of this as far as I can see are:

1. Abstraction. You wouldn't need to know anything specific about the card; you just tell it what you want it to do instead of doing its work for it.

2. Speed. In some cases, this may be faster than a pointer to the framebuffer. By having the video card DMA the data it needs, the CPU can work on something else at the same time. Not to mention, writes to AGP are usually faster than writes to video memory.

Disadvantages:

1. Speed. In the case of a software renderer where the pixel format of the framebuffer is known, a pointer would be faster.

Well, any comments?

j

ET3D
05-18-2001, 10:13 AM
Yes, I agree. That's what I wanted to suggest initially, then reconsidered. I still have some reservations, but, reading again the description of glDrawPixels, I think that it may be worth having glFastDrawPixels (more correctly, having glDrawPixels, while the original should be called glDrawFragments).

My reservations are:

1) It's not necessarily faster than an optimised glDrawPixels. It's not just a '2D blit'. The frame buffer can always be in a different format from the image you're drawing, in which case conversion will be needed. You can disable all fragment operations before glDrawPixels, including depth buffering, and an optimised code path would probably work just as well as glFastDrawPixels.

2) It's difficult to define this well. The simplest definition would be a simple copy to the frame buffer, which is fine in Sheepie's case. But what if you want to have your image in the foreground? You'll have to define this somehow, which means that you'll need to either define a depth or use alpha testing. You'd probably want a depth value in any case, so that your image has some relation to the rest of the render. Which kind of brings you back to being like glDrawPixels.

Still, a "fast" form of glDrawPixels has an advantage not so much in speed but in simplicity, since I'd imagine that most people don't need all the extra features that glDrawPixels provides. glFastDrawPixels would save the need to turn off all the fragment operations, then turn them back on. There would also be some speed benefit in saving these state changes. The definition problem is not a serious one. I imagine that implementations would not write to the display directly, but use the image as a texture (direct from AGP, probably) for a screen-size quad, thereby adding the Z and alpha testing. Still, I'm sure that people will want alpha blending, too, so it may be a little difficult to make a clean cut here.

mcraighead
05-18-2001, 11:58 AM
Automatically disabling fragment ops really does no good. You can just disable them yourself and we can notice that. In fact, if you want optimum DrawPixels DEPTH_COMPONENT performance, for example, you need to set up the proper fragment ops. Specifically, you need depth test on, depth func ALWAYS, and color writes off (i.e. ColorMask FALSE/FALSE/FALSE/FALSE, or DrawBuffer NONE). In short, you need to set things up so that you are really just writing depth into the depth buffer and doing _nothing_ else in the operation. You also need to pass in the right formats. I might as well list the depth-component DrawPixels formats for best efficiency:

16-bit Z: UNSIGNED_SHORT/DEPTH_COMPONENT. (Requires fragment op setup.)
32-bit Z and stencil: UNSIGNED_INT_24_8_NV/DEPTH_STENCIL_NV. (Note that you don't need to set up the fragment ops for this one. Read the spec [which I wrote, BTW].)
32-bit Z only: UNSIGNED_INT/DEPTH_COMPONENT. (Requires fragment op setup.)
32-bit stencil only: UNSIGNED_INT/STENCIL_INDEX. UNSIGNED_BYTE/STENCIL_INDEX isn't bad, either. (Again, no fragment op setup required. Stencil DrawPixels works differently. Read the OpenGL spec.)
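
For the 16-bit Z case, that fragment-op setup would look something like this (a sketch; w, h, and depthData assumed):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, depthData);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);   /* restore state */
glDepthFunc(GL_LESS);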

As for DMA, yes, that would help. I have nothing to say on that front at the present time.

- Matt

ET3D
05-18-2001, 12:35 PM
Thanks, Matt! This thread is accumulating some helpful info from you. Any chance this will be in the next revision of the GeForce performance FAQ? BTW, are these tips also true for the TNT family?

I hope you won't mind sharing a little more. Let's go back to Sheepie's problem. Suppose he wants to use a certain image as a background to some 3D rendering. I'd assume that the following would work quickly, but I'd like you to nudge me in the right direction if I'm wrong:

Clear Z
Disable Z, texturing, blending, ... All except ColorMask(true,true,true,true)
Use DrawPixels with BYTE/BGRA or 5_6_5/RGB depending on 32 bit or 16 bit mode
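
In code, I'd expect something like this (a sketch for the 32-bit case, assuming identity matrices so the raster position lands at the lower-left corner, and that background holds the image):

glClear(GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glRasterPos2f(-1.0f, -1.0f);   /* lower-left corner under identity matrices */
glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, background);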

mcraighead
05-18-2001, 12:43 PM
Many of these things will _not_ be true for TNT; not all of them are accelerated.

I would suggest sticking to GL_BGRA/GL_UNSIGNED_BYTE for DrawPixels whether you are in 16-bit or 32-bit, for the moment. In the future more options may be available. (Other formats will still certainly work and run "reasonably" quickly, but they will not be optimal in performance.)

- Matt

Sheepie
05-18-2001, 02:15 PM
Well, I apologize for being rather inflammatory as well. I tend to feel very passionate about issues and do take things personally at times (something I need to work on). I just didn't see what people *were* saying as helpful, but rather as negative.

I'd be more than willing to give up the pointer idea if there were some chance that a blit function (glFastDrawPixels) might appear on the scene, as j suggested. I really don't understand why the current glDrawPixels is so darn slow, except for the reasons I mentioned above.

The reason I use the screen saver as my example is that it uses fairly high resources and I am considering adding 3D to the next release. This has unfortunately thrown me into the problem of getting that data to the screen fast enough. I also work on other projects (a model editor and an upcoming game) that could use this feature, but as the model editor doesn't require 2D and the game is in the drawing-board stages, I thought those would make poor examples.

Thank you, Matt, for providing some information on making glDrawPixels faster. I try to make the formats as similar as possible.

Maybe if someone could provide a fast DrawPixels (comparable to a BitBlt), that would be a great start. Maybe that could be enhanced later to provide some degree of masking. Many effects could be achieved if the blit could be used for both reading and writing.

Mike Welsh
05-23-2001, 01:09 PM
I do indeed realize that all of the wonderful arguments the 3D guys were making are mostly truthful, but a pointer to the frame buffer is the only way to maintain proper bus speed for high-speed data transfers. If, for instance, you were moving 240 megabytes per second across the PCI bus, you could direct-target an AGP buffer (GL??) and use a max of 240 megabytes per second. If, on the other hand, you first hit a target in system memory and then let GL move the buffer onto the card, you are now making your memory do 480 megabytes per second worth of work (240 on the PCI side and 240 on the AGP side). Which means that should you want to do anything fun with the video data, you have less time to do it, and you are really pushing your RAM speeds, which slows the rest of the system down. I currently have applications that require unhindered pipes of around 400 megs per second to the AGP card, something I would loathe to do through system memory.

To make a long story short, high-end realtime 3D systems and HD/film-resolution video systems are very difficult to build without direct targeting (i.e., going from PCI to AGP without a hop through memory). This has so far kept many companies like mine working with proprietary APIs, which give us speed without all the robustness of GL.

I would be very willing to do much of the work on the API myself, and even show how to get direct buffer access (if needed - I am an engineer who has done this), if it would allow me to use GL.

ET3D
05-23-2001, 03:28 PM
Matt hinted that it may be possible to use an AGP buffer in the future.

john
05-23-2001, 03:38 PM
Your argument would only hold if information were flowing one way, i.e. you were doing something like:

for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        *ptr++ = generator();

where generator() is a self-contained function (i.e. it doesn't reference any external data).

as soon as you start relying on other data (a convolution, for example, that sucks up an array and turns neighbouring values into a new value), your argument begins to lose credibility. If you did this, for example:

for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        (*ptr++)++;   /* read, modify, write back */

then you're reading from a slow resource.

So, your argument will only hold if the CPU can generate cool stuff without state. How useful is this? Furthermore, the above example can't use burst writes, since you're not providing a pointer to a buffer and getting DMA to suck it straight through.

In summary:

your argument will only hold so long as
- your code doesn't use state from its previous calculations
- writing single values at a time is not slower than block writes.

if these don't hold, then it won't take MUCH computation before reading/writing to a faster resource and then DMA-ing to the card starts winning.

IMHO, of course.

cheers,
John

OpenGLRox
05-29-2001, 08:52 AM
Heh heh. I came to this forum in order to post a suggestion of having a pointer to a frame buffer, and it looks like someone else beat me to it. =)

Now, the reason I want one is this:

I create a 640*100 interface for a game. I want to put it on the screen just like it is: no lighting, no fog, no multiple polys, no scaling. If I had a pointer to a frame buffer, I could just blit it into the buffer after all my 3D draws were done, and life would be good. It would look EXACTLY as it did in the paint program.

What I am using now is an ortho view, which is essentially overkill, because I also have to disable effects and all that JUST to get it on the screen the way it looked in my paint program. I also have to split the graphic up into smaller textures just to get a non-standard-size graphic of 640*100. That is a pain in the butt, IMHO. Yes, getting a 640*100 graphic on screen can be done in OpenGL, if you want to jump through a bunch of hoops.
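For anyone who hasn't jumped through those hoops, they look roughly like this (a minimal sketch; the window size, the particular states disabled, and the variable interface_pixels are illustrative assumptions, and the texture-splitting part is left out):

[CODE]/* save the matrices and switch to a pixel-aligned 2D view */
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, 640, 0, 480, -1, 1);          /* 1 unit == 1 pixel */
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

glDisable(GL_LIGHTING);                  /* turn off "all that" */
glDisable(GL_FOG);
glDisable(GL_DEPTH_TEST);

glRasterPos2i(0, 0);
glDrawPixels(640, 100, GL_RGB, GL_UNSIGNED_BYTE, interface_pixels);

glEnable(GL_DEPTH_TEST);                 /* restore state and matrices */
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();[/CODE]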

I am not suggesting a pointer to the video card memory (although it might be nice to have around anyway). I am suggesting a pointer to a RAM buffer. That is, if OpenGL draws to a RAM buffer and then blits it to the hardware, a pointer to that buffer would be a very nice, and easy to implement, feature for us developers, to make our lives a little bit easier.

Now, since I am not entirely sure how OpenGL draws to the hardware: if a RAM buffer is not used in any way, then having a pointer to the video RAM would still be a welcome change. It may be slow to access, but heck, if a developer is willing to accept the speed risk in his application, then let him. =) Nothing wrong with that.

Anyways, just my 2 cents or more like 5 bux.

ET3D
05-29-2001, 12:10 PM
OpenGLRox, I would say that this is a good example of when you don't want to draw the image as is. Drawing that 640x100 image as is means that you're limiting the player to a specific resolution (presumably 640x480). This is considered a bit low today even for pseudo-3D games. If you draw using textures, you can scale the interface to fit higher resolutions (even if it won't look exactly like in the paint program).

mcraighead
05-29-2001, 01:09 PM
If that's what you want to do, you should either use DrawPixels or a texture. (A texture has big advantages, as noted, since it can scale w/ nice filtering.)

- Matt

OpenGLRox
05-29-2001, 01:31 PM
ET3D, the 640*100 was simply an example of a non-standard size piece of art that someone might want to draw. I'll change it to 1280*100 then. =)

The reason I might want to draw a graphic at actual size is that I may not want the art to get lossy from scaling down. I may not want the art to get pixelated from scaling up. I may not want lights on it, or fog. I just want it on the screen exactly as it is.

The point is, limiting a developer in any way is a bad thing. Take a look at Internet Explorer vs Netscape. Have you ever tried to design a nice-looking web page with lots of goodies? In IE it's easy, because they give developers a lot of stuff to play with. In Netscape, however, they limit the developer to the W3 standard and refuse to budge. So, when I have done web pages for people, I have had to say, "Well, if you want a really nice web page, IE people are going to see it. We MAY be able to do a FEW nice things in Netscape, but we have a bunch of hoops to jump through, and more code to write to make it happen." One person asked me to put a marquee on their web page. No problem in IE: <marquee>some text</marquee>

In Netscape, though, get out the JavaScript manual. There are hoops to jump through. As a result, I try to steer people away from Netscape now, which isn't hard considering you can have some nice ActiveX stuff in IE.

I think OpenGL is awesome, but tying a developer's hands in any way is not a good thing, and being the parent by saying, "We're not going to let you have that. You might hurt yourself," is also most likely not going to make Joe Developer who has been coding for 30 years very happy.

john
05-29-2001, 03:50 PM
[QUOTE]I think OpenGL is awesome, but tying a developer's hands in any way is not a good thing, and being the parent by saying, "We're not going to let you have that. You might hurt yourself," is also most likely not going to make Joe Developer who has been coding for 30 years very happy.[/QUOTE]

tying the developers' hands? Have you not read the many extensive posts on this topic? What about tying the hands of the h/w vendors? <smacks his head against the wall> It isn't just a matter of stopping developers from doing what they want to do. You forget... an operating system is meant to manage LOTS of different processes. It isn't just an OpenGL thing. Freak, some unix systems beat the compiler around the head to follow conventions so everything can talk to each other and be civil. It's not just a matter of a single user and a single process thinking it's somehow more important than anything else. What do you want next? Pointers to semaphores so you can override locks?! Pointers to the IVT so you can change interrupts at WILL with no regard for any other process? How about network-mapped pointers to every machine connected to the internet (so long as you *promise* not to shoot yourself or any other user in the foot, PUH-LEASE)??

It's not a matter of sulking because someone is not giving you a feature. Give up on it. Get over it. Stop crying about something you shouldn't have. I can only assume you guys don't program in object-oriented languages (but I WANT to see this private variable even though I'm not a member of the class and can't make it a friend), or a functional language (but I WANT state and side effects, even though this will invalidate the implicit parallelism), or on a multi-user operating system (but I WANT to chmod a file, even though I don't own it), or any myriad of other things.

Operating systems are, these days, sufficiently complex as to enforce a large set of rules so every process can play happily with the others. Yes, the operating system will stop a given process from doing stuff. JVMs in your web browser, for example, won't let just ANY byte code do anything to the parent o/s. (But, hey!! Look what happens when Outlook lets just ANY script execute, because it KNOWS that program writers won't shoot themselves in the foot, 'cos the programming community is just one BIG HAPPY FAMILY!!)

Bleah. BLAH. I say again, BLAH. Go to an o/s course. Ask THEM what they think about protecting software from shooting itself in the foot. See just HOW MUCH of it is done by other parts of the o/s, INCLUDING the compiler (in some cases).

Hell, even Sun Microsystems are dead keen on writing their own device drivers, because they don't want to give supervisor privileges to just ANY piece of code. Microsoft have to jump through hoops with virtual device drivers and all the rest of it to try and combat instability from third-party code with o/s access privileges.

grr

John

Sheepie
05-29-2001, 07:28 PM
Hey John...lighten up.

I don't think you are going to encourage people to see your point of view by insulting them.

I have programmed on both sides of the fence; I do understand the need for private variables, and as of late I have written my code to be as free of globals as possible.

I think most of us here can appreciate the need to keep the OS happy and I for one don't think my application is that important. I strive to make my programs as friendly to the OS as possible. I perform garbage collection, save and restore any settings the program may have modified while operating, and try to keep resources low when the application is minimized.

I'm sure you could easily make us all look like first-graders with your extensive knowledge of the subject. Try to remember that we each have talents, and we are not asking for pointers to every friggin' resource on the computer. I for one would be more than happy at this point with a *fast* glDrawPixels / glReadPixels.

Do I think it'll happen? Almost certainly not.

Let's say this. What if GL provided a special buffer that was specific to the process that needed it? Almost something akin to a private variable: only Program A could access it, and Program B could have its own special buffer. I think this is called virtual memory mapping, but I'm sure I'll be corrected if I'm mistaken. Now let's say I could get a pointer to that, and that buffer was used as the framebuffer.

I'd be willing to take the performance hit for 3D in cases where I had mainly 2D.
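Incidentally, outside of GL, Windows GDI already offers something close to this: CreateDIBSection hands you a process-private buffer with a real pointer, which GDI can then blit to the screen. A minimal sketch (error handling omitted; the sizes are illustrative):

[CODE]#include <windows.h>

void draw_via_dib(HDC screenDC)
{
    BITMAPINFO bmi;
    void *bits = NULL;
    HBITMAP dib;
    HDC memDC;

    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = 640;
    bmi.bmiHeader.biHeight      = -480;   /* negative height: top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    dib   = CreateDIBSection(screenDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    memDC = CreateCompatibleDC(screenDC);
    SelectObject(memDC, dib);

    /* scribble on 'bits' with plain pointer writes here ... */

    BitBlt(screenDC, 0, 0, 640, 480, memDC, 0, 0, SRCCOPY);

    DeleteDC(memDC);
    DeleteObject(dib);
}[/CODE]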

I'd like to know how John would feel if all pointers were taken away. You seem ready to claim that we want pointers to everything under the sun. How about no pointers? How about we make everything accessible only through some OOP interface? While it might look pretty, you would most probably see a performance hit.

john
05-29-2001, 09:50 PM
Hello,

lighten up? Coming from the guy who resorted to sarcasm and petty insults just a few posts ago? Incidentally, *NEVER* call me dear ever again.

There are two parts to this message:

- explaining how you missed my point "pointers to everything", and
- why pointers are not all that good

Firstly: you miss the point when I retort, "but next you will want pointers to the printer because it prints too slow." I am attacking the form of your argument, to show that it is flawed EVEN in its form. It's a common enough trick, incidentally. For example:

All octopi live in the sea
All octopi have tentacles,
Therefore:
Everything living in the sea has tentacles.

Now, without even knowing anything about marine life, this argument can be shown to be structurally flawed (irrespective of whether the conclusion is correct or not, although it should be well known that the conclusion is wrong). If it is shown to be structurally flawed, then that is a good enough reason not to accept the argument.

I mean, this could equally be applied to:

All widgets have Gadgets
All widgets use Thingymabobs,
Therefore:
All Gadgets use Thingymabobs.

You just need another argument with the same structure that is demonstrably wrong to undermine the original. Feh. This is not new. Even Lewis Carroll uses an example in Alice in Wonderland:

"I mean what i say... I mean, I say what i mean. It's the same thing, you know," says Alice. To which the Mad Hatter retorted something along the lines of:
"No. Its like saying that "I like what I eat" is the same as "I eat what I like", or "I breath when I sleep" is the same thing as "I sleep when I breathe"

It's attacking the form of the argument to demonstrate that it's unsupportable.

You said that you needed a pointer to the frame buffer because this will make it faster.

If THIS is your stance, I counter, then a pointer to ANYTHING will make it faster. But clearly this is wrong. A pointer to the PRINTER won't make printing faster, because the printing speed is independent of how the printer is addressed. I was demonstrating that your argument as you proposed it (pointer == speed) is unsupported.

okay, okay. I'm settling down. <relaxes... breathes cool, fresh air>

no, seriously. =) haha. I jest. I get excited sometimes about stuff.
What I say is true, tho', but I guess I could have been gentler about it. A thousand apologies.

But, I digress. Briefly, then:

Second point: pointers are NOT that important and they aren't that magical, and (in fact) they ARE seen to be eville.

Why do you need pointers? A common enough reason is the argument "because I don't want the compiler to dereference my array indices". There are others (and I'd like to hear them, eh), but people seem to think that:



[CODE]GLubyte buffer[10000], *ptr;
int t;

for (t = 0, ptr = buffer; t < 10000; t++)
    *ptr++ = 0;[/CODE]

is somehow faster than

[CODE]GLubyte buffer[10000];
int t;

for (t = 0; t < 10000; t++)
    buffer[t] = 0;[/CODE]

Who thinks this? Why? I mean, some have said that to compute buffer[t] the compiler has to multiply t by the element size and add it to buffer to get the address... which is more work than just incrementing the pointer. Right? But the compiler can see this, too. Once upon a time that argument WAS valid, because compilers were dumb. But compilers are sufficiently sophisticated these days that they do a HELL of a lot of flow analysis, and can see that buffer[t] CAN be automagically transformed into pointer arithmetic. It's like unrolling loops, too. Once upon a time the programmer had to unroll loops, but these days compilers know about loop unrolling. They even know about cache sizes, and so know just how MUCH unrolling will be beneficial to pack things into a cache line, and so on. Compilers know this stuff.

So, when is a pointer useful? Certainly there is a whole class of languages which get by quite happily WITHOUT pointers.

Pointers are also seen to be bad because they impede flow analysis: once two pointers might alias the same data, the compiler has to assume state can change behind its back. Hmm. I can't remember more of this... I fell asleep during that lecture. (It was a couple of years ago, too, and compiler construction was two years before that again, heh =)

Pointers aren't magical.

But I apologise for getting carried away =) Your idea is good, tho'.

cheers
John

Sheepie
05-29-2001, 11:34 PM
Well... you are a feisty chap, aren't ya.


Where did you learn that speed gains don't make programmers lazier? They most certainly do. Case in point... Windows. Am I the only one who notices the added bloat and slowdown with each new release? And yes, if I can make a 3D modeling program that kicks out 10-12 fps with texturing and blending, why is it that every demo or GL application I have run into runs at something on the order of 1-2 fps?

I'm sure I'm not the only one that feels this way.


[QUOTE]I have developed both, so I wait to stand corrected. If you don't think it takes work to optimize code, then maybe you haven't coded assembly. I am not going to try to incite a flame war, as that would be unproductive, but as of late many programmers simply view programming as features first, and add speed later (if at all). It only takes a small amount of planning and a state of mind to write efficient code from the get-go. I'm not suggesting you go as far as unrolling loops (that is no longer a proper optimization method), but there are many ways to write the same piece of code.[/QUOTE]

That was written as a retort to the comment that optimization was *easy*.


[QUOTE]O.k., maybe you think we should all go out and buy G2's. You'd make a great salesperson for NVIDIA. I think you may think everyone and his uncle has a great system with great specs. Sorry to break it to ya, but most of the people with high-end systems are gamers. While the gaming community may be large, it is not the majority. Maybe you think everyone has high-speed internet connections as well?[/QUOTE]

I don't see anything here as a direct insult, rather an observation about an assumption I run into all the time: the tendency to think everyone has a tweaked system when in fact they do not. It was also a retort to this:


[QUOTE]How is not optimising for software OpenGL being lazy? The vast majority of users have hardware acceleration. IMO it would be a waste to invest a lot of time for a minority of users, when you could invest your time in making your software better. I would also suspect that those users who run software OpenGL don't buy products that need OpenGL, since for the price of a game (or less), you can buy a 3D card that does support OpenGL in hardware.[/QUOTE]

and...


[QUOTE]You know ET3D, you're right; just because I can't afford a video card, I can't complain. What was I thinking? Only those with good hardware setups have that right.

Yeah, those are some good comparisons, John. A printer and a hard disk. Yeah, be a dear and get me pointers for those too while you're at it.[/QUOTE]

VESA was unfortunately not as widely adopted as it should have been, but yes, I had learned it and hoped it would catch on. Thank god we have advanced to the point where hardware determines what we need. Now the people selling the cards get to determine our fate.

Shared resources are nothing new, and yet still...some applications DO mess up other programs. You'd think with all the restrictions in place, nobody would have gotten through, so the obvious solution is more restrictions.

I am *over* assembly; I only used it as an example of providing speed through software means. And yes, assembly is dead.

[QUOTE][MORE sarcasm]Gee, I'll bet you all can't wait till they make CISC chips that process C commands and do away with machine code/assembly altogether. Of course C would then become just as unused, because it would be *too hard* and *too unstable*.[/MORE sarcasm][/QUOTE]

This is probably my main rant... Yes, I was sarcastic, but thankfully most people would see that from the nice SARCASM tags that I embedded. I will defend myself if attacked.

and in the end...


[QUOTE]Well, I apologize for being rather inflammatory as well. I tend to feel very passionate about issues and do take things personally at times (something I need to work on). I just didn't see what people *were* saying as being helpful, but rather as being negative.[/QUOTE]

But after reading your reply to OpenGLRox, who didn't make any inflammatory comments like I did before the cavalry got sent in to shut me up, you said-


[QUOTE]tying the developers' hands? Have you not read the many extensive posts on this topic? What about tying the hands of the h/w vendors? <smacks his head against the wall> It isn't just a matter of stopping developers from doing what they want to do. You forget... an operating system is meant to manage LOTS of different processes. It isn't just an OpenGL thing. Freak, some unix systems beat the compiler around the head to follow conventions so everything can talk to each other and be civil. It's not just a matter of a single user and a single process thinking it's somehow more important than anything else. What do you want next? Pointers to semaphores so you can override locks?! Pointers to the IVT so you can change interrupts at WILL with no regard for any other process? How about network-mapped pointers to every machine connected to the internet (so long as you *promise* not to shoot yourself or any other user in the foot, PUH-LEASE)??[/QUOTE]

and this very nicely worded argument-


[QUOTE]It's not a matter of sulking because someone is not giving you a feature. Give up on it. Get over it. Stop crying about something you shouldn't have. I can only assume you guys don't program in object-oriented languages (but I WANT to see this private variable even though I'm not a member of the class and can't make it a friend), or a functional language (but I WANT state and side effects, even though this will invalidate the implicit parallelism), or on a multi-user operating system (but I WANT to chmod a file, even though I don't own it), or any myriad of other things.[/QUOTE]

I just had to say something.

I won't even go to your last post as it speaks for itself.

I think for the record-

"We have heard you and understood where you are coming from."

and for the record-

I do not think pointers are a panacea, but they do, at times, help.

I do not want pointers to every blasted thing on my computer.

I am not an expert programmer, systems developer, or hardware designer, but I have had jobs as an electronics engineer. I have run my own software development business for 2 years now and have been programming for over 15 years.

I would like a fast Draw/Read Pixels or a pointer.

ET3D
05-30-2001, 04:12 AM
Don't know about "nice ActiveX stuff". And I usually browse without images, as I'm interested mainly in content. And Netscape's main problem is not that it follows the standard strictly, but that it doesn't follow it well (for Netscape 4.x, at least, which still seems to be the most common version).

Anyway, since this is not a web development forum, I'll go back to OpenGL. I agree that having to disable and enable things just to draw what you want can be a little annoying, and it would have been nice to have simpler functions that could do similar things without all this moving to ortho, etc. - such as sprite functions. Although you can put all the state changes into a display list, which should at least make the program cleaner.
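Something like this, perhaps (a sketch; which states you record is up to you):

[CODE]GLuint overlayState;

/* once, at init time: record the overlay state changes */
overlayState = glGenLists(1);
glNewList(overlayState, GL_COMPILE);
glDisable(GL_LIGHTING);
glDisable(GL_FOG);
glDisable(GL_DEPTH_TEST);
glEndList();

/* each frame: one call replays them all */
glCallList(overlayState);[/CODE]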

Still, for one there's the flexibility thing, as I mentioned. You probably want that anyway. If you go with 640x480, it will look too low-res for users of new cards. If you go with 1280x1024, it might not even work for users of older cards and monitors.

As for cutting the image into small textures: if you're aiming just at newer cards, you won't have a problem with the 256x256 texture size limit, which will simplify things. If you're aiming at cards limited to 256x256 textures, then you won't get a pointer-to-framebuffer extension on them even if one is added to OpenGL, since they will be old cards.

So while I agree that cutting your image into several textures can be pretty annoying, I also think that it's the best solution, that will provide the best results for the largest number of users.

I think that this is a major problem with the pointer-to-framebuffer idea: it tries to solve problems which are most pronounced on older 3D hardware, and that is exactly where it can't help. Even if such an extension is approved, you'll still end up doing everything you're doing now for older chips that don't get driver updates, and will only be able to use the extension on newer chips, where it's less needed.

mcraighead
05-30-2001, 10:20 AM
Originally posted by OpenGLRox:
The point is, limiting a developer in any way is a bad thing.

If that's the way you feel, I suggest you go back to DOS.

There are many very good reasons for limiting developers' control.

- Matt

Korval
05-30-2001, 02:51 PM
Like I've said before, I don't have a problem with gaining direct access to the framebuffer, excepting the OS details and other issues that brings up. Really, the question I have to ask is, "Why does this belong in OpenGL?"

OpenGL is, basically, a specification. OpenGL's extensions are specifications. What you are asking for is some command like "void *glFrameBufferPointerEXT()". Well, the extension specification must tell you exactly what the format of the returned pointer is. It must provide details that are highly implementation-dependent. In fact, that pointer might be invalidated by a change in OpenGL's state, so you can't even be assured that it remains available.

The only way for something this low-level to be part of OpenGL is if the function returns a pointer to a buffer whose format is specifically defined (RGBA8, for instance). Then, when you are finished with the buffer, it would have to copy your changes into the real framebuffer. Of course, this happens to be almost exactly what glReadPixels/glDrawPixels does.
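A minimal sketch of what that fully-specified variant boils down to (window size assumed to be 640x480; error handling omitted), which is exactly why it buys nothing over the existing entry points:

[CODE]static GLubyte shadow[640 * 480 * 4];   /* CPU-side copy in a fixed RGBA8 layout */

glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, shadow);
/* ... modify shadow[] with plain pointer writes ... */
glRasterPos2i(0, 0);
glDrawPixels(640, 480, GL_RGBA, GL_UNSIGNED_BYTE, shadow);[/CODE]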

A specification cannot really give out unspecified data; hence the term "specification". Think about it. The extension specification for void *glFrameBufferPointerEXT() would basically read, "Returns a pointer to unspecified data of indeterminate length." This pointer's length and format could differ depending on the pixel format in question. It might even change with a driver update, thus breaking old code that relied on a particular format (breaking old code is something extension specifications try very hard to avoid). And once you force this pointer to point to data of a specific format, it becomes glReadPixels and glDrawPixels, with all the inefficiency inherent in them.

It is theoretically possible that the extension could give you some structure detailing the format of the data. However, many hardware vendors are highly secretive about such low-level details of their hardware, as it could give their competitors important information they could use to enhance their own products.

It is even possible to imagine an implementation that does not possess a complete framebuffer you could get a pointer to (a tile-based renderer, for instance). The OpenGL specification (and, by extension, its extensions) simply cannot grant access to data that is undefined.

Humus
05-30-2001, 10:46 PM
Originally posted by OpenGLRox:
The reason I might want to draw a graphic at actual size is that I may not want the art to get lossy from scaling down. I may not want the art to get pixelated from scaling up. I may not want lights on it, or fog. I just want it on the screen exactly as it is.


Easily done with textures too. Point sampling and drawing it at actual pixel size.
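For instance (a sketch; it assumes an ortho projection with 1 unit per pixel is already set up, and ui_tex is an already-uploaded texture):

[CODE]glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, ui_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  /* point sampling */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBegin(GL_QUADS);                        /* 640x100 quad: one texel per pixel */
glTexCoord2f(0, 0); glVertex2i(0, 0);
glTexCoord2f(1, 0); glVertex2i(640, 0);
glTexCoord2f(1, 1); glVertex2i(640, 100);
glTexCoord2f(0, 1); glVertex2i(0, 100);
glEnd();[/CODE]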


Originally posted by OpenGLRox:
The point is, limiting a developer in any way is a bad thing. Take a look at Internet Explorer vs Netscape. Have you ever tried to design a nice-looking web page with lots of goodies? In IE it's easy, because they give developers a lot of stuff to play with. In Netscape, however, they limit the developer to the W3 standard and refuse to budge. So, when I have done web pages for people, I have had to say, "Well, if you want a really nice web page, IE people are going to see it. We MAY be able to do a FEW nice things in Netscape, but we have a bunch of hoops to jump through, and more code to write to make it happen." One person asked me to put a marquee on their web page. No problem in IE: <marquee>some text</marquee>

In Netscape, though, get out the JavaScript manual. There are hoops to jump through. As a result, I try to steer people away from Netscape now, which isn't hard considering you can have some nice ActiveX stuff in IE.

I think OpenGL is awesome, but tying a developer's hands in any way is not a good thing, and being the parent by saying, "We're not going to let you have that. You might hurt yourself," is also most likely not going to make Joe Developer who has been coding for 30 years very happy.

While IE may provide you with more features and possibilities, it also has the most security holes, mostly because features that may not be needed get added just because someone wants them, without thinking about the consequences. That's commonly how it's done at M$: they let developers do too much, leaving security holes everywhere. Especially in the DOS days, as Matt said, where everything was open, a faulty or malicious application could easily bring the whole computer down. This happened to me many times in those days, when a bug in my app caused the screen to go blank and the computer restarted with hardware errors that didn't go away until I turned the power off. Now, in the case of pointers to the framebuffer, what guarantees that the application doesn't screw things up for the graphics card? What if a bug makes the app write outside the framebuffer?

john
05-30-2001, 11:13 PM
Hello,

ah.. I remember the good old days of Microsoft extending specifications because developers REALLY liked it. Remember Microsoft Visual Java? =)

cheers
John

DJSnow
06-08-2001, 12:36 AM
YES!
oooohhhh....yes !!!!

GIVE US A POINTER TO THE FRAME-BUFFER !!!!

the "monkey has landed" if this occurrs...

vikky
06-22-2001, 03:36 AM
And it seems to me... we don't need a framebuffer pointer to access the framebuffer! If we write
glClearColor(1.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
we get a red background (it does change framebuffer memory), but we don't need a framebuffer pointer for it!! So, what we need is not a pointer, but a set of functions to operate on the framebuffer; it's not important to us what's going on inside OpenGL when we access it. For example, glClearTexture(texptr) - to make a textured background (it's a common task, and I usually have to draw a textured rectangle to solve it, but it's slow).
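glClearTexture doesn't exist today, but the workaround can at least be wrapped up once (a sketch; clearTexture is vikky's hypothetical name, not a real entry point, and it assumes a 0..1 ortho projection and an already-uploaded texture):

[CODE]/* approximate the proposed glClearTexture() with today's GL: paint the
   background with a screen-aligned textured quad, depth writes off so
   the 3D scene can draw over it afterwards */
void clearTexture(GLuint tex)
{
    glDepthMask(GL_FALSE);
    glDisable(GL_LIGHTING);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);

    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(1, 0);
    glTexCoord2f(1, 1); glVertex2f(1, 1);
    glTexCoord2f(0, 1); glVertex2f(0, 1);
    glEnd();

    glDepthMask(GL_TRUE);
}[/CODE]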

Robbo
07-11-2001, 02:39 AM
The above is a bit silly. If you check out http://www.useit.com/alertbox/, you will see that a "really nice" web page is not quite what you think it is.

I think Netscape sticking to the standard is good for designers and users all over the place. Microsoft want you to use all their advanced features, so you all need MS browsers to see the pages. It's a lock-in.

Same deal with DX and OpenGL. I would rather be portable than strapped to Microsoft for the rest of my working life.