Hello,
McCraigHead has eloquently explained why pointers to video memory are a proverbial bad thing. It won’t happen, and your arguments FOR it and your arguments against not having it (!?) are misdirected, IMHO.
The motivating argument seems to be that glReadPixels/glDrawPixels is too slow, and that a stubborn refusal to use texture mapping as a method of uploading bitmap data to the card is a good reason to demand a pointer to video memory, because it will (somehow) be faster. Why? You might equally suggest that your hard disk is too slow, so you must have a memory-mapped pointer to it, or that you’re unhappy with your laser printer’s printing speed, and so want a pointer to that, too.
Why will a pointer make transferring bitmap data to the card faster? The consensus seems to be that having a pointer to video memory will eliminate the need for a duplicate copy of the image in system memory and will get around calling glDrawPixels(). This can, arguably, reduce bus traffic, since the data goes straight to the video card and therefore only crosses the bus once. (Although the second transfer from system memory can be done with DMA.)
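Just so we’re concrete about the path we’re actually arguing over, here’s a minimal sketch (in C, with made-up image dimensions, and all window/context setup omitted) of the conventional route: keep your own copy of the image in system memory and hand it to glDrawPixels, leaving the driver free to DMA it across however it likes.

    #include <GL/gl.h>

    #define IMG_W 256   /* made-up dimensions, purely for illustration */
    #define IMG_H 256

    /* The "duplicate" copy of the image that lives in system memory. */
    static GLubyte image[IMG_H][IMG_W][3];

    void upload_image(void)
    {
        glRasterPos2i(0, 0);                    /* where the pixels land */
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  /* rows are tightly packed */
        glDrawPixels(IMG_W, IMG_H, GL_RGB, GL_UNSIGNED_BYTE, image);
    }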
It is not as simple as just getting a pointer to video memory and merrily reading/writing to it at whim, however. Video memory access has notoriously been slow. The original poster will remember this from his DOS VGA/SVGA coding days. Remember how reading from the video card seemed to be a feature the h/w vendors only added for backwards compatibility? Reading/writing to video memory isn’t going to run at the same speed as reading/writing to system memory. The argument for a speed-up amounts to claiming that a read/write to system memory plus a transfer to the video card is slower than reads/writes to a slower resource.
Another argument proposed by some is that since OpenGL is meant to be low level, it should provide a pointer. This misses the point of OpenGL. OpenGL is a low-level abstraction of the graphics hardware. Abstraction, by its very definition, hides away the implementation details. (So, by the very definition of abstraction, OpenGL SHOULDN’T expose a pointer to the programmer.)
Suppose that you could get a pointer to the graphics display. How is the programmer going to be able to USE this pointer? Someone talked about using a string instruction to blit the image across, but this is fairly naive. Who says that the video memory is linearly addressed, or that the data is encoded in RGB triples on byte-aligned boundaries, or any of a myriad other schemes?
Someone retorted that learning pixel formats is no different from learning new techniques off the advanced graphics forum, but this is also naive. Device drivers exist so programmers don’t NEED to learn how different hardware is structured. Doesn’t anyone remember the pain of non-standard SVGA cards? How some games only supported three major SVGA cards, because they were all different, until VESA brought out a common interface?
Suppose, however, that a pointer was available and (as someone suggested) the programmer could figure out the pixel format through some interface extension string. Then what happens when a NEW pixel format comes out? Does the application suddenly break because it can’t understand that this new format isn’t linearly addressed, but uses bitplanes and stores each of R, G and B in a separate buffer, or something equally perplexing?
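If it isn’t obvious what “understanding the pixel format” commits you to, here’s an illustrative (and entirely invented) routine written against an assumed packed-RGB, linearly addressed layout. A card that keeps R, G and B in separate planes, or in bitplanes, makes it silently wrong, and every such layout needs its own code path, which is exactly the job device drivers already do for you.

    #include <stddef.h>

    /* Invented layout assumptions: one byte each of R, G, B per pixel,
     * rows laid out one after another with a fixed pitch. */
    void put_pixel_packed_rgb(unsigned char *vram, size_t pitch,
                              int x, int y,
                              unsigned char r, unsigned char g, unsigned char b)
    {
        unsigned char *p = vram + (size_t)y * pitch + (size_t)x * 3;
        p[0] = r;
        p[1] = g;
        p[2] = b;
    }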
Ultimately, however, graphics cards are another shared resource, and they need to be controlled by the operating system. In the same way that an application can’t have direct access to a disk because the o/s doesn’t trust it to respect the file system, and in the same way a program’s printer output must be spooled so multiple applications don’t try to print simultaneously, the o/s has to know how the graphics resource is being used. Just because an application might request a window doesn’t mean that it owns that window all the time. It’s fairly easy to show (under IRIX, for example, at least) that buffers are shared between windows. (If, for example, an application just captures even the back buffer and writes it to disk, you can see other windows materialise in the files as they’re moved over the capturing program’s window.) Most users would find it unacceptable for an application to get hold of a window but write over other windows when they overlap. Not only can a graphics resource be given to another application, but that resource might change and make pointers to it invalid (if a window is moved, for instance, or the virtual desktop panning thing kicks in).
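For what it’s worth, the capture experiment mentioned above is nothing more exotic than reading the back buffer every so often and dumping it raw to disk. A rough sketch (window size made up, context setup and the frame loop omitted) looks like this:

    #include <GL/gl.h>
    #include <stdio.h>

    #define WIN_W 640   /* made-up window size */
    #define WIN_H 480

    void grab_back_buffer(const char *path)
    {
        static GLubyte pixels[WIN_H][WIN_W][3];
        FILE *f;

        glReadBuffer(GL_BACK);                  /* read the back buffer... */
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, WIN_W, WIN_H, GL_RGB, GL_UNSIGNED_BYTE, pixels);

        f = fopen(path, "wb");                  /* ...and dump it raw to disk */
        if (f) {
            fwrite(pixels, 1, sizeof(pixels), f);
            fclose(f);
        }
    }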
Pointers will also stall the card, because suddenly the graphics card doesn’t know what you are changing in the frame buffer, and has to finish and flush whatever it has in flight before it can safely let you touch it.
Pointers are bad. But there might be alternatives.
This is just a wacky idea. Truly wacky. Freaked out. On drugs, man =) But what if there were a pixel buffer extension whose memory you COULD get a pointer to? That way you could write to memory on the video card (and it could be segregated from the frame buffer memory so the monitor doesn’t soak the read port on the chips all the time) in whatever format you wanted, just like system memory. Then you could use glDrawPixels from this memory into the frame buffer. It’d save on bus bandwidth because the data is already ON the card.
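To make the wacky idea a bit more concrete, here’s the shape I imagine the interface taking. The entry points here (glAllocPixelBufferWACKY and friends) are completely made up and don’t exist anywhere; this is only a sketch of the idea, not a real API.

    #include <GL/gl.h>

    /* Completely hypothetical entry points -- no such extension exists. */
    GLubyte *glAllocPixelBufferWACKY(GLsizei bytes);
    void     glFreePixelBufferWACKY(GLubyte *buf);

    #define IMG_W 256   /* made-up dimensions */
    #define IMG_H 256

    void draw_from_card_memory(void)
    {
        int i;

        /* Ask the driver for a chunk of memory that lives on the card,
         * segregated from the frame buffer proper. */
        GLubyte *buf = glAllocPixelBufferWACKY(IMG_W * IMG_H * 3);

        /* Fill it directly, just as if it were system memory. */
        for (i = 0; i < IMG_W * IMG_H * 3; ++i)
            buf[i] = (GLubyte)i;

        /* Then glDrawPixels copies it into the frame buffer; since the
         * source is already on the card, nothing crosses the bus again. */
        glRasterPos2i(0, 0);
        glDrawPixels(IMG_W, IMG_H, GL_RGB, GL_UNSIGNED_BYTE, buf);

        glFreePixelBufferWACKY(buf);
    }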
Well, this ought to do it for now.
Some final quick things I wanted to elaborate on, but can’t be bothered:
-
leaving it to the h/w is good. It’s abstraction. The h/w vendors can make improvements “behind your back”. This is exactly why object-oriented languages have private member functions.
-
get over assembly. The days of hand-optimising assembly code are gone; algorithmic optimisations are the way to go. Balancing the pipeline for one processor is all very well and good, but what happens when you want to run your code on another processor? I mean, do you know ANYTHING about memory latencies, the number of pipeline stages, how superscalar the chip is (how many pipes it has, which set of instructions each pipe handles), how the branch prediction mechanism works, instruction timings, and so on and so on? So, you spend 9 months of your life figuring that out, and then someone brings out a new processor with more pipeline stages, and all that carefully balanced code is out the window.
-
and I disagree about coders writing FOR the h/w. They shouldn’t be writing to FIT the h/w. [snipped]
cheers,
John