Have you thought about the theoretical possibility of CPUs becoming remarkably faster than GPUs?

Have you thought about the theoretical possibility of CPUs becoming remarkably faster than GPUs at some point in the future?

The theoretical scenario is that a major breakthrough in hardware engineering produces a way to make CPUs 100x faster than they are now. In the same theoretical scenario GPUs remain at their normal speed.

At that point the ‘sending stuff to the GPU’ trend will lose merit.

Would the solution be to simply implement the OpenGL API on the CPU side?

Say, “due to recent developments, our company will release new drivers that take the burden away from the GPU; the API use will remain the same”.

Will that be enough, or will OpenGL need rewriting?

I wonder who is going to support that. If GPU manufacturers no longer had much of the pie, who would make the drivers?

That appears to point to the OS vendors, or even the CPU manufacturers.

yes.

The biggest flaw in your premise is that GPU speeds are increasing faster than CPU speeds, even taking into account the many-core approaches.
Further, this type of hybrid CPU/GPU computing is more in the realm of OpenCL, which can already work hand in hand with OpenGL.
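To make that concrete, here is a minimal sketch in plain C against the standard OpenCL host API (it assumes an OpenCL runtime/ICD is installed). The same enumeration code finds CPU and GPU devices alike; only the device-type flag differs:

```c
/* Illustrative only: enumerate CPU and GPU OpenCL devices with the same API. */
#include <stdio.h>
#include <CL/cl.h>

static void list_devices(cl_platform_id platform, cl_device_type type, const char *label)
{
    cl_device_id devices[8];
    cl_uint count = 0;

    /* Returns an error (e.g. CL_DEVICE_NOT_FOUND) if this platform has no
     * device of the requested type; we just skip it in that case. */
    if (clGetDeviceIDs(platform, type, 8, devices, &count) != CL_SUCCESS)
        return;

    for (cl_uint i = 0; i < count; ++i) {
        char name[256];
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof name, name, NULL);
        printf("%s device: %s\n", label, name);
    }
}

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplatforms = 0;

    clGetPlatformIDs(8, platforms, &nplatforms);
    for (cl_uint i = 0; i < nplatforms; ++i) {
        list_devices(platforms[i], CL_DEVICE_TYPE_CPU, "CPU");
        list_devices(platforms[i], CL_DEVICE_TYPE_GPU, "GPU");
    }
    return 0;
}
```

The point is simply that the host code is identical whichever kind of device ends up running the kernels.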

You start off by talking about a theoretical possibility and then you finish up by talking about it as if it was a doomsday scenario that is definitely going to happen.

I don’t believe this is the case at all unless some wildly unforeseen advances in CPU technology occur. The CPU is a jack-of-all-trades-master-of-none, and it will always have to be that: able to do anything, but consequently not particularly outstanding at any specific job.

GPUs are dedicated to the specific processing tasks required for transforming vertices and shading fragments. Their hardware specialises in this area and can afford to be weak and/or limited in others.

Comparing one with the other is like comparing a nuclear scientist with a general handyman. I know which one I’d prefer to have do odd jobs around the house, but if I wanted my Uranium 235 to be shielded properly I’d pick the scientist any day.

Raw speed isn’t everything either. Parallelism is also important here, as is pipelining, and these are two areas where a dedicated GPU shines. CPU advances could be relevant here but you can bet that it won’t be at the expense of being able to number-crunch Excel worksheets.

Finally, I think that hardware vendors are far more likely to invest in next year’s technology revisions than worry about the potential impact of something that’s completely imaginary.

The theoretical scenario is that a major breakthrough in hardware engineering produces a way to make CPUs 100x faster than they are now. In the same theoretical scenario GPUs remain at their normal speed.

Your theory makes no sense. Essentially, your hypothetical scenario requires a magical process that causes CPUs to improve performance by two orders of magnitude. And it simultaneously requires that this magical process cannot work on GPUs.

That’s not possible.

CPUs and GPUs are silicon. Currently. They’re just arrays of transistors and such. Whatever process can be used to improve one can be used to improve the other. Maybe not as much, depending on the process, but they would certainly not “remain at their normal speed”.

So essentially, I’m saying that your hypothetical is not possible.

In any case, the future is already happening: the merger of CPUs and GPUs is already underway. Eventually, low-to-mid-grade GPUs will be just specialized CPU coprocessors. Possibly with some internal on-die memory.

Will that be enough, or will OpenGL need rewriting?

OpenGL could use rewriting now. But does it need to be rewritten to be an effective CPU-only API? No. There’s nothing in OpenGL that would make it less effective as a CPU API. We even have software OpenGL implementations.
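As a concrete (and purely illustrative) example, here is a minimal C program, assuming GLFW is available for context creation, that prints which implementation is answering the GL calls. The application-side API is the same whether that turns out to be a hardware driver or a software renderer such as Mesa’s llvmpipe:

```c
/* Illustrative sketch: the GL calls below are identical no matter what
 * implementation (hardware driver or software renderer) sits behind them. */
#include <stdio.h>
#include <GLFW/glfw3.h>   /* GLFW assumed available; it pulls in the GL header */

int main(void)
{
    if (!glfwInit())
        return 1;

    GLFWwindow *win = glfwCreateWindow(640, 480, "gl-info", NULL, NULL);
    if (!win) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);

    /* Same API either way; this just reports who is answering the calls. */
    printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```

On a Mesa-based system, running the same binary with the environment variable LIBGL_ALWAYS_SOFTWARE=1 typically switches GL_RENDERER to a software rasterizer without changing a single line of the program.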

He probably thinks that OpenGL is a driver that needs to be rewritten for CPUs. In reality, this is OpenGL -> http://www.opengl.org/documentation/specs/

It is just a description on paper.

There’s also the fact that when 3D APIs were in their infancy the whole world was software/CPU only; there was no consumer hardware acceleration.

Nitpick: this is not relevant when talking about OpenGL, which did not become a consumer API until Quake 1 was ported to use it and MiniGL-to-Glide wrappers were added to take advantage of 3dfx consumer cards.
The infancy of IRIS GL, the “father” of OpenGL, was already about hardware acceleration, on Silicon Graphics workstations. At the beginning not everything was accelerated, of course; sometimes even texturing had to be done on the CPU.

I zee dis eez happening :slight_smile:

Remember the idea of physics accelerator cards? The same will apply to GPUs, but it’s taking longer, for reasons that are more about technicalities than about whether it’s silicon or not :wink:

However, in this case, and not only for OpenGL, the API specification would have to be rewritten from scratch, at least eliminating the shader language and its API, because there would be no need for it. Shaders would be programmed through normal CPU instructions.

Asking whether the implementation is SW or HW will make no sense, since in both cases (GPU or CPU) the implementation is software. The term “acceleration” will be dead.

I think Mesa3D (with shaders eliminated) will be a lucky candidate to replace all existing APIs :wink:

When is this happening? Hmmmm… 2099? :smiley:

First, I would like to join the ones talking before me: this is not likely to happen. The “worst” thing that could happen is that every GPU will be integrated into the CPU (see e.g. Sandy Bridge), as happened with the x87 math coprocessors.

However, in this case, and not only for OpenGL, the API specification would have to be rewritten from scratch, at least eliminating the shader language and its API, because there would be no need for it. Shaders would be programmed through normal CPU instructions.

We need a shader language for abstraction, not just because the GPU instruction set is more limited. If you have to write everything in your renderer from scratch, then you don’t need an API at all; just write your software renderer.

GLSL defines built-ins (e.g. texture fetching functions, variables like gl_Position, etc.). Besides that it also defines a language, but that is very similar to C anyway. The key is the built-ins: if you drop GLSL you drop the built-ins, and thus you end up having to write the whole renderer without API support. If you did not mean this, but just meant that you can write the shader code as a function in your engine source code, with e.g. a C library for the built-ins, then it means you still need GLSL, even though everything is executed on the CPU.
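For illustration only, here is roughly what those built-ins look like in practice: a minimal GLSL 3.30 shader pair, embedded as C string literals the way an application would typically hand them to glShaderSource, using the gl_Position output variable and the texture() fetch function:

```c
/* Illustrative example only: a minimal GLSL 3.30 shader pair showing the
 * built-ins referred to above -- gl_Position and texture(). These strings
 * would normally be passed to glShaderSource/glCompileShader. */
static const char *vs_src =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "layout(location = 1) in vec2 uv;\n"
    "out vec2 v_uv;\n"
    "void main() {\n"
    "    v_uv = uv;\n"
    "    gl_Position = vec4(position, 1.0);\n"   /* built-in output variable */
    "}\n";

static const char *fs_src =
    "#version 330 core\n"
    "in vec2 v_uv;\n"
    "uniform sampler2D tex;\n"
    "out vec4 color;\n"
    "void main() {\n"
    "    color = texture(tex, v_uv);\n"          /* built-in fetch function */
    "}\n";
```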

No need for GLSL or anything. It would be a developer-written rendering pipeline, including the rasterizer.
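To sketch what a fully developer-written pipeline means in practice, here is a rough, hypothetical example (not from any real engine or library): a hand-rolled rasterizer that fills one triangle into an in-memory framebuffer using edge functions and dumps the result as a PPM file, entirely in plain C on the CPU:

```c
/* Hypothetical sketch of the smallest possible developer-written pipeline:
 * transform is skipped, and a single triangle is rasterized with edge
 * functions into an in-memory framebuffer, then written out as a PPM. */
#include <stdio.h>

#define W 256
#define H 256

static unsigned char fb[H][W][3];   /* RGB framebuffer */

/* Signed area of triangle (a, b, p); the sign tells which side p is on. */
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

int main(void)
{
    /* Vertices already in pixel space (a "vertex shader" stage would
     * normally produce these from clip-space coordinates). */
    float x0 = 30, y0 = 30, x1 = 220, y1 = 60, x2 = 120, y2 = 220;
    float area = edge(x0, y0, x1, y1, x2, y2);

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float w0 = edge(x1, y1, x2, y2, x + 0.5f, y + 0.5f);
            float w1 = edge(x2, y2, x0, y0, x + 0.5f, y + 0.5f);
            float w2 = edge(x0, y0, x1, y1, x + 0.5f, y + 0.5f);
            if (w0 >= 0 && w1 >= 0 && w2 >= 0) {
                /* Inside the triangle: the "fragment shader" stage, here
                 * just barycentric colour interpolation. */
                fb[y][x][0] = (unsigned char)(255 * w0 / area);
                fb[y][x][1] = (unsigned char)(255 * w1 / area);
                fb[y][x][2] = (unsigned char)(255 * w2 / area);
            }
        }
    }

    FILE *f = fopen("triangle.ppm", "wb");
    if (!f)
        return 1;
    fprintf(f, "P6\n%d %d\n255\n", W, H);
    fwrite(fb, 1, sizeof fb, f);
    fclose(f);
    return 0;
}
```

Even this toy version hints at the amount of work the replies below are talking about: clipping, depth testing, texturing and perspective correction would all still have to be added by hand.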

I think this is how recent consoles work… the PS3, for example.

But again, when this will happen, or whether it will happen at all, depends on commercial factors more than technical ones. Some will be out of business :smiley:

I predict such a processing unit will be developed by someone new; no, not Intel or AMD, sorry…

No need for GLSL or anything. It would be a developer-written rendering pipeline, including the rasterizer.

Yes, you could. But why would you want to, when you already have a perfectly good renderer/shading language in OpenGL?

Larrabee was made under the assumption that everyone would want to write a full rendering stack specifically for it. It failed because it turns out that people don’t want to. And without writing a full rendering stack, it couldn’t perform nearly as well as real hardware-based solutions like OpenGL/Direct3D-based hardware.

Writing and maintaining a full rendering stack is hard work. It’s hard work to write and maintain a renderer based on abstractions. Adding the actual details of rendering to that doesn’t help. It’s work that doesn’t need to be done, that nobody wants to do, and that wouldn’t help anyway.

I think this is how recent consoles work… the PS3, for example.

Um, no, they don’t. They still use shading languages, and they still use specialized GPU hardware.

No need for GLSL or anything. It would be a developer-written rendering pipeline, including the rasterizer.

Anyway, if this happened, then there would be no need for OpenGL. It would be just a software renderer.

Considering this, I don’t understand why you were talking about rewriting the GL API specification.

For the PlayStation 3, it’s a 550 MHz NVIDIA/SCEI RSX ‘Reality Synthesizer’, and it has its own 256 MB of GDDR3.

http://en.wikipedia.org/wiki/Playstation_3

Larrabee died because it wasn’t fast enough. When Intel comes up with a Star Trek processor that does 20 trillion ops per nanosecond, then it will become a reality. But, by then nVidia will have holodecks. lol

Oh my! What if automobiles become massively more fuel-efficient than cars? Yikes! Cars will be obsolete. Economies will fall. World Wars will begin. The planet will explode!

A GPU is a [multicore] CPU. And, in fact, the SIMD/SSE2+ aspect of current CPUs is almost (but not exactly) what GPU cores are. Any invention that applies to one, applies to the other, since they are the same (just like “automobile” and “car”).
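As a small illustration of that SIMD point, the following C snippet (x86 with SSE assumed) performs four float additions with a single _mm_add_ps instruction, the same one-instruction-many-lanes idea a GPU core applies on a much wider scale:

```c
/* Rough illustration of the point above: SSE on the CPU already does a
 * small-scale version of what GPU cores do -- one instruction, several lanes. */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float r[4];

    __m128 va = _mm_loadu_ps(a);            /* load 4 floats into one register */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(r, _mm_add_ps(va, vb));   /* 4 additions in one instruction */

    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}
```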

Of course, we’ll always need certain graphics-specific aspects of current GPUs, otherwise the work needed to be performed by CPUs would skyrocket. Annoying stuff like clipping and such.