Originally posted by nlopes:
[b]The CPUs that everybody has can do operations with 32 bits of precision.
But if you implement a large-number library, like GMP, you can work with huge numbers, which in my case can have up to 700 digits.
So, is it possible to implement such a library on a graphics card?[/b]
32 bits for integers, you mean? There are plenty of ancient CPUs out there that can do 64-bit in hardware, but even that is nowhere near enough for 700 digits.
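Just to be clear about what such a library actually has to do: the number is stored as an array of fixed-size limbs, and the arithmetic walks the limbs while propagating carries. Here is a minimal CPU-side sketch (my own illustration, not GMP's actual code; 700 decimal digits is roughly 2326 bits, so about 73 limbs of 32 bits):

[code]
#include <stdint.h>

/* Illustration only: a big number stored as LIMBS 32-bit words,
 * least significant limb first. */
#define LIMBS 73

void bignum_add(uint32_t out[LIMBS],
                const uint32_t a[LIMBS],
                const uint32_t b[LIMBS])
{
    uint64_t carry = 0;
    for (int i = 0; i < LIMBS; i++) {
        uint64_t sum = (uint64_t)a[i] + b[i] + carry;
        out[i] = (uint32_t)sum;   /* low 32 bits of the partial sum */
        carry  = sum >> 32;       /* anything above feeds the next limb */
    }
    /* a non-zero carry left over here means the result overflowed LIMBS */
}
[/code]

The awkward part on a GPU is exactly that carry: each limb's result depends on the previous one, which doesn't map nicely onto per-vertex parallelism.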
The problem with GPUs is the input, the intermediate stages, and the output.
What I mean is that you might put an integer in, it gets converted to something else along the way (most likely a 32-bit IEEE float), and what you finally get out is a color, which can now be a 32-bit float too (great times ahead!).
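To put a number on that: a 32-bit IEEE float has a 24-bit significand, so it only holds integers exactly up to 2^24 = 16,777,216. A quick check (my own example) of what that means for any value you push through the pipeline as a float:

[code]
#include <stdio.h>

int main(void)
{
    float a = 16777216.0f;   /* 2^24: the last integer a float holds exactly */
    float b = a + 1.0f;      /* 2^24 + 1 rounds straight back down to 2^24   */
    printf("a = %.1f, b = %.1f, equal = %d\n", a, b, a == b);  /* equal = 1 */
    return 0;
}
[/code]

So if everything really travels as 32-bit floats, each chunk of your number has to stay well below 2^24, which makes the bookkeeping even heavier than on the CPU.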
So anyway, to answer the question: yes, I think the functions are there. You'll probably need some compare and jump instructions (NV_vertex_program2).
To give an idea of what to do, think of your float frame buffer as an array of integers, not pixels.
Let's say we want to add 10 billion and 10 thousand billion, with the vector components holding the values in units of one billion.
Send a GL_POINTS down the pipe, with a couple of vertex attributes.
attrib 10 for the point might be (0.0, 0.0, 10.0, 0.0)
attrib 11 for the point might be (0.0, 0.0, 10000.0, 0.0)
Then you write a small vertex program that adds the two attributes.
The result will be (0.0, 0.0, 10010.0, 0.0)
which is written to the frame buffer somewhere.
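Something like this, sketched from memory against NV_vertex_program (treat the syntax and entry points as approximate and check them against the spec; context setup, the float frame buffer and the readback are all left out):

[code]
#include <GL/gl.h>
#include <GL/glext.h>   /* assumes the NV_vertex_program entry points are resolved */

/* Vertex program: add the two operand attributes and emit the sum as the
 * point's color.  Syntax from memory -- verify against the spec. */
static const GLubyte add_vp[] =
    "!!VP2.0\n"
    "ADD R0, v[10], v[11];\n"   /* first operand + second operand        */
    "MOV o[COL0], R0;\n"        /* sum goes out as the primary color     */
    "MOV o[HPOS], v[OPOS];\n"   /* position passed through untransformed */
    "END\n";

void add_once(void)
{
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, 1, sizeof(add_vp) - 1, add_vp);
    glBindProgramNV(GL_VERTEX_PROGRAM_NV, 1);
    glEnable(GL_VERTEX_PROGRAM_NV);

    glBegin(GL_POINTS);
    glVertexAttrib4fNV(10, 0.0f, 0.0f, 10.0f,    0.0f);  /* 10 "billions"    */
    glVertexAttrib4fNV(11, 0.0f, 0.0f, 10000.0f, 0.0f);  /* 10000 "billions" */
    glVertex2f(0.0f, 0.0f);   /* pick where in the buffer the result lands */
    glEnd();

    /* Read back that pixel's RGBA -- it should be (0, 0, 10010, 0) -- and
     * interpret it as your number. */
}
[/code]

This all assumes a float color buffer with clamping out of the way; with an ordinary fixed-point buffer the 10010 would get clamped to 1.0 on the way out.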
Now you do whatever you need to do with that RGBA value.
Anyone see a problem with the method?