
View Full Version : distributed computing project



nlopes
04-26-2003, 08:42 AM
Hi everybody,


I'm currently working on a distributed computing project. What I want to do
now is to use (if possible) the GPU of the graphics card to do large
scientific calculations. I couldn't find much about this on the web, but I think
it is possible with OpenGL.
Can you help me?? I'm programming in C++.

(Does anybody know of a program that shows the percentage of GPU usage??)

Thanking in advance,
Nuno Lopes

nlopes
04-26-2003, 08:43 AM
Look what I've found: http://www.cs.caltech.edu/courses/cs101.3/cs101_files/notes/week2/RuSt01a.pdf

Overmind
04-26-2003, 02:24 PM
GPU stands for Graphics Processing Unit, not Scientific Processing Unit. :D

Seriously, you shouldn't try to use it for anything other than graphics. GPUs are designed to accelerate graphics calculations; they are not usable as general-purpose processors the way CPUs are.

There may be some cases where a scientific calculation is similar enough to a 3D graphics problem that you could use the power of the GPU, but these are usually very tricky. If you really plan to abuse the GPU for other calculations, you should first learn how to use it for advanced graphics, and then move on to more complex calculations that are based on some graphics algorithm.

In my opinion, you may find that it is much more hassle than it is worth.

Regarding the question about reading the usage percentage, this depends on the OS you are using and really has nothing to do with OpenGL.

graphicsMan
04-26-2003, 03:01 PM
While it can be harder to write a more general-purpose program on the GPU, it can have significant benefits. If utilized properly, your average GPU today has about the processing power of 40 CPUs. For many examples of cool stuff being done with graphics hardware, see Mark Harris' webpage:
http://wwwx.cs.unc.edu/~harrism/gpgpu/index.shtml

Brian

JONSKI
04-26-2003, 05:16 PM
Overmind, don't spread your bullsh*t on people who are more pioneering than you. That's the same closed-minded crap that the brilliant people who put piston engines in RX-7s have to deal with. The potential for a GPU to compute non-conditional, parallel computations is enormous! It has a limited data path and a single memory bus, but it also doesn't need to relay data over a network, doesn't require a management system, and is probably cheaper.

31337
04-26-2003, 05:54 PM
Simply amazing topic! However, I am a little skeptical. I think that 40 times the power is a bit exaggerated and has to be weighed, though.

Obli
04-26-2003, 11:49 PM
I fear the biggest problem here is precision.
Even if you use the highest precision possible, chances are you won't get the same results everywhere.
For example, if a number should be 1.0000015, you may get 1.0000018 on an NV card and 1.0000011 on an ATi. Even cards from the same vendor may give different results... I think this is unacceptable for scientific calculations, however...

As for what this is being used for... well, has nobody here heard about GP-GPUs (General Purpose GPUs)? Maybe one day this will be done as a standard. In fact, some limited physics processing is available even now (fog volumes, cloth animation)...

john
04-27-2003, 12:44 AM
Hello,

for what it's worth: I'm using OpenGL hardware to make computers see. :-)

Well, I've been investigating an idea we've had to use graphics hardware to accelerate creating models from photographs. It's still early days, but it's showing promise.

cheers
John

nlopes
04-27-2003, 01:07 AM
Can anybody give me some examples of how to do this, please?

Some source code showing how to implement an algorithm in OpenGL.

Thanks for your help!

john
04-27-2003, 04:57 AM
the question you are asking is too open-ended

code examples to do _what_? it's like asking how long a piece of string is.

side rant:

anyways, code examples are ... well, it bugs me when people ask for code examples. code is just a vehicle for expression, just like mathematics or english or any other language (human, symbol-based or computer-based) you can think of. asking for code examples suggests "i don't want to understand this, i just want to put it in my program".

nlopes
04-27-2003, 05:58 AM
Originally posted by john:
anyways, code examples are ... well, it bugs me when people ask for code examples. code is just a vehicle for expression, just like mathematics or english or any other language (human, symbol-based or computer-based) you can think of. asking for code examples suggests "i don't want to understand this, i just want to put it in my program".


I don't want code examples just to copy-paste into my program. I'm asking for examples because I have never programmed OpenGL before and I don't know how and where to start!
I'm only asking if anybody can explain to me how to implement an algorithm to add, multiply, divide numbers, etc... If possible, I want to work with big numbers.

jwatte
04-27-2003, 08:32 AM
The precision on a typical graphics card today is 8 bits per scalar, in fixed-point format. The latest batch of cards can store 32-bit floating-point quantities per scalar. Graphics cards typically process scalars in groups of 4 (as in "R,G,B,A").

"Large" numbers won't work at all on typical graphics cards, and will only work to the extent that 32-bit floating point is useful on the modern cards. (There are also some cards that do 16 bits per scalar as an intermediate trade-off between performance and precision, and some cards that do 24 bits instead of 32 bits of precision.)

Supposing you have set up a high-precision render target (typically a pbuffer with a floating-point format), the easiest way to perform operations is to texture out of the target, using that as one source, and use whatever data you feed in as a second source.

To add two 2-dimensional vector fields, bind them to texture units 0 and 1, and draw a quad covering the target from 0-1 in both directions with the following fragment program:




!!ARBfp1.0

TEMP t1, t2;

TEX t1, fragment.texcoord[0], texture[0], 2D;
TEX t2, fragment.texcoord[0], texture[1], 2D;
ADD result.color, t1, t2;
END
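
To make that concrete, here is a rough, untested host-side C++ sketch of driving a program like this. texA, texB and progId are placeholder names for two floating-point textures and a fragment program you would have created already (the program loaded with glProgramStringARB), and the ARB_multitexture / ARB_fragment_program entry points are assumed to have been resolved:

void addVectorFields(GLuint texA, GLuint texB, GLuint progId)
{
    // Bind the two input fields to texture units 0 and 1.
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glBindTexture(GL_TEXTURE_2D, texB);
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, texA);

    // Bind and enable the fragment program that performs the per-texel ADD.
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, progId);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);

    // With identity modelview/projection matrices this quad covers the whole
    // render target; texture coordinate 0 runs from 0 to 1 in both directions.
    glBegin(GL_QUADS);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    glDisable(GL_FRAGMENT_PROGRAM_ARB);
}

Since the fragment program samples both textures with fragment.texcoord[0], only the first set of texture coordinates needs to be supplied.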


For more examples, you might want to look in the nVIDIA OpenGL SDK (which is a big download) or the ATI OpenGL SDK (which is smaller); look for example at the "High Dynamic Range" examples which show how to use floating-point render targets.

For the basics of OpenGL, you can get started by following the tutorials at http://nehe.gamedev.net/ -- do and understand at least the first 7 or 8 to get your feet wet before going to those other references.

Last, I recommend that everyone download and refer to the OpenGL specification; version 1.4 is available as a PDF on the front page of this site. When I want to look something up, this is my preferred reference!

nlopes
04-27-2003, 10:19 AM
The CPUs that everybody has can do operations with 32 bits of precision.
But if you use a large-number library like GMP, you can work with huge numbers, which in my case can have up to 700 digits.

So, is it possible to implement such a library on a graphics card?

V-man
04-27-2003, 11:01 AM
Originally posted by nlopes:
The CPUs that everybody has can do operations with 32 bits of precision.
But if you use a large-number library like GMP, you can work with huge numbers, which in my case can have up to 700 digits.

So, is it possible to implement such a library on a graphics card?

32 bits for integers, you mean? There are plenty of ancient CPUs out there that can do 64-bit in hardware, but that's not enough if you want 700 digits.

The problem with GPUs is input, intermediate stages and output.
What I mean by this is that you might put an integer in, then it is converted to something else (most likely a 32-bit IEEE float), and finally you get a color output, which can be a 32-bit float too now (great times ahead!).

So anyway, to answer the question: yes, I think the functions are there. I think you'll need some compare and jump instructions (NV_vertex_program2).

To give an idea of what to do, assume your float frame buffer is an array of integer numbers, not pixels.
Let's say we want to add the values 10 billion and 10 thousand billion.
Send a GL_POINTS primitive down the pipe, with a couple of vertex attributes.

attrib 10 for the point might be (0.0, 0.0, 10.0, 0.0)
attrib 11 for the point might be (0.0, 0.0, 10000.0, 0.0)

You need to write a special vertex program to add the values.

The result will be (0.0, 0.0, 10010.0, 0.0),
which is written to the frame buffer somewhere.

Now you do what you need to do with that RGBA value.
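
In host code, a rough, untested sketch of sending such a point might look like the following (x and y are placeholders for wherever in the framebuffer you want the result to land, and a vertex program that adds generic attributes 10 and 11 is assumed to be bound and enabled already):

// Load the two operands into generic vertex attributes 10 and 11
// (ARB_vertex_program-style generic attributes).
glVertexAttrib4fARB(10, 0.0f, 0.0f, 10.0f, 0.0f);      // "10 billion"
glVertexAttrib4fARB(11, 0.0f, 0.0f, 10000.0f, 0.0f);   // "10 thousand billion"

// Sending the vertex runs the vertex program; its output color carries the
// sum and ends up as an RGBA value at (x, y) in the float framebuffer.
glBegin(GL_POINTS);
glVertex2f(x, y);
glEnd();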

Anyone see a problem with the method?

jwatte
04-27-2003, 01:35 PM
No, for all practical intents and purposes you cannot implement a large-number library on a GPU.

Using the vertex processing part of the GPU for scientific processing seems useless, as it's mostly a traditional vector processor. Modern Pentiums can probably out-process it, especially if you compare the CPU memory bus (6.2 GB/s) to AGP 4x (1.0 GB/s) or 8x.

Using the fragment processor is more interesting because of its ability to represent a vector field or other 2D data as textures/render targets, and because of the higher degree of parallelism and faster memory (REALLY fast memory -- 20 GB/s on the high end!).

graphicsMan
04-27-2003, 01:56 PM
I have to agree. Doing arbitrary precision on a GPU would be ridiculously difficult. You're pretty much okay using floating point, but you have to remember that while they use an IEEE floating-point format, they don't guarantee anything about the accuracy of the operations. I don't believe that up to this point any of the card manufacturers have published anything about their floating-point accuracy, so I'd be very wary about high-precision stuff.

Won
04-27-2003, 09:25 PM
Performing computation on a modern programmable GPU is a pretty good idea for several kinds of computation, specifically low-precision SIMD computation. Perhaps this generation's hardware isn't quite up to the task, but it will only become more and more appropriate. GPUs need only be made more flexible in a few aspects to be fairly useful in non-graphics applications.

Would a large-number library really be that hard to do? I suppose arbitrary-precision math is realistically impossible, but I don't see why it would necessarily be so hard to obtain better precision than what is natively supported. It probably wouldn't take much more than what is already out there, and a bit of cleverness. :)
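
One standard trick along those lines is "double-single" arithmetic, where a value is carried as an unevaluated sum of two floats, roughly doubling the effective mantissa. A plain CPU-side illustration (untested, with made-up names, and assuming strict single-precision round-to-nearest arithmetic) could look like this; the same steps could in principle be expressed in a fragment program, hardware precision caveats permitting:

struct dsFloat { float hi, lo; };

// Knuth's two-sum: computes s and err such that s + err == a + b exactly,
// assuming strict round-to-nearest single-precision arithmetic.
static void twoSum(float a, float b, float &s, float &err)
{
    s = a + b;
    float v = s - a;
    err = (a - (s - v)) + (b - v);
}

static dsFloat dsAdd(dsFloat a, dsFloat b)
{
    float s, e;
    twoSum(a.hi, b.hi, s, e);   // add high parts, capturing the rounding error
    e += a.lo + b.lo;           // fold in the low parts
    dsFloat r;
    twoSum(s, e, r.hi, r.lo);   // renormalize so |lo| stays small relative to hi
    return r;
}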

-Won

PS JONSKI - The RX-7 is cool because it has a rotary engine, not a piston engine, but I think that's what you meant. The RX-8 is coming out, so the Wankel is making a comeback. Yea.

graphicsMan
04-28-2003, 08:53 AM
I think that arbitrary precision would be very hard to accomplish on graphics hardware. You have to be able to store an arbitrary number of bits for the numerator and an arbitrary number of bits for the denominator... THEN you have to perform the actual number crunching.

I agree that the floating-point computation will probably soon be as accurate as a CPU's computation... but I highly doubt that GPUs will go to even double precision any time soon (as in, within the next 10 years).

That being said, there is a large space of problems that are interesting from a GPGPU standpoint... solving ODEs, PDEs, linear systems, and much more are all currently plausible problems for a floating-point GPU.

Brian

V-man
04-28-2003, 09:12 AM
Originally posted by jwatte:
Using the vertex processing part of the GPU for scientific processing seems useless, as it's mostly a traditional vector processor. Modern Pentiums can probably out-process it, especially if you compare the CPU memory bus (6.2 GB/s) to AGP 4x (1.0 GB/s) or 8x.


What's a "traditional vector processor".
Modern Pentiums can do 4 numbers in parallel using SSE, SSE2 while the GPU probably can do more. It is perhaps 8 or 16 on Geforce class. Who knows.

The fragment end of the GPU might be a better place to do calculations. I imagine there is some kind of balance between the units in the pipeline to maximize performance. As long your triangles have a certain area not larger then number X, and you are not doing anything complex in fp, there wont be any stalls and the GPU will run at peak performance.

and then you take those numbers and send them to marketting. http://www.opengl.org/discussion_boards/ubb/smile.gif

nlopes
04-28-2003, 10:03 AM
If I understood correctly, you are advising me to give up on my idea and wait for the next generation of GPUs (maybe when graphics cards return to PCI!).

In the meantime, I sent an e-mail to ATI and nVIDIA, but I haven't received any answer so far.

nlopes
04-28-2003, 10:06 AM
I don't want to give up, but I need someone's help because I don't know anything about OpenGL. I have never programmed it before!!

So I'm really confused!!

If somebody with advanced knowledge can help me, please post here and/or send me an e-mail.

Thanks, everybody! :)

nlopes
04-28-2003, 10:45 AM
It's me again!

Just some threads I've found in other distributed computing projects about this:

Seti@HOME: http://setiathome.ssl.berkeley.edu/bb/bb4/bboard.cgi?action=viewthread&num=2226

Folding@HOME: http://forum.folding-community.org/viewtopic.php?t=2135

Overmind
04-29-2003, 10:44 AM
Jonski, if you have a different opinion than mine, you could also state it in a nicer way.



I don't want to give up, but I need someone's help because I don't know anything about OpenGL. I have never programmed it before!!


What I wanted to say with my post was not that it is generally a bad idea to do other calculations on a GPU, but that it is a bad idea to try to do it without any understanding of 3D graphics.

I don't think you can learn how to use a GPU without learning OpenGL and 3D graphics programming. Learning to use a GPU for calculations isn't as simple as learning an assembly language.

nlopes:

My advice is: look for some tutorials on OpenGL (search for NeHe) and look into advanced 3D graphics (the nVidia site has a lot of examples). Try to understand some simple GPU algorithms like bump mapping, ...

Then, with a basic understanding of how a GPU works, it will be a lot easier to write other applications that compute something on a GPU.

[This message has been edited by Overmind (edited 04-29-2003).]

nlopes
04-29-2003, 11:49 AM
Originally posted by Overmind:

Learning to use a GPU for calculations isn't as simple as learning an assembly language.


Are you joking?? Assembler is really difficult, and you are saying assembler is easier than this?? Oh my God!.....

Overmind
04-29-2003, 12:20 PM
No, you got me wrong.

It is not exactly more difficult, but you have to learn more than a language. You have to learn an assembly language (that of the GPU), and you have to learn the way a GPU works: how to deliver data to it, how to process the output, and so on.

A GPU is designed for graphics. That means you can't just write a program and let it process the data; you have to send the data as 3D coordinates and receive pixel data as output. That doesn't mean the coordinates have to make sense in 3D space, nor does the image have to look like anything, but that's just the way a GPU works. It basically takes a set of coordinates and produces an image. The difficulty is not how to write a program for a GPU, but how to formulate an algorithm so that it is suitable for the way a GPU works.

A CPU executes a large stream of instructions on some randomly accessible data; a GPU executes a very limited stream of instructions on a huge, sequentially accessible stream of data. That's just completely different, and before you start designing your own algorithms for a GPU, you should learn how it works. And the easiest way to do this is by learning 3D graphics, because there are many existing GPU algorithms you can study.

Btw: it is not absolutely necessary to learn the assembly language of GPUs. Have a look at Cg on the nVidia homepage. It is a C-like compiler for GPUs. But as I said, the language is only a part of the problem.

roffe
04-29-2003, 12:41 PM
Originally posted by nlopes:
Are you joking?? Assembler is really difficult, and you are saying assembler is easier than this?? Oh my God!.....

I can see how you might think so. Inexperienced programmers usually find an assembler-like language hard to learn just because it uses a "difficult" syntax. But in fact, any assembly language is fairly easy to learn once you understand the basic architecture of the CPU.

To implement an algorithm, usually one targeted for the CPU, you need to understand all aspects surrounding it. Some of the key concepts would be:

- In-depth knowledge of the algorithm in question. In mathematics we usually rewrite an original algorithm in many different ways to be able to "apply" different known techniques to solve it.
- For a GPU implementation: extensive knowledge of the 3D graphics hardware/pipeline, so we know which part of an algorithm goes where. There is never just one solution. Many parts of the GPU can be used to derive/aid in obtaining the final answer.

If you know these two areas well, you are ready to go!

The tools available to you come in many flavors: DirectX, OpenGL, C/C++, assembler, etc. You choose the tool most appropriate for the task. Many people on this board will help you for free, if you are ready to accept it.

HS
04-29-2003, 01:07 PM
I am only concerned that some "driver tweaks", for instance gamma settings, MAY affect the result of certain calculations.

Since this is something that may change from system to system, the results are not guaranteed to be the same.

As far as assembly is concerned: it's the only "what you see is what you get" language and hence the only "reliable" language, since a compiler is just a stupid piece of software.

roffe
04-29-2003, 02:24 PM
Originally posted by HS:
I am only concerned that some "driver tweaks", for instance gamma settings, MAY affect the result of certain calculations.

Since this is something that may change from system to system, the results are not guaranteed to be the same.



True, but by doing all the work in a linear space (CIE XYZ) and only at the end mapping to a monitor-specific RGB space followed by the non-linear gamma transformation, you could minimize this behaviour.

HS
04-29-2003, 02:34 PM
Right, but can you guarantee that every piece of hardware will provide exactly the same results, independent of the system or settings?

Unless that is proven, it's nice but useless for scientific calculations (floats are VERY inaccurate anyway).

roffe
04-29-2003, 02:58 PM
Originally posted by HS:
Right, but can you guarantee that every piece of hardware will provide exactly the same results, independent of the system or settings?

With today's many different types of shaders and precisions, I would say no. Unless you limit yourself to only supporting a specific architecture, let's say NV30's fragment programs (FP32).



it's nice but useless for scientific calculations (floats are VERY inaccurate anyway).
I wouldn't go so far as to call it useless, but it is of course a limitation. But if you are aware of the limitation and can accept it (and measure it), I don't see any problems. But what do I know.

HS
04-29-2003, 03:05 PM
Originally posted by roffe:
I wouldn't go so far as to call it useless, but it is of course a limitation. But if you are aware of the limitation and can accept it (and measure it), I don't see any problems. But what do I know.

Hmmmmmm, good point, I guess I just projected my own needs. However, there are very few areas I know of (besides graphics) where I would consider floats to be accurate enough.

freto
04-30-2003, 04:45 AM
The answer to your questions might be here: http://www.cs.caltech.edu/courses/cs101.3/

There are also some GPU oriented papers at SIGGRAPH this year: http://www.cs.brown.edu/~tor/sig2003.html

nlopes
04-30-2003, 10:29 AM
Instead of using OpenGL, can I send instructions directly to the GPU?
How can I do this?? In gcc, how can I send commands to the GPU? And where can I learn about this??

Thanks all!

graphicsMan
04-30-2003, 12:53 PM
Originally posted by HS:
As far as assembly is concerned: it's the only "what you see is what you get" language and hence the only "reliable" language, since a compiler is just a stupid piece of software.

So you never use any high-level language? It's MIPS, SPARC, and x86 for you, eh?

If you're that worried about it, you shouldn't trust hardware either. Hardware does have the occasional flaw...

graphicsMan
04-30-2003, 12:57 PM
Originally posted by nlopes:
Instead of using OpenGL, can I send instructions directly to the GPU?
How can I do this?? In gcc, how can I send commands to the GPU? And where can I learn about this??


You're basically talking about working at the driver level here. Not only is the coding far more complex, but getting your hands on driver code would be difficult.

Additionally (alas), you still need to understand GRAPHICS to use this approach. If you are set on using graphics hardware to do computation, you're going to have to bite the bullet and learn an API (and, *gasp*, maybe learn some graphics!).

dorbie
05-01-2003, 02:09 PM
nlopes, at one level you can just about do this, but unfortunately, to feed input to the card and get it back, as well as set up registers in the conventional sense, you must learn graphics. GPUs still operate with a fairly restricted pipelined streaming structure:

state setup, texture & vertex streaming -> programmable vertex -> programmable fragment -> raster operations

Currently there is a clear distinction between each of these stages, and two of them are entirely fixed-function-style OpenGL. The programmable stages use significantly different instruction sets, mainly related to data types and matrix support. In addition, the business of getting your data in and out of the GPU and retaining persistent data verges on the mind-bending.

Computational algorithms must often implement multiple programs and do things like store results to texture, read back from the framebuffer, send data in textures or vertex arrays with complex packed formats, do dependent texture reads, and worry a LOT about precision and internal casting. These products are just not general-purpose enough for someone to come along and simply program them in the way you hope. Knowledge of an instruction set and the ability to program are useless without a thorough understanding of the architectural issues surrounding OpenGL and the place of the programmable components within the fixed-function pipeline. In addition, product-specific knowledge beyond the OpenGL spec is desirable as it relates to restrictions on program lengths, supported texture counts, and the internal precision and performance characteristics of target platforms.
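
To make one of those steps concrete, here is a rough, untested sketch of the "store results to texture" part of a multipass computation: run one pass, then copy the framebuffer result into a texture so the next pass can read it as input. drawComputationPass, resultTex, width and height are all placeholders.

void runOnePass(GLuint resultTex, int width, int height)
{
    drawComputationPass();   // placeholder: bind programs and draw the quad for this pass

    // Persist this pass's output: copy the current framebuffer into resultTex.
    glBindTexture(GL_TEXTURE_2D, resultTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

    // The next pass can now bind resultTex as a source texture and repeat.
}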

It's getting better all the time but things will probably never be so general purpose that you can just program them the way you seem to want to.

[This message has been edited by dorbie (edited 05-01-2003).]

HS
05-01-2003, 04:13 PM
Originally posted by graphicsMan:
So you never use any high-level language? It's MIPS, SPARC, and x86 for you, eh?
If you're that worried about it, you shouldn't trust hardware either. Hardware does have the occasional flaw...


And that from someone who doesn't even know how a linker works or what it does (see his "loading extensions, etc..." thread).

Of course, I use "high level" languages for trivial tasks (like windowing, files, sockets, etc...).

But I am very concerned when it comes to processing "scientific data", like the data from a pole expedition. You can bet I hand-code the algorithms, because I can't trust a compiler that may take paths during compilation that have never been tested/debugged before...

Ever read the "dragon book"?

As for hardware, results are comparable between architectures; if two don't match, take a third to find out which one is at fault (x86, UltraSPARC, PowerPC...).

:rolleyes:

[This message has been edited by HS (edited 05-01-2003).]

graphicsMan
05-01-2003, 07:20 PM
HS --

I didn't intend to offend. I was merely trying to indicate that trust needs to be placed in various parts of the software development pipeline. Also, I agree that some data is more sensitive, and if you require 100% assurance (minus alpha particles intercepting your memory) that your computation is correct, every step should be taken - including being very wary of your compiler. And yes, I've read SOME of the Dragon Book (it wasn't so riveting that I read the whole thing).

Again, sorry that I offended.

Brian

[This message has been edited by graphicsMan (edited 05-01-2003).]

Humus
05-01-2003, 07:35 PM
Originally posted by HS:
But I am very concerned when it comes to processing "scientific data", like the data from a pole expedition. You can bet I hand-code the algorithms, because I can't trust a compiler that may take paths during compilation that have never been tested/debugged before...

Seriously? A compiler is orders of magnitude more reliable than a human coding assembler.

kansler
05-02-2003, 12:26 AM
I totally agree with Humus. Writing software in C/C++/Delphi is way safer than coding it in assembler. And I'm not even talking about debugging here...

V-man
05-02-2003, 05:24 AM
Hmmm, and I thought people coded in assembler for performance, not because they don't trust the high-level compiler??

You can find information on the web about the bugs a particular compiler has and just code around them.

HS, I imagine you have some strict guidelines to follow, as in "no choice but to do this".

Zengar
05-02-2003, 05:47 AM
I have never had any problems debugging assembly. Why should it be more difficult than debugging high-level languages?

Zengar
05-02-2003, 05:50 AM
Oh, I forgot. About gamma correction: I guess you can skip it if you use render-to-texture.

dorbie
05-02-2003, 07:00 AM
Gotta agree with Humus, and then there's the whole productivity thing.

Won
05-02-2003, 07:31 AM
What is a pole expedition, and why would it stress unusual compiler paths? In my experience, scientific code is one of the simpler kinds of code to deal with. Compilers can be very reliable nowadays and do a good job of generating efficient and correct code.

I did read the whole of the Dragon Book. (Aho, Sethi, etc?) It's old. It only covers the basics, and modern compilers are much more sophisticated. What in the Dragon Book taught you to mistrust the compiler?

-Won

nlopes
05-02-2003, 08:24 AM
Sorry, but the discussion here is about doing scientific calculations on a GPU, not about whether a high-level compiler is reliable or whether it is easier to debug an assembler program.

Thanks everybody that helped me,
Nuno Lopes

HS
05-02-2003, 01:35 PM
Sorry for the late response; nlopes, I will get to your question later..

If you don't have any objections, I'll answer all the posts in one shot...

"Pole expedition" as in "North Pole expedition"; the data is invaluable since it can't be "regenerated", not even by a new expedition (the ice has moved since then).

The "routes" a compiler takes to compile a file are infinite and therefore unpredictable:




for (int i = 0; i < something; i++)
{
    blah(i);
}


by pure logic is the same as:




int i = 10;
for (i = 0; i < something; )
{
    blah(i++);
}


The results "should" be same, right?
They arent, the compiler will take different "routes" to build the object code.

Thats why it takes years for a compiler to "mature".

Humus from what I read you have a very good understanding about 3d graphics, but your statement "that a compiler is better than the human brain" is hmmmmm mistaken?

Dont get me wrong, a compiler may generate "faster" code, but it may as well "optimize" the hell out of code to always generate the same result: "42".

I guess I am fighting against wind mills, but one day you may find that it wasnt non-sence I talked about. Compilers are "stupid" believe it or not.

As for using the GPU for "scientific" calculations...

I personly dont know ONE problem, I would be satisfied with (a) 32bit float result(s)...


[This message has been edited by HS (edited 05-02-2003).]

Humus
05-02-2003, 02:32 PM
Yes, compilers are stupid and may have bugs. But what makes you think that your handwritten code will be more correct and less buggy? You're not assuming you're not going to make mistakes, are you? Making mistakes is the very nature of human beings, much more so than machines.

HS
05-02-2003, 03:12 PM
Originally posted by Humus:
Yes, compilers are stupid and may have bugs. But what makes you think that your handwritten code will be more correct and less buggy? You're not assuming you're not going to make mistakes, are you? Making mistakes is the very nature of human beings, much more so than machines.

Over 15 years of coding assembly and understanding what I code, perhaps?

If you think both of those are true for any "compiler", you don't know what you are talking about.

Of course I make mistakes, but what makes you think a (human-written) compiler doesn't (especially if it's written by someone who doesn't know ANYTHING about the problem)?

That's pretty short-sighted if you ask me.

This is of course WAY off topic..

[This message has been edited by HS (edited 05-02-2003).]

Humus
05-03-2003, 09:13 AM
I have 8 years of coding experience myself, and yes, in general I know what I'm doing while coding, but I still make mistakes pretty much every time I code something. It's human nature, and the very reason we have debugging tools in our development environments. I sure understand my code, and yes, I can write and understand assembler just fine too, but I can guarantee that my C++ code is way more reliable than my assembler code.

Compilers have a long track record of proven reliability. The whole software industry lives off code generated by compilers. Any bugs are soon found and fixed. So, yes, the compiler is way more reliable than the code that you feed it with.

Even when Partition Magic reorders my partitions, where a fault would cause all my HD data to be lost, I don't get paranoid about the possibility that the compiler-generated code might be faulty. The chance of the original source code being faulty is way higher, so if I'm going to worry, I'm going to worry about that.

The problem is, regardless of how much experience you have, you're always going to make mistakes. Bugs will always be present in all software, even in tiny apps. The chance of something breaking because of a programmer error is way higher than things breaking because of a compiler error. By avoiding the compiler you may escape the infinitesimal probability that the compiler generates faulty code, but you're increasing the chances of human error several times over. And if you're going to be paranoid about the compiler, what guarantees are there that your assembler code will actually translate to the right binary code? CPUs have errata; do you design your own processor to work around that? (Something compilers can do automatically, btw...) What about the OS: will it read the correct HD block when you launch your app, and will your app actually get the HD block it requests when you load your data from the disk? Why trust the OS when you don't trust the compiler? The OS is more likely to fail than the compiler, IMO.

nlopes
05-03-2003, 09:21 AM
Hi again,

Can we get back to the main topic, please??

Thanks,
Nuno Lopes

graphicsMan
05-03-2003, 09:36 AM
Originally posted by nlopes:
Hi again,

Can we get back to the main topic, please??

Thanks,
Nuno Lopes

Nuno -

Sorry, but I really think that you're out of luck trying to do arbitrary-precision stuff on the card... Who knows, maybe in a couple of card generations it will be more feasible, but right now I think that trying to do anything with better than 32-bit floating-point precision is going to be super difficult - and probably faster on a CPU.

If you can get away with using single-precision floats, then go ahead and learn OpenGL or D3D, and put that ol' GPU to work.

Brian

nlopes
05-03-2003, 02:06 PM
I downloaded nVidia's OpenGL SDK and I saw that there is a folder named nv_math. Does this folder have anything to do with this topic??

graphicsMan
05-03-2003, 02:17 PM
Did you look at it?

nlopes
05-04-2003, 01:21 AM
I don't know anything about OpenGL or graphics programming. I must start learning OpenGL ASAP.

Ysaneya
05-05-2003, 01:57 AM
You can bet I hand-code the algorithms, because I can't trust a compiler that may take paths during compilation that have never been tested/debugged before...


Sounds quite ridiculous to me. Compilers have been tested by thousands, if not millions, of people for years, are based on well-known techniques, and are quite bug-proof nowadays. The rare bugs I've encountered were in the IDE, not in the compiler itself. I've never seen incorrect code generated by my compiler, and I've been programming for 13 years.

And what do you mean by "hand-code"? Are you coding directly in machine code? If so, good luck. Because as far as I remember, assembler is also compiled.

Back on topic: nv_math is a small library for maths; it has nothing to do with the graphics card. GPUs are fast because they work asynchronously with the CPU. The only way to take advantage of the GPU is to use it asynchronously. If you submit some data for a graphics card to process (independently of what card it is / whether it can do it, which is an entirely different problem), and if you need to get the result of this computation in order for the CPU to continue its work, then you've lost any advantage of this architecture, because the CPU will be idle, waiting for the GPU.

Rendering is a different matter, because the CPU doesn't (generally) care about what's currently displayed on screen in order to continue its processing. And when it does, guess what? Performance drops dramatically. Try reading the color buffer or the Z-buffer back, and watch your renderer go at software speeds.
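
As a small, untested illustration of that last point (width and height are placeholders for the size of the region you computed into):

// A readback like this forces the CPU to sit idle until the GPU has finished
// every pending command, which is exactly the stall described above.
float* result = new float[width * height * 4];
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, result);
// ... only now can the CPU use the data ...
delete[] result;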

Y.

iordannalbantov
05-05-2003, 12:07 PM
First of all, sorry about my broken English.
Second: you can use OpenGL for calculations.
1. You need to create a hardware-accelerated OpenGL rendering context. Be careful: you may end up with a slow software rendering context instead (see the sketch below for one way to check).
2. You must learn something about OpenGL matrix operations.
3. Precision is not that important for computer vision. 3D accelerators are very strong at matrix computations. In computer vision you may choose "matrix-oriented" algorithms in the first few stages. But the really hard work for the computer is in the identification of objects; this is a database or AI problem. Sorry, my knowledge of computer vision is not very deep. But if you need fast matrix computations without high precision,
you can find some examples on www.opengl.org (http://www.opengl.org) ;) and learn some more from the SDK help ;).
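
A small, untested sketch of such a check (the "GDI Generic" string is what Microsoft's software OpenGL implementation usually reports for GL_RENDERER; all other names here are placeholders):

#include <string.h>
#include <stdio.h>
#include <GL/gl.h>

// Call this after the rendering context has been created and made current.
bool isHardwareAccelerated()
{
    const char* vendor   = (const char*) glGetString(GL_VENDOR);
    const char* renderer = (const char*) glGetString(GL_RENDERER);
    printf("GL_VENDOR: %s\nGL_RENDERER: %s\n", vendor, renderer);

    // "GDI Generic" means the slow software rendering context was created.
    return renderer != 0 && strstr(renderer, "GDI Generic") == 0;
}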

Macroz
05-08-2003, 11:49 AM
Happened to stumble upon this:
Using Modern Graphics Architectures for General-Purpose Computing: A Framework and Analysis (http://www.cs.washington.edu/homes/oskin/thompson-micro2002.pdf)

I just quickly browsed it, and they seem to compile with MSVC 6.0 only. It's a start, but such use of GPUs is not for the faint of heart.

EDIT: where is the preview functionality?-)

[This message has been edited by Macroz (edited 05-08-2003).]

nlopes
05-26-2003, 09:02 AM
Just a note to say that ATI Developer Relations has already answered me. They said that they will try to collect some information for me!

Great!! ATI seems to want to help me!!
Shame on nVIDIA, who hasn't answered me...

Michael Steinberg
05-26-2003, 03:57 PM
"I want to write an english book, can anybody give me example sentences. I don't know english so far! Can i use english language to write a book?".

nlopes
06-06-2003, 11:25 AM
You're all having lots of fun!!!

LOLOLOL
:) :) :)