SGI computers

Does anyone here actually own one? What sorta graphics card do those babies have?

Originally posted by iNsaNEiVaN:
Does anyone here actually own one? What sorta graphics card do those babies have?

I work in a department with about 95 of them, ranging from O2s to Onyxes. We have recently decided to get rid of them and replace them with PC + Nvidia combos, which are a lot faster and have more features to play with.

The entire system is pretty much devoted to graphics, but memory allocation and branching code tend to be really, really slow on them. You can get them to hold a fairly constant 75fps if you make everything really linear for them. The floating-point maths is pretty damn quick (read: better than a PC), but the rest of the system is so pants it's not worth bothering with. To be honest the graphics system is very limited; they tend to support OpenGL 1.1, but not a lot in addition. There's nothing fancy about them any more, they're a long way behind consumer hardware these days. It's only the really high-end systems that still fare well, and they're generally used for scientific visualisation/simulations or for server work on a render farm.
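Since they tend to stop at OpenGL 1.1, you end up probing at runtime for anything newer. A rough sketch of the whole-token check you'd run over the string from glGetString(GL_EXTENSIONS) - the list is passed in here so the logic stands on its own, and the extension names are just examples:

```c
#include <assert.h>
#include <string.h>

/* Returns 1 if `ext` appears as a whole space-separated token in
 * `extlist`, 0 otherwise.  In a real program `extlist` would come
 * from glGetString(GL_EXTENSIONS). */
int has_extension(const char *extlist, const char *ext)
{
    size_t len = strlen(ext);
    const char *p = extlist;

    while ((p = strstr(p, ext)) != NULL) {
        /* Guard against substring matches, e.g. "GL_EXT_texture"
         * matching inside "GL_EXT_texture3D". */
        int starts = (p == extlist) || (p[-1] == ' ');
        int ends   = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}
```

The whole-token check matters: a naive strstr would wrongly report "GL_EXT_texture" as present on a machine that only exports "GL_EXT_texture3D".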

(I will be very glad to see the back of them…)

Originally posted by Rob The Bloke:
I work in a department with about 95 of them, ranging from O2s to Onyxes. We have recently decided to get rid of them and replace them with PC + Nvidia combos, which are a lot faster and have more features to play with.

SGIs have proven to be a hell of a lot more stable than PCs, though (even if the PC is running Linux).

Hardware-wise, you get what you pay for, i.e. they're extremely well made.


The entire system is pretty much devoted to graphics, but memory allocation and branching code tend to be really, really slow on them. You can get them to hold a fairly constant 75fps if you make everything really linear for them. The floating-point maths is pretty damn quick (read: better than a PC), but the rest of the system is so pants it's not worth bothering with. To be honest the graphics system is very limited; they tend to support OpenGL 1.1, but not a lot in addition. There's nothing fancy about them any more, they're a long way behind consumer hardware these days. It's only the really high-end systems that still fare well, and they're generally used for scientific visualisation/simulations or for server work on a render farm.

I have no experience with them myself (would love to, though), so I can't say anything about the speed of branching, etc.

You're missing out features, though, such as the Unified Memory Architecture of the O2s, where any system memory can be used as texture memory - you won't get that on a PC.

It’s also a pity that SGI didn’t:

  1. Help to increase the speed of the MIPS processor line whilst they owned them.

  2. Make lower-end graphics workstations that could compete at the same level as PCs, i.e. laptops and the like. This would have given the MIPS processor more of a mass-market appeal and probably would've brought the price down on them.

Oh well


(I will be very glad to see the back of them…)

Well, I’ve mailed you…so if you want to let me have 'em I’d put them to good use.

Thanks,
Luke.

Originally posted by iNsaNEiVaN:
Does anyone here actually own one? What sorta graphics card do those babies have?

SGI create their own hardware and their graphics boards are proprietary.

If you're looking for an SGI which is capable of taking a consumer-level card, look at the Intel-based ones. They tend to look cheaper, though, and are nowhere near as cool-looking as their MIPS-based counterparts.

Luke.

SGI no longer make Intel-based systems (IA-32, I mean). They were OK, but stacked against a modern games card they don't cut the mustard, unless you're using the unified memory architecture on the older systems, which now look slow.

Newer SGI systems on the desktop are OK for geometry, but they lack the programmability and multitexture features of PC cards. The older cards really show their age, so anything affordable second-hand is only good for nostalgic reasons, or if you have an IRIX app you need to run.

On the high end there's Infinite Reality; again it lacks some features, but it has a few of its own that interest SGI's customers. It has a lot of fill if you have 4 Raster Managers, and it can do high-performance full-scene anti-aliasing at almost no cost. With 1 or 2 RMs you're at GF4 levels of fill performance, but the IR probably has more consistent fill performance across a range of primitive sizes, like small tris for example. It can't do anisotropic texturing in a single pass like PC cards. Its biggest strength is the large MP single-image IRIX systems you can connect them to. There's a lot of scalability there.

With Infinite Reality there's a lot of stuff the system will do at full speed: pixel reads & writes to the framebuffer and texture loads from the host are pretty fast, which matters for digital video folks, and you have the system bandwidth to support it - independent, scalable I/O systems, all that sort of thing.

Across the range of systems you have deep framebuffer formats for hardware-accelerated accumulation buffers, and for 12-bit RGBA or 16-bit luminance work.

The same is true on the newer IRIX desktop systems.

For just drawing triangles with rich state like bump mapping etc., even the high-end SGI systems just don't have the required features to compete; then again, the professional apps don't tend to use those features.

BTW, the new Matrox sundog looks like it has impressive AA capabilities at high speed. At last this is arriving on PCs. I expect this is the first of many serious AA cards.

[This message has been edited by dorbie (edited 05-14-2002).]

I am not 100% certain on this, but SGI and MIPS aren't entirely separate companies. Certainly MIPS chips come in a variety of speeds and don't mess around, so I don't know what you really mean by not pushing the envelope. No, they don't rack up 2+GHz, but they don't need to :)

Originally posted by john:
I am not 100% certain on this, but SGI and MIPS aren't entirely separate companies. Certainly MIPS chips come in a variety of speeds and don't mess around, so I don't know what you really mean by not pushing the envelope. No, they don't rack up 2+GHz, but they don't need to :)

SGI spun off all their shares in MIPS and hence don't own it any more. MIPS do come in at 2+GHz now (see FastMIPS on MIPS' main page).

I was saying that while SGI DID own MIPS, they didn't exactly do anything to speed the chips up - it would've been nice.

As for not needing to go too fast, I disagree; if a chip manufacturer wants to stay competitive with other chips (even in embedded systems, where MIPS is used heavily), they need to ramp up the speed. Obviously, I don't mean that Intel will put P4s into an embedded system that requires low heat emission and low power consumption, but if they ever sorted that out and kept the speed, who would go for a MIPS? Sad really.

Luke.

Didn’t do anything to speed the chips up?

You idiot. SGI-MIPS developed many processors for YEARS after they switched from Motorola: the R4k, R8k, R10k, R5k, R14k, and that's just a few of the more recent versions. There were a whole bunch of embedded designs produced too. SGI-MIPS didn't chase clock speed at the expense of all else, which is the right way to develop chips. SGI simply couldn't invest at the levels needed and ended up cancelling a few projects like Alien and Beast; they probably shouldn't have gone with two-track architecture development, but all that is ancient history now. They DID do a LOT to improve performance: with the R8k they were a clear FP performance leader, and with the R10k they were at the cutting edge of performance too.

Originally posted by dorbie:
Didn't do anything to speed the chips up?

You idiot. SGI-MIPS developed many processors for YEARS after they switched from Motorola: the R4k, R8k, R10k, R5k, R14k, and that's just a few of the more recent versions. There were a whole bunch of embedded designs produced too. SGI-MIPS didn't chase clock speed at the expense of all else, which is the right way to develop chips. SGI simply couldn't invest at the levels needed and ended up cancelling a few projects like Alien and Beast; they probably shouldn't have gone with two-track architecture development, but all that is ancient history now. They DID do a LOT to improve performance: with the R8k they were a clear FP performance leader, and with the R10k they were at the cutting edge of performance too.

Firstly, don't call me an idiot! I know they produced chips and enhanced them. What I'm saying is that they're still using chips that aren't as fast as the competitors'. I also know that not chasing "clock speeds at the expense of all else" is better for development, but do you really want to see yet another decent architecture go down the pan because it's behind everything else?

They could’ve done better than they did. God!

Luke.

Anyone who says SGI-MIPS did nothing to improve the performance of MIPS chips is an idiot in my book. It's an insult to the hundreds of engineers SGI had working full time on this problem for years. SGI spent hundreds of millions of dollars doing exactly what you claim they didn't, and moreover they produced real results. You'd have to have been living in a cave for 10 years to miss that.

If you’re changing what you said now that’s fine.

The fact is SGI's revenues couldn't sustain their level of investment in making MIPS chips faster - and it was a substantial level of investment. In case you don't know, SGI STILL develops faster MIPS-based chips for their workstations because MIPS won't do it for them, so blaming SGI for a performance gap is asinine.

MIPS is now independent, and the architecture has a better chance of success in future, but it looks like it'll be mostly embedded low-power devices rather than high-end workstation processors; that's just where the market is.

howdy
I didn't know that SGI sold their MIPS shares. What I meant (somewhat flippantly, I concede) about MIPS chips not needing to "be fast" was that they DON'T need a high clock speed to be comparable. The pipeline of Intel chips is incredibly more complicated than the pipeline of the MIPS architecture (which is very similar to the 'idealised' DLX architecture ;-)). Intel chips need the high clock rate to push instructions through the pipeline and generate n results per second; MIPS chips can produce the same n results per second with a slower clock rate. That was all I meant, in an ad hoc way.
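The "n results per second" point is just arithmetic: sustained throughput is clock rate times instructions retired per cycle. A back-of-envelope sketch - the clock and IPC figures below are invented for illustration, not measurements of any real part:

```c
#include <assert.h>

/* Throughput = clock rate (Hz) x instructions retired per cycle (IPC).
 * A lower-clocked chip with a short, wide pipeline can match a
 * higher-clocked chip that retires fewer instructions per cycle. */
double throughput(double clock_hz, double ipc)
{
    return clock_hz * ipc;
}
```

So a hypothetical 500MHz part retiring 2 instructions per cycle keeps pace with a hypothetical 2GHz part averaging 0.5 - the clock number alone tells you very little.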

I think it's a pity that SGI sold MIPS. It seems that all the high-end RISC chips are dying at the hands of the Intel/AMD behemoth. :(

cheers
jOhn

I think it's a pity that SGI sold MIPS. It seems that all the high-end RISC chips are dying at the hands of the Intel/AMD behemoth.

Yes, it's a real shame! I really believe that those high-end RISC chips would be cheaper to design and manufacture than Intel's/AMD's monstrous architectures. If only there were a mass-market platform for them (the PC), they would most likely be BOTH faster and cheaper than their x86 counterparts.

When it comes to CPU architecture, I believe in simplicity and brute force (i.e. RISC).

Howdy

Sure, I agree. There are some new architectures coming out, tho', that look pretty funky. I can't remember what it's called, but there's a CPU that supports hardware threads. Get this: it swaps instruction streams in and out with such amazing dexterity that memory access has effectively ZERO latency. Rather than stalling the pipeline on a memory read, the CPU just swaps in another thread and continues on; it switches back to the waiting thread when the data gets off the bus. It's more complicated than that, but it looks really nifty.
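You can mimic that latency-hiding trick with a toy model: give the core one issue slot per cycle and a pool of threads that each alternate issuing a load and waiting a fixed number of cycles for the data. This is my own simplification for illustration, not the real design - the 10-cycle latency is made up:

```c
#include <assert.h>

#define LATENCY 10   /* invented memory latency, in cycles */

/* Toy model of a hardware-threaded core with one issue slot per cycle.
 * Each thread must issue `ops` loads; after issuing, a thread waits
 * LATENCY cycles for the data before it can issue again.  If no thread
 * is ready, the cycle is a stall.  Returns the cycle on which the last
 * load's data comes back.  Assumes nthreads <= 64. */
long cycles_to_finish(int nthreads, int ops)
{
    long ready_at[64] = {0};  /* cycle when each thread may issue again */
    int  left[64];
    long cycle = 0, last_return = 0;
    int  done = 0, i;

    for (i = 0; i < nthreads; i++)
        left[i] = ops;

    while (done < nthreads) {
        for (i = 0; i < nthreads; i++) {
            if (left[i] > 0 && ready_at[i] <= cycle) {
                left[i]--;                      /* issue one load */
                ready_at[i] = cycle + LATENCY;  /* data back later */
                if (ready_at[i] > last_return)
                    last_return = ready_at[i];
                if (left[i] == 0)
                    done++;
                break;                          /* one issue per cycle */
            }
        }
        cycle++;
    }
    return last_return;
}
```

With one thread every load is a full 10-cycle stall, so 10 loads take 100 cycles; with ten threads, a hundred loads finish in 109 cycles, because some other thread issues while each load is in flight.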

You have to concede, tho', that the Intel and AMD solutions to the legacy ISA are nothing short of amazing. Complicated, power-hungry, transistor-expensive, yes; but VERY sophisticated and clever tricks to get around ye olde 80x86 instructions.

Hi,

I have worked with SGI workstations and hardware of various sizes. My experience was the following:

1.) As mentioned above, you could let an O2 run a simulation of whatever for 3 weeks, and when you came back it was still running! (We tried it with a PC; very often it crashed after something like 1-2 weeks.)
2.) OpenGL on an SGI machine is a blast! I was working with SGI Performer and pure OpenGL. If you had decent textures and models, the colours were better than real! (Angus, can you remember? I was bugging you ;) )
3.) SGI put a lot of effort into improving their CPUs and I can't say they did a bad job; actually they did a darn good job.
4.) SGI offers hardware and software support which tops everything I have seen so far! Okay, Dell is coming up with pretty good support, too. But think back something like 5 years - there was a support desert in this world and SGI was the only oasis!
5.) SGI builds workstations for midrange and high-end solutions, so if you want to have a CAVE or other kit to visualise scientific or engineering data, you don't have a lot of choices on the market. From my point of view you have:
- E&S
- oh, let me think - yes - SGI!
SGI does not intend games to be developed for their workstations. I must admit Quake was a blast on our network with those nice 19" screens and O2s!
For the high end there are not really a lot of choices on the market. Okay, fanatics may say take a Linux cluster, but how many companies can afford to hire a guy to take care of the cluster? Everybody knows hardware has to be maintained! It's easier to take an SGI solution and pay for a support contract!
6.) I believe SGI made mistakes in management and investment, but it is not up to me to judge what happened there. I loved the workstations (IRIX and Linux - Windoughs, too) and I was looking forward to working with the announced Linux cluster, which afaik was cancelled for some reason.

So now you know what I think,

Martin