Better sphere support

As a molecular modeller using OpenGL to visualise molecules, I find the lack of true sphere support very limiting.

A typical protein molecule has about 3000 atoms to draw as spheres, and each one needs at least 20 slices and stacks to look smooth. That is roughly 800 triangles per atom, or well over two million triangles per frame: not exactly real-time to display.

Way back, I used an old SGI IRIS4D machine with the original GL, which supported a sphere primitive. Why has OpenGL still not got this?

How is the graphics hardware going to draw a sphere? Automagically generate a triangle mesh?

With some thought, I suppose it’s not unreasonable to devise a way to support some sphere-based primitive in the gfx hardware. For a given sphere you can work out its footprint from its radius, depth and projection, and then use some circle-drawing code to rasterise it… of course, you’d probably need some clever tricks to turn a 2D silhouette into a texture-mapped, shaded sphere…
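As a sketch of that footprint idea, the screen-space radius really is just the perspective scale. The names and the pinhole-camera assumptions below are mine, not anything a real chip does:

```c
/* Rough screen-space footprint of a sphere under perspective projection.
 * Assumes eye space with the camera at the origin looking down -Z,
 * focal_length derived from the vertical field of view, and the viewport
 * height in pixels.  Off-axis the silhouette is slightly elliptical, so
 * treat this only as a bounding circle. */
float sphere_screen_radius(float world_radius, float eye_depth,
                           float focal_length, float viewport_height)
{
    /* A length r at depth d maps to roughly r * f / d in normalised
     * device coordinates, then scales up to pixels. */
    float ndc_radius = world_radius * focal_length / eye_depth;
    return 0.5f * ndc_radius * viewport_height;
}
```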

But this is possibly all too hard, since you might as well just build a triangle sphere mesh or do some tricky billboarding to get a sphere, which is effectively what OpenGL is saying. Sure, it would be nice (possibly) to have such primitives in an API, but if OpenGL is just (JUST!!) an abstraction over the graphics hardware, and the graphics hardware does not support spheres, then its original design philosophy says it’s up to the developer, who knows more about the application.

my 2c worth

cheers,
John

I agree that it would be nice if the hardware supported sphere primitives. The ancient (1989 or so) SGI hardware I was using did support spheres. I really miss the glSphere(X,Y,Z,Radius) call!

Surely rendering a sphere does not require a triangle mesh (typically 1.2 million vertices for a protein image). It is this huge number of vertices that I am hoping to avoid: 3000 atoms represented as primitives should be rotated and rendered much more quickly than 3000 20x20 meshes of shaded triangles.

Each pixel on a sphere can be rendered using a very simple equation for its (X,Y,Z) and surface normal. Adding lighting effects and texture mapping is just as simple. I’m sure that rendering a single sphere instead of 200 or so shaded triangles would be quicker. Plus, the 400-fold reduction in 3D points would more than make up for rendering the sphere itself in software.
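As a rough sketch of that per-pixel equation (plain C; the put_pixel hook, the screen-space centre and the light direction are placeholders I’ve made up, not part of any existing API):

```c
#include <math.h>

/* Hypothetical output hook: write one shaded pixel plus its depth. */
extern void put_pixel(int x, int y, float z, float red, float green, float blue);

/* Software-shade one sphere of screen-space radius r centred at (cx, cy).
 * (lx, ly, lz) is a unit light direction; (base_r, base_g, base_b) is the
 * element colour, e.g. red for oxygen. */
void shade_sphere(int cx, int cy, int r,
                  float lx, float ly, float lz,
                  float base_r, float base_g, float base_b)
{
    for (int y = -r; y <= r; ++y) {
        for (int x = -r; x <= r; ++x) {
            float d2 = (float)(x * x + y * y);
            if (d2 > (float)(r * r))
                continue;                            /* outside the silhouette */

            float z = sqrtf((float)(r * r) - d2);    /* height of the surface */
            /* The unit surface normal falls straight out of the geometry. */
            float nx = x / (float)r, ny = y / (float)r, nz = z / (float)r;

            /* Simple Lambertian shading against the light direction. */
            float diffuse = nx * lx + ny * ly + nz * lz;
            if (diffuse < 0.0f) diffuse = 0.0f;

            put_pixel(cx + x, cy + y, z,
                      base_r * diffuse, base_g * diffuse, base_b * diffuse);
        }
    }
}
```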

One way I’ve tackled the problem is to use a ray tracer to render a high quality sphere with Z information in the alpha channel, and display that for each atom using the OpenGL bitmap functions. Of course, this can’t account for different lighting directions, but works OK. And each element in the molecule needs its own sphere bitmap (conventionally, oxygens are red, nitrogens blue, hydrogens white etc.).
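For the display side, a minimal sketch of what I mean, assuming one pre-rendered RGBA image per element (getting the ray-traced Z from the alpha channel into the depth buffer would need a separate glDrawPixels(GL_DEPTH_COMPONENT, ...) pass, which I’ve left out):

```c
#include <GL/gl.h>

/* sphere_rgba is a pre-rendered sphere image for one element, width x height,
 * tightly packed RGBA bytes, with alpha zero outside the silhouette.
 * (x, y, z) is the atom position in object space. */
void draw_atom_sprite(const GLubyte *sphere_rgba, int width, int height,
                      float x, float y, float z)
{
    glAlphaFunc(GL_GREATER, 0.0f);   /* throw away pixels outside the disc */
    glEnable(GL_ALPHA_TEST);

    glRasterPos3f(x, y, z);          /* transformed by the current matrices */

    /* Zero-sized glBitmap call: draws nothing, but offsets the raster
     * position so the image is centred on the atom. */
    glBitmap(0, 0, 0.0f, 0.0f, -0.5f * (float)width, -0.5f * (float)height, NULL);

    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, sphere_rgba);

    glDisable(GL_ALPHA_TEST);
}
```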

If sphere (and cylinder) primitives were added to the OpenGL spec, then surely the hardware manufacturers would follow without too much delay. The GLU library already provides the API calls, so existing code wouldn’t need to be altered to take advantage…

Originally posted by rj_gilbert:

And each element in the molecule needs its own sphere bitmap (conventionally, oxygens are red, nitrogens blue, hydrogens white etc.).

You don’t need separate bitmaps; just use a single greyscale texture to modulate a different base colour for each element.
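Something like this, assuming a single greyscale sphere texture shared by all the atoms and a screen-aligned quad per atom:

```c
#include <GL/gl.h>

/* One greyscale sphere texture, tinted per element by the current colour.
 * GL_MODULATE multiplies each texel by the untextured fragment colour, so a
 * white texel takes on the base colour and a darker texel shades it down. */
void draw_tinted_sphere_quad(GLuint grey_sphere_tex, float r, float g, float b)
{
    glBindTexture(GL_TEXTURE_2D, grey_sphere_tex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glEnable(GL_TEXTURE_2D);

    glColor3f(r, g, b);    /* e.g. (1,0,0) for oxygen, (0,0,1) for nitrogen */
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    glDisable(GL_TEXTURE_2D);
}
```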


If sphere (and cylinder) primitives were added to the OpenGL spec, then surely the hardware manufacturers would follow without too much delay. The GLU library already provides the API calls, so existing code wouldn’t need to be altered to take advantage…

One: hardware manufacturers wouldn’t touch it with a bargepole, and rightly so. There’s a limited amount of silicon real estate on a gfx chip, and no designer is going to sacrifice gaming or CAD/CAM performance for features which are only really useful for molecular modellers. (Come on, who else draws nothing but spheres and cylinders?)

Two: GLU isn’t widely used for performance work; the GLU primitives are more suitable as a quickstart when learning OpenGL than for serious use. So very little existing code would benefit.

Three: OpenGL already has evaluators, which would allow hardware manufacturers to boost quadric rendering if they really wanted to. They don’t; there isn’t the market for it. NVIDIA’s original chipset, the NV1, was based on quadrics rather than triangles, and it almost killed them.
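For reference, the evaluator path looks like this; it is just the standard bicubic patch setup, and a polynomial patch can only approximate a sphere (an exact quadric would need rational patches, e.g. GLU NURBS):

```c
#include <GL/gl.h>

/* Feed a 4x4 grid of control points to an evaluator and let OpenGL (or the
 * driver/hardware, if anyone chose to accelerate this path) generate and
 * shade the mesh. */
void draw_bicubic_patch(const GLfloat control_points[4][4][3])
{
    glMap2f(GL_MAP2_VERTEX_3,
            0.0f, 1.0f, 3, 4,      /* u range, stride between points, order */
            0.0f, 1.0f, 12, 4,     /* v range, stride between rows, order   */
            &control_points[0][0][0]);
    glEnable(GL_MAP2_VERTEX_3);
    glEnable(GL_AUTO_NORMAL);      /* analytic normals for lighting */

    glMapGrid2f(20, 0.0f, 1.0f, 20, 0.0f, 1.0f);
    glEvalMesh2(GL_FILL, 0, 20, 0, 20);
}
```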

IMO, OpenGL’s goal of orthogonality is a good one - stick to features which will be widely used and which can’t easily be implemented in terms of other, existing features. By keeping the core feature set small we gain flexibility and hardware manufacturers don’t have to guess which bits to accelerate.

Originally posted by MikeC:
… no designer is going to sacrifice gaming or CAD/CAM performance for features which are only really useful for molecular modellers. (Come on, who else draws nothing but spheres and cylinders?)

Astronomers, physicists, engineers, architects, pool game developers…

My main point was that OpenGL itself should support a sphere primitive. It would then be up to the hardware guys to implement sphere acceleration if they thought it economic. If hardware sphere support is unavailable then the glSphere calls could get compiled into hundreds of glVertex calls as at present. The OGL library could implement this without too much pain.
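In the meantime, that fallback is roughly what you can already build for yourself: compile a GLU sphere into a display list once and reuse it. The wrapper below is hypothetical, just mirroring the glSphere(X,Y,Z,Radius) call I miss:

```c
#include <GL/gl.h>
#include <GL/glu.h>

static GLuint unit_sphere_list = 0;

/* Stand-in for the wished-for glSphere(x, y, z, radius): the triangle mesh
 * is compiled once, so the per-atom cost is a transform and a glCallList
 * rather than hundreds of glVertex calls. */
void draw_sphere(float x, float y, float z, float radius)
{
    if (unit_sphere_list == 0) {
        GLUquadric *q = gluNewQuadric();
        unit_sphere_list = glGenLists(1);
        glNewList(unit_sphere_list, GL_COMPILE);
        gluSphere(q, 1.0, 20, 20);          /* unit sphere, 20 slices/stacks */
        glEndList();
        gluDeleteQuadric(q);
    }

    glPushMatrix();
    glTranslatef(x, y, z);
    glScalef(radius, radius, radius);       /* enable GL_NORMALIZE if lighting */
    glCallList(unit_sphere_list);
    glPopMatrix();
}
```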

Currently, scientists and engineers have to lay out serious money to get the graphics performance they require for their jobs. A PC + OpenGL card costing a few thousand pounds/dollars would still be cheaper than a workstation costing several tens of thousands. And scientific software developers would be able to port their code onto mainstream PCs, increasing their market size enormously.

Although the volume is tiny compared to the gaming market, I suspect that the sales to industry, research institutes, colleges and so on would make it commercially viable.

Well, I didn’t catch what hardware you are running on, but there are quite a few options open to you. First, you can quite easily keep doing your bitmapped spheres, just taking the previously mentioned suggestion of using the “modulate” texture mode to colorize them as they are drawn.

Another possibility (be warned that this is a bit technical) is to draw oversized, camera-facing quads instead of each of your spheres (you were already probably doing this when you were bitmapping). Tweak the normals of these quads so that while the faces are front-facing the vertex normals are back-facing. Thus there will be a great many pixels on the surface that have back-facing interpolated normals. You can use a spherical or cubic environment map to mask away these pixels (setting their alpha to zero and then doing an alpha test - or blending if you want antialiased edges). If you set everything up correctly there should be a remaining circle of pixels with properly interpolated normals (usable for OpenGL lighting). I believe that there is example code of this on nVidia’s developer page, and if there isn’t I could probably be pressed to clarify things…
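A stripped-down sketch of just the billboard scaffolding, leaving out the back-facing-normal and environment-map parts and using a plain circular alpha-mask texture instead (all the names here are made up for the example):

```c
#include <GL/gl.h>

/* Draw one camera-facing quad for an atom.  right[] and up[] are the
 * camera's right and up vectors in world space (e.g. taken from the first
 * two rows of the modelview matrix).  mask_tex has alpha 1 inside a circle
 * and 0 outside, so the alpha test clips the quad down to a disc.  This is
 * only the billboard part, not the interpolated-normal trick itself. */
void draw_atom_billboard(GLuint mask_tex, const float center[3], float radius,
                         const float right[3], const float up[3])
{
    float rx = right[0] * radius, ry = right[1] * radius, rz = right[2] * radius;
    float ux = up[0] * radius,    uy = up[1] * radius,    uz = up[2] * radius;

    glBindTexture(GL_TEXTURE_2D, mask_tex);
    glEnable(GL_TEXTURE_2D);
    glAlphaFunc(GL_GREATER, 0.5f);
    glEnable(GL_ALPHA_TEST);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f);
        glVertex3f(center[0] - rx - ux, center[1] - ry - uy, center[2] - rz - uz);
        glTexCoord2f(1.0f, 0.0f);
        glVertex3f(center[0] + rx - ux, center[1] + ry - uy, center[2] + rz - uz);
        glTexCoord2f(1.0f, 1.0f);
        glVertex3f(center[0] + rx + ux, center[1] + ry + uy, center[2] + rz + uz);
        glTexCoord2f(0.0f, 1.0f);
        glVertex3f(center[0] - rx + ux, center[1] - ry + uy, center[2] - rz + uz);
    glEnd();

    glDisable(GL_ALPHA_TEST);
    glDisable(GL_TEXTURE_2D);
}
```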

If OpenGL supported spheres, most implementations would simply draw them inefficiently. You have an opportunity to optimize for your application space here. Hardware doesn’t draw spheres efficiently. The only platform I’m familiar with that did was Pixel-Planes 5, and it approximated the sphere with a parabolic cross-section, which unfortunately always intersects linearly in eye space.

Even if you got what you wished for you’d be posting here wondering why spheres were so slow or why the only hardware which drew fast spheres was so expensive.

The situation is this: any IHV in the world is free to add a sphere primitive if they choose to, and if there were enough developers of your type prepared to pay a premium for that hardware, and who would actually use that sphere extension, some IHVs would support spheres. In fact they still might if you ask/persuade them.

Perhaps they feel that they can’t improve on what some developers already manage to do with existing hardware.

Hey,

Klaus (from Switzerland, no less =) wrote to me about a new sphere model he invented. Uh, well, yer… I tried to reply, but my email bounced. :P You may want to check that, Klaus. =D

I’m interested in checking out your demo, btw.

cheers,
John

I think we all are.

Any hardcore gamers here played DUNE 1? It had a nice pixel-perfect sphere, software rasterized. It was designed to run on a 386 processor (or maybe a 486).

I don’t know about Z, though… I just remembered that game.
And yes, you should push IHVs to make an extension, since GLUT just does some calculations and then makes OpenGL calls; it wouldn’t draw the primitive by itself.

timfoleysama, could you please explain that quad-with-backfacing-normals idea? I didn’t quite get it.


John,

I sent you the files by email on Friday.

Regards
Klaus