Terrain generation doable with Cg?

Hello you experts,

My idea is to implement a terrain-generation engine on a modern graphics card (using the Cg language or the corresponding GL extensions), so that the minimum input at runtime is just the camera position and orientation…

I have posted my ideas in a very brief
PDF (it’s just easier in pictures…):
www.nocke.de/master/MasterFeasibility.pdf

Some facts look encouraging (well, to me :slight_smile:), but I am not so sure how to ‘crack’/abuse this one-vertex-at-a-time stream-programming scheme… although other solutions (e.g. for collision detection) must be doing it, too.

I am an intermediate OpenGL programmer and (obviously) still a beginner with Cg/OpenGL extensions…

So, guys, what do you think? Is this doable in Cg?

THANX A LOT FOR ALL FEEDBACK!!

Frank

I have thought about doing that too, after I found out about noise. I figured I could make a flying game that randomly generates different terrain forever. Are you planning to use noise? I think the noise function might be too expensive for real-time evaluation at each vertex, even if you figured out exactly which vertices need to be rendered to the screen. Unless, of course, the terrain is low-poly; but with randomly generated terrain you will want to add detail, since it’s so easy to.
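To give a feel for the cost, the usual fractal sum would look roughly like this in Cg (just a sketch: `noise()` is assumed here; Cg’s standard library declares one, but I doubt the current vertex profiles run it in hardware, so you might have to bake it into a lookup texture instead):

```cg
// Sketch only: fractal ("fBm") height from a 2D position.
// noise() is an assumption -- Cg declares it in its stdlib, but
// 2003-era vertex profiles generally cannot evaluate it, so read
// this mainly as a per-vertex cost estimate.
float fbm(float2 p)
{
    float sum = 0.0;
    float amp = 0.5;
    for (int i = 0; i < 4; i++) {  // 4 octaves = 4 noise calls per vertex
        sum += amp * noise(p);
        p   *= 2.0;                // double the frequency each octave
        amp *= 0.5;                // halve the amplitude each octave
    }
    return sum;
}
```

Four noise evaluations per vertex, every frame, is exactly the kind of cost that pushes you toward a coarse mesh or precomputed noise.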

I don’t know anything about Cg, but it looks like the best way to do it to me.

If you download the Cg SDK from NVIDIA’s website, you’ll find a demo called “Procedural Terrain Demo”. Start by looking at it.

Hi you two,

thanks for the feedback. But my problem is not so much deforming some vertices based on noise, fractal noise, or anything of that form.

I want to genuinely place terrain piece after piece. With the above techniques I could make hills and valleys, but not a lamppost, a flag, or anything like that on a field…

Cg can do so much. Is there no way to trick this one-vertex-at-a-time thing a bit more?

Frank :frowning:

Originally posted by Frank No:
Cg can do so much. Is there no way to trick this one-vertex-at-a-time thing a bit more?

That is the way vertex programs work. Whether you use Cg or the ARB extension does not matter; the hardware uses vertex programs to just “filter” vertex data.

One possibility is to keep a bunch of vertices at position (0,0,0) and use them as a stockpile for different things.

And the CPU can add a new “vertex buffer” to the GPU when the stockpile runs empty.

Well, I am not too advanced with vertex programs, so this may not even work at all. But it is what I learned on the advanced GL forums…
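Roughly, I imagine the vertex program side like this (all the names, the uniform array, and the per-vertex block index are just my guesses, not something I have tested): each stockpile vertex carries its position *within* its building block, plus an index saying which block it belongs to, and the CPU feeds the placements through program parameters:

```cg
struct appin {
    float4 position : POSITION;   // offset inside the building block
                                  // (the whole block is "parked" until placed)
    float4 attrib   : TEXCOORD0;  // x = index of the block this vertex serves
};

struct vertout {
    float4 position : POSITION;
};

vertout main(appin IN,
             uniform float4x4 modelViewProj,
             uniform float4   blockPos[8])   // per-block placement, set by the CPU
{
    vertout OUT;
    // pull the parked vertex to its block's position; unused vertices can
    // stay at the origin (or behind the camera) to be clipped away
    float4 world = IN.position + blockPos[(int)IN.attrib.x];
    OUT.position = mul(modelViewProj, world);
    return OUT;
}
```

Note the gamble: indexing a parameter array with a per-vertex value needs the address register, and whether your profile allows that is part of the question.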

Thanks, zbuffer, you’re clearing things up, even if in a direction I didn’t hope for. Well, not your fault :slight_smile:

One possibility is to keep a bunch of vertices at position (0,0,0) and use them as a stockpile for different things.

I thought about stacking “spare” vertices, too, in one place or as a somewhat tessellated sphere…

But the trouble is, I have little/no influence on the edge definitions, do I?

So all I could do is deform the grid or ball a little bit here and there, based on my controller map/fractal, just like the demo terrain shader; but that is insufficient for putting complex elements in place (“calling them in”), not to mention premade ones with multitextures and all.

But then I suppose all those vertex-shader particle systems are incapable of creating new “child” particles. And new ones, e.g. in the center of a fountain, can only start once they have died their “previous life” somewhere else?!?
Sadly, I think I got the concept. So despite having all the info I need on the graphics card, I can’t generate as I want to?!? :-((
If someone has a tip to save the day :-), I would much appreciate it :slight_smile:
One person told me that with the latest FX cards it is possible to treat (i.e. pre-process…) geometry as vertex arrays, and thereby in ways similar to maps… well, that probably just means you can apply matrix transformations to them, but I am not quite sure myself what I am talking about…

If anyone has an extra idea… PLEASE

After all, all the terrain building blocks are already stored on the graphics card, if only there were a way to call up these vertices multiple times (the vertices for the respective building block would then move themselves into the right position on each pass)…

[This message has been edited by Frank No (edited 12-18-2003).]

But the trouble is, I have little/no influence on the edge definitions, do I?

If you use GL_TRIANGLES vertex definitions, you won’t have problems with edge topology. Of course, you will have (about) three times more vertices, and slight cracks may appear between adjacent triangles.
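In plain immediate mode that would look something like this (a sketch only; `height()` and `drawGrid()` are just stand-ins for however you fetch and draw your heights):

```c
#include <GL/gl.h>

extern float height(int x, int z);  /* assumed height lookup */

/* Sketch: emit independent GL_TRIANGLES instead of an indexed grid.
   Shared corners are duplicated per triangle, so a vertex program can
   displace each triangle freely; if duplicated corners end up displaced
   differently, that is where the cracks come from. */
void drawGrid(int gridSize)
{
    glBegin(GL_TRIANGLES);
    for (int z = 0; z < gridSize - 1; z++) {
        for (int x = 0; x < gridSize - 1; x++) {
            /* first triangle of the quad */
            glVertex3f((GLfloat)x,       height(x,     z),     (GLfloat)z);
            glVertex3f((GLfloat)x,       height(x,     z + 1), (GLfloat)(z + 1));
            glVertex3f((GLfloat)(x + 1), height(x + 1, z),     (GLfloat)z);
            /* second triangle of the quad */
            glVertex3f((GLfloat)(x + 1), height(x + 1, z),     (GLfloat)z);
            glVertex3f((GLfloat)x,       height(x,     z + 1), (GLfloat)(z + 1));
            glVertex3f((GLfloat)(x + 1), height(x + 1, z + 1), (GLfloat)(z + 1));
        }
    }
    glEnd();
}
```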

One person told me that with the latest FX cards it is possible to treat (i.e. pre-process…) geometry as vertex arrays, and thereby in ways similar to maps…

Maybe you mean SuperBuffers (or überBuffers)? A sort of generalisation of VBOs. I think they will not come before OpenGL 2.0…

After all, all the terrain building blocks are already stored on the graphics card, if only there were a way to call up these vertices multiple times (the vertices for the respective building block would then move themselves into the right position on each pass)…

That should do the trick, at least for the terrain itself.

Good luck !

What you’re trying to do is too complex for current hardware. Following your example, you would like to access a buffer containing the geometric and material (shader) description from within a vertex program. This is impossible inside a vertex program today, and probably in the future as well, because depending on the complexity and state you’d need to load a different program to draw and color each object.

What you can do today is store all your terrain building blocks as individual objects in your application. How you do that is completely your decision.
You can put the stuff to draw into display lists; that is as far away from the user side as it gets. Drawing the terrain building blocks would then be a set of glCallList calls with the suitable transformations and display-list IDs applied.
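For example (a sketch only; the `Block` struct, its fields, and `drawTerrain()` are illustrative names, not a fixed API):

```c
#include <GL/gl.h>

/* Sketch: place pre-recorded terrain building blocks via display lists.
   Each listID is assumed to have been recorded earlier with
   glNewList(listID, GL_COMPILE) ... glEndList(). */
typedef struct {
    GLuint  listID;     /* display list holding the block's geometry */
    GLfloat x, y, z;    /* where this instance of the block goes     */
} Block;

void drawTerrain(const Block *blocks, int count)
{
    for (int i = 0; i < count; i++) {
        glPushMatrix();
        glTranslatef(blocks[i].x, blocks[i].y, blocks[i].z);
        glCallList(blocks[i].listID);  /* replay the recorded geometry */
        glPopMatrix();
    }
}
```

The same list ID can be called as many times per frame as you like, each time under a different transformation, which is exactly the “call up these vertices multiple times” you asked for.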

Thanks, Relic. I got the point.
glCallList will be good enough after all for what I have in mind…

Topic closed. Thanx again. Frank