View Full Version : Draw on-screen vertex indices efficiently

Narann

01-23-2012, 02:34 AM

Hi OpenGL community, I need your advice. :)

I have a geometry mesh. At each vertex, I want to draw the vertex index on screen (I use a font map texture for the glyphs).

So I need to:

- Find each vertex's position in screen space.

- Generate a quad at that position for each digit (the number 348 needs 3 quads, 10 needs 2, etc.) and assign the right UVs from the font map.

- Send this to the GPU.

- Draw.
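The quad-generation step above can be sketched on the CPU. This is a minimal sketch under an assumed atlas layout: a hypothetical 10x1 font map holding the glyphs '0'..'9' side by side (a real font map would need its own cell geometry):

```cpp
#include <string>
#include <vector>

// One textured quad: screen-space corners plus UVs into the font atlas.
struct DigitQuad {
    float x0, y0, x1, y1; // screen-space rectangle (pixels)
    float u0, v0, u1, v1; // texture coordinates in the atlas
};

// Build one quad per digit of `index`, laid out left to right starting
// at (x, y). Assumes a 10x1 atlas with digits '0'..'9' in order.
std::vector<DigitQuad> buildIndexQuads(unsigned index, float x, float y,
                                       float glyphW, float glyphH) {
    std::string digits = std::to_string(index);
    std::vector<DigitQuad> quads;
    const float cell = 1.0f / 10.0f; // atlas cell width for one digit
    for (std::size_t i = 0; i < digits.size(); ++i) {
        int d = digits[i] - '0';
        DigitQuad q;
        q.x0 = x + i * glyphW; q.y0 = y;
        q.x1 = q.x0 + glyphW;  q.y1 = y + glyphH;
        q.u0 = d * cell;       q.v0 = 0.0f;
        q.u1 = q.u0 + cell;    q.v1 = 1.0f;
        quads.push_back(q);
    }
    return quads;
}
```

The six vertices per quad (two triangles) can then be appended to one big vertex buffer and drawn in a single call.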

The problem for me seems to be the first step... :(

I'm afraid the matrix math on the CPU to find the screen coordinates of each vertex could get expensive...

Same for generating the quad vertices and UVs for each digit; it could be slow...

Does any of you have a better idea? :D

And if anyone has good info/links on doing this kind of thing efficiently, don't hesitate!

How do you draw on-screen vertex indices efficiently?

Thanks in advance!

Kopelrativ

01-23-2012, 06:07 AM

Supposing you have the world coordinates, you need to transform each position by the view and projection. You can build a single combined Projection*View matrix in advance. That means you need one transformation for one corner of each square; the other corners can then be computed cheaply (assuming all squares have equal size and orientation on screen).

I would recommend the glm package for easy and quick matrix manipulation, and I don't think a few hundred transformations should be a problem. If you are uncertain, write a simple test program just to measure the time. Note also that much of the drawing will be going on in parallel with your calculations.

So you will draw ~400 boxes on the screen, each with information in it? Sounds quite crowded to me. If you are not showing all boxes at the same time, there are algorithms that can help you minimize the list of necessary calculations. It also depends on what timing requirements you have for the drawing operation.
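The projection step being suggested can be sketched without any library. This is a minimal sketch assuming a row-major matrix layout and the usual NDC-to-window mapping; glm's `mat4`/`vec4` would replace the hand-rolled types here:

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>; // row-major

// Matrix * vector, equivalent to what glm::mat4 * glm::vec4 computes.
Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

struct Screen { float x, y; };

// Transform a world-space point by the combined Projection*View matrix,
// then map normalized device coordinates to window pixels.
Screen worldToScreen(const Mat4& projView, const Vec4& world,
                     float viewportW, float viewportH) {
    Vec4 clip = mul(projView, world);
    float ndcX = clip[0] / clip[3]; // perspective divide
    float ndcY = clip[1] / clip[3];
    return { (ndcX * 0.5f + 0.5f) * viewportW,
             (ndcY * 0.5f + 0.5f) * viewportH };
}
```

Only one corner per label needs this full transform; the other three corners are fixed screen-space offsets from it.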

Narann

01-23-2012, 06:52 AM

Thanks for your answers! :)

Supposing you have the world coordinates, you need to transform each position by the view and projection. You can build a single combined Projection*View matrix in advance. That means you need one transformation for one corner of each square; the other corners can then be computed cheaply (assuming all squares have equal size and orientation on screen).

Ok, this was how I was thinking to do.

Compute a combined projection matrix and run every vertex through it. Once every vertex is projected, I can easily compute the quad vertex positions.

But isn't this method still expensive to compute? The CPU (even multithreaded) has to project every vertex (doing the matrix math) to find its screen position and then generate the quad coordinates...

Are we talking about the client side (I am)? If not, I really don't know how to do this on the server side, and any help with that would be really appreciated. :)

Actually, I was thinking about computing the vertex projections (and so the quad vertex coordinates) on the server side into a texture ((four quad corners * number of vertices) pixels in a 1D texture).

So I'd stop the draw once the vertex shader has run (because I don't need the fragment shader), then "redraw" the whole, final image using the real shader.

I would like to use the video card's ability to compute projections quickly, store them in a texture, and use that texture right after. This would avoid CPU-GPU transfers: the vertex buffer could be generated into a texture by the GPU in a non-draw pass and used immediately afterwards to actually draw the indices on screen.

Maybe I'm completely wrong, but it would surprise me if this weren't possible...

Anyone have an idea on this?

I would recommend the glm package for easy and quick matrix manipulation, and I don't think a few hundred transformations should be a problem. If you are uncertain, write a simple test program just to measure the time. Note also that much of the drawing will be going on in parallel with your calculations.

Yes, but it's a CG app and it can potentially display a lot of triangles. That's why I would like an efficient and fast way to do this.

So you will draw ~400 boxes on the screen, each with information in it? Sounds quite crowded to me. If you are not showing all boxes at the same time, there are algorithms that can help you minimize the list of necessary calculations.

Yes, if I have to use the "CPU way" for the projection, I could easily cull vertices/quads/indices from the computation. But I would like to see if a "GPU-only way" is possible.

It also depends on what timing requirements you have on the drawing operation.

I don't really have formal "timing requirements" (this is not a video game).

Kopelrativ

01-23-2012, 09:09 AM

I did a test on a fast Intel Core i7 and could execute almost 7 million "glm::mat4 * glm::vec4" operations per second, not using constant variables. This was with "gcc -O0", using one thread.
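A micro-benchmark along these lines is easy to reproduce. This is a sketch with a hand-rolled mat4*vec4 standing in for glm's (the exact rate will differ by machine and compiler flags); the `volatile` sink keeps the optimizer from deleting the loop:

```cpp
#include <chrono>

// Minimal 4x4 matrix * vec4, standing in for glm::mat4 * glm::vec4.
struct V4 { float v[4]; };
struct M4 { float m[4][4]; };

static V4 mul(const M4& m, const V4& x) {
    V4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r.v[i] += m.m[i][j] * x.v[j];
    return r;
}

// Run `n` transforms and return the measured rate in transforms/second.
double benchmarkTransforms(int n) {
    M4 m{};
    for (int i = 0; i < 4; ++i) m.m[i][i] = 1.0f; // identity matrix
    V4 x{{1.0f, 2.0f, 3.0f, 1.0f}};
    volatile float sink = 0.0f; // prevents dead-code elimination
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        x = mul(m, x);
        sink = sink + x.v[0];
    }
    auto t1 = std::chrono::steady_clock::now();
    double sec = std::chrono::duration<double>(t1 - t0).count();
    return sec > 0.0 ? n / sec : 0.0;
}
```

Measuring with your real vertex counts, compiler, and optimization level is the only number that matters, but this gives a quick order-of-magnitude check.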

There is a good chance that CPU computation will suffice for you.

Narann

01-24-2012, 01:09 AM

Oh! Really? o_O

Ok, I will try this.

Thanks! :)
