OpenGL based on MFC, high GPU usage by DWM

Hi everyone:
I want to draw at least 20 million point sprites in an MFC window. I create a hidden window and draw these points on it.

In the geometry shader, I expand each point into a quad and rotate these quads.
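Roughly like this (a simplified sketch of what I do; the names and uniforms are placeholders):

[code]
// Geometry shader source (simplified): expands each input point into a
// rotated, screen-aligned quad. The vertex shader outputs view-space
// positions in gl_Position; projection happens here.
const char* kPointToQuadGS = R"(#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4  uProj;     // projection matrix
uniform float uHalfSize; // half of the quad edge length
uniform float uAngle;    // rotation angle of the quad

void main()
{
    vec4 center = gl_in[0].gl_Position;   // view-space center of the point
    float c = cos(uAngle), s = sin(uAngle);
    mat2 rot = mat2(c, s, -s, c);         // 2D rotation matrix
    vec2 corners[4] = vec2[4](vec2(-1,-1), vec2(1,-1), vec2(-1,1), vec2(1,1));
    for (int i = 0; i < 4; ++i)
    {
        vec2 offset = rot * (corners[i] * uHalfSize);
        gl_Position = uProj * (center + vec4(offset, 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
})";
[/code]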

I tried to render these points to a framebuffer, but it didn't help: the DWM still has high [b]GPU[/b] usage (about 9%, almost the same as my program), which makes it hard to drag windows around.
As the number of points grows, even the cursor starts to lag.

PS: Someone told me that GDI conflicts with OpenGL, but I don't know how to register a window without GDI.

There are a lot of peculiar facts in your post. But let's start from the beginning.

Why do you want to draw 20M point sprites when a 1080p full-HD screen has just 2M pixels? OK, maybe you have a 4K or higher-resolution screen…
Still, this is far too many sprites. In any case, you could benefit a lot from some kind of culling technique, for example the sketch below.
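A very rough CPU-side frustum pre-cull might look like this (just a sketch; I am assuming GLM for the math, and the names are made up):

[code]
#include <cmath>
#include <vector>
#include <glm/glm.hpp>  // assuming GLM for vectors and matrices

// Keep only the points whose clip-space position lies inside the frustum,
// with a small margin so sprites on the border are not dropped.
std::vector<glm::vec3> CullPoints(const std::vector<glm::vec3>& points,
                                  const glm::mat4& viewProj,
                                  float margin = 1.05f)
{
    std::vector<glm::vec3> visible;
    visible.reserve(points.size());
    for (const glm::vec3& p : points)
    {
        glm::vec4 clip = viewProj * glm::vec4(p, 1.0f);
        float limit = clip.w * margin;
        if (clip.w > 0.0f &&
            std::abs(clip.x) <= limit &&
            std::abs(clip.y) <= limit &&
            clip.z >= -limit && clip.z <= limit)
        {
            visible.push_back(p);
        }
    }
    return visible;
}
[/code]

With 20M points you would rather do this on the GPU (e.g. a compute shader or transform feedback), but the idea is the same.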

The geometry shader (GS) is very expensive! Even rendering 20M vertices is a problem for real-time display; 80M vertices is a 4× bigger problem. But using a GS to emit 80M vertices is … hm … literally meaningless (at least for real-time graphics).
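If you really need a quad per point, a common GS-free alternative (my suggestion, not something from your post) is to draw the point buffer instanced and expand the quad in the vertex shader; a sketch:

[code]
// Vertex shader (raw string, placeholder names): expands a per-instance
// point into a screen-aligned quad without any geometry shader.
const char* kInstancedQuadVS = R"(#version 330 core
layout(location = 1) in vec3 aCenter;  // per-instance point position
uniform mat4  uView;
uniform mat4  uProj;
uniform float uHalfSize;
void main()
{
    // Corner from gl_VertexID: (-1,-1), (1,-1), (-1,1), (1,1)
    vec2 corner = vec2((gl_VertexID & 1) * 2 - 1, (gl_VertexID >> 1) * 2 - 1);
    vec4 centerVS = uView * vec4(aCenter, 1.0);  // move to view space
    gl_Position = uProj * (centerVS + vec4(corner * uHalfSize, 0.0, 0.0));
})";

// Draw side: four strip vertices per instance, one instance per point.
void DrawPointQuads(GLuint pointVbo, GLsizei pointCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, pointVbo);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glVertexAttribDivisor(1, 1);  // attribute 1 advances once per instance
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, pointCount);
}
[/code]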

How do you measure the GPU usage of the DWM? You could see how long a time-slot was occupied by the DWM while communicating with the graphics driver, but not its GPU usage. This is a very peculiar fact.
By the way, 9% GPU usage means the GPU is almost idle. If it is the real usage, there is a severe bottleneck somewhere in the system. A usage of nearly 100% should be achieved if your application is well designed and there are no bottlenecks. Once again, you cannot measure the GPU usage of a single application.
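What you can measure precisely is how long your own rendering takes on the GPU, using timer queries (core since GL 3.3). A minimal sketch, assuming GLEW for the entry points:

[code]
#include <GL/glew.h>  // assuming GLEW loads the GL 3.3 entry points

// Returns how long the given draw function took on the GPU, in milliseconds.
// Reading the result blocks until the GPU is done, so use it for profiling.
template <typename DrawFn>
double MeasureGpuTimeMs(DrawFn draw)
{
    GLuint query = 0;
    glGenQueries(1, &query);
    glBeginQuery(GL_TIME_ELAPSED, query);
    draw();                               // issue the draw calls to be timed
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 ns = 0;                      // blocks until the GPU has finished
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
    glDeleteQueries(1, &query);
    return ns / 1.0e6;
}
[/code]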

How come? Maybe you were told that there is a problem if you mix GDI and OpenGL drawing in the same window. Yes, there is a problem, because those two kinds of drawing are asynchronous with respect to each other, but it is not a conflict.
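As for your PS: you cannot really register a window "without GDI" on Windows (the pixel format is still set through a GDI DC once at creation), but the usual MFC setup is to give the GL window a private DC and keep GDI drawing out of its client area. A sketch (the class name is a placeholder):

[code]
// In a CView-derived class (CGLView is a placeholder name).
// Remember to add ON_WM_ERASEBKGND() to the message map.
BOOL CGLView::PreCreateWindow(CREATESTRUCT& cs)
{
    // CS_OWNDC gives the window one private DC for its whole lifetime,
    // which is the DC the OpenGL context should be created on.
    cs.lpszClass = AfxRegisterWndClass(CS_OWNDC | CS_HREDRAW | CS_VREDRAW);
    return CView::PreCreateWindow(cs);
}

BOOL CGLView::OnEraseBkgnd(CDC* /*pDC*/)
{
    return TRUE;  // skip the GDI background erase; OpenGL owns the client area
}
[/code]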

[QUOTE=Aleksandar;1279453]There are a lot of peculiar facts in your post. But let's start from the beginning. …[/QUOTE]

Thank you for your answer. I think I implemented it the wrong way.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.