I am trying to plot scientific data using OpenGL. Typical data vector sizes are in the range of 5000-16000 points.
For the first draft, I used the VTF (vertex texture fetch) technique described in this tutorial (https://en.wikibooks.org/wiki/OpenGL...GL_Tutorial_02). This works really well: it lets me do analysis within the vertex shader itself, such as keeping extreme values when the graph is displayed (required -- for example, if you display the graph in a 200px window, you do not want to let OpenGL decide whether a data point gets discarded -- more details below). What I really like here is that this processing is done on the GPU instead of the CPU.
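For context, here is a minimal sketch of the kind of vertex shader the VTF approach uses; the attribute/uniform names are illustrative, not the tutorial's exact code:

```cpp
// Sketch of a VTF-style vertex shader (GLSL stored as a C++ string, as the
// tutorials do). The Y values live in a texture and are fetched per vertex.
const char* vtf_vertex_shader = R"glsl(
    attribute float coord_x;        // normalized X position in [0, 1]
    uniform sampler2D data_texture; // Y samples stored in the red channel
    uniform mat4 transform;

    void main() {
        // Vertex texture fetch: this only works when
        // GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0.
        float y = texture2DLod(data_texture, vec2(coord_x, 0.0), 0.0).r;
        gl_Position = transform * vec4(coord_x, y, 0.0, 1.0);
    }
)glsl";
```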
Now I am porting this test program to OpenGL ES. Unfortunately, this technique won't work at all there, because GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS is 0 on the embedded platform (Tegra), i.e. the vertex shader cannot sample textures. Currently, I see two alternatives:
- Pre-process the data on the CPU in order to keep only the significant points before uploading XY coordinates to the GPU (like tutorial 1: https://en.wikibooks.org/wiki/OpenGL...GL_Tutorial_01) -- see the sketch after this list
- Upload ALL XY coordinates (like tutorial 01 above) to the GPU -- but then, how can I make sure that the rasterizer won't skip my data points?
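For the first alternative, the pre-processing I have in mind is a per-pixel-column min/max reduction on the CPU. Roughly something like this (the function and variable names are mine, just to illustrate the idea):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Reduce `samples` to at most 2 points per horizontal pixel: the minimum and
// the maximum of every bucket. Extreme values (spikes) are never dropped.
// Returns interleaved XY pairs ready for glBufferData().
std::vector<float> decimate_min_max(const std::vector<float>& samples,
                                    std::size_t pixel_width)
{
    std::vector<float> xy;
    if (samples.empty() || pixel_width == 0)
        return xy;
    xy.reserve(pixel_width * 4); // two (x, y) pairs per pixel column

    for (std::size_t px = 0; px < pixel_width; ++px) {
        // Index range of the samples that fall into this pixel column.
        std::size_t begin = px * samples.size() / pixel_width;
        std::size_t end   = (px + 1) * samples.size() / pixel_width;
        if (begin == end)
            continue; // more pixels than samples: nothing in this column

        float lo = samples[begin];
        float hi = samples[begin];
        for (std::size_t i = begin + 1; i < end; ++i) {
            lo = std::min(lo, samples[i]);
            hi = std::max(hi, samples[i]);
        }

        // X normalized to [0, 1]; emit min then max so the line sweeps the
        // full vertical extent of the column.
        float x = (px + 0.5f) / pixel_width;
        xy.push_back(x); xy.push_back(lo);
        xy.push_back(x); xy.push_back(hi);
    }
    return xy;
}
```

The appeal of keeping only min and max per column is that the upload stays tiny (at most two vertices per pixel of window width) while spikes are guaranteed to survive -- but it moves the work back onto the CPU, which is exactly what I was trying to avoid.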
Typical use case: let's say you have 5000 data points. All of them have a value of 0, except one in the middle which has a value of 1. When displayed in a 200px window, a vertical line in the middle of the graph must always be visible.
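With a reduction like the decimate_min_max() sketch above (assuming that hypothetical helper), this spike cannot be lost, because the bucket containing it keeps its maximum:

```cpp
#include <cstddef>
#include <vector>

// decimate_min_max() is the hypothetical helper sketched above.
std::vector<float> decimate_min_max(const std::vector<float>& samples,
                                    std::size_t pixel_width);

int main()
{
    // 5000 samples, all 0 except a single spike of 1 in the middle.
    std::vector<float> samples(5000, 0.0f);
    samples[2500] = 1.0f;

    // Reduce to a 200px window: the bucket containing index 2500 keeps
    // max = 1, so the vertical line stays visible after decimation.
    std::vector<float> xy = decimate_min_max(samples, 200);

    // xy now holds 200 columns x 2 points = 400 (x, y) pairs, ready for
    // glBufferData(GL_ARRAY_BUFFER, ...).
}
```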
Thanks for your input!