How can I iterate with a loop over a sampler2D?

I have some data encoded in a 2k-by-2k floating point texture. The data are longitude, latitude, time, and date, stored as R, G, B, A. They are all normalized, but for now that is not a problem; I can de-normalize them later if I want to.

What I need now is to iterate through the whole texture and find which longitude/latitude should fall in a given fragment coordinate. I assume that the whole atlas has normalized coordinates and maps onto the whole OpenGL context. Besides coordinates I will also filter the data by time and date, but that is just an if condition and easy to do. Because the pixel coordinates I have will not map exactly onto that coordinate, I will use a small delta value to work around that for now, and I will use that delta value to precompute other points that are close to that coordinate.

Now I get driver crashes on my iGPU (it should be out of memory or something similar), even if I just add something inside the two nested for loops, or even if I use a discard.

The code I have now is this.
NOTE: f_time is the filter for the time; for now I have a slider so that I have some interaction with the values.


precision mediump float;
precision mediump int;

const int maxTextureSize = 2048;
varying vec2 v_texCoord;
uniform sampler2D u_texture;

uniform float f_time;
uniform ivec2 textureDimensions;

void main(void) {
	float delta = 0.001; // bigger delta for now just to make it work, then we tune it

	// compute the size of 1 pixel in texture coordinates (assumes a square texture)
	vec2 onePixel = vec2(1.0, 1.0) / float(textureDimensions.x);
	vec2 position = gl_FragCoord.xy / float(textureDimensions.x);

	vec4 color = texture2D(u_texture, v_texCoord);
	vec4 outColor = vec4(0.0);

	float dist_x = distance(color.r, gl_FragCoord.x);
	float dist_y = distance(color.g, gl_FragCoord.y);
	//float dist_x = distance(color.g, gl_PointCoord.s);
	//float dist_y = distance(color.b, gl_PointCoord.t);

	for (int i = 0; i < maxTextureSize; i++) {
		if (i >= textureDimensions.x) {
			break;
		}

		for (int j = 0; j < maxTextureSize; j++) {
			if (j >= textureDimensions.y) {
				break;
			}

			// This is where I am stuck: how to get the texture coordinate
			// and test it against the fragment coordinate (the precomputation)
			vec4 pixel = texture2D(u_texture, vec2(i, j));
			if (pixel.r > f_time) {
				outColor = vec4(1.0, 1.0, 1.0, 1.0);
				// for now just break; no delta calculation yet to sum this point
				// with others so that we get an approximation of the other
				// points falling into this pixel
				break;
			}
		}
	}

	// this works
	if (color.t > f_time) {
		//gl_FragColor = color; //vec4(1.0, 1.0, 1.0, 1.0);
	}

	gl_FragColor = outColor;
}

What you need now is to convert the data from what you have to something resembling a sane format.

From what you describe, a texture isn’t a suitable structure for your data and a fragment shader isn’t a suitable execution environment for your algorithm.

That is a requirement for the project.

I have to store the data in a sparse texture, not a data cube, and use a fragment shader to iterate through the texture and draw points on the screen. I really don’t know what to do now if this is not possible ;(

EDIT: If you have any suggestions on how to proceed using this data structure, I would appreciate that a lot.

[QUOTE=lummxx;1280500]That is a requirement for the project.

I have to store the data in a sparse texture, not a data cube, and use a fragment shader to iterate through the texture and draw points on the screen. I really don’t know what to do now if this is not possible ;(

EDIT: If you have any suggestions on how to proceed using this data structure, I would appreciate that a lot.[/QUOTE]

Well, the obvious thing to do would be to sort the data so that you can find points close to a given lat/lon position without having to iterate over the entire texture. Having 4 million iterations in a fragment shader isn’t something which should be expected to work.

So if you have 4m points, the first row should hold the 2k points with the smallest latitude, the next row the next 2k points, and so on. Each row should be sorted in order of longitude.

Then to find points within a given distance of a particular lat/lon coordinate, you can perform a binary search in the Y direction until you find a row containing points with approximately the correct latitude, then start examining the rows above and below. If a row contains any point whose latitude is too high or too low, there’s no need to go any farther in that direction because all points in the next row will have an even greater difference in latitude. And for each row, you perform a binary search in the X direction until you find the point which is closest in longitude, then iterate to the left and right until the points go out of range.
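To make the per-row step concrete, here is a CPU-side JavaScript sketch (the function name is mine, and `row` stands in for one row of longitudes read back from the texture) of binary-searching for the closest longitude and then expanding left and right until points go out of range:

```javascript
// Find the indices of all values in a sorted array within `delta` of
// `target`: binary-search for the insertion point, then expand outward.
function findNearby(row, target, delta) {
  let lo = 0, hi = row.length - 1;
  // Binary search for the first element >= target.
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (row[mid] < target) lo = mid + 1;
    else hi = mid;
  }
  // Expand to the right and left while values stay within range.
  const hits = [];
  for (let i = lo; i < row.length && Math.abs(row[i] - target) <= delta; i++) hits.push(i);
  for (let i = lo - 1; i >= 0 && Math.abs(row[i] - target) <= delta; i--) hits.push(i);
  return hits.sort((a, b) => a - b);
}
```

The same shape carries over to a shader, except the `while` must become a `for` with a constant bound (see the discussion further down).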

For sorting data using a GPU (either using an actual compute shader, or a “poor man’s compute shader”, i.e. render-to-texture and a fragment shader), look up “bitonic sort”. It’s an O(n·log²(n)) static sort (meaning that the pairs of elements on which compare-and-swap operations are performed are fixed and don’t depend upon the data, which is desirable for a GPU as it allows for parallelism without the need for locking). Odd–even mergesort has similar properties, but bitonic sort seems to do better in practice.
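For reference, here is a CPU-side sketch of the bitonic compare-and-swap network in JavaScript (the function name and in-place interface are my own; it assumes a power-of-two length, and on the GPU each innermost pass would be one render-to-texture step):

```javascript
// Bitonic sort as a static network: the (i, l) compare-and-swap pairs
// depend only on the array length, never on the data, which is what
// makes every pass safe to run in parallel on a GPU.
function bitonicSort(a) {
  const n = a.length; // must be a power of two
  for (let k = 2; k <= n; k <<= 1) {        // size of the bitonic sequences
    for (let j = k >> 1; j > 0; j >>= 1) {  // compare distance within a pass
      for (let i = 0; i < n; i++) {         // this loop is the parallel pass
        const l = i ^ j;
        if (l > i) {
          const up = (i & k) === 0;         // sort direction for this block
          if ((up && a[i] > a[l]) || (!up && a[i] < a[l])) {
            const t = a[i]; a[i] = a[l]; a[l] = t;
          }
        }
      }
    }
  }
  return a;
}
```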

One issue I foresee is that if you’re limited to 8-bit textures, then you only have 256 possible latitude or longitude values. With 4m points, on average you will have 64 points with any given lat/lon pair, and in some cases that figure could be much higher. Even 64 iterations may be too many for some implementations.

Thanks a lot for the help.

I thought of doing more or less what you said: doing binary search, but a bit differently. I would go to the end of each X, or just check the first pixel of X, and see whether the values fall in that range. The data is now sorted and I am using a 16-bit floating point texture, and I have normalized the data because I have negative lat/lon values. I also normalized the data because I thought it would help when I want to check whether a lat/lon lies on a particular fragment coordinate, since both are in [0,1] (I thought of doing something like a 1-to-1 mapping).

For searching I also found this: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter27.html, but since I have X (lat) sorted I will skip that and try to make it work in the simplest way possible for now. Also, I can sort the data because I have the Java code and the SQL queries to do whatever I want with the data before I write it into binary and load it into the floating point texture.

I have one problem that I cannot figure out now: how do I iterate through the texture? (PS: I had a very, very bad day and for some reason I can’t see what I am doing wrong.)

EDIT: If I see that I have a color different from black, then I will discard that pixel, because those points are too close and can be merged into one pixel due to the resolution constraints. That simplifies the 4-million-points problem mentioned above. Also, I am now using a smaller texture, 256x256. If the solution works for the small one it should also work for the bigger ones, if not for the one with 4 million points. In short, I need to map the lat/lon to the fragment coordinates that are approximately the same; this can also make the search shorter if I sort the data, because once I pass one point I won’t need to go back, since the values will only get smaller.

My loops for iterating through the texture are:


vec2 onePixel = vec2(1.0, 1.0) / float(textureDimensions.x); // size of a pixel
for (int i = 0; i < maxTextureSize; i++) {
	if (i >= textureDimensions.x) {
		break;
	}

	for (int j = 0; j < maxTextureSize; j++) {
		if (j >= textureDimensions.y) {
			break;
		}

		// This is where I am stuck: how to get the texture coordinate
		// and test it against the fragment coordinate (the precomputation)
		//vec4 pixel_onePoint = texture2D(u_texture, vec2(onePixel.x * float(i), onePixel.y * float(j)));

		vec4 pixel = texture2D(u_texture, vec2(i, j));
		float dist_x = distance(pixel.r, gl_FragCoord.x);

		if (pixel.r > f_time) { // I have a range slider so I can interact with the condition, [0,255]; later this will have 1h precision, [0,23] hours
			outColor = vec4(1.0, 1.0, 1.0, 1.0);
		}
	}
}

The commented line is correct, i.e. you need to divide the indices by the size of the texture. To avoid rounding issues, I suggest adding 0.5 after converting to float but before multiplying by onePixel.

So far this is my shader. I am doing something wrong when I look up values in the texture.


precision highp float;
precision highp int;

const int maxTextureSize = 2048;
const float deltaOffset = 0.1; // big for now, then we increase the precision

varying vec2 v_texCoord;
uniform sampler2D u_texture;

uniform float f_time;
uniform ivec2 textureDimensions;
// R and G are the X and Y coordinates
// B is time

vec4 outColor = vec4(0.0);

void main(void) {
	// compute the size of 1 pixel in texture coordinates
	vec2 oneTextPixel = vec2(1.0, 1.0) / (float(textureDimensions.x)/*+0.5*/); // size of a pixel
	// normalize frag coordinates
	vec2 normFragCoord = gl_FragCoord.st / (float(textureDimensions.x)/*+0.5*/);
	//vec2 coord = (floor(gl_FragCoord.xy) + 0.5) * float(textureDimensions.x);

	//vec4 color = texture2D(u_texture, v_texCoord);

	for (int i = 0; i < maxTextureSize; i++) {
		if (i >= textureDimensions.x) {
			break;
		}

		for (int j = 0; j < maxTextureSize; j++) {
			if (j >= textureDimensions.y) {
				break;
			}

			vec4 pixel_onePoint = texture2D(u_texture, vec2(oneTextPixel.x * float(i), oneTextPixel.y * float(j)));
			float dist_x = distance(pixel_onePoint.r, normFragCoord.x);
			float dist_y = distance(pixel_onePoint.g, normFragCoord.y);

			//float xOffset = (oneTextPixel.x + 0.5) * float(i);
			//float yOffset = oneTextPixel.y * float(j);

			//vec4 pixel = texture2D(u_texture, vec2(xOffset, yOffset));
			//vec4 pixel = texture2D(u_texture, vec2(i, j));
			//float deltaX = pixel.r - (gl_FragCoord.x - 1.0);
			//float deltaY = pixel.g - (gl_FragCoord.y - 1.0);

			if (dist_x < 0.5) {
				outColor = vec4(1.0); //color;
			}
			if (pixel_onePoint.r < normFragCoord.y) {
				outColor = vec4(1.0); //color;
				break;
			}
		}
	}

	/*
	float dist_x = distance(color.r, normFragCoord.x);
	float dist_y = distance(color.g, normFragCoord.y);

	if (dist_x < 0.01) {
		outColor = color;
	}

	if (dist_y < 0.01) {
		outColor = color;
	}
	*/

	// this is working, just as a test to see how the frag coordinate behaves
	// here and whether I'm doing something wrong
	//if (gl_FragCoord.x < float(textureDimensions.x) - 100.0) {
	/*if (normFragCoord.x < 0.6) {
		if (color.r < normFragCoord.x) {
			//outColor = color; //vec4(1.0, 1.0, 1.0, 1.0);
		}

		if (color.g < normFragCoord.y) {
			//outColor = color; //vec4(1.0, 1.0, 1.0, 1.0);
		}
	}*/

	gl_FragColor = outColor;
}

OK, change of idea. Now I have to make a 1D texture, a small one with 256 pixels for now, and do binary search on that texture. Now I have to see if I can do that, and for each pixel just put some color based on how many values I have.

I will write some code and put it here in case I can’t figure something out.

OK, so the idea now is to try and do binary search on 1D texture and return the count of the items that fall within that range.

I will count the items like this:
With binary search I will find the item that is closest (if I can’t find the exact value) to the low and the high value, where low and high are pixel positions in the 1D texture. Then I will do high - low to get the count in that region. I will need 2 binary searches for this: one to find the low offset and one to find the high offset.
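The counting scheme described above can be sketched on the CPU side like this (function names are mine; `lowerBound` finds the first index not below the low key, `upperBound` the first index above the high key, and their difference is the count):

```javascript
// First index whose value is >= key in a sorted array.
function lowerBound(a, key) {
  let lo = 0, hi = a.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (a[mid] < key) lo = mid + 1; else hi = mid;
  }
  return lo;
}

// First index whose value is > key in a sorted array.
function upperBound(a, key) {
  let lo = 0, hi = a.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (a[mid] <= key) lo = mid + 1; else hi = mid;
  }
  return lo;
}

// Number of sorted values falling in the closed range [low, high].
function countInRange(a, low, high) {
  return upperBound(a, high) - lowerBound(a, low);
}
```

This also handles the "value not found" case naturally: both searches return an insertion position rather than failing, so the difference is still the correct count.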

I implemented this in Java fairly easily, but in the fragment shader it is just too difficult for me right now, or I am just missing something small.

For now I will map the whole 1D texture to the canvas and just see if I can make it work; without dynamic loops it is very challenging for me.

I am sorry if I am asking a lot.
I have shared a link to a picture the professor and I drew today: https://www.dropbox.com/s/32slnni1tg2wjz4/DSC_0214.JPG?dl=0
The code that I have now is this



precision highp float;
precision highp int;

const int maxTextureSize = 32;    // the texture should not be bigger than 32x32
const int maxTextureLength = 512; // the X axis of the texture should not be longer than 512

uniform vec2 textureDimensions;
uniform vec2 canvasDimensions;
uniform vec2 partitionData;

varying vec2 v_texCoord;
uniform sampler2D u_texture;

uniform float f_time;

float binarySearch2(float keyLow, float keyHigh) {
	// this gives us the value in the texture at that pixel
	float pixel = texture2D(u_texture, vec2(keyLow, 1)).r;

	// RED is your X axis and that is all we need for now
	//high = texture2D(u_texture, vec2(low, 1)).r;

	// I need to figure out how to do binary search with constant for loops
	/*
	for (int i = 0; i < maxTextureLength; i++) {
		if (i > int(high)) {
			break;
		}

		float middle = (low - high) / 2.0;

		if (texture2D(texture, vec2(middle, 1)).r == key) {
			//return true;
		} else if (texture2D(texture, vec2(middle, 1)).r > key) {
			high = middle - oneTextPixel;
		} else {
			low = middle + oneTextPixel;
		}
		//return false;
	}

	// return the difference
	*/
	// my Java implementation
	/*public static boolean binarySearch(int[] array, int value) {
		int start = 0;
		int end = array.length - 1;

		for (int i = 0; i < array.length; i++) {
			int middle = (end - start) / 2;
			if (array[i] == value) {
				return true;
			} else if (array[middle] > value) {
				end = middle - 1;
			} else {
				start = middle + 1;
			}
		}
		return false;
	}*/
	return 0.0;
}

float binarySearch(float key) {
	float keyL = key + (key / 2.0);
	float keyH = key - (key / 2.0);

	float b2 = binarySearch2(keyL, keyH);

	return 0.0;
}

void main(void) {
	vec4 outColor = vec4(0.0);
	// compute the size of 1 pixel in texture coordinates; width and height are
	// always the same, so the texel size is the same for both axes
	float oneTextPixel = 1.0 / float(textureDimensions.x)/*+0.5*/; // size of a pixel
	// normalize frag coordinates
	vec2 normFragCoord = gl_FragCoord.st / (float(canvasDimensions.x)/*+0.5*/);
	float oneFragPixel = 1.0 / float(canvasDimensions.x)/*+0.5*/; // size of a pixel

	// fixed values for now (steps of 1/256)
	for (float i = 0.00390625; i <= 1.0; i += 0.00390625) {
		binarySearch(i);
	}
	//vec4 color = texture2D(u_texture, v_texCoord);

	float nf_time = f_time / canvasDimensions.x;
	float key = gl_FragCoord.s / canvasDimensions.x;

	if (key < nf_time) {
		outColor = vec4(1.0);
	}
	/*if (gl_FragCoord.x < f_time) {
		outColor = color;
	}*/

	gl_FragColor = outColor; //+ texture2D(u_texture, v_texCoord);
}

I was working on this problem, but it seems to make no sense. Here is the problem again, described better.

I have been asked by a professor to try to do binary search in WebGL using a fragment shader. His class is a research class, which is why the problems are hard, and I took the hardest one :frowning:

The idea comes from the imMens paper, the fastest system for interactive visualization, which uses data cubes. The whole idea is to use a sparse texture to store the data instead of data cubes, but things didn’t go well so far because the professor made me change the whole project 4 times now and I have 1 week until the end of the semester. I made the fragment shader work and scan incrementally through 4.2 million points, but it is just super slow and crashes the GPU driver (my 980m, and my R9 290X at home).

Now the new requirement is to try to do binary search in a fragment shader in WebGL, without dynamic loops and without a while loop. This is what I cannot program.

I have to use a 1D texture to store data sorted by longitude, which is saved in the red channel. Then for each pixel on the screen I have to do binary search and find out how many pixels from the 1D texture have a longitude that lands on that screen pixel.

Let’s say we have a 4x4 screen, and let’s take only the first row. Since the coordinates are normalized, we have the ranges 0-0.25 up to 0.75-1.0. What I need to do is find all the longitudes in the 1D texture that fall between 0 and 0.25. So he asked me to do binary search, and if I can’t find exactly 0 and 0.25 I find the closest ones, then I return their positions in the texture and calculate the offset between them.

I hope this is clearer because even the professor had problems coming up with this.

A binary search takes at most ceil(log2(N)) iterations, where N is the number of elements in the list. E.g. for a list of up to 256 elements, a binary search requires at most 8 steps. Thus you can replace the while loop with a for loop whose upper bound is based upon the size of the list, and be certain that you’ll have finished the search before the loop terminates.
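As a CPU-side illustration (the function name is hypothetical), the same bounded-iteration shape looks like this in JavaScript; a GLSL version would use a constant upper bound on the for loop in exactly the same way:

```javascript
// Binary search with a fixed iteration count instead of a while loop.
// For a list of n elements, ceil(log2(n)) halvings always narrow the
// range to a single candidate, which a final check then confirms.
function binarySearchFixed(a, key) {
  const steps = Math.ceil(Math.log2(a.length));
  let lo = 0, hi = a.length - 1;
  for (let s = 0; s < steps; s++) {  // constant trip count, no while
    const mid = (lo + hi) >> 1;
    if (a[mid] === key) return mid;
    if (a[mid] < key) lo = mid + 1; else hi = mid - 1;
  }
  return a[lo] === key ? lo : -1;    // final check after the loop
}
```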

What are you going to do if the answer is “all of them”?

Even though you can perform a binary search in O(log(N)) time, unless you can place an upper bound upon the number of results, actually doing anything with those results requires a loop whose upper bound is the size of the list. You can “break” out of the loop after processing all found elements, but that may just result in the GPU performing the remaining iterations with stores disabled.

Ultimately, WebGL is largely unsuitable for this sort of GPGPU computation due to the prohibition on unbounded loops.

Yes, binary search is efficient for that. I can make it work in Java, but I will try to make it work in the fragment shader tonight after I write my report. This was a very challenging problem that will end up without a solution (at least for now).

[QUOTE=GClements;1280543]

Ultimately, WebGL is largely unsuitable for this sort of GPGPU computation due to the prohibition on unbounded loops.[/QUOTE]

Yes, that is what I tried to tell the professor via email, but he is still convinced that it might work. I cannot push myself now during finals week to try to solve this problem.

I will work tonight and see with what i can come up and post any findings here including the fragment shader.

Thanks for help guys :smiley:

I believe I have a working version or a partial working version. I still need to figure out 2 things.

The first is how to do correct indexing with floating point and make sure I am at the correct texture position after dividing. Probably I have to use modulo or some maths trick to make sure I am iterating over the correct normalized texture coordinates while searching through the texture.

The second issue is: if the value is not found, how do I return the closest index, so that I can do the count later?

Thanks for help and feedback. :smiley:

The fragment shader is below.
It’s 2 am now. I have to go and take some rest from this and see what else I can come up with.


precision highp float;
precision highp int;

const int maxTextureSize = 32;    // the texture should not be bigger than 32x32
const int maxTextureLength = 512; // the X axis of the texture should not be longer than 512
//const int maxBinarySearchLoop = int(ceil(log2(float(maxTextureLength)))); //maxTextureLength;

uniform vec2 textureDimensions;
uniform vec2 canvasDimensions;
uniform vec2 partitionData;

varying vec2 v_texCoord;
uniform sampler2D u_texture;

uniform float f_time;

float binarySearch(sampler2D data, float key) {
	float oneTextPixel = 1.0 / textureDimensions.x; // size of a pixel
	float keyL = 0.0;
	float keyH = 1.0; // texture coordinates are normalized, so the maximum is 1.0
	float keyM = (keyH + keyL) / 2.0;

	int maxBinarySearchLoop = int(ceil(log2(float(maxTextureLength))));

	// I don't know how to use non-constants in loop conditions in WebGL,
	// so the bound is a constant and I break out early
	for (int i = 0; i < maxTextureLength; i++) {
		if (i > maxBinarySearchLoop) {
			break;
		} else {
			if (texture2D(data, vec2(keyM, 1)).r < key) {
				keyL = keyM + oneTextPixel;
			} else if (texture2D(data, vec2(keyM, 1)).r == key) {
				return keyM;
			} else {
				keyH = keyM - oneTextPixel;
			}
			keyM = (keyH + keyL) / 2.0;
			// termination condition, for now, since I cannot use uniforms ;(
			/*if (keyL > keyH) {
				break;
			}*/
		}
	}
	// I return the position of the value in the texture, or
	// if it is not found I return the closest position
	return keyM;
}

void main(void) {
	vec4 outColor = vec4(0.0);
	vec4 color = texture2D(u_texture, v_texCoord);
	// normalize frag coordinates
	vec2 normFragCoord = gl_FragCoord.st / (float(canvasDimensions.x)/*+0.5*/);
	//float oneFragPixel = 1.0 / float(canvasDimensions.x)/*+0.5*/; // size of a pixel

	// this adds the offset, e.g. if the canvas is 4 pixels wide then the
	// first pixel centre will be 0.125 and its range is 0.0 to 0.25
	float keyL = normFragCoord.x - (normFragCoord.x / 2.0);
	float keyH = normFragCoord.x + (normFragCoord.x / 2.0);

	float b1 = binarySearch(u_texture, keyL);
	float b2 = binarySearch(u_texture, keyH);
	// the difference between the two positions is the count
	float br = b2 - b1;

	gl_FragColor = vec4(br);
}

I also got a suggestion:

To retrieve a texel with a given integer index (starting at zero), add 0.5 to the index and divide by the size of the texture. That gives you the normalised texture coordinate for the centre of the texel.
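In host-code terms the texel-centre rule is just (hypothetical helper name):

```javascript
// Normalized texture coordinate of the centre of texel i in a texture
// that is `size` texels wide: offset by half a texel, then divide.
function texelCenter(i, size) {
  return (i + 0.5) / size;
}
```

Sampling at the texel centre rather than its edge avoids the rounding issues mentioned earlier, where a coordinate on the boundary between two texels can land in either one.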

A binary search ends when you’ve narrowed the range down to a single element. So you return its index or its index plus or minus one, depending upon whether it’s too high or too low, whether you’re searching for the upper or lower bound, and whether the data is sorted in ascending or descending order.