direction vector of pixels



romeoneverdies
06-09-2002, 10:19 PM
Will we be able to get the direction of pixels in an image, by means of a direction vector or something?

Deiussum
06-10-2002, 04:49 AM
Direction of a pixel? Interesting concept; individual pixels aren't generally thought of as having a direction. You'll have to provide a bit more detail of what it is you mean, I think.

romeoneverdies
06-10-2002, 08:49 PM
Something like the orientation of the pixel... I'm not very sure whether such a thing is available from an image.

Bob
06-11-2002, 01:28 AM
An orientation for a pixel makes no sense to me. A pixel is an infinitely small sample, and does not have any kind of direction.

How do you plan to use this "orientation" once you have it?

romeoneverdies
06-11-2002, 02:59 AM
Actually, I'm trying to develop an algorithm to help me in my work, which deals with detecting a road network from a satellite image. I'm using OpenGL for it, and I'm relatively new to OpenGL.

So I thought that if there is anything like orientation, I could use it to find the neighbouring pixels of the same or similar orientation and thus map the road stretch.

Can you suggest any other way to go about the whole process?

mikael_aronsson
06-11-2002, 04:09 AM
Hi !

Then you would need a heightmap instead; you need to know the height of a pixel compared to the others. I guess you could use the colour of a pixel to indicate the height, of course.

Mikael

Deiussum
06-11-2002, 04:38 AM
Sounds like an interesting problem. I'm not sure that anything in OpenGL can help you with this specific problem. My first thought would be: does the color of the road pixels stand out enough from the other areas of the satellite image that you could use a flood-fill type of algorithm to find all pixels of a similar color?

Robbo
06-11-2002, 04:41 AM
This is a problem in computer vision. What you need to do is find the `average', `minima' and `maxima' pixels in the candidate area. From these you have an `axis' that can give you the angle of orientation of the object. Try it in your mind with an ellipse - it actually works ;)

Gavin
06-12-2002, 12:43 AM
Explain a little more rome....

romeoneverdies
06-12-2002, 04:40 AM
The problem is this:

An IKONOS satellite image of 1 meter resolution is given as input to the system. The user clicks a valid point on the road and the program registers this point. What I planned was to find the color values of that pixel and its depth, then scan through all eight of its immediate neighbouring pixels to see if they are of similar color and depth, because a road is almost uniformly colored. This continues until the road edge or the image edge is reached. All the similar-colored pixels can be stored in an array and later marked with a distinct color to show the road patch.

I brought in the depth concept because in a few cases a pixel on the road and a pixel on the edge (a barrier or road marking) gave me the same RGB values even though they were visibly of different colors. So I thought that maybe, if I also compared depth, two distinctly colored pixels wouldn't be treated as the same.

As I'm just starting out with OpenGL, I am stuck without knowing how to proceed after getting the pixel RGB values. Any help?
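The procedure described here (seed pixel, compare the eight neighbours against a tolerance, stop at the road or image edge) is a classic flood fill, and it runs on the CPU with no OpenGL involved. A minimal sketch, assuming a greyscale byte buffer; the names are illustrative, and an RGB version would compare each channel against its own threshold:

```c
#include <stdlib.h>

/* Sketch of the fill described above: starting from a clicked seed
 * pixel, mark every 8-connected neighbour whose value lies within
 * `tol' of the seed value. `img' is a w*h greyscale buffer, `mark'
 * is a w*h output buffer of 0/1 flags. Illustrative names only. */
void flood_fill(const unsigned char *img, unsigned char *mark,
                int w, int h, int sx, int sy, int tol)
{
    /* Explicit stack of (x, y) pairs; each pixel is pushed at most
     * once because it is marked before pushing. */
    int *stack = malloc(sizeof(int) * 2 * w * h);
    int top = 0;
    unsigned char seed = img[sy * w + sx];

    mark[sy * w + sx] = 1;
    stack[top++] = sx; stack[top++] = sy;
    while (top > 0) {
        int y = stack[--top], x = stack[--top];
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || nx >= w || ny < 0 || ny >= h)
                    continue;                           /* image edge */
                if (mark[ny * w + nx])
                    continue;                           /* already visited */
                if (abs((int)img[ny * w + nx] - (int)seed) > tol)
                    continue;                           /* outside tolerance: road edge */
                mark[ny * w + nx] = 1;
                stack[top++] = nx; stack[top++] = ny;
            }
    }
    free(stack);
}
```

Using an explicit stack rather than recursion matters here: a road region in a large satellite image can contain millions of pixels, which would overflow the call stack of a naive recursive fill.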

Robbo
06-12-2002, 04:54 AM
Thing is, this isn't really an OpenGL problem. You are essentially performing a `flood fill' within some RGB thresholds.

Gavin
06-12-2002, 06:16 AM
Hmmm, nice idea, but I doubt the algorithm is particularly robust. Obviously it depends on what the input images are like. Well, the OpenGL bit you are asking about is simple enough: create a window, use glDrawPixels to draw your satellite picture, then glReadPixels to get RGB values back from the screen (or accessing your data directly may be easier).
Oh, and you would need to convert your satellite picture to a usable OpenGL format first. To draw, you can either change your data in memory, or you can use glBegin(GL_POINTS) with a point size of 1 to colour in a pixel, so long as you have an ortho display the same size as your satellite picture.
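On the `usable OpenGL format' point: 24-bit BMP pixel data is stored bottom-up, in BGR order, with each row padded to a multiple of 4 bytes, while glDrawPixels (with GL_UNPACK_ALIGNMENT set to 1) expects tightly packed data. A sketch of the repack; the function name is illustrative, and it assumes the caller has already skipped past the BMP headers to the pixel data:

```c
#include <stdlib.h>

/* Illustrative repack of 24-bit BMP pixel data into tightly packed
 * RGB suitable for glDrawPixels. BMP rows are bottom-up, BGR, and
 * padded to 4-byte boundaries. The bottom-up row order is kept as-is,
 * since it matches OpenGL's bottom-left raster origin. */
unsigned char *bmp_to_rgb(const unsigned char *bmp, int w, int h)
{
    int stride = (w * 3 + 3) & ~3;          /* padded BMP row length in bytes */
    unsigned char *rgb = malloc(w * h * 3);
    for (int y = 0; y < h; y++) {
        const unsigned char *src = bmp + y * stride;
        unsigned char *dst = rgb + y * w * 3;
        for (int x = 0; x < w; x++) {
            dst[x * 3 + 0] = src[x * 3 + 2];   /* R comes from BMP's B slot */
            dst[x * 3 + 1] = src[x * 3 + 1];   /* G stays put */
            dst[x * 3 + 2] = src[x * 3 + 0];   /* B comes from BMP's R slot */
        }
    }
    return rgb;
}
```

Because the BMP's bottom-up row order already matches OpenGL's bottom-left origin, no vertical flip is needed; only the channel order and row padding change.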

Robbo
06-12-2002, 06:21 AM
Even so, this kind of thing would be best done on the CPU in an `image array', rather than specifically with OpenGL. It would be much quicker that way as well.

I think what he essentially wants to do is find edge boundaries based on some seed point and then retrieve the orientation of the feature. This is a difficult problem in computer vision ANYWAY. A road that is more or less a straight line (at any orientation) is the simplest case. More complicated cases include junctions, bends and roads in urban sprawl (distinguishing between the grey concrete and the grey roads!).

Might I suggest asking about this on comp.graphics.algorithms. There are people there who specialise in this kind of image processing application.

Robbo
06-12-2002, 06:23 AM
Oh, as a starter: as long as you are scanning radially away from your seed point, it is possible to measure and detect gross changes in gradient, and hence to determine different `sections' of a road (i.e. the straight bit, a bendy bit, etc.). But as I say, this is a non-trivial problem.

Gavin
06-12-2002, 06:36 AM
Well, the point is visualisation. Using an interactive piece of software is the best way to test your algorithms, especially in vision. Like I say, it will depend greatly on the image he gets from the satellite. For all we know it could be a black road in a white desert. I doubt speed is an issue, seeing as he hasn't even designed his algorithm yet. I still think there is nothing wrong with what he is doing.

romeoneverdies
06-12-2002, 08:23 PM
Speed is not a very big issue for now because it's just the starting stage... I just need to show my professor that it works to some extent, after which I get some breathing time to make it work faster. Now that the deadline is closing in, it would be great even if I could just manage a flood fill over the road area...

Can Robbo get me a flood fill algorithm from the comp.graphics.algorithms group he mentioned?

romeoneverdies
06-16-2002, 10:52 AM
Can someone get hold of an algorithm or code for me to scan through the neighbouring pixels of a particular chosen pixel, proceeding until the image edge or a particular boundary is reached?

Gavin
06-16-2002, 11:11 AM
Not too sure how you are hoping to complete a project on tracking/image processing if you can't figure out an algorithm to check neighbouring pixels.

x-1, y+1   x, y+1   x+1, y+1
x-1, y     x, y     x+1, y
x-1, y-1   x, y-1   x+1, y-1

if (0, 0) is the bottom left. Obviously do a check to see whether (x, y) is a top, left, bottom or right edge pixel.
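Gavin's table, as a bounds-checked loop; the helper name is illustrative, and a w-by-h image with (0, 0) at the bottom left is assumed:

```c
/* Collect the valid 8-connected neighbours of (x, y) in a w-by-h
 * image, skipping any that fall outside the image, i.e. the edge
 * check Gavin mentions. Returns how many neighbours were written. */
int collect_neighbours(int x, int y, int w, int h, int out[][2])
{
    int n = 0;
    for (int dy = 1; dy >= -1; dy--)        /* top row of the table first */
        for (int dx = -1; dx <= 1; dx++) {
            if (dx == 0 && dy == 0)
                continue;                   /* skip the centre pixel itself */
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || nx >= w || ny < 0 || ny >= h)
                continue;                   /* off the image edge */
            out[n][0] = nx;
            out[n][1] = ny;
            n++;
        }
    return n;
}
```

An interior pixel yields 8 neighbours, an edge pixel 5, and a corner pixel 3, which is exactly the top/left/bottom/right check folded into the loop.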

romeoneverdies
06-18-2002, 01:35 AM
I already have my algorithm for picking the neighbouring pixels WORKING. But the thing is, I've got to include the depth for each pixel in the procedure, because of the problem I mentioned before. That's where I'm stuck.

dorbie
06-18-2002, 02:50 AM
You could encode a 3-component vector per pixel as an RGB triplet, just the same way you apply a bump map. Now, bump maps are typically in tangent space, and even object space maps would present problems under matrix operations. I wonder if you could use fragment dot products to reorient the texture vector fragment into world (or eye) space?

Gavin
06-18-2002, 03:02 AM
Rather than look at depth data, why not try to fix your other problem? You say that you are getting incorrect values back for the RGB of pixels, or rather that two pixels that differ in colour are coming back with the same value. You must be doing something wrong here.

romeoneverdies
06-19-2002, 07:33 PM
But the thing is, I suspect this problem of two pixels with visibly different colors giving the same RGB value may happen because they are at different depths... and that's why I'm trying to find the depth value now.

Is this a reasonable thing to do?

Gavin
06-20-2002, 01:22 AM
It's not unreasonable, but correct the other problem first. You are either accessing pixels incorrectly (how do you find x, y?) or your algorithm is failing somewhere. Remember that pixel coordinates go from 0 to (width-1) and 0 to (height-1); just little things like this may be making a difference.

romeoneverdies
06-24-2002, 04:17 AM
I'm just detecting the mouse click location to get my x, y values, as in regular picking algorithms.

Then I use these x, y values in the statement

glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, (void *)pixels);

to find the RGB values of the particular pixel.

Do you think using a 24-bit BMP may also cause such a problem due to its redundant pixel? ...because one colored pixel may underlie another pixel in a BMP image.
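One thing worth ruling out here, though nothing in the thread confirms it is the cause: most window systems report the mouse position with y = 0 at the top of the window, while glReadPixels counts rows from the bottom. If the click coordinates are passed straight through, the read lands on the wrong row. The helper name below is illustrative:

```c
/* Convert a mouse y coordinate (origin at the TOP of the window, as
 * most window systems report it) to the y coordinate glReadPixels
 * expects (origin at the BOTTOM of the window). A possible cause of
 * reading back the wrong pixel; an assumption, not confirmed above. */
int window_to_gl_y(int mouse_y, int window_height)
{
    return window_height - 1 - mouse_y;
}
```

The read then becomes glReadPixels(x, window_to_gl_y(y, window_height), 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels).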

Robbo
06-24-2002, 05:03 AM
What do you mean `redundant pixel'? There is precisely one pixel at any given x, y position in the frame buffer.

Jeffry J Brickley
06-24-2002, 06:49 AM
After reading this several times, I think I finally understand what you are trying to do, but I am not sure you have the resolution to do it. Satellite imagery at 1 meter has no "depth", no elevation information implied or given; none, nothing. In most cases it is monochrome, just greyscales, but as I understand it, in your case it is full RGB color. But I think you misunderstand how that color works: 24-bit color means one pixel has ALL three values of red, green and blue, at 8 bits each.

To get your "orientation" you could combine the RGB value with an elevation from DTED or other sources. BUT, to get one per pixel you would need a 1 meter elevation grid for the same area, which is rare indeed. The Shuttle mapping mission (as I understood it) generated elevation grid maps at 80 meter posting (to be released at 100 meters or thereabouts). That means one grid square of elevation data covers 80x80 pixels of image data. You could, by calculating the normal of the two tris in the grid, get a direction vector for the face of each tri, but that would still be 3200 pixels per tri, two tris per grid square, and ALL of those would have the same value for a direction vector. DTED level 2 data, available from some commercial, industrial and military sources, is 30 meter posting (approximately). But even that means there are still 450 pixels with the same orientation vector.

If you are trying to do feature extraction, I think you need to do a similar-pixel search: has the value changed by xx percent within the same "grey-space" (luminance value), and has the color not drastically changed (still a black-top paved road, or a gravel road)? This too breaks down as you hit shadows on gravel roads, reflections on black-top, and paving material changes. I am NOT the expert to ask on that type of selection, though I have done some VERY minor work on it; however, I don't think adding a direction vector to your pixel values is going to gain you much, as there is nowhere NEAR the resolution available in elevation that there is in imagery.

Hope this helps!