Detecting edges is a common image processing problem. For example, digital cameras often feature face detection. Some robotic competitions require the robots to find a ball using a digital camera, so the robot needs to be able to "see" a ball. One way to look for an edge in a picture is to compare the color of the current pixel with the color of the pixel in the next column to the right. If the colors differ by more than some specified amount, an edge has been detected and the current pixel color should be set to black. Otherwise, the current pixel is not part of an edge and its color should be set to white (Figure 1).
The following method implements this simple algorithm. Notice that the inner for loop stops one column before the last column. That is because the loop compares the current pixel's color with the color of the pixel in the next column; if the loop continued to the last column, this comparison would cause an out-of-bounds error.
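The sketch below shows one way such a method could look. It is written as a method of the lab's Picture class and assumes the Pixel helpers used throughout the Picture Lab (getPixels2D, getColor, setColor, and colorDistance); treat it as a sketch of the idea rather than the exact code referenced above.

// Inside the lab's Picture class; java.awt.Color is imported.
public void edgeDetection(int edgeDist)
{
    Pixel[][] pixels = this.getPixels2D();
    for (int row = 0; row < pixels.length; row++)
    {
        // Stop one column early so col + 1 stays in bounds.
        for (int col = 0; col < pixels[0].length - 1; col++)
        {
            Pixel currPixel = pixels[row][col];
            Pixel rightPixel = pixels[row][col + 1];
            // Black if the color change to the right is large enough, white otherwise.
            if (currPixel.colorDistance(rightPixel.getColor()) > edgeDist)
                currPixel.setColor(Color.BLACK);
            else
                currPixel.setColor(Color.WHITE);
        }
    }
}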
1. Notice that the current edge detection method works best when there are large color changes from left to right, but not when the changes run from top to bottom. Add another nested loop that compares the current pixel with the pixel below it and also sets the current pixel color to black when the color distance is greater than the specified edge distance.
Picture Lab A9: Improve the edgeDetection method by adding another nested loop that compares the current pixel with the pixel below it and also sets the current pixel color to black when the color distance is greater than the specified edge distance.
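One possible shape for the improved method is sketched below. So that the second loop compares original colors rather than the black and white values written by the first loop, this sketch reads from an unmodified copy of the picture; it assumes the lab's Picture(Picture) copy constructor and the same Pixel helpers as above, and it is only one of several acceptable solutions.

// Inside the lab's Picture class; java.awt.Color is imported.
public void edgeDetection2(int edgeDist)
{
    Pixel[][] pixels = this.getPixels2D();
    // Read colors from an unmodified copy so the vertical pass still sees
    // the original picture after the horizontal pass has painted black/white.
    Pixel[][] original = new Picture(this).getPixels2D();

    // Horizontal pass: compare each pixel with the pixel to its right.
    for (int row = 0; row < pixels.length; row++)
    {
        for (int col = 0; col < pixels[0].length - 1; col++)
        {
            Color rightColor = original[row][col + 1].getColor();
            if (original[row][col].colorDistance(rightColor) > edgeDist)
                pixels[row][col].setColor(Color.BLACK);
            else
                pixels[row][col].setColor(Color.WHITE);
        }
    }

    // Vertical pass: compare each pixel with the pixel below it.
    // Stop one row early so row + 1 stays in bounds, and only set black
    // so edges already found by the horizontal pass are not erased.
    for (int row = 0; row < pixels.length - 1; row++)
    {
        for (int col = 0; col < pixels[0].length; col++)
        {
            Color belowColor = original[row + 1][col].getColor();
            if (original[row][col].colorDistance(belowColor) > edgeDist)
                pixels[row][col].setColor(Color.BLACK);
        }
    }
}

Note that, as in the original method, the last column (and here the last row) is not compared against a neighbor; how to handle those border pixels is a design choice the activity leaves open.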