
Blob analysis and edge detection in the real world

Matrox VITE : 16 December, 2006  (Company News)
The purpose of analysis is to determine whether the results obtained from an operation are accurate, logical, and true. Engineers use analysis tools to monitor a given process. In machine vision, this monitoring is performed using image analysis tools. Thanks to faster CPUs, these tools have become more robust and more powerful, allowing machine vision to tackle more complex applications than ever before.
Image processing software comprises complex algorithms that have pixel values as inputs. Today's image analysis software packages include both old and new technologies. Most significant is the relationship between the old blob analysis method and the new edge-detection technique.

Blob Meets World
For image processing, a blob is defined as a region of connected pixels. Blob analysis is the identification and study of these regions in an image. The algorithms discern pixels by their value and place them in one of two categories: the foreground (typically pixels with a non-zero value) or the background (pixels with a zero value).

In typical applications that use blob analysis, the blob features usually calculated are area and perimeter, Feret diameter, blob shape, and location. The versatility of blob analysis tools makes them suitable for a wide variety of applications such as pick-and-place, pharmaceutical, or inspection of food for foreign matter.
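As a rough illustration of these features, the sketch below computes area, horizontal and vertical Feret diameters (bounding-box extents), and location (centroid) for a single blob given its foreground pixel coordinates. This is a minimal pure-Python example, not a vendor implementation; the function and field names are illustrative.

```python
# Illustrative sketch: basic blob features from a set of foreground
# pixel coordinates. Area is the pixel count; the horizontal/vertical
# Feret diameters are the bounding-box extents along each axis.

def blob_features(pixels):
    """pixels: iterable of (row, col) coordinates belonging to one blob."""
    pts = list(pixels)
    rows = [r for r, _ in pts]
    cols = [c for _, c in pts]
    area = len(pts)
    feret_x = max(cols) - min(cols) + 1   # horizontal extent
    feret_y = max(rows) - min(rows) + 1   # vertical extent
    # The centroid gives the blob's location in the image
    centroid = (sum(rows) / area, sum(cols) / area)
    return {"area": area, "feret_x": feret_x, "feret_y": feret_y,
            "centroid": centroid}

# A 2x3 rectangular blob
blob = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
print(blob_features(blob))
```

Real packages compute many more descriptors (perimeter, compactness, moments), but they all start from the same idea: aggregate statistics over a labeled region of pixels.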

Since a blob is a region of touching pixels, analysis tools typically consider touching foreground pixels to be part of the same blob. Consequently, what is easily identifiable by the human eye as several distinct but touching blobs may be interpreted by software as a single blob. Furthermore, any part of a blob that is in the background pixel state because of lighting or reflection is considered as background during analysis.

A reliable software package will tell you how touching blobs are defined. For example, you can define touching pixels as only those adjacent along the vertical or horizontal axis, or also include diagonally adjacent pixels.

Setting the rules for touching pixels is important because it can affect the outcome of the application. In Figure 2, for example, the group of pixels would be considered one blob if the specified lattice includes the diagonals but separate blobs if it does not.
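The effect of the lattice choice can be sketched with a small connected-component labeler. The code below is an illustrative stdlib-only flood fill, not any vendor's algorithm; on two pixels that touch only at a corner, it finds one blob or two depending on whether diagonals are included.

```python
# Sketch of connected-component labeling with a configurable lattice,
# showing how the connectivity rule changes the blob count.

def label_blobs(image, diagonals=False):
    """Count the blobs in a binary image (list of lists of 0/1)."""
    h, w = len(image), len(image[0])
    neighbors = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if diagonals:
        neighbors += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    seen = set()
    count = 0
    for r in range(h):
        for c in range(w):
            if image[r][c] and (r, c) not in seen:
                count += 1
                stack = [(r, c)]          # flood-fill one blob
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy, dx in neighbors:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] and (ny, nx) not in seen):
                            stack.append((ny, nx))
    return count

# Two foreground pixels touching only at a corner
img = [[1, 0],
       [0, 1]]
print(label_blobs(img, diagonals=False))  # 2 separate blobs
print(label_blobs(img, diagonals=True))   # 1 blob
```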

The performance of a blob analysis operation depends on a successful segmentation of the image, that is, separating the good blobs from the background and each other as well as eliminating everything else in the image that is not of interest. Sounds easy, but without considering variables such as lighting conditions and noise in the image, you could include blobs in your results that you don't want.

Segmentation usually involves a binarization operation. If simple segmentation is not possible due to poor lighting or blobs with the same gray level as parts of the background, you must develop a segmentation algorithm appropriate to your particular image.
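In its simplest form, binarization maps every pixel at or above a threshold to foreground and everything else to background. The sketch below shows this under an assumed, hand-picked threshold; in practice the threshold must be chosen for the actual lighting conditions (or computed automatically, e.g. from the image histogram).

```python
# Minimal binarization sketch: pixels at or above the threshold become
# foreground (1), the rest background (0). The threshold value here is
# for illustration only; real applications tune it to the lighting.

def binarize(image, threshold):
    return [[1 if p >= threshold else 0 for p in row] for row in image]

gray = [[ 10,  20, 200],
        [ 15, 210, 220],
        [  5,  12,  18]]
print(binarize(gray, threshold=128))
# [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```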

The acquired image may contain spurious blobs or holes caused by noise or lighting. Such extraneous blobs can interfere with blob analysis results. If the image contains several of them, you probably should preprocess the image before using it. Preprocessing refers to any steps taken to clean up the image before analysis and can include thresholding or filtering.

Opening operations remove extraneous non-zero-valued blobs, while closing operations fill zero-valued holes; both remove most noise without significantly altering real features. Preprocessing can be avoided by acquiring images under the best possible circumstances. This means ensuring that blobs do not overlap and, if possible, do not touch.
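A morphological opening is an erosion followed by a dilation. The stdlib-only sketch below uses a 3x3 cross-shaped structuring element (an assumption for illustration): single-pixel noise blobs are erased by the erosion, while larger features survive, though their corners may be slightly rounded.

```python
# Hedged sketch of a morphological opening (erosion then dilation)
# with a 3x3 cross-shaped structuring element, on a binary image
# given as a list of lists of 0/1. Not a vendor implementation.

CROSS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def _transform(image, keep_if):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # Gather the structuring-element values that fall inside
            # the image (border pixels see a clipped neighborhood)
            vals = [image[r + dy][c + dx]
                    for dy, dx in CROSS
                    if 0 <= r + dy < h and 0 <= c + dx < w]
            out[r][c] = keep_if(vals)
    return out

def erode(image):
    return _transform(image, lambda v: 1 if all(v) else 0)

def dilate(image):
    return _transform(image, lambda v: 1 if any(v) else 0)

def opening(image):
    return dilate(erode(image))

img = [[0, 0, 0, 0, 0, 1],   # isolated noise pixel, top right
       [0, 1, 1, 1, 0, 0],
       [0, 1, 1, 1, 0, 0],   # 3x3 feature block
       [0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0]]
print(opening(img))  # the lone pixel is gone; the block's core remains
```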

It also means ensuring the best possible lighting and using a background whose gray level is very distinct from that of the blobs. These steps add to the overall time required for analysis, which, depending on the application, can be on the order of milliseconds.

For example, in a blueberry inspection system, the berries fall off a conveyor belt and are analyzed according to average hue, brightness, size, and roundness. The system offers only a 20-ms window for the analysis before the rejection stage: ice chunks and twigs, identified by size, roundness, and color, are blown off by a bank of air jets programmed with their positions.

New Technology on the Edge
The introduction of edge-based processing into commercial off-the-shelf development packages brought a series of high-performance, highly robust tools. While blob analysis concerns itself with regions of connected pixels grouped by their pixel value, edge detection is based on intensity transitions in an image.

Edge-detection tools are useful in applications that perform complex measurements, locate defects, or recognize and analyze shapes. At Matrox Imaging, we define edges as curves that delineate a boundary. Edges are determined from differential analysis and extracted by analyzing intensity transitions in images.

Strong transitions typically are caused by the presence of an object contour in the image, although they also can come from physical phenomena such as shadows and illumination variations. The main intensity transition is observed perpendicular to the object border.
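The idea of extracting edges by differential analysis can be sketched in one dimension: a central-difference derivative along an image row (here assumed to cross the object border perpendicularly) peaks exactly where the intensity transition occurs. This is a toy illustration, not Matrox's algorithm; real tools use 2D gradient operators and sub-pixel interpolation.

```python
# Illustrative 1D edge-detection sketch: differentiate an intensity
# profile taken perpendicular to an object border. The derivative is
# near zero in flat regions and peaks at the transition.

def horizontal_gradient(row):
    """Central-difference derivative of one image row (interior pixels)."""
    return [(row[i + 1] - row[i - 1]) / 2 for i in range(1, len(row) - 1)]

# A dark-to-bright step edge between columns 3 and 4
row = [10, 10, 10, 10, 200, 200, 200]
print(horizontal_gradient(row))
# [0.0, 0.0, 95.0, 95.0, 0.0]  -- the peak marks the edge location
```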