Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for industrial applications such as quality inspection, process control, and robot guidance. The scope of MV is broad. MV is related to, though distinct from, computer vision.
Definition
Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of basic computer science. Machine vision attempts to integrate existing technologies in new ways and apply them to solve real world problems.
The components of a machine vision system include lighting, a camera, a processor, software, and output devices.
Applications
The primary uses for machine vision are automatic inspection and industrial robot guidance. Initial applications were in two dimensions (for example, objects moving on a conveyor belt), but as of 2016 systems capable of working in three dimensions in industrial settings were coming online.
Methods
Machine vision methods are defined as both the process of defining and creating an MV solution, and as the technical process that occurs during the operation of the solution. Here the latter is addressed. As of 2006, there was little standardization in the interfacing and configurations used in MV. This includes user interfaces, interfaces for the integration of multi-component systems and automated data interchange. Nonetheless, the first step in the MV sequence of operation is acquisition of an image, typically using cameras, lenses, and lighting that has been designed to provide the differentiation required by subsequent processing. MV software packages then employ various digital image processing techniques to extract the required information, and often make decisions (such as pass/fail) based on the extracted information.
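The acquire-process-decide sequence described above can be sketched in a few lines. This is a toy illustration, not the method of any particular system: the synthetic frame, the gray-level threshold of 128, and the 5% dark-pixel limit are all made-up assumptions.

```python
import numpy as np

def inspect(image: np.ndarray, threshold: int = 128,
            max_dark_fraction: float = 0.05) -> bool:
    """Toy MV decision step: threshold the image, measure the dark-pixel
    fraction, and return a pass/fail decision based on that measurement."""
    dark = image < threshold              # segment dark (potentially defective) pixels
    fraction = dark.mean()                # extracted information
    return bool(fraction <= max_dark_fraction)  # pass/fail decision

# A synthetic 8-bit "acquired" frame: bright background with a small dark spot.
frame = np.full((100, 100), 200, dtype=np.uint8)
frame[10:13, 10:13] = 30                  # 9 dark pixels out of 10,000
print(inspect(frame))                     # small defect is within tolerance
```

In a real system the frame would come from the camera interface rather than being synthesized, and the processing step would typically involve far more than a single threshold.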
Imaging
While conventional (2D visible light) imaging is most commonly used in MV, alternatives include imaging various infrared bands, line scan imaging, 3D imaging of surfaces, and X-ray imaging. Key divisions within 2D visible-light MV imaging are monochromatic vs. color imaging, resolution, and whether or not the imaging process is simultaneous over the entire image, making it suitable for moving processes. The most commonly used method for 3D imaging is scanning-based triangulation, which utilizes motion of the product or imager during the imaging process. Other 3D methods used for machine vision are time of flight, grid based, and stereoscopic.
The imaging device (e.g. camera) can either be separate from the main image processing unit or combined with it in which case the combination is generally called a smart camera or smart sensor. When separated, the connection may be made to specialized intermediate hardware, a frame grabber using either a standardized (Camera Link, CoaXPress) or custom interface. MV implementations also have used digital cameras capable of direct connections (without a framegrabber) to a computer via FireWire, USB or Gigabit Ethernet interfaces.
Though the vast majority of machine vision applications are solved using two-dimensional imaging, machine vision applications utilizing 3D imaging are a growing niche within the industry. One method is grid array based systems using pseudorandom structured light system as employed by the Microsoft Kinect system circa 2012. Another method of generating a 3D image is to use laser triangulation, where a laser is projected onto the surfaces of an object and the deviation of the line is used to calculate the shape. In machine vision this is accomplished with a scanning motion, either by moving the workpiece, or by moving the camera & laser imaging system. Stereoscopic vision is used in special cases involving unique features present in both views of a pair of cameras.
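The laser-triangulation principle above can be reduced to a similar-triangles calculation. The following is a simplified sketch, assuming the camera views the surface from directly above and the laser sheet is inclined at a known angle from vertical, so that a bump of height h displaces the projected line by h x tan(angle) in the scene; the pixel scale and angle values are illustrative assumptions.

```python
import math

def height_from_shift(shift_px: float, mm_per_px: float,
                      laser_angle_deg: float) -> float:
    """Recover surface height from the lateral shift of a laser line.

    Simplified geometry: camera looks straight down, laser sheet is
    inclined at laser_angle_deg from vertical, so a feature of height h
    shifts the line by h * tan(angle) across the surface.
    """
    shift_mm = shift_px * mm_per_px       # convert observed shift to scene units
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# A 45-degree laser at 0.1 mm/pixel: a 12-pixel shift implies a 1.2 mm bump.
print(height_from_shift(12, 0.1, 45.0))
```

Repeating this calculation for every column of the imaged laser line, while the part moves under the sensor, builds up the height profile of the whole surface.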
Image processing
After an image is acquired, it is processed. Machine vision image processing methods include
- Stitching/Registration: Combining adjacent 2D or 3D images.
- Filtering (e.g. morphological filtering)
- Thresholding: Setting or determining a gray value that will be useful for the following steps. The value is then used to separate portions of the image, and sometimes to transform each portion of the image to simple black and white based on whether it is below or above that grayscale value.
- Pixel counting: counts the number of light or dark pixels
- Segmentation: Partitioning a digital image into multiple segments to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.
- Inpainting
- Edge detection: finding object edges
- Color Analysis: Identify parts, products and items using color, assess quality from color, and isolate features using color.
- Blob discovery & manipulation: inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks. These blobs frequently represent optical targets for machining, robotic capture, or manufacturing failure.
- Neural net processing: weighted and self-training multi-variable decision making
- Pattern recognition including template matching. Finding, matching, and/or counting specific patterns. This may include location of an object that may be rotated, partially hidden by another object, or varying in size.
- Barcode, Data Matrix and "2D barcode" reading
- Optical character recognition: automated reading of text such as serial numbers
- Gauging/Metrology: measurement of object dimensions (e.g. in pixels, inches or millimeters)
- Comparison against target values to determine a "pass or fail" or "go/no go" result. For example, with code or bar code verification, the read value is compared to the stored target value. For gauging, a measurement is compared against the proper value and tolerances. For verification of alphanumeric codes, the OCR'd value is compared to the proper or target value. For inspection for blemishes, the measured size of the blemishes may be compared to the maximums allowed by quality standards.
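Several of the steps above (thresholding, pixel counting, blob discovery) compose naturally. The following pure-Python sketch labels 4-connected blobs in an already-thresholded binary image and reports their pixel counts; it is a minimal illustration of connected-component analysis, not an implementation from any MV package.

```python
from collections import deque

def find_blobs(grid):
    """Find 4-connected blobs of 1s in a binary grid; return pixel counts per blob."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                # Flood fill (BFS) to collect one connected blob.
                queue, size = deque([(r, c)]), 0
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sizes

# Two separate blobs: one of 3 pixels, one of 2.
image = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
print(sorted(find_blobs(image)))  # [2, 3]
```

In practice the blob sizes and centroids would then feed a downstream step such as robot guidance (optical targets) or a pass/fail comparison (blemish size vs. quality limits).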
Outputs
A common output from machine vision systems is pass/fail decisions. These decisions may in turn trigger mechanisms that reject failed items or sound an alarm. Other common outputs include object position and orientation information from robot guidance systems. Additionally, output types include numerical measurement data, data read from codes and characters, displays of the process or results, stored images, alarms from automated space monitoring MV systems, and process control signals.
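A gauging result turning into a go/no-go signal, as described above, amounts to a tolerance comparison. This sketch uses a hypothetical spec (a 25.0 mm nominal diameter with a +/-0.05 mm tolerance); the numbers are illustrative, not from any standard.

```python
def go_no_go(measured_mm: float, nominal_mm: float, tol_mm: float) -> str:
    """Map a gauged dimension to the go/no-go signal a reject mechanism consumes."""
    return "go" if abs(measured_mm - nominal_mm) <= tol_mm else "no-go"

# Hypothetical spec: 25.0 mm shaft diameter, +/-0.05 mm tolerance.
print(go_no_go(25.03, 25.0, 0.05))  # "go"    -> part continues down the line
print(go_no_go(25.20, 25.0, 0.05))  # "no-go" -> triggers the reject mechanism
```

In a deployed system this signal would typically be emitted on a digital I/O line or fieldbus to the rejection actuator or PLC rather than printed.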
Market
As recently as 2006, one industry consultant reported that MV represented a $1.5 billion market in North America. However, the editor-in-chief of an MV trade magazine asserted that "machine vision is not an industry per se" but rather "the integration of technologies and products that provide services or applications that benefit true industries such as automotive or consumer goods manufacturing, agriculture, and defense."
As of 2006, experts estimated that MV had been employed in less than 20% of the applications for which it is potentially useful.
Source of the article: Wikipedia