At first there was an eye
Anybody who says that eyesight is the most important of all the senses is probably not mistaken. The loss of eyesight is a severe impairment. Since the dawn of history, this sense has allowed humans to identify objects and living beings, orient themselves in space, move around, avoid threats and communicate.
We assess our world primarily with the help of sight. Not everybody realizes that seeing is the result of a combination of various factors and is limited by the natural capabilities of the human body. We cannot really say what the world actually looks like – what we perceive is only an interpretation prepared by the eyes, the brain and other organs of the human body.
Human eyesight is therefore not a perfect sense. Without looking far, there are hundreds, if not thousands, of animal species with better sight than humans. Some see better at long distances, others at close quarters, still others at night and under difficult lighting conditions – and all of them have developed mechanisms for processing light rays that differ significantly from ours.
For example, birds can see a spectrum of colours spanning red, green, blue and ultraviolet (UV). Some crustaceans (such as mantis shrimps, which have as many as 16 types of photoreceptors, while humans have only three) use a set of filters to separate ultraviolet light into more subtle colours. Snakes see infrared through sensors placed in their heads; rattlesnake sensors are at least 10 times more sensitive than the best artificial infrared detectors. Bees identify ultraviolet patterns in the centre of a flower: the colours of the petals and light signals in the form of UV patterns inform bees about the amount of nectar and pollen in the plant. Scarab beetles and bats, in turn, make use of polarized light.
It is not surprising that humans have always looked for ways to artificially improve their vision – from primitive magnifying glasses, binoculars, telescopes and microscopes up to modern, electronically supported optical systems. It is this last group that we focus on in this article.
Several decades ago, the first partially automatic vision systems (also known as machine vision) were created. These systems are composed of cooperating electronic devices whose function is the automatic visual analysis of the surroundings – much as a human does.
Machine vision – in the shortest terms – is a computer's ability to see. A camera (or video camera) and a system that converts the analogue signal into a digital one replace the human eye here, while the digital data processing system serves as the nervous system.
Two important parameters of every video system are sensitivity and resolution. Sensitivity means the ability to see under different lighting conditions and/or the ability to detect weak signals in invisible ranges of the electromagnetic spectrum.
Resolution reflects how precisely an object can be recognized. Sensitivity and resolution depend on each other – as a simplification, it can be assumed that increasing the sensitivity decreases the resolution and vice versa.
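One concrete way this trade-off shows up in practice is pixel binning, in which neighbouring sensor pixels are summed: the combined pixel gathers more light (higher sensitivity) while the image keeps fewer pixels (lower resolution). A minimal sketch in NumPy – the array values simulate an arbitrary sensor readout, not any real camera:

```python
import numpy as np

def bin_pixels(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Sum each `factor` x `factor` block of pixels into one.

    Each binned pixel collects `factor**2` times more light
    (higher sensitivity), but the image has `factor` times
    fewer pixels per axis (lower resolution).
    """
    h, w = frame.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))

# A simulated 4x4 sensor readout (arbitrary intensity values).
frame = np.arange(16, dtype=np.int64).reshape(4, 4)
binned = bin_pixels(frame, factor=2)
print(binned.shape)  # (2, 2): a quarter of the original pixels
print(binned)
```

With `factor=2`, four photosites contribute to every output value, which is why binned modes on real cameras trade resolution for low-light performance.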
The human eye is sensitive to wavelengths from 390 up to 770 nanometres (nm), while cameras can have an incomparably higher sensitivity.
Man has created devices with capabilities better than his eye. This opened up new possibilities for observing and analysing objects and phenomena that are completely invisible to the naked human eye.
What do we use vision systems for?
Vision systems consist of information acquisition devices (a single camera/sensor or a camera system), a device for collecting and processing data, and a data analysis device (a CPU or a whole computer). Industrial vision systems are most often used to check the physical features of objects, such as dimensions, shape, colour or surface texture. The data obtained always support the decision-making process (e.g. about the next stage of the manufacturing process).
Although current industrial vision systems are most often used to control the quality of products, they are more and more often implemented at the manufacturing stage itself – to supervise it by assessing and reporting whether production parameters are approaching their limit values. This allows manufacturers to prevent defective products from being made at all.
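At its core, this kind of supervision compares a measured parameter against warning and rejection thresholds. A hypothetical sketch – the parameter, tolerances and values below are invented for illustration, not taken from any real production line:

```python
def assess_parameter(value: float, nominal: float,
                     warn_tol: float, limit_tol: float) -> str:
    """Classify a measured production parameter.

    Returns 'ok' when the value is close to nominal,
    'warning' when it approaches the limit value, and
    'defect' when the limit is exceeded.
    """
    deviation = abs(value - nominal)
    if deviation > limit_tol:
        return "defect"
    if deviation > warn_tol:
        return "warning"
    return "ok"

# Hypothetical example: a measured diameter of 10.4 mm against a
# nominal 10.0 mm, with a warning band of +/-0.3 mm and a
# rejection limit of +/-0.5 mm.
print(assess_parameter(10.4, nominal=10.0, warn_tol=0.3, limit_tol=0.5))
# -> "warning": the line can be adjusted before defects appear
```

The value of the warning tier is exactly what the paragraph describes: the process can be corrected while parts are still within specification.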
Computer vision > machine vision
Machine vision is a technology strongly present in industry – it has been used around the globe for about 30 years. Computerization and the development of digital technologies over that time have allowed vision systems to evolve towards a wider and more advanced category – now called image recognition or computer vision. Basically, it is the processing of an image captured by external sensors (e.g. a camera or scanner) into a digital description of that image for further processing.
It can be said that computer vision in a sense contains machine vision, but at the same time it goes much further.
While the result of machine vision is usually a simple piece of information (such as "there are five apples in this picture" or "the cap in this image has a defect" – with some simplification, of course), computer vision provides not only the image but also its digital interpretation. Computer vision can process various parameters of two- and three-dimensional images, often non-standardized ones, in which objects and their actions can be unpredictable. This gives incomparably greater opportunities.
From monochromatic cameras to hyperspectral sensors
The heart of a vision system is the camera – a sensor that is still evolving. We briefly present the main camera types below.
The monochromatic camera, still common in industrial applications, is the most basic solution in vision systems.
In monochromatic cameras, a single pixel provides information only about the intensity of the light. For many applications in industrial vision systems this is sufficient. The usually higher resolution of such systems (achieved thanks to the lack of colour filters), together with their sensitivity, contrast, faster processing of monochromatic images and lower price, often makes them the optimal choice.
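Because each monochrome pixel is just one intensity number, many inspection tasks reduce to simple thresholding. A toy sketch – the image patch and threshold are invented for illustration – that flags unusually dark pixels, e.g. potential surface defects on a bright part:

```python
import numpy as np

def dark_spot_count(gray: np.ndarray, threshold: int = 50) -> int:
    """Count pixels darker than `threshold` in an 8-bit grayscale image."""
    return int(np.count_nonzero(gray < threshold))

# A simulated 3x3 grayscale patch: mostly bright material (200),
# with two dark pixels that could indicate a scratch.
patch = np.array([[200, 200, 200],
                  [200,  30, 200],
                  [200,  10, 200]], dtype=np.uint8)
print(dark_spot_count(patch))  # 2 suspicious pixels
```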
The next level is the colour camera, which captures three spectral data points in the RGB model – named after the first letters of the colour names: R – red, G – green, B – blue.
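The link between a colour pixel and a monochrome one can be shown with the standard luma conversion, which collapses the three RGB samples into a single intensity. The weights below are the well-known ITU-R BT.601 coefficients:

```python
def rgb_to_gray(r: int, g: int, b: int) -> float:
    """Collapse an RGB pixel into one intensity value.

    Weights follow ITU-R BT.601; green dominates because the
    human eye is most sensitive to it.
    """
    return 0.299 * r + 0.587 * g + 0.114 * b

print(rgb_to_gray(255, 255, 255))  # white -> 255.0
print(rgb_to_gray(255, 0, 0))      # pure red -> 76.245
```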
A higher place in the hierarchy of vision systems is occupied by multispectral imaging. It creates images in which each pixel contains more than three spectral points, usually from 4 up to 20 (where – unlike in the hyperspectral images described below – they do not have to be adjacent bands). Multispectral images are generated by sensors that measure the energy reflected within several specific sections (also called bands) of the electromagnetic spectrum.
The original multispectral cameras captured four ranges of data: the RGB and NIR (near-infrared) bands. Today this covers the full colour space in the visible range, as well as microwaves, far and near infrared and ultraviolet. The domain of hyperspectral sensors is to measure energy simultaneously in much narrower bands. Hyperspectral images can contain from several dozen up to several hundred neighbouring spectral bands. The numerous narrow ranges of hyperspectral sensors ensure continuous measurement within the assumed range of the electromagnetic spectrum and are therefore more sensitive to subtle changes in reflected energy. Images produced from hyperspectral sensor readings contain much more data than images from multispectral sensors and have a greater potential to reveal specific differences and details. For example, multispectral images can be used for imaging forest areas; hyperspectral imaging will provide another level of detail, such as mapping the tree species in that forest.
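Multi- and hyperspectral data are naturally represented as a cube: two spatial axes plus one spectral axis, with one reflectance value per pixel per band. A classic use of the red and NIR bands in vegetation imaging is the NDVI index. The sketch below is illustrative – the band ordering and reflectance values are assumptions, as real sensors document their own band layout:

```python
import numpy as np

# Hypothetical 2x2-pixel cube with 4 bands (assumed order: B, G, R, NIR),
# holding reflectance values in [0, 1].
cube = np.zeros((2, 2, 4))
cube[..., 2] = 0.1   # red band: healthy vegetation absorbs red light
cube[..., 3] = 0.5   # NIR band: healthy vegetation reflects NIR strongly

def ndvi(cube: np.ndarray, red_band: int, nir_band: int) -> np.ndarray:
    """Normalized Difference Vegetation Index, computed per pixel."""
    red = cube[..., red_band]
    nir = cube[..., nir_band]
    return (nir - red) / (nir + red)

print(ndvi(cube, red_band=2, nir_band=3))
# every pixel: (0.5 - 0.1) / (0.5 + 0.1) = 0.666...
```

A hyperspectral cube works the same way, only with dozens or hundreds of narrow bands along the third axis instead of four.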
Thanks to their rich "information density", hyperspectral images are the best source material for computer vision systems – whether in industry or in remote sensing. This amount of data also generates problems – a wrong choice of channels may result in too much data, and many bands might simply be unnecessary or even difficult to interpret. Another downside is the significantly higher cost of such solutions and their high degree of technical complexity. This shows how important the skilful selection of tools for a specific task is.
How does it work?