DRDO Image Fusion Techniques for Electro-optical Systems
Image fusion is a subset of the more general field of data fusion. It combines multiple images of the same scene, containing complementary or redundant information, into a single composite image with better quality and richer content. The fused image can provide a better interpretation of the scene than any of the single-sensor images alone. The objective is to reduce uncertainty and minimize redundancy in the multi-sensor output while maximizing the information present in the scene. Image fusion has received significant attention in defence systems, medical imaging, robotics, and industry.
An image fusion system requires a number of processing steps to convert two or more input images into a fused image. Images from the individual sensors are pre-processed (image enhancement) and aligned, with one of them serving as the reference frame. FOV compensation and spatial alignment are performed through an image registration process.
Finally, these images are fused according to the chosen fusion method and fusion rules to generate the fused imagery. The figure shows the generic processing chain for generating a single output image from the image frames of the individual sensors.
Image fusion involves the following steps:
Mechanical alignment of imaging sensors: Sensors installed on a platform, or as part of a surveillance suite, are aligned so that their optical axes are parallel to each other.
Image registration: This step compensates for differences in resolution and FOV so that corresponding pixels from two or more sensors overlap at the same position. It is achieved by electronic processing.
Image fusion: Once registration is completed successfully, pixel-level fusion algorithms combine the multiple imaging sources into a composite image with enhanced information.
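The registration step above can be sketched in code. The following is a minimal, illustrative NumPy implementation (not the laboratory's actual algorithm): each output pixel is mapped back into the source image through an assumed 2x2 affine matrix `A` and translation `t`, and the source is sampled with bilinear interpolation. All function names here are hypothetical.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a single-channel image at fractional coordinates (x, y)
    using bilinear interpolation of the four surrounding pixels."""
    h, w = img.shape
    x0, y0 = max(int(np.floor(x)), 0), max(int(np.floor(y)), 0)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def affine_register(img, A, t, out_shape):
    """Warp img so it aligns with the reference frame: for each output
    pixel q, look up the source location p = A @ q + t and sample it.
    Pixels that map outside the source stay zero."""
    out = np.zeros(out_shape)
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            sx, sy = A @ np.array([x, y]) + t
            if 0 <= sx < img.shape[1] and 0 <= sy < img.shape[0]:
                out[y, x] = bilinear_sample(img, sx, sy)
    return out
```

In practice the affine parameters come from calibration or feature matching between the sensors, and production systems would use an optimized warp (e.g. on FPGA/DSP, as the article describes) rather than a per-pixel Python loop.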
Image Fusion
Various image fusion algorithms, i.e., weighted average, high-pass filter (HPF), wavelet-based, and Laplacian pyramid-based image fusion, have been designed and simulated, and their performance and computational requirements analyzed. FPGA- and DSP-based image fusion hardware has been developed. Image synchronization, an affine transform and bilinear interpolation-based image registration algorithm, and weighted average, HPF, and Laplacian pyramid (LAP) based image fusion algorithms have been implemented in hardware. Image registration and fusion of video from EO sensors has been implemented in real time.
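To illustrate the Laplacian pyramid fusion mentioned above, here is a simplified NumPy sketch (assumptions: grayscale inputs already registered, a 2x2 block mean in place of a proper Gaussian filter, and a max-absolute-value selection rule for the detail levels; the laboratory's actual implementation may differ):

```python
import numpy as np

def downsample(img):
    """2x2 block mean: a crude stand-in for Gaussian blur + decimation."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour expansion back to the given shape."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Each level stores the detail lost by downsampling;
    the last level keeps the low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        nxt = downsample(cur)
        pyr.append(cur - upsample(nxt, cur.shape))
        cur = nxt
    pyr.append(cur)
    return pyr

def fuse_laplacian(img_a, img_b, levels=3):
    """Fuse two registered images: per pixel, keep the detail coefficient
    with larger magnitude; average the coarsest level; then reconstruct."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))  # coarsest level: weighted average
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = upsample(out, lap.shape) + lap
    return out
```

The simpler weighted-average method in the same list reduces to `alpha * img_a + (1 - alpha) * img_b`; the pyramid approach keeps strong edges from either sensor instead of blurring them together.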
Figure 2 shows the image fusion-based surveillance system developed at the laboratory.
Image Results
Figure 1 shows the images acquired from the CCD camera and the thermal imager, along with the fused image. The first image is a colour CCD image showing smoke due to a fire; the person walking behind the smoke is captured by the LWIR camera. The fused image combines information from both sensors, clearly conveying that there is both a fire and a person in the scene. Multi-sensor image fusion has received significant attention in defence systems, geosciences, medical imaging, remote sensing, robotics, and industrial engineering. In defence, applications such as target detection, tracking and classification, system identification, concealed weapon detection, battlefield monitoring, and night pilot guidance can use fused imagery for improved situational assessment, awareness, and robustness.
Recently, Indian Railways has shown interest in installing this system at the front of the train engine so that traffic signals and any animal, person, vehicle, or object on the track can be seen in a single composite image. This would reduce accidents caused by poor visibility and night conditions. The developed techniques are also being used in systems such as the Driver's Night Sight (DNS) for AFVs being developed at the laboratory.
The following applications are possible in the near future:
Surveillance suites in which multiple monitors cannot be accommodated.
Transmission of multi-sensor suite information to command headquarters, combined into a single image/video stream.
Archiving a single fused video instead of multiple videos, saving storage.
Reducing operator fatigue from monitoring multiple displays.