Description
Deep neural networks (DNNs) have achieved unprecedented success in a variety of pattern recognition tasks, including medical imaging, speech recognition, image colorization, and satellite imaging. The extremely rapid development of remote sensors has made the acquisition of hyperspectral image (HSI) data, with up to hundreds of spectral bands over a given spatial area, much more affordable. However, the efficient analysis, segmentation, transfer, and storage of such imagery has become a major challenge in practical applications, and it is currently being tackled by the machine-learning and image-processing communities worldwide. This is especially important in hardware-constrained environments (e.g., on board a satellite), where the resource frugality of classification engines, as well as memory and transfer efficiency, are pressing real-life concerns. In this talk, we will focus on techniques aimed at reducing the size of HSI data, and verify how they affect HSI segmentation accuracy using both conventional machine-learning approaches (including support vector machines and decision trees) and DNNs. The latter will include spectral and spectral-spatial DNNs, which exploit either only the spectral information of the pixel being classified, or both its spectral and spatial characteristics. Finally, we will verify whether multispectral imagery (simulated using hyperspectral data) is sufficient to accurately segment regions of interest. Our rigorous experiments were performed over benchmark datasets (Pavia University and Salinas Valley) acquired by the ROSIS and AVIRIS sensors, respectively. The results obtained using our methods were compared with the current state of the art and underwent thorough statistical testing.
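To make the idea of reducing HSI data size concrete, the sketch below shows one common spectral dimensionality-reduction approach, PCA over per-pixel spectra. This is a hypothetical illustration under stated assumptions (a synthetic cube with a Pavia-like band count), not the speakers' actual pipeline; the reduced pixel vectors would then be fed to a classifier such as an SVM or a spectral DNN.

```python
import numpy as np

# Hypothetical illustration (not the talk's actual method): shrink the
# spectral dimension of a hyperspectral cube with PCA before classification.
rng = np.random.default_rng(0)

H, W, B = 32, 32, 103            # spatial size and band count (Pavia-like B = 103)
cube = rng.normal(size=(H, W, B))  # synthetic stand-in for real HSI data

# Flatten to a (pixels x bands) matrix, the view a spectral classifier sees.
X = cube.reshape(-1, B)

# PCA via SVD: centre the spectra, then keep the top-k principal components.
k = 10
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:k].T        # (pixels x k) compressed representation

print(X_reduced.shape)           # -> (1024, 10)
```

Each pixel is now described by 10 values instead of 103, which directly reduces storage and transfer cost while retaining the directions of highest spectral variance.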