Mapping Dutch agriculture using multi-source images and deep learning

GIMA
M-GEO
M-SE
STAMP
Topic description

Crop mapping plays an important role in agronomic planning and management for both farmers and policymakers. Satellite remote sensing has become an efficient tool for mapping croplands. Optical data in particular have been used extensively for crop mapping and for gathering information that supports agriculture. However, atmospheric conditions and cloud cover greatly affect the quality of these data.
New-generation Earth observation sensors provide different types of data. This opens the possibility of data fusion, i.e., using multi-sensor (multimodal) data sets in a joint manner [1]. For example, the European Space Agency's Copernicus Programme includes two Earth observation satellite constellations: "Sentinel-1", equipped with synthetic aperture radar (SAR) sensors, and "Sentinel-2", carrying optical multispectral sensors. Multispectral data from Sentinel-2 can deliver valuable information on the composition of surface materials, but suffer from the presence of clouds and cloud shadows. Sentinel-1, on the other hand, delivers day-and-night, all-weather SAR data that provide information on the textural and physical properties of the surface. However, SAR images are grayscale and often contaminated with substantial speckle noise, which makes them difficult to interpret. It therefore makes sense to fuse the data coming from these two sensors.
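The fusion idea above can be illustrated with a minimal early-fusion sketch: converting SAR backscatter to decibels and stacking it with optical bands into one multi-channel array. All arrays, band choices, and value ranges below are synthetic stand-ins for real co-registered Sentinel-1/Sentinel-2 patches, not the actual preprocessing pipeline of this study.

```python
import numpy as np

# Synthetic 256x256 patch: Sentinel-1 SAR (VV, VH backscatter, linear power)
# and Sentinel-2 optical (here just B4 red and B8 NIR reflectance).
rng = np.random.default_rng(0)
s1_vv = rng.uniform(0.01, 0.5, (256, 256))
s1_vh = rng.uniform(0.005, 0.1, (256, 256))
s2_red = rng.uniform(0.0, 0.4, (256, 256))
s2_nir = rng.uniform(0.1, 0.6, (256, 256))

# SAR backscatter is commonly converted to decibels before fusion
vv_db = 10.0 * np.log10(s1_vv)
vh_db = 10.0 * np.log10(s1_vh)

# Simple early fusion: stack all channels into one (H, W, C) array
fused = np.stack([vv_db, vh_db, s2_red, s2_nir], axis=-1)
print(fused.shape)  # (256, 256, 4)
```

Such a stacked array can then feed a single classifier; GAN-based approaches, discussed next, instead learn a mapping between the two modalities.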
Deep learning methods are increasingly used by the geospatial community. In particular, generative models such as conditional generative adversarial networks (GANs) have been shown to be powerful tools for generating artificial optical data from SAR input [2-5] (e.g., when optical data are corrupted by weather conditions or clouds). This study aims to explore the application of GANs for fusing Sentinel SAR and optical data and for mapping agricultural crop fields using the generated GAN features. The study area is located in the Netherlands, and the SEN1-2 dataset [2], which comprises 282,384 pairs of corresponding SAR-optical image patches collected from across the globe, will be used to train the GAN model.
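To make the conditional-GAN idea concrete, the following is a minimal pix2pix-style sketch in PyTorch: a generator translates a SAR patch into an optical patch, and a discriminator judges (SAR, optical) pairs. The architectures, channel counts, and loss weight are illustrative assumptions, not the specific models of the cited papers [2-5].

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny encoder-decoder: 1-channel SAR patch in, 3-channel optical patch out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, sar):
        return self.net(sar)

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the SAR input (channel concatenation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, sar, optical):
        return self.net(torch.cat([sar, optical], dim=1))

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

sar = torch.randn(2, 1, 64, 64)      # batch of (synthetic) SAR patches
optical = torch.randn(2, 3, 64, 64)  # corresponding optical patches

fake = G(sar)
pred_fake = D(sar, fake)
# Generator objective: fool the discriminator + stay close to the real optical image
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake, optical)
# Discriminator objective: real pairs -> 1, generated pairs -> 0
d_loss = 0.5 * (bce(D(sar, optical), torch.ones_like(pred_fake))
                + bce(D(sar, fake.detach()), torch.zeros_like(pred_fake)))
print(fake.shape)  # torch.Size([2, 3, 64, 64])
```

In a full training loop, `g_loss` and `d_loss` would be minimized alternately; the generated optical-like patches (or intermediate generator features) could then serve as input to the crop classifier.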

Topic objectives and methodology

Developing and implementing deep learning algorithms for the fusion of Sentinel-1 and Sentinel-2 images to improve cropland maps of the Netherlands.
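The final mapping step of such a pipeline could be sketched as a per-pixel classifier trained on the fused features. The features and crop labels below are synthetic placeholders, and the choice of a random forest is an illustrative assumption, not the method prescribed by this topic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins: per-pixel fused features (e.g., SAR backscatter plus
# GAN-generated optical features) and a mock four-class crop legend.
rng = np.random.default_rng(42)
n_pixels, n_features = 1000, 6
X = rng.normal(size=(n_pixels, n_features))
y = rng.integers(0, 4, size=n_pixels)  # 4 mock crop classes

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
crop_map = clf.predict(X)  # one class label per pixel
print(crop_map.shape)  # (1000,)
```

Reshaping `crop_map` back to the image grid would yield the cropland map; in the actual study, a deep learning classifier could replace the random forest.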

References for further reading
  • [1] Ghamisi, P., Rasti, B., Yokoya, N., Wang, Q., Hofle, B., Bruzzone, L., ... & Benediktsson, J. A. (2019). Multisource and multitemporal data fusion in remote sensing: A comprehensive review of the state of the art. IEEE Geoscience and Remote Sensing Magazine, 7(1), 6-39.
  • [2] Schmitt, M., Hughes, L. H., & Zhu, X. X. (2018). The SEN1-2 dataset for deep learning in SAR-optical data fusion. ISPRS Technical Commission I Symposium.
  • [3] Grohnfeldt, C., Schmitt, M., & Zhu, X. (2018). A conditional generative adversarial network to fuse SAR and multispectral optical data for cloud removal from Sentinel-2 images. In IGARSS 2018 - IEEE International Geoscience and Remote Sensing Symposium (pp. 1726-1729). IEEE.
  • [4] Ley, A., Dhondt, O., Valade, S., Haensch, R., & Hellwich, O. (2018). Exploiting GAN-based SAR to optical image transcoding for improved classification via deep learning. In EUSAR 2018; 12th European Conference on Synthetic Aperture Radar (pp. 1-6). VDE.
  • [5] Fuentes Reyes, M., Auer, S., Merkle, N., Henry, C., & Schmitt, M. (2019). SAR-to-optical image translation based on conditional generative adversarial networks: Optimization, opportunities and limits. Remote Sensing, 11(17), 2067.