DoRIAH: Domain-adaptive Remote sensing Image Analysis with Human-in-the-loop

Aim

DoRIAH investigates the analysis of remote sensing images from a human-in-the-loop perspective. Its goal is to enable the semi-automatic detection of various small objects in remote sensing images of any kind, from historical aerial images to modern-day satellite images, a goal common to many application domains: detecting bomb craters in aerial images from WW2, for instance, is a major task in estimating the risk posed by UneXploded Ordnance (UXO).


Duration
-
Funding

FFG Grant #880883

BMK / FFG

Status
ongoing

Analyzing remote sensing images on a large scale requires balancing two major constraints: the accuracy of the results and the time it takes to process the images. Human analysts usually deliver highly accurate results, but employing them is often infeasible in large-scale scenarios due to the sheer amount of image data to be processed. Fully automatic image analysis approaches, in turn, are widely considered but often lack the accuracy needed for the specific problem domain. For broad generalization across different domains it is therefore necessary to combine modern image analysis methods with human supervision, which also eases the domain transfer.
As a solution, the DoRIAH project (Domain-adaptive Remote sensing Image Analysis with Human-in-the-loop) investigates the analysis of remote sensing images from a human-in-the-loop perspective, enabling the semi-automatic detection of various small objects in images ranging from historical aerial photographs to modern-day satellite images. Detecting bomb craters in aerial images from WW2, for instance, is a major task in estimating the risk posed by UneXploded Ordnance (UXO); in modern-day images, the detection of vehicles provides a rich information source for traffic monitoring or parking lot analysis.
The unified approach of DoRIAH involves two basic steps: (1) georeferencing and 3D reconstruction from the remote sensing imagery, and (2) interactive detection of the objects of interest. Both steps are equipped with feedback loops that bring human cognitive abilities into the process: human feedback informs the system about the accuracy and correctness of its results, while visual feedback of the (improved) system results enables meaningful interpretation on the user side.
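
To make the interplay of the two feedback directions concrete, the following minimal Python sketch shows one way such an interactive detection loop could be structured. It is a sketch under stated assumptions, not the project's actual interface: the names (Detection, ask_analyst, human_in_the_loop) and the simple threshold-adaptation rule standing in for model adaptation are illustrative.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    x: float        # image coordinates of the candidate object
    y: float
    score: float    # detector confidence in [0, 1]

def human_in_the_loop(
    candidates: List[Detection],
    ask_analyst: Callable[[Detection], bool],
    threshold: float = 0.5,
    step: float = 0.05,
) -> List[Detection]:
    """Filter detector candidates with human feedback.

    High-confidence candidates are accepted automatically; uncertain
    ones are shown to the analyst, whose verdicts nudge the acceptance
    threshold for the remaining candidates (a crude stand-in for
    adapting the model to a new domain).
    """
    accepted: List[Detection] = []
    for det in sorted(candidates, key=lambda d: d.score, reverse=True):
        if det.score >= threshold:
            accepted.append(det)                      # trusted automatic result
        elif ask_analyst(det):                        # uncertain: ask the human
            accepted.append(det)
            threshold = max(0.0, threshold - step)    # analyst confirms: relax
        else:
            threshold = min(1.0, threshold + step)    # analyst rejects: tighten
    return accepted

if __name__ == "__main__":
    # Toy usage: the "analyst" confirms everything scored above 0.3.
    cands = [Detection(10, 20, 0.9), Detection(30, 5, 0.4), Detection(7, 7, 0.2)]
    result = human_in_the_loop(cands, ask_analyst=lambda d: d.score > 0.3)
    print([(d.x, d.y) for d in result])

In a real deployment the analyst's verdicts would retrain or fine-tune the detector rather than merely shift a threshold, and the accepted detections would be rendered back onto the georeferenced imagery as the visual feedback described above.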

Publications

Davide Ceneda, Christopher Collins, Mennatallah El-Assady, Silvia Miksch, Christian Tominski, Alessio Arleo, "A heuristic approach for dual expert/end-user evaluation of guidance in visual analytics", IEEE Transactions on Visualization and Computer Graphics, vol. 30, pp. 997-1007, 2024.
Ignacio Pérez-Messina, Davide Ceneda, Victor Schetinger, Silvia Miksch, "Persistent Interaction: User-Generated Artefacts in Visual Analytics", EuroVis Workshop on Visual Analytics (EuroVA), 2024.
Ignacio Pérez-Messina, Davide Ceneda, Silvia Miksch, "Guided Visual Analytics for Image Selection in Time and Space", IEEE Transactions on Visualization and Computer Graphics, vol. 30, 2024.
Fabian Sperrle, Mennatallah El-Assady, Alessio Arleo, Davide Ceneda, "A Wizard of Oz Study of Guidance Strategies and Dynamics", IEEE Transactions on Visualization and Computer Graphics, 2024.
Wolfgang Aigner, Silvia Miksch, Heidrun Schumann, Christian Tominski, "Visualization of Time-Oriented Data", Springer, 2023.
Ignacio Pérez-Messina, Davide Ceneda, Silvia Miksch, "A Methodology for Task-Driven Guidance Design", EuroVis Workshop on Visual Analytics (EuroVA), 2023.
Aoyu Wu, Dazhen Deng, Min Chen, Shixia Liu, Daniel Keim, Ross Maciejewski, Silvia Miksch, Hendrik Strobelt, Fernanda Viégas, Martin Wattenberg, "Grand Challenges in Visual Analytics Applications", IEEE Computer Graphics and Applications, vol. 43, pp. 83-90, 2023.
Ignacio Pérez-Messina, Davide Ceneda, Mennatallah El-Assady, Silvia Miksch, Fabian Sperrle, "A Typology of Guidance Tasks in Mixed-Initiative Visual Analytics Environments", Computer Graphics Forum, vol. 41, pp. 465-476, 2022.