The image-interpretation-workstation of the future: lessons learned

Conference paper


Sebastian Maier
Florian van de Camp
Jens Hafermann
Boris Wagner
Elisabeth Peinsipp-Byma
Jürgen Beyerer


Proc. SPIE 10190, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII, 2017.




SPIE 10190, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR VIII, Anaheim, United States, April 9–13, 2017

In recent years, professionally used workstations have become increasingly complex, and multi-monitor systems are more and more common. Novel interaction techniques such as gesture recognition have been developed but are used mostly for entertainment and gaming purposes. These human-computer interfaces are not yet widely used in professional environments, where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation-workstation of the future, a multi-monitor workplace comprising four screens. Each screen is dedicated to a specific task in the image interpretation process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool, and a reporting tool. To further support the complex task of image interpretation, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies such as touchscreens, face identification, and speech recognition. A set of experiments was conducted to evaluate the usability of the different interaction systems. Two typical, extensive image-interpretation tasks were devised and approved by military personnel. They were then performed both on a current image-interpretation workstation setup using only keyboard and mouse and on our image-interpretation-workstation of the future. To take a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head-pose estimation, and face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and their discussion are presented in this paper.