Scene Understanding from Aerospace Sensors: What Can Be Expected?


S. Herbin, F. Champagnat, J. Israel, F. Janez, B. Le Saux, V. Leung, A. Michel


Automated scene understanding, or interpretation, is a fundamental problem of computer vision. Its goal is to compute a formal description of the content and events that can be observed in images or videos and to distribute it to artificial or human agents for further exploitation or storage. Over the last decade, tremendous progress has been made in the design of algorithms able to analyze images taken under standard viewing conditions. Several of them, e.g., face detection, are already used daily in consumer products. In contrast, the aerospace context has long been confined to professional or military applications, due to its strategic stakes and to the high cost of data production. However, images and videos taken from sensors embedded in airborne or space platforms are now being made publicly available, thanks to easily deployable UAVs and web-based data repositories. This article examines the state of the art of automated scene interpretation from aerospace sensors: how the general techniques of object detection and recognition can be applied to this specific context, what their limitations are, and what kinds of extensions are possible. The interpretation will focus on the analysis of movable objects such as vehicles, airplanes and persons. Results will be illustrated with past and ongoing projects.