The automatic generation of building footprints from satellite images is challenging due to the complexity of building shapes. In this project, we first propose improved generative adversarial networks (GANs) for the automatic generation of building footprints from satellite images. We then propose a novel gated graph convolutional network with deep structured feature embedding to improve the accuracy of the semantic segmentation.
Many applications, such as the interpretation of street scenes for autonomous driving, require solving various tasks based on digital images. The information to be extracted typically reflects different aspects of identical objects in the physical world. This knowledge of existing correlations between the tasks can be exploited to solve the individual tasks more efficiently and with higher accuracy.
The generation of depth maps is essential for numerous applications, such as autonomous driving or augmented reality. Typically, these are generated from stereo image pairs or by using active sensors (e.g. LiDAR or RGB-D cameras). Inspired by the monocular depth perception of humans, this project investigates the estimation of depth maps from single images using artificial neural networks.
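Monocular depth networks are commonly trained with a scale-invariant log loss (Eigen et al., 2014), which penalizes per-pixel log-depth errors while discounting a global scale offset. The source does not state which loss this project uses, so the following is only an illustrative pure-Python sketch:

```python
import math

def scale_invariant_log_loss(pred, target, lam=0.5):
    """Scale-invariant log loss often used for monocular depth
    estimation; pred and target are flat lists of positive depths."""
    d = [math.log(p) - math.log(t) for p, t in zip(pred, target)]
    n = len(d)
    return sum(x * x for x in d) / n - lam * (sum(d) / n) ** 2

# A prediction that differs from the target only by a global scale
# factor is penalized less as lam grows (not at all at lam = 1.0):
target = [1.0, 2.0, 4.0]
scaled = [2.0, 4.0, 8.0]  # target depths times a constant factor of 2
```

With `lam = 0` the loss reduces to a plain mean-squared error in log space; the scale-invariant term makes the network focus on relative depth structure, which a single image constrains better than absolute scale.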
This project investigates the usability of image sequences from Micro Aerial Vehicles (MAVs) for generating complete and high-resolution 3D building information models (BIMs). Besides the modelling of the building exterior in a global reference frame, the interior is to be reconstructed from independent indoor flights as well. An automatic alignment of the reconstructed indoor and outdoor building models enables the generation of LOD-4 building models.
This project aims at adopting classification methods based on Convolutional Neural Networks for Land Use and Land Cover (LULC) classification. Publicly available geodata (OpenStreetMap) and multi-spectral Sentinel-2 imagery are used as training data.
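The core operation inside such a network is a learned convolution applied per spectral band or feature map. A minimal "valid" 2D cross-correlation in pure Python illustrates this (the project's actual architecture, kernel sizes, and bands are not specified in the source):

```python
def conv2d_valid(patch, kernel):
    """'Valid' 2D cross-correlation of one image band with a kernel,
    the basic building block of a convolutional layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(patch) - kh + 1
    out_w = len(patch[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(patch[i + u][j + v] * kernel[u][v]
                            for u in range(kh) for v in range(kw))
    return out

# A 3x3 averaging kernel over a 4x4 band yields a 2x2 feature map:
band = [[1.0] * 4 for _ in range(4)]
mean3 = [[1.0 / 9.0] * 3 for _ in range(3)]
```

In a real CNN the kernel weights are learned from the training data rather than fixed, and many such feature maps are stacked and combined across the Sentinel-2 bands.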
Safe autonomous driving systems rely on autonomous vehicles that are able to drive anticipatorily. This project aims at simulating the human anticipation of future scenarios by predicting traffic events from generated video frames. We build on state-of-the-art methods in Computer Vision and Machine Learning to generate prospective frames of a video based on the latest observed video sequence. Features learned by the model are further used for modeling and analyzing traffic scenes and activity patterns of traffic participants.
Modeling the traffic infrastructure based on aerial and ground images becomes ever more important, especially since autonomous driving systems appear to be part of the near future. This project aims to develop algorithms to process images captured by dash cams mounted on a car in order to detect traffic-relevant objects, such as traffic participants and infrastructure elements. In this context, we also use aerial images to analyze group behavior and predict traffic actions.
Land Cover Classification approaches traditionally concentrate on spectral and textural features. However, some classes (e.g. crops) exhibit characteristic spectro-temporal behaviour that can be utilized for classification. We employ Long Short-Term Memory (LSTM) neural networks for multi-temporal vegetation modeling and crop identification.
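The recurrent model can be sketched with a single scalar LSTM cell unrolled over a time series of observations. This pure-Python illustration uses made-up weights and a toy reflectance sequence; the project's actual network dimensions and training setup are not given in the source:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One LSTM time step for scalar input and state (illustrative).
    w maps each gate name to its (input weight, hidden weight, bias)."""
    i = sigmoid(w['i'][0] * x + w['i'][1] * h + w['i'][2])    # input gate
    f = sigmoid(w['f'][0] * x + w['f'][1] * h + w['f'][2])    # forget gate
    o = sigmoid(w['o'][0] * x + w['o'][1] * h + w['o'][2])    # output gate
    g = math.tanh(w['g'][0] * x + w['g'][1] * h + w['g'][2])  # candidate
    c_new = f * c + i * g          # cell state carries temporal memory
    h_new = o * math.tanh(c_new)   # hidden state exposed to the next layer
    return h_new, c_new

# Unroll over a toy per-pixel time series of reflectance values; the
# final hidden state summarizes the temporal profile for classification.
weights = {k: (0.5, 0.5, 0.0) for k in ('i', 'f', 'o', 'g')}
h = c = 0.0
for x in [0.1, 0.4, 0.8, 0.3]:
    h, c = lstm_step(x, h, c, weights)
```

The gating lets the cell retain or discard information across acquisition dates, which is why LSTMs suit crops whose class identity shows only in the shape of the seasonal profile rather than in any single image.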
Three-dimensional urban models play an important role in traffic, terrain analysis, mining, surveying, and other fields, especially in city planning, construction, and environmental science. Modern space-borne SAR sensors such as TerraSAR-X, TanDEM-X and COSMO-SkyMed can deliver meter-resolution data at high temporal resolution. This project aims to develop novel algorithms for the reconstruction of urban areas based on InSAR data.
The project aims at developing a monitoring and early warning system for cyanobacterial blooms and at researching factors influencing harmful algal bloom formation, toxicity and collapse. Optical data obtained during field campaigns are used for inversion studies and validation of remote sensing data.
The S-5P mission is dedicated to monitoring the atmospheric composition. Equipped with TROPOMI with four spectrometers covering the UV-VIS-NIR-SWIR part of the solar spectrum, a number of trace gases and atmospheric components can be retrieved. The high spatial resolution of TROPOMI also serves to increase the frequency of cloud-free pixels and thus allows to calculate tropospheric trace gases for larger areas.